# On the diameter of semigroups of transformations and partitions

James East, Victoria Gould, Craig Miller, Thomas Quinn-Gregson, Nik Ruskuc

arXiv:2310.07655v2, 2023-10-11. http://arxiv.org/abs/2310.07655v2
###### Abstract.
For a semigroup \(S\) whose universal right congruence is finitely generated (or, equivalently, a semigroup satisfying the homological finiteness property of being type right-\(FP_{1}\)), the right diameter of \(S\) is a parameter that expresses how 'far apart' elements of \(S\) can be from each other, in a certain sense. To be more precise, for each finite generating set \(U\) for the universal right congruence on \(S,\) we have a metric space \((S,d_{U})\) where \(d_{U}(a,b)\) is the minimum length of derivations for \((a,b)\) as a consequence of pairs in \(U;\) the right diameter of \(S\) with respect to \(U\) is the diameter of this metric space. The right diameter of \(S\) is then the minimum of the set of all right diameters with respect to finite generating sets. We investigate whether various natural infinite semigroups of transformations and partitions have a finitely generated universal right/left congruence, and for those that do, we determine their right/left diameter. Among other results, for an arbitrary infinite set \(X\) we prove the following. Each of the monoids of all binary relations on \(X,\) of all partial transformations on \(X,\) and of all full transformations on \(X,\) as well as the partition and partial Brauer monoids on \(X,\) have right diameter \(1\) and left diameter \(1\). The symmetric inverse monoid on \(X\) has right diameter \(2\) and left diameter \(2\). The monoid of all injective mappings on \(X\) has right diameter \(4\), and its minimal ideal (called the Baer-Levi semigroup on \(X\)) has right diameter \(3\), but neither of these two semigroups has a finitely generated universal left congruence. On the other hand, the semigroup of all surjective mappings on \(X\) has left diameter \(4\), and its minimal ideal has left diameter \(2\), but neither of these semigroups has a finitely generated universal right congruence.
_Keywords_: Transformation semigroup, partition monoid, (congruence) generating set, derivation sequence, diameter.
_Mathematics Subject Classification 2020_: 20M10, 20M20.
## 1. Introduction
This paper is concerned with the semigroup finiteness condition of the universal right congruence being finitely generated, and the related parameter of right diameter, as well as the left-right duals of these notions.
For a semigroup \(S\) whose universal right congruence is generated by a finite set \(U,\) the right diameter of \(S\) with respect to \(U\) is, informally, the supremum of the minimum lengths of derivations for pairs \((a,b)\in S\times S\) as a consequence of those in \(U.\) The right diameter of \(S\) is the minimum of the set of all right diameters with respect to finite generating sets. Thus, a semigroup has finite right diameter if its universal right congruence is finitely generated and there is a bound on the length of sequences required to relate any two elements. More precise definitions regarding the notion of diameter will be given in Section 2.
The property of having finite right (resp. left) diameter is also known as being _right_ (resp. _left_) _pseudo-finite_. Left pseudo-finite semigroups were first studied by White in [21] in the context of Banach algebras. This work was motivated by a conjecture of Dales and Zelazko, stating that a unital Banach algebra in which every maximal left ideal is finitely generated is necessarily finite dimensional. It was also noted in [21] that for weakly right cancellative monoids, which include groups, being left pseudo-finite coincides with being finite.
Dandan et al. undertook the first comprehensive study of semigroups with a finitely generated universal left congruence, with appropriate specialisations to left pseudo-finite semigroups [4]. The former class of semigroups was shown to be equivalent to a number of previously-studied classes, including those semigroups satisfying the homological finiteness property of being type left-\(FP_{1}\)[4, Theorem 3.10] (the equivalence of some of these conditions had previously been established by Kobayashi [15]). An interesting question raised in [4, Open Question 8.10] is whether every (left) pseudo-finite semigroup has a completely simple minimal ideal. The article [10] sought to address this question systematically. It found that for pseudo-finite semigroups lying in some important classes, such as orthodox semigroups, completely regular semigroups and commutative semigroups, having a completely simple minimal ideal _is_ necessary, but in general a pseudo-finite semigroup may have a minimal ideal that is not completely simple, or may have no minimal ideal at all.
The notion of right/left diameter was introduced in [10] as a useful tool for proving that certain semigroups are right/left pseudo-finite. It was observed that the property of having right diameter \(1\) is equivalent to a certain well-studied notion, namely that of the diagonal right act being finitely generated [10, Proposition 3.6]. For a semigroup \(S,\) the diagonal right \(S\)-act is the set \(S\times S\) under the right action given by \((a,b)c=(ac,bc).\) Diagonal acts first appear, implicitly, in [1], and they were then formally defined and studied in [20]. A systematic investigation into generation of diagonal acts was undertaken in [9], and some of the most intriguing results concerned certain infinite semigroups of transformations and relations [8]. In particular, it was shown that, for any infinite set \(X,\) the diagonal right and left acts are monogenic for the monoids \(\mathcal{B}_{X}\) of all binary relations on \(X,\)\(\mathcal{T}_{X}\) of all transformations on \(X,\) and \(\mathcal{PT}_{X}\) of all partial transformations on \(X.\)
Given the above findings concerning certain transformation semigroups, it is natural to consider similar kinds of semigroups when searching for semigroups with finite right/left diameter. Indeed, the first example found of a right pseudo-finite semigroup with a minimal ideal that is not completely simple was the Baer-Levi semigroup on an infinite set \(X\)[18, Remark 7.3], and another such example is the monoid of all injective mappings on \(X\)[10, Proposition 4.4]. Moreover, the first example exhibited of a right (and left) pseudo-finite semigroup with _no_ minimal ideal was a certain transformation monoid denoted \(\mathcal{U}_{X}\)[10, Example 8.1].
A class of semigroups that exhibit some similar behaviour to transformation semigroups is that of the so-called diagram monoids, which have recently come into prominence; see [6]. In particular, the partition monoid on a set \(X,\) denoted \(\mathcal{P}_{X},\) contains natural copies of many 'classical' transformation monoids, including the symmetric group \(\mathcal{S}_{X},\) the full
transformation monoid \(\mathcal{T}_{X}\) and the symmetric inverse monoid \(\mathcal{I}_{X}\). The importance of these classical monoids derives mainly from the well-known Cayley Theorems, stating that every group embeds into some \(\mathcal{S}_{X}\) and every semigroup into some \(\mathcal{T}_{X}\)[12, Theorem 1.1.2], and the Wagner-Preston Theorem, stating that every inverse semigroup embeds into some \(\mathcal{I}_{X}\)[12, Theorem 5.1.7]. Thus, a common theme in papers on partition monoids is the extent to which their behaviour resembles that of classical transformation monoids; for example, see the article [5], which classifies all congruences on \(\mathcal{P}_{X}\) and the partial Brauer monoid \(\mathcal{P}\mathcal{B}_{X}\), where \(X\) is an arbitrary infinite set. Given the aforementioned results concerning certain classical transformation monoids in relation to diameter, it is natural to explore monoids of partitions as a potential source of further examples of semigroups with finite right/left diameter.
The purpose of this article is to systematically investigate, for various infinite semigroups of transformations and partitions, whether each such semigroup has a finitely generated universal right/left congruence, and, if so, determine its right/left diameter. The main results are summarised in Table 1.
The paper is organised as follows. In Section 2 we provide the necessary preliminary material and summarise the main results of the paper. Various transformation semigroups are considered in Sections 3 and 4. Section 3 is concerned with the universal right congruence and right diameter, and Section 4 is the left-right counterpart of Section 3. In both these sections, we aim to prove general results that can be applied to a number of transformation semigroups of concern. Particularly noteworthy results are obtained for certain subsemigroups of the monoids \(\mathcal{I}\!n\!j_{X}\) of all injective mappings on \(X\) and \(\mathcal{S}\!\mathit{urj}_{X}\) of all surjective mappings on \(X.\) In particular, we prove that:
* the minimal ideal \(\mathcal{BL}_{X}\) of \(\mathcal{I}\!n\!j_{X}\), called the Baer-Levi semigroup on \(X,\) has right diameter \(3\) (Theorem 3.14);
* a submonoid \(S\) of \(\mathcal{I}\!n\!j_{X}\) containing the symmetric group \(\mathcal{S}_{X}\) has right diameter \(4\) if and only if it contains \(\mathcal{BL}_{X}\) (Theorem 3.16);
* the minimal ideal \(\mathcal{DBL}_{X}\) of \(\mathcal{S}\!n\!j_{X}\), called the dual Baer-Levi semigroup on \(X,\) has left diameter \(2\) (Theorem 4.11);
* the monoid \(\mathcal{S}\!\mathit{urj}_{X}\) has left diameter \(4\) (Theorem 4.17).
Finally, in Section 5 we prove that both the partition monoid \(\mathcal{P}_{X}\) and the partial Brauer monoid \(\mathcal{P}\mathcal{B}_{X}\) have right diameter \(1\) and left diameter \(1\).
## 2. Notation and Summary of Results
In this section we provide the necessary preliminary material on semigroups and summarise the main results of the article. We refer the reader to [12] for a more comprehensive introduction to the basic semigroup concepts defined here.
### 2.1. Diameter of semigroups
Let \(S\) be a semigroup. We denote by \(S^{1}\) the monoid obtained from \(S\) by adjoining an identity if necessary (if \(S\) is already a monoid, then \(S^{1}=S\)).
A _right ideal_ of \(S\) is a subset \(I\) such that \(IS\subseteq I.\) A subset \(U\) of a right ideal \(I\) is a _generating set_ for \(I\) if \(I=US^{1};\)\(I\) is said to be _finitely generated_ if it has a finite generating set. Of course, \(S\) is a right ideal of itself. When considering \(S\) being generated by a set _as a right ideal_, we shall write \(S\) as \(S^{r}\). Thus, '\(S^{r}\) is generated by \(U\)' means \(S=US^{1}.\)
An equivalence relation \(\rho\) on \(S\) is a _right congruence_ if \((a,b)\in\rho\) implies \((as,bs)\in\rho\) for all \(s\in S\). For \(U\subseteq S\times S,\) the _right congruence generated by \(U\)_ is the smallest right congruence on \(S\) containing \(U;\) we denote this right congruence by \(\langle U\rangle.\)
**Lemma 2.1**.: _[_14_, Lemma I. 4. 37]_ _Let \(S\) be a semigroup, let \(U\) be a subset of \(S\times S,\) and let \(\rho\) be the right congruence generated by \(U.\) For any \(a,b\in S,\) we have \((a,b)\in\rho\) if and only if either \(a=b\) or there exists a sequence_
\[a=u_{1}s_{1},\ v_{1}s_{1}=u_{2}s_{2},\ \ldots,\ v_{n}s_{n}=b\]
_for some \(n\in\mathbb{N},\) where \((u_{i},v_{i})\in U\) or \((v_{i},u_{i})\in U,\) and \(s_{i}\in S^{1},\) for each \(i\in\{1,\ldots,n\}.\)_
A sequence of the form given in Lemma 2.1 is referred to as a \(U\)_-sequence from \(a\) to \(b\) of length \(n\)_. If \(a=b,\) we consider that \(a\) and \(b\) are related by a \(U\)-sequence of length \(0\). If the generating set \(U\) consists of a single pair \((u,v),\) we may speak of \((u,v)\)-sequences rather than \(U\)-sequences.
The universal relation \(\omega_{S}=S\times S\) is certainly a right congruence on \(S.\) When viewing this relation as a right congruence, we shall denote it by \(\omega_{S}^{r}\). If \(U\) is a generating set for \(\omega_{S}^{r},\) we shall write \(\omega_{S}^{r}=\langle U\rangle.\)
Consider a set \(U\subseteq S\times S\) such that \(\omega_{S}^{r}=\langle U\rangle\). For any \(a,b\in S,\) let \(d_{U}^{r}(a,b)\) denote the least non-negative integer \(n\) such that there is a \(U\)-sequence of length \(n\) from \(a\) to \(b.\) It is easy to see that \(d_{U}^{r}:S\times S\rightarrow\{0,1,2,\ldots\}\) is a metric on \(S.\)
**Definition 2.2**.: Let \(S\) be a semigroup.
* If \(\omega_{S}^{r}=\langle U\rangle,\) we call the diameter of the metric space \((S,d_{U}^{r})\) the _right \(U\)-diameter_ of \(S\) and denote it by \(D_{r}(U,S);\) that is, \[D_{r}(U,S)=\sup\{d_{U}^{r}(a,b)\::\:a,b\in S\}.\]
* If \(\omega_{S}^{r}\) is finitely generated, we define the _right diameter_ of \(S\) to be \[D_{r}(S)=\min\{D_{r}(U,S):\omega_{S}^{r}=\langle U\rangle,|U|<\aleph_{0}\}.\]
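To illustrate these definitions with a minimal example (not one of the semigroups studied below), let \(R=\{a,b\}\) be the two-element right zero semigroup, so that \(xy=y\) for all \(x,y\in R.\) Taking \(U=\{(a,b)\},\) any two distinct elements are connected by a \(U\)-sequence of length \(1,\) namely
\[a=a1,\quad b1=b\qquad(s_{1}=1\in R^{1}),\]
so \(D_{r}(U,R)=1;\) since \(R\) is non-trivial, no finite generating set can do better, and hence \(D_{r}(R)=1.\)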
Note that if \(U\) and \(U^{\prime}\) are two finite generating sets for \(\omega_{S}^{r},\) then \(D_{r}(U,S)\) is finite if and only if \(D_{r}(U^{\prime},S)\) is finite [4, Lemma 2.5]. We make the following easy observation.
**Lemma 2.3**.: _Let \(S\) be a non-trivial semigroup. If \(\omega_{S}^{r}=\langle U\rangle\) then, letting_
\[V=\{v\in S:\exists u\in S\text{ such that }(u,v)\in U\text{ or }(v,u)\in U\},\]
_we have \(\omega_{S}^{r}=\langle V\times V\rangle\) and \(D_{r}(V\times V,S)\leq D_{r}(U,S).\) Furthermore, we have \(S=VS^{1}\). In particular, if \(\omega_{S}^{r}\) is finitely generated then so is \(S^{r}\)._
We shall often abuse terminology by saying that \(\omega_{S}^{r}\) is generated by a subset \(V\) of \(S\) to mean that \(\omega_{S}^{r}\) is generated by \(V\times V\), and also write \(D_{r}(V,S)\) in place of \(D_{r}(V\times V,S)\).
It follows from Lemma 2.3 that if \(\omega_{S}^{r}\) is finitely generated then there exists a finite subset \(V\subseteq S\) such that \(D_{r}(S)=D_{r}(V,S).\)
We now provide some results that will be useful later in the paper.
**Lemma 2.4**.: _Let \(S\) be a monoid and let \(I\) be a right ideal of \(S.\) If \(\omega_{I}^{r}\) is finitely generated, then \(\omega_{S}^{r}\) is finitely generated. Moreover, we have \(D_{r}(S)\leq D_{r}(I)+2.\)_
Proof.: This result essentially follows from the proof of [4, Lemma 2.11]. We provide a proof here for completeness.
Let \(U\subseteq I\) be a finite generating set for \(\omega_{I}^{r}\) such that \(D_{r}(U,I)=D_{r}(I)\). Choose any \(u\in U.\) For any \(a,b\in S,\) since \(ua,ub\in I,\) there exists a \(U\)-sequence
\[ua=u_{1}s_{1},\ v_{1}s_{1}=u_{2}s_{2},\ \ldots,\ v_{k}s_{k}=ub\]
in \(I\), where \(k\leq D_{r}(I).\) Thus, letting \(V=U\cup\{1\},\) we have a \(V\)-sequence
\[a=1a,\ ua=u_{1}s_{1},\ v_{1}s_{1}=u_{2}s_{2},\ \ldots,\ v_{k}s_{k}=ub,\ 1b=b\]
from \(a\) to \(b\) of length \(k+2\leq D_{r}(I)+2.\) We conclude that \(\omega_{S}^{r}\) is generated by \(V,\) and \(D_{r}(S)\leq D_{r}(V,S)\leq D_{r}(I)+2.\)
**Corollary 2.5**.: _If \(S\) is a monoid with a left zero, then \(D_{r}(S)\leq 2.\)_
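Indeed, if \(z\) is a left zero of \(S,\) then \(I=\{z\}\) is a right ideal with \(D_{r}(I)=0,\) so Lemma 2.4 applies. Concretely, taking \(V=\{1,z\},\) for any \(a,b\in S\) there is a \(V\)-sequence
\[a=1a,\quad za=zb,\quad 1b=b\]
of length \(2,\) since \(za=z=zb.\)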
Green's relations \(\mathcal{L},\)\(\mathcal{R},\)\(\mathcal{H},\)\(\mathcal{D}\) and \(\mathcal{J}\) are standard tools for describing the ideal structure of a semigroup. The relation \(\mathcal{L}\) on \(S\) is given by \((a,b)\in\mathcal{L}\) if and only if \(S^{1}a=S^{1}b,\) i.e. if \(a\) and \(b\) generate the same principal left ideal. The relations \(\mathcal{R}\) and \(\mathcal{J}\) are defined analogously in terms of principal right ideals and principal two-sided ideals, respectively. Finally, we have \(\mathcal{H}=\mathcal{L}\cap\mathcal{R}\) and \(\mathcal{D}=\mathcal{L}\vee\mathcal{R}\) (\(=\mathcal{L}\circ\mathcal{R}=\mathcal{R}\circ\mathcal{L}\)).
We call \(S\)_left/right simple_ if it has a single \(\mathcal{L}/\mathcal{R}\)-class, and _simple_ if it has a single \(\mathcal{J}\)-class. There is a natural partial order on the set of \(\mathcal{J}\)-classes of \(S,\) given by \(J_{a}\leq J_{b}\) if and only if \(S^{1}aS^{1}\subseteq S^{1}bS^{1}\). There is at most one minimal \(\mathcal{J}\)-class under this ordering; if it exists, it is called the _minimal ideal_ of \(S,\) and is a simple subsemigroup of \(S.\)
The equivalence relation \(\mathcal{L}^{*}\) on \(S\) is defined by the rule that \((a,b)\in\mathcal{L}^{*}\) if and only if \(a,b\) are \(\mathcal{L}\)-related in some oversemigroup \(T\), i.e. \(T^{1}a=T^{1}b\). We say that \(S\) is \(\mathcal{L}^{*}\)_-simple_ if it has a single \(\mathcal{L}^{*}\)-class. We dually define the relation \(\mathcal{R}^{*}\) and the notion of being \(\mathcal{R}^{*}\)-simple. By [10, Proposition 3.4], an \(\mathcal{L}^{*}\)-simple semigroup has finite right diameter if and only if it is finite. We provide a proof of this result here using a more general argument, which also shows that, for any \(\mathcal{L}^{*}\)-simple semigroup, being countable is necessary for the universal right congruence to be finitely generated.
**Proposition 2.6**.: _Let \(S\) be an \(\mathcal{L}^{*}\)-simple semigroup. If \(\omega_{S}^{r}\) is finitely generated, then \(S\) is countable. Moreover, \(S\) has finite right diameter if and only if it is finite._
Proof.: Since \(S\) is \(\mathcal{L}^{*}\)-simple, by [19, Theorem 1] there exists an oversemigroup \(T\) of \(S\) such that \(S\) is contained in a single \(\mathcal{L}\)-class of \(T.\) (One can take \(T\) to be the dual of the full transformation monoid on \(S^{1},\) in which maps are composed from right to left.)
Now, suppose that \(\omega_{S}^{r}\) is finitely generated, and let \(U\subseteq S\) be a finite generating set for \(\omega_{S}^{r}\) such that \(D_{r}(U,S)=D_{r}(S).\) For each pair \(u,v\in U,\) since \(u\) and \(v\) are \(\mathcal{L}\)-related in \(T,\)
we can choose \(\alpha(u,v)\in T\) such that \(u=\alpha(u,v)v.\) Fix \(b\in S.\) Let
\[V=\{\alpha(u_{1},v_{1})\ldots\alpha(u_{k},v_{k})b:u_{i},v_{i}\in U,k\leq D_{r}(S) \}\subseteq T.\]
Clearly \(V\) is countable, and if \(D_{r}(S)\) is finite then so is \(V.\) We claim that \(S\subseteq V\); then \(S\) is countable, and it is finite if it has finite right diameter. So, let \(a\in S.\) Then there exists a \(U\)-sequence
\[a=u_{1}s_{1},\ v_{1}s_{1}=u_{2}s_{2},\ \ldots,\ v_{k}s_{k}=b\]
where \(k\leq D_{r}(S).\) Letting \(\alpha_{i}=\alpha(u_{i},v_{i})\), we have
\[a=u_{1}s_{1}=\alpha_{1}v_{1}s_{1}=\alpha_{1}u_{2}s_{2}=\alpha_{1}\alpha_{2}v_ {2}s_{2}=\cdots=\alpha_{1}\ldots\alpha_{k}v_{k}s_{k}=\alpha_{1}\ldots\alpha_{ k}b\in V,\]
as required. Clearly, if \(S\) is finite then it has finite right diameter.
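For example, every group is \(\mathcal{L}\)-simple, and hence \(\mathcal{L}^{*}\)-simple (and dually \(\mathcal{R}^{*}\)-simple), so Proposition 2.6 and its dual show that an infinite group has neither finite right diameter nor finite left diameter. In particular, since \(|\mathcal{S}_{X}|=2^{|X|}>\aleph_{0},\) neither \(\omega_{\mathcal{S}_{X}}^{r}\) nor \(\omega_{\mathcal{S}_{X}}^{l}\) is finitely generated, consistent with Table 1.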
**Remark 2.7**.: By a slight modification of the proof of Proposition 2.6, an \(\mathcal{L}^{*}\)-simple semigroup \(S\) is countable if and only if \(\omega_{S}^{r}\) is countably generated (where _countably generated_ means being generated by a countable set).
The above definitions and results have obvious left-right duals, and we use analogous nomenclature and notation: left ideal, \(S^{l}\), \(\omega_{S}^{l}\), left diameter, etc.
### 2.2. Semigroups of transformations and relations
In this subsection we introduce the transformation semigroups of concern in this article. First, we recall some basic terminology regarding relations and mappings.
Throughout the paper \(X\) will stand for an arbitrary infinite set.
A (_binary_) _relation_ on \(X\) is a subset of \(X\times X\). We denote the identity relation \(\{(x,x):x\in X\}\) by \(1_{X}\). For a relation \(\alpha\) on \(X\) and a subset \(Y\subseteq X,\) we define
\[Y\alpha=\{x\in X:(y,x)\in\alpha\ \text{for some}\ y\in Y\},\]
and we abbreviate \(\{y\}\alpha\) to \(y\alpha.\) The _domain_ and _image_ of \(\alpha\) are, respectively,
\[\operatorname{dom}\alpha=\{x\in X:(x,y)\in\alpha\ \text{for some}\ y\in X\}\quad \text{and}\quad\operatorname{im}\alpha=X\alpha,\]
and the _inverse_ of \(\alpha\) is the relation
\[\alpha^{-1}=\{(y,x):(x,y)\in\alpha\}.\]
The _composition_ of two relations \(\alpha\) and \(\beta\) on \(X\) is the relation
\[\alpha\beta=\{(x,y):\exists z\in X\ \text{such that}\ (x,z)\in\alpha\ \text{and}\ (z,y)\in\beta\}.\]
A _partial transformation_ on \(X\) is a relation \(\alpha\) on \(X\) satisfying the condition
\[(x,y),(x,z)\in\alpha\Rightarrow y=z.\]
Let \(\alpha\) be a partial transformation on \(X.\) For each \(x\in\operatorname{dom}\alpha,\) we interpret \(x\alpha\) as an element of \(X\) (rather than a singleton subset of \(X\)). Note that for \(Y\subseteq X\) we have \(Y\alpha^{-1}=\{x\in X:x\alpha\in Y\}.\) The _kernel_ of \(\alpha\) is
\[\ker\alpha=\{(x,y)\in\operatorname{dom}\alpha\times\operatorname{dom}\alpha:x \alpha=y\alpha\}.\]
Observe that \(\alpha\alpha^{-1}=\ker\alpha\) and \(\alpha^{-1}\alpha=\{(x,x):x\in\operatorname{im}\alpha\}.\) It follows that
\[\alpha\alpha^{-1}=1_{X}\,\Leftrightarrow\,[\operatorname{dom}\alpha=X\ \text{and}\ \alpha\ \text{is injective}]\quad\text{and}\quad\alpha^{-1}\alpha=1_{X}\, \Leftrightarrow\,\alpha\ \text{is surjective}.\]
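For a small concrete instance (with \(X=\mathbb{N},\) purely for illustration): if \(\alpha\) is the partial transformation with \(\operatorname{dom}\alpha=\{1,2\}\) and \(1\alpha=2\alpha=5,\) then
\[\alpha\alpha^{-1}=\{1,2\}\times\{1,2\}=\ker\alpha\quad\text{and}\quad\alpha^{-1}\alpha=\{(5,5)\}=\{(x,x):x\in\operatorname{im}\alpha\},\]
and neither product equals \(1_{X},\) reflecting the fact that \(\operatorname{dom}\alpha\neq X,\) and that \(\alpha\) is neither injective nor surjective.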
We now define the semigroups of transformations and relations that will be considered in this paper, with some relevant additional information.
\(\mathcal{B}_{X}\)**:** the monoid of all binary relations on \(X\) under composition, with identity \(1_{X}\).
\(\mathcal{PT}_{X}\)**- the partial transformation monoid on \(X\):** the submonoid of \(\mathcal{B}_{X}\) consisting of all partial transformations on \(X.\)
\(\mathcal{I}_{X}\)**- the symmetric inverse monoid on \(X\):** the submonoid of \(\mathcal{PT}_{X}\) consisting of all injective partial transformations (also known as _partial bijections_).
\(\mathcal{T}_{X}\)**- the full transformation monoid on \(X\):** the submonoid of \(\mathcal{PT}_{X}\) consisting of all (full) transformations on \(X,\) i.e. \(\mathcal{T}_{X}=\{\alpha\in\mathcal{PT}_{X}\::\:\operatorname{dom}\alpha=X\}.\)
\(\mathcal{S}_{X}\)**- the symmetric group on \(X\):** the subgroup of \(\mathcal{T}_{X}\) consisting of all bijections.
\(\mathcal{F}_{X}\)**:** the submonoid of \(\mathcal{T}_{X}\) consisting of all finite-to-one mappings, i.e. \(\mathcal{F}_{X}=\{\alpha\in\mathcal{T}_{X}\::|x\alpha^{-1}|<\infty\text{ for all }x\in X\}.\)
\(\mathcal{I}\!n\!j_{X}\)**:** the submonoid of \(\mathcal{T}_{X}\) consisting of all injective mappings.
- \(\mathcal{I}\!n\!j_{X}\) is right cancellative (that is, \(\beta\alpha=\gamma\alpha\) implies that \(\beta=\gamma\)), and hence \(1_{X}\) is its only idempotent.
\(\mathcal{B}\!\mathcal{L}_{X,q}\)**- the Baer-Levi semigroup of type \(q\) on \(X\):** for an infinite cardinal \(q\leq|X|,\) it is the subsemigroup of \(\mathcal{I}\!n\!j_{X}\) defined by
\[\mathcal{B}\mathcal{L}_{X,q}=\{\alpha\in\mathcal{I}\!n\!j_{X}:|X\backslash \operatorname{im}\alpha|=q\}.\]
- Each \(\mathcal{B}\mathcal{L}_{X,q}\) is right cancellative, right simple and has no idempotents [3, Theorem 8.2].
\(\mathcal{B}\!\mathcal{L}_{X}\)**- the Baer-Levi semigroup on \(X\):**\(\mathcal{B}\mathcal{L}_{X,q}\) for \(q=|X|.\)
- For any \(\alpha,\beta\in\mathcal{I}\!n\!j_{X},\) we have \(\alpha\in\beta(\mathcal{I}\!n\!j_{X})\) if and only if \(\alpha\in(\mathcal{I}\!n\!j_{X})\beta(\mathcal{I}\!n\!j_{X})\) if and only if \(|X\backslash\operatorname{im}\alpha|\geq|X\backslash\operatorname{im}\beta|.\) Thus, the \(\mathcal{J}(=\!\mathcal{R})\)-classes of \(\mathcal{I}\!n\!j_{X}\) form a chain
\[\mathcal{S}_{X}>J_{1}>J_{2}>\cdots>J_{n}>\cdots>\mathcal{B}\mathcal{L}_{X, \aleph_{0}}>\mathcal{B}\mathcal{L}_{X,\aleph_{1}}>\cdots>\mathcal{B}\mathcal{L} _{X,|X|}=\mathcal{B}\mathcal{L}_{X},\]
where \(J_{n}=\{\alpha\in\mathcal{I}\!n\!j_{X}:|X\backslash\operatorname{im}\alpha|=n\}\) (\(n\in\mathbb{N}\)), and \(\mathcal{B}\mathcal{L}_{X}\) is the minimal ideal of \(\mathcal{I}\!n\!j_{X}\)[16, Proposition 2.2, Theorem 2.3, Remark 2.4].
\(\mathcal{B}\!\mathcal{L}_{X}^{1}\)**:** the Baer-Levi semigroup with an identity adjoined.
\(\mathcal{S}_{X}\cup\mathcal{B}\mathcal{L}_{X}\)**:** the submonoid of \(\mathcal{T}_{X}\) consisting of all bijections and all Baer-Levi elements.
- For any subgroup \(G\) of \(\mathcal{S}_{X},\) the set \(G\cup\mathcal{B}\mathcal{L}_{X}\) is a submonoid of \(\mathcal{I}\!n\!j_{X};\) see [3, Exercise 8.1.10] for more information about such monoids.
\(\mathcal{S}\!\mathit{urj}_{X}\)**:** the submonoid of \(\mathcal{T}_{X}\) consisting of all surjective mappings on \(X.\)
\(\mathcal{DBL}_{X,q}\)**- the dual Baer-Levi semigroup of type \(q\) on \(X\):** for an infinite cardinal \(q\leq|X|,\) it is the subsemigroup of \(\mathcal{S}\!\mathit{urj}_{X}\) defined by
\[\mathcal{DBL}_{X,q}=\{\alpha\in\mathcal{S}\!\mathit{urj}_{X}:|x\alpha^{-1}|=q\text{ for all }x\in X\}.\]
\(\mathcal{DBL}_{X}\) - the dual Baer-Levi semigroup on \(X\): \(\mathcal{DBL}_{X,q}\) for \(q=|X|\).
- \(\mathcal{DBL}_{X}\) is the minimal ideal of \(\mathcal{S}\!\mathit{urj}_{X}\)[17, Theorem 3.2].
\(\mathcal{DBL}_{X}^{1}\)**:** the dual Baer-Levi semigroup with an identity adjoined.
\(\mathcal{S}_{X}\cup\mathcal{DBL}_{X}\)**:** the submonoid of \(\mathcal{T}_{X}\) consisting of all bijections and all dual Baer-Levi elements.
\(\mathcal{T}_{X}\backslash\mathcal{Inj}_{X}\)**:** the subsemigroup of \(\mathcal{T}_{X}\) consisting of all non-injective mappings.
\(\mathcal{T}_{X}\backslash\mathcal{S}\!\mathit{urj}_{X}\)**:** the subsemigroup of \(\mathcal{T}_{X}\) consisting of all non-surjective mappings.
\(\mathcal{H}_{X}\)**:** the submonoid of \(\mathcal{T}_{X}\) defined by
\[\mathcal{H}_{X}=\{\alpha\in\mathcal{T}_{X}:|Y\alpha|=|X|\text{ for all }Y\subseteq X \text{ with }|Y|=|X|\}.\]
* \(\mathcal{H}_{X}\) is _bisimple_, meaning that it has a single \(\mathcal{D}\)-class. It was introduced by Higgins in [11] as a means of proving that every semigroup embeds into some bisimple monoid.
* The following are subsemigroups of \(\mathcal{H}_{X}\): \(\mathcal{F}_{X}\); \(\mathcal{Inj}_{X}\) (and hence \(\mathcal{S}_{X}\), \(\mathcal{BL}_{X,q}\) where \(\aleph_{0}\leq q\leq|X|\), \(\mathcal{BL}_{X}^{1}\), and \(\mathcal{S}_{X}\cup\mathcal{BL}_{X}\)); and \(\mathcal{DBL}_{X,q}\) where \(\aleph_{0}\leq q<|X|.\) This is clear in the case of \(\mathcal{Inj}_{X}\); for the other semigroups we provide a brief explanation. Suppose that \(S\) is either \(\mathcal{F}_{X}\) or \(\mathcal{DBL}_{X,q}\) with \(q<|X|\), and consider \(\alpha\in S\) and \(Y\subset X\) such that \(|Y\alpha|<|X|.\) By definition, \(|x\alpha^{-1}|\leq q\) for all \(x\in X\) (in fact, \(|x\alpha^{-1}|<\aleph_{0}\) if \(S=\mathcal{F}_{X}\)). Therefore, using the fact that \(Y\subseteq(Y\alpha)\alpha^{-1}\), we have \[|Y|\leq|(Y\alpha)\alpha^{-1}|=\Big{|}\bigcup_{x\in Y\alpha}x\alpha^{-1}\Big{|} \leq\max(|Y\alpha|,q)<|X|.\]
\(\mathcal{K}_{X}\)**:** the submonoid of \(\mathcal{H}_{X}\) defined by
\[\mathcal{K}_{X}=\{\alpha\in\mathcal{H}_{X}:|X\backslash\operatorname{im} \alpha|<|X|\}.\]
All the semigroups in the above list are subsemigroups of \(\mathcal{T}_{X}\), with the exception of \(\mathcal{B}_{X}\), \(\mathcal{PT}_{X}\) and \(\mathcal{I}_{X}\). All these subsemigroups of \(\mathcal{T}_{X}\) have the following 'transitivity' properties, which will play a key role in the paper.
**Definition 2.8**.: Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\).
* Let \(\kappa\leq|X|\) be a cardinal. We say that \(S\) is _\(\kappa\)-transitive_ if for any partial bijection \(\lambda\in\mathcal{I}_{X}\) with \(|\mathrm{dom}\,\lambda|\leq\kappa\) and \(|X\backslash\mathrm{dom}\,\lambda|=|X\backslash\operatorname{im}\lambda|=|X|\), there exists some \(\theta\in S\) extending \(\lambda\), i.e. \(\theta|_{\mathrm{dom}\,\lambda}=\lambda\).
* We call \(S\)_finitely transitive_ if it is \(\kappa\)-transitive for every finite cardinal \(\kappa<\aleph_{0}\).
**Remark 2.9**.: Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\), and let \(\kappa\leq|X|\).
1. The semigroup \(S\) is \(\kappa\)-transitive if and only if it is \(\mu\)-transitive for every \(\mu\leq\kappa\).
2. If \(\kappa<|X|\), then \(S\) is \(\kappa\)-transitive if and only if for any partial bijection \(\lambda\in\mathcal{I}_{X}\) with \(|\mathrm{dom}\,\lambda|\leq\kappa\) there exists some \(\theta\in S\) extending \(\lambda\).
3. If \(S\) contains a \(\kappa\)-transitive (resp. finitely transitive) subsemigroup \(T,\) then \(S\) is also \(\kappa\)-transitive (resp. finitely transitive).
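For example, the symmetric group \(\mathcal{S}_{X}\) is \(|X|\)-transitive: given a partial bijection \(\lambda\in\mathcal{I}_{X}\) with \(|X\backslash\operatorname{dom}\lambda|=|X\backslash\operatorname{im}\lambda|=|X|,\) any bijection \(\pi:X\backslash\operatorname{dom}\lambda\to X\backslash\operatorname{im}\lambda\) yields a bijection \(\theta=\lambda\cup\pi\in\mathcal{S}_{X}\) extending \(\lambda.\) By Remark 2.9(3), every semigroup in the above list containing \(\mathcal{S}_{X}\) is therefore also \(|X|\)-transitive.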
### 2.3. Summary of results, and diagonal acts
Our main goal in this paper is to answer the following questions for each semigroup \(S\) listed in Section 2.2, as well as the partition monoid \(\mathcal{P}_{X}\) and the partial Brauer monoid \(\mathcal{P}\mathcal{B}_{X}\), which will be defined in Section 5.
1. Is \(S\) finitely generated as a right ideal, i.e. is \(S^{r}\) finitely generated?
2. Is the universal right congruence on \(S\) finitely generated, i.e. is \(\omega_{S}^{r}\) finitely generated?
3. If \(\omega_{S}^{r}\) is finitely generated, what is the right diameter \(D_{r}(S)\)?
4. Is \(S^{l}\) finitely generated?
5. Is \(\omega_{S}^{l}\) finitely generated?
6. If \(\omega_{S}^{l}\) is finitely generated, what is the left diameter \(D_{l}(S)\)?
Our main results are summarised in Table 1.
\begin{table}
\begin{tabular}{|c||c|c|c||c|c|c|} \hline
Semigroup \(S\) & \(S^{r}\) f.g.? & \(\omega_{S}^{r}\) f.g.? & \(D_{r}(S)\) & \(S^{l}\) f.g.? & \(\omega_{S}^{l}\) f.g.? & \(D_{l}(S)\) \\ \hline \hline
\(\mathcal{B}_{X}\) & Yes & Yes & 1 & Yes & Yes & 1 \\
\(\mathcal{PT}_{X}\) & Yes & Yes & 1 & Yes & Yes & 1 \\
\(\mathcal{I}_{X}\) & Yes & Yes & 2 & Yes & Yes & 2 \\
\(\mathcal{T}_{X}\) & Yes & Yes & 1 & Yes & Yes & 1 \\
\(\mathcal{S}_{X}\) & Yes & No & n.a. & Yes & No & n.a. \\
\(\mathcal{F}_{X}\) & Yes & Yes & 1 & Yes & No & n.a. \\
\(\mathcal{I}\!n\!j_{X}\) & Yes & Yes & 4 & Yes & No & n.a. \\
\(\mathcal{BL}_{X,q},\ q<|X|\) & Yes & No & n.a. & No & No & n.a. \\
\(\mathcal{BL}_{X}\) & Yes & Yes & 3 & No & No & n.a. \\
\(\mathcal{BL}_{X}^{1}\) & Yes & Yes & 3 & Yes & No & n.a. \\
\(\mathcal{S}_{X}\cup\mathcal{BL}_{X}\) & Yes & Yes & 4 & Yes & No & n.a. \\
\(\mathcal{S}\!\mathit{urj}_{X}\) & Yes & No & n.a. & Yes & Yes & 4 \\
\(\mathcal{DBL}_{X,q},\ q<|X|\) & No & No & n.a. & Yes & No & n.a. \\
\(\mathcal{DBL}_{X}\) & No & No & n.a. & Yes & Yes & 2 \\
\(\mathcal{DBL}_{X}^{1}\) & Yes & No & n.a. & Yes & Yes & 3 \\
\(\mathcal{S}_{X}\cup\mathcal{DBL}_{X}\) & Yes & No & n.a. & Yes & Yes & 4 \\
\(\mathcal{T}_{X}\setminus\mathcal{I}\!n\!j_{X}\) & No & No & n.a. & Yes & Yes & 2 \\
\(\mathcal{T}_{X}\setminus\mathcal{S}\!\mathit{urj}_{X}\) & Yes & Yes & 2 & No & No & n.a. \\
\(\mathcal{H}_{X}\) & Yes & Yes & 1 & Yes & No & n.a. \\
\(\mathcal{K}_{X}\) & Yes & No & n.a. & Yes & No & n.a. \\
\(\mathcal{P}_{X}\) & Yes & Yes & 1 & Yes & Yes & 1 \\
\(\mathcal{P}\mathcal{B}_{X}\) & Yes & Yes & 1 & Yes & Yes & 1 \\ \hline
\end{tabular}
\end{table}
Table 1. Summary of results.
For certain transformation semigroups \(S,\) we can quickly answer questions (Q1)-(Q6) using known results regarding diagonal acts.
For a semigroup \(S,\) the _diagonal right \(S\)-act_ is the set \(S\times S\) on which \(S\) acts on the right via \((a,b)c=(ac,bc).\) It is said to be _generated_ by a set \(U\subseteq S\times S\) if \(S\times S=US^{1},\) and it is _finitely generated_ or _monogenic_ if it is generated by a finite set or a singleton, respectively. Of course, one can dually define the diagonal left \(S\)-act and its finite generation/monogenicity.
The importance of diagonal acts in relation to the notion of diameter is expressed in the following result.
**Proposition 2.10**.: _[_10_, Proposition 3.6]_ _For a non-trivial semigroup \(S,\) the diagonal right \(S\)-act is finitely generated if and only if \(S\) has right diameter 1._
From the substantial body of results on generation of diagonal acts [7, 8, 9], the main findings concerning natural semigroups of transformations and relations are summarised in Table 2.
We immediately deduce from Table 2 and Proposition 2.10 that \(\mathcal{B}_{X},\)\(\mathcal{PT}_{X}\) and \(\mathcal{T}_{X}\) each have both right diameter 1 and left diameter 1, that \(\mathcal{F}_{X}\) has right diameter 1 but not left diameter 1, and the remaining semigroups appearing in Table 2 have neither right diameter 1 nor left diameter 1. Since \(\mathcal{I}_{X}\) has a zero (the empty map), we deduce, using Corollary 2.5 and its left-right dual, that \(\mathcal{I}_{X}\) has right diameter 2 and left diameter 2.
## 3. Transformation Semigroups: Right Diameter
This section naturally divides into three parts, corresponding to questions (Q1), (Q2) and (Q3) of Section 2.3. Specifically, we first determine for which of the transformation semigroups \(S\) in Table 1 we have that \(S^{r}\) is _not_ finitely generated (and hence \(\omega_{S}^{r}\) is not finitely generated). We then find a number of semigroups \(S\) with \(S^{r}\) finitely generated but \(\omega_{S}^{r}\) not finitely generated. Finally, for each of the remaining semigroups \(S,\) we prove that \(\omega_{S}^{r}\)_is_ finitely generated and determine the right diameter of \(S\) (which turns out to be finite).

\begin{table}
\begin{tabular}{|c||c|c|} \hline
Semigroup & Diagonal right act & Diagonal left act \\ \hline \hline
\(\mathcal{B}_{X}\) & Monogenic & Monogenic \\
\(\mathcal{PT}_{X}\) & Monogenic & Monogenic \\
\(\mathcal{I}_{X}\) & Not f.g. & Not f.g. \\
\(\mathcal{T}_{X}\) & Monogenic & Monogenic \\
\(\mathcal{S}_{X}\) & Not f.g. & Not f.g. \\
\(\mathcal{F}_{X}\) & Monogenic & Not f.g. \\
Infinite subsemigroup of \(\mathcal{I}\!n\!j_{X}\) & Not f.g. & Not f.g. \\
Infinite subsemigroup of \(\mathcal{S}\!\mathit{urj}_{X}\) & Not f.g. & Not f.g. \\
\(\mathcal{T}_{X}\backslash\mathcal{I}\!n\!j_{X}\) & Not f.g. & Not f.g. \\
\(\mathcal{T}_{X}\backslash\mathcal{S}\!\mathit{urj}_{X}\) & Not f.g. & Not f.g. \\ \hline
\end{tabular}
\end{table}
Table 2. Generation of diagonal acts of certain transformation semigroups. See [7, Theorem 7.1 and Lemma 7.2] for the results on infinite subsemigroups of \(\mathcal{S}\!\mathit{urj}_{X}\) and \(\mathcal{I}\!n\!j_{X},\) and [9, Table 1.2] for the remaining semigroups.
Now, it is certainly the case that \(S^{r}\) is finitely generated if \(S\) is a monoid or a right simple semigroup. Moreover, it is straightforward to show that \(\mathcal{T}_{X}\backslash\mathcal{S}\mathit{urj}_{X}\) is generated as a right ideal of itself by any \(\alpha\in\mathcal{I}\mathit{nj}_{X}\backslash\mathcal{S}_{X}\). So, we are left to consider only \(\mathcal{T}_{X}\backslash\mathcal{I}\mathit{nj}_{X}\) and \(\mathcal{DBL}_{X,q}\) (\(\aleph_{0}\leq q\leq|X|\)). It turns out that these are not finitely generated as right ideals of themselves. In fact, we prove a stronger result:
**Theorem 3.1**.: _If \(S\) is a finitely transitive subsemigroup of \(\mathcal{T}_{X}\backslash\mathcal{I}\mathit{nj}_{X}\), then \(S^{r}\) is not finitely generated. In particular, the semigroups \(\mathcal{T}_{X}\backslash\mathcal{I}\mathit{nj}_{X}\) and \(\mathcal{DBL}_{X,q}\) (\(\aleph_{0}\leq q\leq|X|\)) are not finitely generated as right ideals of themselves._
Proof.: Consider any finite subset \(U\subseteq S.\) For each \(\alpha\in U,\) choose \((x_{\alpha},y_{\alpha})\in\ker\alpha\) with \(x_{\alpha}\neq y_{\alpha}\) (such a pair exists because \(\alpha\) is not injective). Since \(S\) is finitely transitive, there exists \(\theta\in S\) such that \(x_{\alpha}\theta=x_{\alpha}\) and \(y_{\alpha}\theta=y_{\alpha}\) for all \(\alpha\in U.\) Thus, we have \((x_{\alpha},y_{\alpha})\notin\ker\theta\) for all \(\alpha\in U,\) and hence \(\theta\notin\alpha S^{1}\) for any \(\alpha\in U,\) so \(S\neq US^{1}\). Hence, \(S^{r}\) is not finitely generated.
We now move on to find certain transformation semigroups \(S\) for which \(S^{r}\) is finitely generated but \(\omega_{S}^{r}\) is _not_ finitely generated. To this end, we first establish a general result regarding generation of the universal right congruence on a subsemigroup of \(\mathcal{T}_{X}\).
Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\). For a subset \(U\subseteq S,\) we define
\[\Sigma(U)=\{\alpha_{1}\beta_{1}^{-1}\ldots\alpha_{k}\beta_{k}^{-1}:k\in\mathbb{ N},\,\alpha_{i},\beta_{i}\in U\,(1\leq i\leq k)\}\subseteq\mathcal{B}_{X}.\]
Observe that for any \(\theta,\varphi\in\mathcal{T}_{X},\) in \(\mathcal{B}_{X}\) we have
\[\theta\varphi^{-1}=\{(x,y)\in X\times X:x\theta=y\varphi\}.\]
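Indeed, \((x,y)\in\theta\varphi^{-1}\) if and only if there exists \(z\in X\) with \((x,z)\in\theta\) and \((z,y)\in\varphi^{-1},\) i.e. \(x\theta=z=y\varphi,\) which happens precisely when \(x\theta=y\varphi.\)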
**Proposition 3.2**.: _Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\), and let \(U\subseteq S\) be a generating set for the universal right congruence \(\omega_{S}^{r}\). Then for any \(\theta,\varphi\in S\) there exists \(\sigma\in\Sigma(U)\) with \(\sigma\subseteq\theta\varphi^{-1}.\)_
Proof.: Let \(\theta,\varphi\in S.\) Suppose first that \(\theta=\varphi.\) Since \(\omega_{S}^{r}\) is generated by \(U,\) there exists \(\alpha\in U\) such that \(\theta\in\alpha S^{1}.\) Then \(\alpha\alpha^{-1}\in\Sigma(U)\) and
\[\alpha\alpha^{-1}=\ker\alpha\subseteq\ker\theta=\theta\theta^{-1}=\theta \varphi^{-1}.\]
Now suppose that \(\theta\neq\varphi.\) Then there exists a \(U\)-sequence
\[\theta=\alpha_{1}\gamma_{1},\,\beta_{1}\gamma_{1}=\alpha_{2}\gamma_{2},\,\ldots,\,\beta_{k}\gamma_{k}=\varphi.\]
Let \(\sigma=\alpha_{1}\beta_{1}^{-1}\ldots\alpha_{k}\beta_{k}^{-1}\in\Sigma(U).\) We claim that \(\sigma\subseteq\theta\varphi^{-1}.\) So, let \((x,y)\in\sigma.\) Then there exist \(u_{1},v_{1},\ldots,u_{k-1},v_{k-1},u_{k}\in X\) such that
\[(x,u_{1})\in\alpha_{1},\,(u_{1},v_{1})\in\beta_{1}^{-1},\,(v_{1},u_{2})\in \alpha_{2},\,\ldots,\,(v_{k-1},u_{k})\in\alpha_{k},\,(u_{k},y)\in\beta_{k}^{-1}.\]
Therefore, we have
\[x\alpha_{1}=u_{1}=v_{1}\beta_{1},\,v_{1}\alpha_{2}=u_{2}=v_{2}\beta_{2},\, \ldots,\,v_{k-1}\alpha_{k}=u_{k}=y\beta_{k}.\]
We then have
\[x\theta=x\alpha_{1}\gamma_{1}=v_{1}\beta_{1}\gamma_{1}=v_{1}\alpha_{2}\gamma_{2}=v _{2}\beta_{2}\gamma_{2}=\cdots=v_{k-1}\alpha_{k}\gamma_{k}=y\beta_{k}\gamma_{k} =y\varphi.\]
Thus \((x,y)\in\theta\varphi^{-1},\) as required.
For the next result, recall that the monoid \(\mathcal{K}_{X},\) defined in Section 2.2, is a submonoid of \(\mathcal{H}_{X},\) and observe that \(\mathcal{K}_{X}\cap\mathcal{I}\!n\!j_{X}=\mathcal{I}\!n\!j_{X}\backslash \mathcal{BL}_{X}\). Note that \(\mathcal{I}\!n\!j_{X}\backslash\mathcal{BL}_{X}\) contains the symmetric group \(\mathcal{S}_{X}\) and the Baer-Levi semigroups \(\mathcal{BL}_{X,q}\) where \(\aleph_{0}\leq q<|X|.\)
**Theorem 3.3**.: _If \(S\) is an \(\aleph_{0}\)-transitive subsemigroup of \(\mathcal{K}_{X}\), then \(\omega_{S}^{r}\) is not finitely generated. In particular, the universal right congruence is not finitely generated for \(\mathcal{K}_{X}\) or for any \(\aleph_{0}\)-transitive subsemigroup of \(\mathcal{I}\!n\!j_{X}\backslash\mathcal{BL}_{X}\) (which includes \(\mathcal{S}_{X}\) and \(\mathcal{BL}_{X,q}\) where \(\aleph_{0}\leq q<|X|\))._
Proof.: First, we claim that for any \(\delta\in\Sigma(S)\) and \(Y\subseteq X\) with \(|Y|=|X|\) we have \(|Y\delta|=|X|.\) Indeed, consider \(\delta=\alpha_{1}\beta_{1}^{-1}\ldots\alpha_{k}\beta_{k}^{-1}\in\Sigma(S)\) (where \(\alpha_{i},\beta_{i}\in S\)) and \(Y\subseteq X\) with \(|Y|=|X|.\) Define \(\delta_{i}=\alpha_{1}\beta_{1}^{-1}\ldots\alpha_{i}\beta_{i}^{-1}\) for \(i=0,\ldots,k,\) interpreting \(\delta_{0}=1_{X}\). We have \(|Y\delta_{0}|=|Y|=|X|.\) Now let \(i\in\{1,\ldots,k\},\) and assume that \(|Y\delta_{i-1}|=|X|.\) Then, since \(\alpha_{i}\in\mathcal{K}_{X}\subseteq\mathcal{H}_{X},\) we have \(|Y\delta_{i-1}\alpha_{i}|=|X|.\) It is straightforward to show that \(Y\delta_{i-1}\alpha_{i}\beta_{i}^{-1}\beta_{i}=Y\delta_{i-1}\alpha_{i}\cap \operatorname{im}\beta_{i}.\) Since
\[Y\delta_{i-1}\alpha_{i}=(Y\delta_{i-1}\alpha_{i}\cap\operatorname{im}\beta_{i} )\cup(Y\delta_{i-1}\alpha_{i}\!\setminus\!\operatorname{im}\beta_{i})=(Y \delta_{i-1}\alpha_{i}\beta_{i}^{-1}\beta_{i})\cup(Y\delta_{i-1}\alpha_{i}\! \setminus\!\operatorname{im}\beta_{i})\]
and \(|Y\delta_{i-1}\alpha_{i}\!\setminus\!\operatorname{im}\beta_{i}|\leq|X\! \setminus\!\operatorname{im}\beta_{i}|<|X|,\) it follows that \(|Y\delta_{i-1}\alpha_{i}\beta_{i}^{-1}\beta_{i}|=|X|.\) Clearly \(|Y\delta_{i}|=|Y\delta_{i-1}\alpha_{i}\beta_{i}^{-1}|\geq|Y\delta_{i-1}\alpha_ {i}\beta_{i}^{-1}\beta_{i}|,\) so \(|Y\delta_{i}|=|X|.\) Hence, by finite induction, we have \(|Y\delta|=|Y\delta_{k}|=|X|.\) This establishes the claim.
Now suppose for a contradiction that \(\omega_{S}^{r}\) is generated by a finite subset \(U\subseteq S,\) and let \(\Sigma=\Sigma(U).\) Since \(\Sigma\) is countable, we may write it as \(\Sigma=\{\sigma_{i}:i\in\mathbb{N}\},\) noting that the \(\sigma_{i}\) need not be distinct. Certainly each \(\sigma_{i}\) belongs to \(\Sigma(S),\) so it satisfies the condition of the above claim. Observe that this implies that \(\sigma_{i}\neq\varnothing.\) We claim that there exist pairs \((x_{i},y_{i})\in\sigma_{i}\) (\(i\in\mathbb{N}\)) such that \(|X\!\setminus\!\{x_{i}:i\in\mathbb{N}\}|=|X\!\setminus\!\{y_{i}:i\in\mathbb{N}\} |=|X|.\) This is clear if \(X\) is uncountable: for each \(i\in\mathbb{N},\) we can choose any pair \((x_{i},y_{i})\in\sigma_{i}.\) Suppose then that \(X\) is countably infinite; we may assume that \(X=\mathbb{N}.\) We choose the pairs \((x_{i},y_{i})\) (\(i\in\mathbb{N}\)) inductively as follows. Choose any \((x_{1},y_{1})\in\sigma_{1}.\) For \(i\geq 2,\) since \(|\{x\in X:x\geq x_{i-1}+2\}\sigma_{i}|=|X|,\) by the above claim, we can choose \((x_{i},y_{i})\in\sigma_{i}\) with \(x_{i}\geq x_{i-1}+2\) and \(y_{i}\geq y_{i-1}+2.\) Then clearly \(X\!\setminus\!\{x_{i}:i\in\mathbb{N}\}\) and \(X\!\setminus\!\{y_{i}:i\in\mathbb{N}\}\) are infinite, as desired.
We now choose injections \(\lambda:\{x_{i}:i\in\mathbb{N}\}\to X\) and \(\mu:\{y_{i}:i\in\mathbb{N}\}\to X\) such that \(\operatorname{im}\lambda\cap\operatorname{im}\mu=\varnothing.\) Since \(S\) is \(\aleph_{0}\)-transitive, there exist \(\theta,\varphi\in S\) extending \(\lambda\) and \(\mu,\) respectively. Then \((x_{i},y_{i})\notin\theta\varphi^{-1}\) for all \(i\in\mathbb{N}.\) Now, by Proposition 3.2 there exists some \(i\in\mathbb{N}\) such that \(\sigma_{i}\subseteq\theta\varphi^{-1}.\) But then \((x_{i},y_{i})\in\sigma_{i}\subseteq\theta\varphi^{-1},\) and we have a contradiction.
**Remark 3.4**.: The statement and proof of Theorem 3.3 would still hold if we replaced 'finitely generated' with 'countably generated'. This is due to the fact that \(\Sigma(U)\) is countable for any countable generating set \(U\) of \(\omega_{S}^{r}.\)
It is well known that \(\mathcal{S}\!\mathit{urj}_{X}\) coincides with the \(\mathcal{L}\)-class of the identity of \(\mathcal{T}_{X}\). It follows that every subsemigroup of \(\mathcal{S}\!\mathit{urj}_{X}\) is \(\mathcal{L}^{*}\)-simple. Thus, by Proposition 2.6, we have:
**Theorem 3.5**.: _If \(S\) is a subsemigroup of \(\mathcal{S}\!\mathit{urj}_{X}\) such that \(\omega_{S}^{r}\) is finitely generated, then \(S\) is countable. Thus, the universal right congruence on each of the following semigroups is not finitely generated: \(\mathcal{S}\!\mathit{urj}_{X}\); \(\mathcal{DBL}_{X,q}\) where \(\aleph_{0}\leq q\leq|X|\); \(\mathcal{DBL}_{X}^{1}\); \(\mathcal{S}_{X}\cup\mathcal{DBL}_{X}\); and \(\mathcal{S}_{X}\)._
The semigroups left to consider in this section are \(\mathcal{H}_{X}\), \(\mathcal{T}_{X}\backslash\mathcal{S}\!\mathit{urj}_{X},\mathcal{BL}_{X}\), \(\mathcal{BL}_{X}^{1}\), \(\mathcal{S}_{X}\cup\mathcal{BL}_{X}\) and \(\mathcal{I}\!\mathit{nj}_{X}\). For each of these semigroups, we will show that the universal right congruence is finitely generated and determine the right diameter.
First, we establish certain mappings that will be used repeatedly in the remainder of this section. These were introduced in [8, Section 2] (in the form of binary relations) to prove that \(\mathcal{B}_{X}\), \(\mathcal{PT}_{X}\), \(\mathcal{T}_{X}\) and \(\mathcal{F}_{X}\) each has a monogenic diagonal right act. We use the 'hat' notation to distinguish these mappings from other transformations.
So, let \(\widehat{\alpha},\widehat{\beta}\in\mathcal{T}_{X}\) be two fixed injections such that \(\mathrm{im}\,\widehat{\alpha}\cap\mathrm{im}\,\widehat{\beta}=\varnothing\) and \(\mathrm{im}\,\widehat{\alpha}\cup\mathrm{im}\,\widehat{\beta}=X.\) Note that \(\widehat{\alpha},\widehat{\beta}\in\mathcal{BL}_{X}\). For each pair \(\theta,\varphi\in\mathcal{T}_{X}\), we define a map
\[\widehat{\gamma}(\theta,\varphi):X\to X,x\mapsto\begin{cases}(x\widehat{ \alpha}^{-1})\theta&\text{ if }x\in\mathrm{im}\,\widehat{\alpha}\\ (x\widehat{\beta}^{-1})\varphi&\text{ if }x\in\mathrm{im}\,\widehat{\beta}. \end{cases}\]
Observe that \(\mathrm{im}\,\widehat{\gamma}(\theta,\varphi)=\mathrm{im}\,\theta\cup\mathrm{ im}\,\varphi\), and \(\widehat{\alpha}\widehat{\gamma}(\theta,\varphi)=\theta\) and \(\widehat{\beta}\widehat{\gamma}(\theta,\varphi)=\varphi.\) It follows immediately that \(\mathcal{T}_{X}\times\mathcal{T}_{X}=(\widehat{\alpha},\widehat{\beta})\, \mathcal{T}_{X}\,.\) Moreover, clearly \(\widehat{\alpha},\widehat{\beta}\in\mathcal{F}_{X}\), and if \(\theta,\varphi\in\mathcal{F}_{X}\) then \(\widehat{\gamma}(\theta,\varphi)\in\mathcal{F}_{X}\), so we have \(\mathcal{F}_{X}\times\mathcal{F}_{X}=(\widehat{\alpha},\widehat{\beta})\, \mathcal{F}_{X}\,.\)
We fix the maps \(\widehat{\alpha}\), \(\widehat{\beta}\) and \(\widehat{\gamma}(\theta,\varphi)\)\((\theta,\varphi\in\mathcal{T}_{X})\) for the remainder of this section.
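For a concrete realisation in the countable case (given only for orientation; none of the arguments below depend on it), take \(X=\{0,1,2,\ldots\}\) with \(\widehat{\alpha}:n\mapsto 2n\) and \(\widehat{\beta}:n\mapsto 2n+1,\) so that \(\operatorname{im}\widehat{\alpha}\) and \(\operatorname{im}\widehat{\beta}\) are the sets of even and odd numbers, respectively. Then, for \(\theta,\varphi\in\mathcal{T}_{X},\) the map \(\widehat{\gamma}(\theta,\varphi)\) sends \(2n\mapsto n\theta\) and \(2n+1\mapsto n\varphi,\) and one checks directly that \(\widehat{\alpha}\widehat{\gamma}(\theta,\varphi)=\theta\) and \(\widehat{\beta}\widehat{\gamma}(\theta,\varphi)=\varphi.\)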
**Definition 3.6**.: Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\) such that \(\widehat{\alpha},\widehat{\beta}\in S,\) let \(\theta,\varphi\in S,\) and let \(k\in\mathbb{N}.\) By an \((\widehat{\alpha},\widehat{\beta},k)\)_-inducing sequence from \(\theta\) to \(\varphi\)_ in \(S,\) we mean a sequence
\[\theta=\psi_{1},\psi_{2},\ldots,\psi_{k+1}=\varphi\]
of elements of \(S\) where \(\widehat{\gamma}(\psi_{i},\psi_{i+1})\in S\) for each \(i\in\{1,\ldots,k\}.\)
An \((\widehat{\alpha},\widehat{\beta},k)\)-inducing sequence gives rise to a special kind of \((\widehat{\alpha},\widehat{\beta})\)-sequence of length \(k,\) and vice versa:
**Lemma 3.7**.: _Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\) such that \(\widehat{\alpha},\widehat{\beta}\in S,\) let \(\theta,\varphi\in S,\) and let \(k\in\mathbb{N}.\) Then the following statements are equivalent._
1. _There exists an_ \((\widehat{\alpha},\widehat{\beta},k)\)_-inducing sequence_ \[\theta=\psi_{1},\psi_{2},\ldots,\psi_{k+1}=\varphi\] _from_ \(\theta\) _to_ \(\varphi\) _in_ \(S.\)__
2. _There exists an_ \((\widehat{\alpha},\widehat{\beta})\)_-sequence_ \[\theta=\widehat{\alpha}\gamma_{1},\,\widehat{\beta}\gamma_{1}=\widehat{ \alpha}\gamma_{2},\,\ldots,\,\widehat{\beta}\gamma_{k-1}=\widehat{\alpha} \gamma_{k},\,\widehat{\beta}\gamma_{k}=\varphi\] _from_ \(\theta\) _to_ \(\varphi\) _of length_ \(k\) _in_ \(S.\)__
Proof.: (1)\(\Rightarrow\)(2). By the definition of an \((\widehat{\alpha},\widehat{\beta},k)\)-inducing sequence, we have \(\widehat{\gamma}(\psi_{i},\psi_{i+1})\in S\) for each \(i\in\{1,\ldots,k\}.\) Letting \(\gamma_{i}=\widehat{\gamma}(\psi_{i},\psi_{i+1}),\) we have \(\widehat{\alpha}\gamma_{i}=\psi_{i}\) and \(\widehat{\beta}\gamma_{i}=\psi_{i+1}.\) Hence, there is an \((\widehat{\alpha},\widehat{\beta})\)-sequence
\[\theta=\widehat{\alpha}\gamma_{1},\,\widehat{\beta}\gamma_{1}=\widehat{\alpha }\gamma_{2},\,\ldots,\,\widehat{\beta}\gamma_{k-1}=\widehat{\alpha}\gamma_{k},\,\widehat{\beta}\gamma_{k}=\varphi\]
in \(S.\)
(2)\(\Rightarrow\)(1). By the definition of an \((\widehat{\alpha},\widehat{\beta})\)-sequence in \(S,\) we have \(\gamma_{i}\in S\) for \(1\leq i\leq k.\) Let \(\psi_{i}=\widehat{\alpha}\gamma_{i}\) for each \(i\in\{1,\ldots,k\},\) and let \(\psi_{k+1}=\varphi.\) Then \(\widehat{\gamma}(\psi_{i},\psi_{i+1})=\widehat{\gamma}(\widehat{\alpha}\gamma _{i},\widehat{\beta}\gamma_{i})\) for each \(i\in\{1,\ldots,k\}.\) Consider any \(x\in X.\) Then \(x\in\operatorname{im}\widehat{\alpha}\) or \(x\in\operatorname{im}\widehat{\beta}.\) If \(x\in\operatorname{im}\widehat{\alpha}\) then
\[x\widehat{\gamma}(\psi_{i},\psi_{i+1})=x\widehat{\gamma}(\widehat{\alpha} \gamma_{i},\widehat{\beta}\gamma_{i})=(x\widehat{\alpha}^{-1})\widehat{\alpha }\gamma_{i}=x\gamma_{i}.\]
Similarly, if \(x\in\operatorname{im}\widehat{\beta}\) then \(x\widehat{\gamma}(\psi_{i},\psi_{i+1})=x\gamma_{i}\). Thus \(\widehat{\gamma}(\psi_{i},\psi_{i+1})=\gamma_{i}\in S.\) Hence, there is an \((\widehat{\alpha},\widehat{\beta},k)\)-inducing sequence
\[\theta=\psi_{1},\psi_{2},\ldots,\psi_{k+1}=\varphi\]
in \(S.\)
Lemma 3.7 yields the following result.
**Proposition 3.8**.: _Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\) such that:_
1. \(\widehat{\alpha},\widehat{\beta}\in S\)_;_
2. _there exists_ \(n\in\mathbb{N}\) _such that for any pair_ \(\theta,\varphi\in S\) _there is an_ \((\widehat{\alpha},\widehat{\beta},k)\)_-inducing sequence from_ \(\theta\) _to_ \(\varphi\) _in_ \(S\) _for some_ \(k\leq n.\)__
_Then \(\omega_{S}^{r}\) is generated by the pair \((\widehat{\alpha},\widehat{\beta})\) and \(D_{r}(S)\leq n.\) Furthermore, if \(n=1\) (so that \(\widehat{\gamma}(\theta,\varphi)\in S\) for any \(\theta,\varphi\in S\)), then the diagonal right \(S\)-act is generated by \((\widehat{\alpha},\widehat{\beta})\) (and is hence monogenic)._
Using Proposition 3.8, we show that the diagonal right act of \(\mathcal{H}_{X}\) is monogenic.
**Theorem 3.9**.: _The diagonal right act of \(\mathcal{H}_{X}\) is generated by \((\widehat{\alpha},\widehat{\beta}),\) and consequently \(\mathcal{H}_{X}\) has right diameter 1._
Proof.: Clearly \(\widehat{\alpha},\widehat{\beta}\in\mathcal{H}_{X}\). Let \(\theta,\varphi\in\mathcal{H}_{X},\) and write \(\gamma=\widehat{\gamma}(\theta,\varphi).\) By Proposition 3.8, it suffices to prove that \(\gamma\in\mathcal{H}_{X}\). So, let \(Y\subseteq X\) with \(|Y|=|X|.\) We have
\[Y\gamma=(Y\cap\operatorname{im}\widehat{\alpha})\widehat{\alpha}^{-1}\theta \cup(Y\cap\operatorname{im}\widehat{\beta})\widehat{\beta}^{-1}\varphi.\]
Since \(Y=(Y\cap\operatorname{im}\widehat{\alpha})\cup(Y\cap\operatorname{im}\widehat{\beta}),\) and \(\widehat{\alpha},\widehat{\beta}\) are injective, it follows that at least one of \((Y\cap\operatorname{im}\widehat{\alpha})\widehat{\alpha}^{-1}\) and \((Y\cap\operatorname{im}\widehat{\beta})\widehat{\beta}^{-1}\) has cardinality \(|X|.\) Since \(\theta,\varphi\in\mathcal{H}_{X},\) we conclude that at least one of \((Y\cap\operatorname{im}\widehat{\alpha})\widehat{\alpha}^{-1}\theta\) and \((Y\cap\operatorname{im}\widehat{\beta})\widehat{\beta}^{-1}\varphi\) has cardinality \(|X|,\) and hence \(|Y\gamma|=|X|.\) Thus \(\gamma\in\mathcal{H}_{X},\) as required.
By the proof of [11, Corollary 1], any semigroup can be embedded in some \(\mathcal{H}_{X}\). This fact, together with Theorem 3.9, yields:
**Corollary 3.10**.: _Any semigroup can be embedded in a bisimple monoid whose diagonal right act is monogenic._
We now move on to consider \(\mathcal{T}_{X}\backslash\mathcal{S}\!\mathit{urj}_{X}.\)
**Theorem 3.11**.: _The semigroup \(\mathcal{T}_{X}\backslash\mathcal{S}\!\mathit{urj}_{X}\) has right diameter 2._
Proof.: Let \(S=\mathcal{T}_{X}\backslash\mathcal{S}\!\mathit{urj}_{X}\). By Table 2 and Proposition 2.10, \(S\) does not have right diameter 1. Using Proposition 3.8, we show that \(\omega_{S}^{r}=\langle(\widehat{\alpha},\widehat{\beta})\rangle\) with \(D_{r}(S)\leq 2,\) and hence \(D_{r}(S)=2.\)
Clearly \(\widehat{\alpha},\widehat{\beta}\in S.\) Consider any \(\theta,\varphi\in S.\) Since \(\theta\) and \(\varphi\) are not surjective, we can choose \(y\in X\) such that \(\mbox{im}\,\theta\cup\{y\}\neq X\) and \(\mbox{im}\,\varphi\cup\{y\}\neq X.\) Letting \(c_{y}\) denote the constant map with image \(y\), we have \(\mbox{im}\,\widehat{\gamma}(\theta,c_{y})=\mbox{im}\,\theta\cup\{y\}\neq X\) and \(\mbox{im}\,\widehat{\gamma}(c_{y},\varphi)=\mbox{im}\,\varphi\cup\{y\}\neq X,\) so that \(\widehat{\gamma}(\theta,c_{y}),\widehat{\gamma}(c_{y},\varphi)\in S.\) Thus, we have an \((\widehat{\alpha},\widehat{\beta},2)\)-inducing sequence \(\theta,\,c_{y},\,\varphi,\) as required.
We now turn our attention to \(\mathcal{BL}_{X},\,\mathcal{BL}_{X}^{1},\,\mathcal{S}_{X}\cup\mathcal{BL}_{X}\) and \(\mathcal{I}\mbox{\it nj}_{X}\). In fact, we will obtain results concerning a larger class of subsemigroups of \(\mathcal{I}\mbox{\it nj}_{X}.\) We begin with the following technical lemma.
**Lemma 3.12**.: _For any \(\theta,\varphi\in\mathcal{BL}_{X}\) such that \(|X\backslash(\mbox{im}\,\theta\cup\mbox{im}\,\varphi)|=|X|,\) there exists an \((\widehat{\alpha},\widehat{\beta})\)-sequence from \(\theta\) to \(\varphi\) of length 2 (in \(\mathcal{BL}_{X}\))._
Proof.: Let \(S=\mathcal{BL}_{X}\). By Lemma 3.7, it suffices to show that there exists an \((\widehat{\alpha},\widehat{\beta},2)\)-inducing sequence from \(\theta\) to \(\varphi\); that is, there exists \(\lambda\in S\) such that \(\widehat{\gamma}(\theta,\lambda),\widehat{\gamma}(\lambda,\varphi)\in S.\)
Let \(Z=\mbox{im}\,\theta\cup\mbox{im}\,\varphi.\) Then \(|X\backslash\,Z|=|X|\) by assumption. Let \(\lambda:X\to X\,\backslash\,Z\) be an injection such that \(|X\setminus(Z\cup\mbox{im}\,\lambda)|=|X|.\) Clearly \(\lambda\in S.\) Let \(\gamma_{1}=\widehat{\gamma}(\theta,\lambda)\) and \(\gamma_{2}=\widehat{\gamma}(\lambda,\varphi).\) It is straightforward to show that \(\gamma_{1},\gamma_{2}\in\mathcal{I}\mbox{\it nj}_{X}\). Moreover, we have
\[\mbox{im}\,\gamma_{1}=\mbox{im}\,\theta\cup\mbox{im}\,\lambda\subseteq Z\cup \mbox{im}\,\lambda,\]
so \(|X\backslash\mbox{im}\,\gamma_{1}|\geq|X\setminus(Z\cup\mbox{im}\,\lambda)|=|X|.\) Thus \(\gamma_{1}\in S,\) and similarly \(\gamma_{2}\in S,\) as desired.
The following result provides several equivalent characterisations for an \(|X|\)-transitive subsemigroup of \(\mathcal{I}\mbox{\it nj}_{X}\) to have right diameter 3 or 4.
**Proposition 3.13**.: _For a subsemigroup \(S\) of \(\mathcal{I}\mbox{\it nj}_{X}\), the following are equivalent:_
1. \(S\) _is_ \(|X|\)_-transitive,_ \(\omega_{S}^{r}\) _is finitely generated and_ \(D_{r}(S)\in\{3,4\}\)_;_
2. \(S\) _is_ \(|X|\)_-transitive and_ \(\omega_{S}^{r}\) _is finitely generated;_
3. \(S\) _is_ \(|X|\)_-transitive,_ \(S^{r}\) _is finitely generated and_ \(S\cap\mathcal{BL}_{X}\neq\varnothing\)_;_
4. \(S^{r}\) _is finitely generated and_ \(S\) _contains_ \(\mathcal{BL}_{X}\)_._
Proof.: (1)\(\Rightarrow\)(2) is trivial.
(2)\(\Rightarrow\)(3). By Lemma 2.3, \(S^{r}\) is finitely generated, and it follows from Theorem 3.3 that \(S\cap\mathcal{BL}_{X}\neq\varnothing.\)
(3)\(\Rightarrow\)(4). Fix any \(\alpha\in S\cap\mathcal{BL}_{X}\), and consider an arbitrary \(\beta\in\mathcal{BL}_{X}\). Then \(\alpha^{-1}\beta\in\mathcal{I}_{X}\). We have \(\mbox{dom}\,\alpha^{-1}\beta=\mbox{im}\,\alpha\) and \(\mbox{im}\,\alpha^{-1}\beta=\mbox{im}\,\beta\), so that
\[|X\backslash\mbox{dom}\,\alpha^{-1}\beta|=|X\backslash\mbox{im}\,\alpha|=|X| \quad\mbox{ and }\quad|X\backslash\mbox{im}\,\alpha^{-1}\beta|=|X\backslash\mbox{im}\,\beta|=|X|.\]
Since \(S\) is \(|X|\)-transitive, there exists \(\gamma\in S\) extending \(\alpha^{-1}\beta,\) i.e. \(\gamma|_{\operatorname{im}\alpha}=\alpha^{-1}\beta.\) Therefore, for each \(x\in X\) we have \((x\alpha)\gamma=(x\alpha)\alpha^{-1}\beta=x\beta,\) so that \(\beta=\alpha\gamma\in S.\) Thus \(\mathcal{BL}_{X}\subseteq S.\)
(4)\(\Rightarrow\)(1). Since \(S\) contains \(\mathcal{BL}_{X},\) which is \(|X|\)-transitive, \(S\) is \(|X|\)-transitive.
We now prove that \(\omega_{S}^{r}\) is finitely generated with \(D_{r}(S)\leq 4.\) By assumption, there exists a finite subset \(V\subset S\) such that \(S^{r}=VS^{1}\). Let \(K=\mathcal{BL}_{X},\) and recall that \(\widehat{\alpha},\widehat{\beta}\in K.\) Letting \(U=V\cup\{\widehat{\alpha},\widehat{\beta}\},\) we shall prove that \(\omega_{S}^{r}=\langle U\rangle\) with \(D_{r}(S,U)\leq 4.\)
So, let \(\theta,\varphi\in S.\) We claim that there exist \(\theta^{\prime},\varphi^{\prime}\in K\) such that the pairs \((\theta,\theta^{\prime}),(\varphi^{\prime},\varphi)\in\omega_{S}^{r}\) are each obtained by a single application of a pair from \(U\times U,\) and \(|X\backslash(\operatorname{im}\theta^{\prime}\cup\operatorname{im}\varphi^{\prime})|=|X|.\)
Indeed, we have \(\theta=\gamma\sigma\) and \(\varphi=\delta\tau\) for some \(\gamma,\delta\in V\) and \(\sigma,\tau\in S^{1}.\) Let \(\theta^{\prime}=\widehat{\alpha}\sigma.\) Then clearly \((\theta,\theta^{\prime})\) is obtained by a single application of the pair \((\gamma,\widehat{\alpha})\in U\times U.\) Now, since \(\operatorname{im}\widehat{\alpha}\cap\operatorname{im}\widehat{\beta}=\varnothing\) and \(\tau\) is injective, we have \(\operatorname{im}\widehat{\alpha}\tau\cap\operatorname{im}\widehat{\beta}\tau=\varnothing.\) Thus,
\[(X\backslash\operatorname{im}\theta^{\prime})\cap\operatorname{im}\widehat{ \alpha}\tau\subseteq X\backslash(\operatorname{im}\theta^{\prime}\cup \operatorname{im}\widehat{\beta}\tau).\]
Therefore, we have
\[X\backslash\operatorname{im}\theta^{\prime} =X\backslash(\operatorname{im}\theta^{\prime}\cup\operatorname{ im}\widehat{\alpha}\tau)\cup\big{(}(X\backslash\operatorname{im}\theta^{ \prime})\cap\operatorname{im}\widehat{\alpha}\tau\big{)}\] \[\subseteq X\backslash(\operatorname{im}\theta^{\prime}\cup \operatorname{im}\widehat{\alpha}\tau)\cup X\backslash(\operatorname{im} \theta^{\prime}\cup\operatorname{im}\widehat{\beta}\tau).\]
Since \(|X\backslash\operatorname{im}\theta^{\prime}|=|X|\) (as \(\theta^{\prime}\in K\)), it follows that either
\[|X\backslash(\operatorname{im}\theta^{\prime}\cup\operatorname{im}\widehat{ \alpha}\tau)|=|X|\quad\text{or}\quad|X\backslash(\operatorname{im}\theta^{ \prime}\cup\operatorname{im}\widehat{\beta}\tau)|=|X|.\]
If \(|X\backslash(\operatorname{im}\theta^{\prime}\cup\operatorname{im}\widehat{ \alpha}\tau)|=|X|,\) we set \(\varphi^{\prime}=\widehat{\alpha}\tau;\) otherwise, we set \(\varphi^{\prime}=\widehat{\beta}\tau.\) Then \((\varphi^{\prime},\varphi)\) is obtained by a single application of either \((\widehat{\alpha},\delta)\) or \((\widehat{\beta},\delta),\) and \(|X\backslash(\operatorname{im}\theta^{\prime}\cup\operatorname{im}\varphi^{ \prime})|=|X|.\) This completes the proof of the claim.
Now, by Lemma 3.12, there exists an \((\widehat{\alpha},\widehat{\beta})\)-sequence from \(\theta^{\prime}\) to \(\varphi^{\prime}\) of length \(2\) in \(K\subseteq S.\) It follows that there is a \(U\)-sequence from \(\theta\) to \(\varphi\) of length \(4\). Hence, \(D_{r}(S,U)\leq 4,\) as desired.
Now, to prove the lower bound of \(3\) for \(D_{r}(S),\) suppose for a contradiction that \(D_{r}(S,U)\leq 2\) for some finite set \(U\subseteq S\times S.\)
We say that a pair \((\gamma,\delta)\in S\times S\) is _disjoint_ if \(\operatorname{im}\gamma\cap\operatorname{im}\delta=\varnothing,\) and _intersecting_ otherwise. We may assume that \(U\) contains an intersecting pair, for otherwise we can add such a pair to \(U.\) Let \(\{(\gamma_{i},\delta_{i}):1\leq i\leq n\}\) be the set of intersecting pairs in \(U.\) For each \(i\in\{1,\ldots,n\}\) choose \(x_{i}\in X\) such that \(x_{i}\gamma_{i}\in\operatorname{im}\delta_{i},\) and let \(y_{i}=x_{i}\gamma_{i}\delta_{i}^{-1}\). Now let
\[Q=\big{\{}(j,k):j,k\in\{1,\ldots,n\},\,y_{j}\in\operatorname{im}\delta_{k} \gamma_{k}^{-1}\big{\}}.\]
Choose
\[w_{1},\ldots,w_{n}\in X\backslash\{y_{i},\,y_{j}\gamma_{k}\delta_{k}^{-1}:1 \leq i\leq n,\,(j,k)\in Q\}\]
such that \(w_{i}=w_{j}\) if and only if \(x_{i}=x_{j}\). Fix any \(\theta\in\mathcal{BL}_{X}\) (\(\subseteq S\)), and note that \(|\operatorname{im}\theta|=|X\backslash\operatorname{im}\theta|=|X|.\) Choose
\[z_{1},\ldots,z_{n}\in\operatorname{im}\theta\backslash\{x_{i}\theta:1\leq i \leq n\}\]
such that \(z_{i}=z_{j}\) if and only if \(y_{i}=y_{j}\). Note that the sets
\[A=\{w_{i},y_{i}:1\leq i\leq n\}\quad\text{and}\quad B=\{x_{i}\theta,z_{i}:1\leq i \leq n\}\]
are finite and have the same cardinality. Choose a bijection
\[\lambda:X\!\setminus\!(\operatorname{im}\theta\cup A)\to X\!\setminus\!( \operatorname{im}\theta\cup B),\]
and extend \(\lambda\) to a bijection
\[\lambda^{\prime}:(X\!\setminus\!\operatorname{im}\theta)\cup A\to(X\! \setminus\!\operatorname{im}\theta)\cup B\]
by setting \(w_{i}\lambda^{\prime}=x_{i}\theta\) and \(y_{i}\lambda^{\prime}=z_{i}\) (\(1\leq i\leq n\)). We have
\[X\!\setminus\!\operatorname{dom}\lambda^{\prime}=\operatorname{im}\theta \!\setminus\!A\quad\text{and}\quad X\!\setminus\!\operatorname{im}\lambda^{ \prime}=\operatorname{im}\theta\!\setminus\!B.\]
Since \(|\operatorname{im}\theta|=|X|\) and \(A\) and \(B\) are finite, we have
\[|X\!\setminus\!\operatorname{dom}\lambda^{\prime}|=|X\!\setminus\! \operatorname{im}\lambda^{\prime}|=|X|.\]
Since \(S\) is \(|X|\)-transitive, there exists some \(\varphi\in S\) extending \(\lambda^{\prime}.\) Note that \(\operatorname{im}\theta\cup\operatorname{im}\varphi=X.\) Since \(D_{r}(S,U)\leq 2,\) there exists a \(U\)-sequence
\[\theta=\gamma\sigma,\ \delta\sigma=\gamma^{\prime}\sigma^{\prime},\ \delta^{ \prime}\sigma^{\prime}=\varphi.\]
First suppose that \((\gamma,\delta)\) and \((\gamma^{\prime},\delta^{\prime})\) are disjoint pairs. If the pair \((\theta,\delta\sigma)\) were intersecting, then there would exist \(x,y\in X\) such that \(x\theta=(x\gamma)\sigma=y\delta\sigma,\) but then \(x\gamma=y\delta\) since \(\sigma\) is injective, contradicting that \((\gamma,\delta)\) is disjoint. Thus \((\theta,\delta\sigma)\) is disjoint, and similarly \((\delta\sigma,\varphi)=(\gamma^{\prime}\sigma^{\prime},\varphi)\) is disjoint. Thus, we have
\[\operatorname{im}\delta\sigma\cap(\operatorname{im}\theta\cup\operatorname{ im}\varphi)=\varnothing.\]
But \(\operatorname{im}\theta\cup\operatorname{im}\varphi=X,\) so we have a contradiction. We conclude that at least one of \((\gamma,\delta)\) and \((\gamma^{\prime},\delta^{\prime})\) is intersecting.
Suppose first that \((\gamma,\delta)\) is intersecting, so that \((\gamma,\delta)=(\gamma_{j},\delta_{j})\) for some \(j\in\{1,\ldots,n\}.\) We then have
\[y_{j}\gamma^{\prime}\sigma^{\prime}=y_{j}\delta_{j}\sigma=x_{j}\gamma_{j} \sigma=x_{j}\theta=w_{j}\varphi=w_{j}\delta^{\prime}\sigma^{\prime}.\]
Since \(\sigma^{\prime}\) is injective, we have \(y_{j}\gamma^{\prime}=w_{j}\delta^{\prime}.\) Thus \((\gamma^{\prime},\delta^{\prime})\) is intersecting, so that \((\gamma^{\prime},\delta^{\prime})=(\gamma_{k},\delta_{k})\) for some \(k\in\{1,\ldots,n\}.\) Hence \(y_{j}\gamma_{k}=w_{j}\delta_{k}.\) But then \((j,k)\in Q\) and \(w_{j}=y_{j}\gamma_{k}\delta_{k}^{-1},\) contradicting the choice of \(w_{j}.\)
Now suppose that \((\gamma^{\prime},\delta^{\prime})\) is intersecting, so that \((\gamma^{\prime},\delta^{\prime})=(\gamma_{l},\delta_{l})\) for some \(l\in\{1,\ldots,n\}.\) Since \(z_{l}\in\operatorname{im}\theta,\) there exists some \(x\in X\) such that \(z_{l}=x\theta.\) Thus, we have
\[x\gamma\sigma=x\theta=z_{l}=y_{l}\varphi=y_{l}\delta_{l}\sigma^{\prime}=x_{l} \gamma_{l}\sigma^{\prime}=x_{l}\delta\sigma.\]
Since \(\sigma\) is injective, it follows that \(x\gamma=x_{l}\delta.\) But it has already been established that \((\gamma,\delta)\) is not intersecting, so we have a contradiction. Thus \(D_{r}(S)\geq 3.\) This completes the proof of (4) \(\Rightarrow\) (1) and hence of the proposition.
It follows immediately from Proposition 3.13 that each of \(\mathcal{BL}_{X}\), \(\mathcal{BL}_{X}^{1}\), \(\mathcal{S}_{X}\cup\mathcal{BL}_{X}\) and \(\mathcal{I\!n\!j}_{X}\) has right diameter either \(3\) or \(4\). We shall prove that the former two have right diameter \(3\) and the latter two have right diameter \(4\).
Since \(\mathcal{I}\!n\!j_{X}\backslash\mathcal{BL}_{X}\) is a submonoid of \(\mathcal{I}\!n\!j_{X},\) for any subsemigroup \(S\) of \(\mathcal{I}\!n\!j_{X}\) we have that \(S\setminus\mathcal{BL}_{X}\) is a (possibly empty) subsemigroup of \(S.\) If \(S\backslash\mathcal{BL}_{X}\) is finite and non-empty, then it is a subgroup of \(\mathcal{S}_{X};\) this follows from the fact that \(\mathcal{I}\!n\!j_{X}\backslash\mathcal{S}_{X}\) contains no idempotents.
**Theorem 3.14**.: _If \(S\) is a semigroup such that \(\mathcal{BL}_{X}\leq S\leq\mathcal{I}\!n\!j_{X}\) and \(S\setminus\mathcal{BL}_{X}\) is finite, then \(D_{r}(S)=3.\) In particular, \(\mathcal{BL}_{X}\) and \(\mathcal{BL}_{X}^{1}\) have right diameter 3._
Proof.: By Proposition 3.13, it suffices to prove that \(D_{r}(S)\leq 3.\)
Let \(U=(S\backslash\mathcal{BL}_{X})\cup\{\widehat{\alpha},\widehat{\beta}\}.\) Certainly \(U\) is finite since \(S\backslash\mathcal{BL}_{X}\) is finite. We shall prove that \(\omega_{S}^{r}=\langle U\rangle\) and \(D_{r}(S,U)\leq 3.\)
So, consider \(\theta,\varphi\in S.\) If \(\theta,\varphi\in S\backslash\mathcal{BL}_{X},\) then \(\theta,\varphi\in U,\) so clearly there is a \(U\)-sequence from \(\theta\) to \(\varphi\) of length 1. Assume then that \(\theta\in\mathcal{BL}_{X},\) and suppose first that \(\varphi\in S\backslash\mathcal{BL}_{X}.\) Since \(X=\operatorname{im}\widehat{\alpha}\cup\operatorname{im}\widehat{\beta}\) and \(|X\backslash\operatorname{im}\theta|=|X|,\) it follows that either \(|X\backslash(\operatorname{im}\theta\cup\operatorname{im}\widehat{\alpha})|=|X|\) or \(|X\setminus(\operatorname{im}\theta\cup\operatorname{im}\widehat{\beta})|=|X|.\) Assume without loss of generality that \(|X\setminus(\operatorname{im}\theta\cup\operatorname{im}\widehat{\alpha})|=|X|.\) Then, by Lemma 3.12, there exists an \((\widehat{\alpha},\widehat{\beta})\)-sequence from \(\theta\) to \(\widehat{\alpha}\) of length 2. Since \(\widehat{\alpha},\varphi\in U,\) we conclude that there is a \(U\)-sequence from \(\theta\) to \(\varphi\) of length 3.
Finally, suppose that \(\varphi\in\mathcal{BL}_{X}.\) If \(|X\setminus(\operatorname{im}\theta\cup\operatorname{im}\varphi)|=|X|,\) then by Lemma 3.12 there exists an \((\widehat{\alpha},\widehat{\beta})\)-sequence of length 2 from \(\theta\) to \(\varphi.\) Suppose then that \(|X\backslash(\operatorname{im}\theta\cup\operatorname{im}\varphi)|<|X|.\) Let \(Y=\operatorname{im}\varphi\setminus\operatorname{im}\theta.\) Since
\[X\setminus\operatorname{im}\theta=\big{(}X\setminus(\operatorname{im}\theta \cup\operatorname{im}\varphi)\big{)}\cup Y\]
and \(|X\setminus\operatorname{im}\theta|=|X|,\) it follows that \(|Y|=|X|.\) Let \(\lambda:X\to Y\) be an injection such that \(|Y\setminus\operatorname{im}\lambda|=|X|,\) and let \(\gamma=\widehat{\gamma}(\theta,\lambda).\) Clearly \(\gamma\) is an injection (and hence \(|\operatorname{im}\gamma|=|X|\)). Also, we have
\[Y\setminus\operatorname{im}\lambda\subseteq X\setminus(\operatorname{im} \theta\cup\operatorname{im}\lambda)=X\setminus\operatorname{im}\gamma.\]
Since \(|Y\setminus\operatorname{im}\lambda|=|X|,\) it follows that \(|X\setminus\operatorname{im}\gamma|=|X|.\) Thus \(\gamma\in S.\) Recall that \(\theta=\widehat{\alpha}\gamma\) and \(\lambda=\widehat{\beta}\gamma.\) Now, since \(\operatorname{im}\lambda\subseteq Y\subseteq\operatorname{im}\varphi,\) we have that
\[|X\setminus(\operatorname{im}\lambda\cup\operatorname{im}\varphi)|=|X\setminus \operatorname{im}\varphi|=|X|.\]
Therefore, by Lemma 3.12, there exists an \((\widehat{\alpha},\widehat{\beta})\)-sequence from \(\lambda\) to \(\varphi\) of length 2. Hence, we have an \((\widehat{\alpha},\widehat{\beta})\)-sequence from \(\theta\) to \(\varphi\) of length 3. This completes the proof.
Next, we show that a subsemigroup \(S\) of \(\mathcal{I}\!n\!j_{X}\) such that \(S\setminus\mathcal{BL}_{X}\) is finitely transitive cannot have right diameter strictly less than 4.
**Proposition 3.15**.: _Let \(S\) be a subsemigroup of \(\mathcal{I}\!n\!j_{X}\) such that \(S\setminus\mathcal{BL}_{X}\) is finitely transitive. If \(\omega_{S}^{r}\) is finitely generated, then \(D_{r}(S)\geq 4.\)_
Proof.: Suppose for a contradiction that \(D_{r}(S,U)\leq 3\) for some finite set \(U\subseteq S.\) Let \(P\) denote the (finite) collection of all tuples \((\alpha_{1},\beta_{1},\alpha_{2},\beta_{2},\alpha_{3},\beta_{3})\in U^{6}\) where \(\alpha_{1},\beta_{3}\in S\backslash\mathcal{BL}_{X}.\) Observe that for any \(\alpha\in S\backslash\mathcal{BL}_{X}\) and \(\beta\in S,\) since \(|X\backslash\operatorname{im}\alpha|<|X|\) we have \(|\operatorname{im}\alpha\cap\operatorname{im}\beta|=|X|,\) or, equivalently, in \(\mathcal{I}_{X}\) we have \(|\operatorname{im}\alpha\beta^{-1}|=|\operatorname{im}\beta\alpha^{-1}|=|X|.\) Therefore, we may choose a set of distinct elements \(\{x_{p},y_{p}:p\in P\}\) such that, for each \(p=(\alpha_{1},\ldots,\beta_{3})\in P,\) in \(\mathcal{I}_{X}\) we have \(x_{p}\in\operatorname{im}\beta_{1}\alpha_{1}^{-1},\)\(y_{p}\in\operatorname{im}\alpha_{3}\beta_{3}^{-1}\) and \(x_{p}\alpha_{1}\beta_{1}^{-1}\alpha_{2}\neq y_{p}\beta_{3}\alpha_{3}^{-1}\beta_{2}.\)
Since \(S\!\setminus\!\mathcal{BL}_{X}\) is finitely transitive, there exist \(\theta,\varphi\in S\!\setminus\!\mathcal{BL}_{X}\) such that \(x_{p}\theta=x_{p}\) and \(y_{p}\varphi=x_{p}\) for all \(p\in P.\) As \(D_{r}(S,U)\leq 3,\) there exists a \(U\)-sequence
\[\theta=\alpha_{1}\gamma_{1},\,\beta_{1}\gamma_{1}=\alpha_{2}\gamma_{2},\, \beta_{2}\gamma_{2}=\alpha_{3}\gamma_{3},\,\beta_{3}\gamma_{3}=\varphi.\]
Since \(\mathcal{BL}_{X}\) is an ideal of \(S,\) it follows that \(\alpha_{1},\beta_{3}\in S\!\setminus\!\mathcal{BL}_{X},\) so that \((\alpha_{1},\ldots,\beta_{3})\in P.\) Letting \((\alpha_{1},\ldots,\beta_{3})=p,\) we have
\[x_{p}\alpha_{1}\beta_{1}^{-1}\alpha_{2}\gamma_{2}=x_{p}\alpha_{1 }\beta_{1}^{-1}\beta_{1}\gamma_{1}=x_{p}\alpha_{1}\gamma_{1}=x_{p}\theta=x_{p} =y_{p}\varphi=y_{p}\beta_{3}\gamma_{3} =y_{p}\beta_{3}\alpha_{3}^{-1}\alpha_{3}\gamma_{3}\] \[=y_{p}\beta_{3}\alpha_{3}^{-1}\beta_{2}\gamma_{2}.\]
But then, since \(\gamma_{2}\) is injective, we have \(x_{p}\alpha_{1}\beta_{1}^{-1}\alpha_{2}=y_{p}\beta_{3}\alpha_{3}^{-1}\beta_{2},\) contradicting the choice of \(x_{p}\) and \(y_{p}\). Thus \(D_{r}(S)\geq 4.\)
If \(S\) is a subsemigroup of \(\mathcal{I}\!n\!j_{X}\) such that \(S\!\setminus\!\mathcal{BL}_{X}\) is \(|X|\)-transitive, then certainly \(S\) is \(|X|\)-transitive and \(S\!\setminus\!\mathcal{BL}_{X}\) is finitely transitive. Thus, by Propositions 3.13 and 3.15, we have:
**Theorem 3.16**.: _For a subsemigroup \(S\) of \(\mathcal{I}\!n\!j_{X}\) such that \(S\!\setminus\!\mathcal{BL}_{X}\) is \(|X|\)-transitive, the following are equivalent:_
1. \(\omega_{S}^{r}\) _is finitely generated and_ \(D_{r}(S)=4\)_;_
2. \(\omega_{S}^{r}\) _is finitely generated;_
3. \(S^{r}\) _is finitely generated and_ \(S\cap\mathcal{BL}_{X}\neq\varnothing\)_;_
4. \(S^{r}\) _is finitely generated and_ \(S\) _contains_ \(\mathcal{BL}_{X}\)_._
If \(S\) is a subsemigroup of \(\mathcal{I}\!n\!j_{X}\) containing \(\mathcal{S}_{X},\) then certainly \(S^{r}\) is finitely generated (since \(S\) is a monoid) and \(S\!\setminus\!\mathcal{BL}_{X}\) is \(|X|\)-transitive (since it contains \(\mathcal{S}_{X},\) which is \(|X|\)-transitive). Thus, we deduce:
**Theorem 3.17**.: _For a monoid \(S\) such that \(\mathcal{S}_{X}\leq S\leq\mathcal{I}\!n\!j_{X}\), the following are equivalent:_
1. \(\omega_{S}^{r}\) _is finitely generated and_ \(D_{r}(S)=4\)_;_
2. \(\omega_{S}^{r}\) _is finitely generated;_
3. \(S\cap\mathcal{BL}_{X}\neq\varnothing\)_;_
4. \(S\) _contains_ \(\mathcal{BL}_{X}\)_._
_Consequently, the monoids \(\mathcal{S}_{X}\cup\mathcal{BL}_{X}\) and \(\mathcal{I}\!n\!j_{X}\) have right diameter 4._
**Remark 3.18**.: For any non-empty set \(I\) of infinite cardinals \(q\leq|X|,\) the set \(S=\bigcup_{q\in I}\mathcal{BL}_{X,q}\) is an \(|X|\)-transitive subsemigroup of \(\mathcal{I}\!n\!j_{X}\). Moreover, we have \(S^{r}=\alpha S^{1}\) for any \(\alpha\in\mathcal{BL}_{X,q_{0}},\) where \(q_{0}\) is the smallest cardinal in \(I.\) If \(I\) contains \(|X|\) and at least one other cardinal \(q<|X|,\) then, by Theorem 3.16, the universal right congruence \(\omega_{S}^{r}\) is finitely generated and \(D_{r}(S)=4.\)
## 4. Transformation Semigroups: Left Diameter
This section has a parallel structure to Section 3; that is, it naturally splits into three parts, corresponding to questions (Q4), (Q5) and (Q6) of Section 2.3.
So, we begin by considering which of the transformation semigroups \(S\) appearing in Table 1 are finitely generated as left ideals. Of course, this holds if \(S\) is a monoid or left
simple. Also, it is fairly straightforward to show that \(\mathcal{T}_{X}\backslash\mathcal{I}n\!j_{X}\) is generated as a left ideal of itself by any \(\alpha\in\mathcal{S}\!urj_{X}\backslash\mathcal{S}_{X}\) (in fact, we shall see that \(\mathcal{T}_{X}\backslash\mathcal{I}n\!j_{X}\) has left diameter \(2\)). The remaining semigroups \((\mathcal{T}_{X}\backslash\mathcal{S}\!urj_{X}\) and \(\mathcal{B}\!\mathcal{L}_{X,q},\,\aleph_{0}\leq q\leq|X|)\) are dealt with by the following result.
**Theorem 4.1**.: _If \(S\) is a finitely transitive subsemigroup of \(\mathcal{T}_{X}\backslash\mathcal{S}\!urj_{X}\), then \(S^{l}\) is not finitely generated. In particular, the semigroups \(\mathcal{T}_{X}\backslash\mathcal{S}\!urj_{X}\) and \(\mathcal{B}\!\mathcal{L}_{X,q}\) (\(\aleph_{0}\leq q\leq|X|\)) are not finitely generated as left ideals of themselves._
Proof.: Consider any finite subset \(U\subseteq S.\) For each \(\alpha\in U\) choose \(x_{\alpha}\in X\backslash\operatorname{im}\alpha\) (such an element exists because \(\alpha\) is not surjective). Since \(S\) is finitely transitive, there exists \(\theta\in S\) such that \(x_{\alpha}\theta=x_{\alpha}\) for each \(\alpha\in U.\) Then \(\operatorname{im}\theta\not\subseteq\operatorname{im}\alpha\) for all \(\alpha\in U,\) and hence \(\theta\notin S^{1}\alpha\) for any \(\alpha\in U,\) so \(S\neq S^{1}U.\) Hence, \(S^{l}\) is not finitely generated.
We now consider which of the remaining transformation semigroups \(S\) from Table 1 have \(\omega_{S}^{l}\) not finitely generated. First, we establish an analogue of Proposition 3.2, followed by a technical lemma.
Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\). For a subset \(U\subseteq S,\) we define
\[\Sigma^{\prime}(U)=\{\alpha_{1}^{-1}\beta_{1}\ldots\alpha_{k}^{-1}\beta_{k}:k \in\mathbb{N},\,\alpha_{i},\beta_{i}\in U\left(1\leq i\leq k\right)\}\subseteq \mathcal{B}_{X}.\]
Observe that for any \(\theta,\varphi\in\mathcal{T}_{X},\) in \(\mathcal{B}_{X}\) we have
\[\theta^{-1}\varphi=\{(x\theta,x\varphi):x\in X\}.\]
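Unpacking the composition of relations: for \(u,v\in X\) we have \((u,v)\in\theta^{-1}\varphi\) if and only if there exists \(x\in X\) with \((u,x)\in\theta^{-1}\) and \((x,v)\in\varphi,\) that is, if and only if \(u=x\theta\) and \(v=x\varphi\) for some \(x\in X.\)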
**Proposition 4.2**.: _Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\), and let \(U\subseteq S\) be a generating set for the universal left congruence \(\omega_{S}^{l}\). Then for any \(\theta,\varphi\in S\) there exists \(\sigma\in\Sigma^{\prime}(U)\) with \(\theta^{-1}\varphi\subseteq\sigma\)._
Proof.: Let \(\theta,\varphi\in S.\) Suppose first that \(\theta=\varphi.\) Since \(\omega_{S}^{l}\) is generated by \(U,\) there exists \(\alpha\in U\) such that \(\theta\in S^{1}\alpha.\) Since \(\operatorname{im}\theta\subseteq\operatorname{im}\alpha\), it follows that \(\theta^{-1}\varphi=\theta^{-1}\theta\subseteq\alpha^{-1}\alpha\in\Sigma^{ \prime}(U).\)
Now suppose that \(\theta\neq\varphi.\) Then there exists a \(U\)-sequence
\[\theta=\gamma_{1}\alpha_{1},\,\gamma_{1}\beta_{1}=\gamma_{2}\alpha_{2},\, \ldots,\,\gamma_{k}\beta_{k}=\varphi.\]
Let \(\sigma=\alpha_{1}^{-1}\beta_{1}\ldots\alpha_{k}^{-1}\beta_{k}\in\Sigma^{ \prime}(U).\) We claim that \(\theta^{-1}\varphi\subseteq\sigma.\) So, let \((x,y)\in\theta^{-1}\varphi\). Then there exists \(z\in X\) such that \(x=z\theta\) and \(y=z\varphi.\) Then \(x=(z\gamma_{1})\alpha_{1}\), so that \((x,z\gamma_{1})\in\alpha_{1}^{-1}\). Therefore, we have
\[(x,z\gamma_{2}\alpha_{2})=(x,z\gamma_{1}\beta_{1})\in\alpha_{1}^{-1}\beta_{1}.\]
It follows that \((x,z\gamma_{2})\in\alpha_{1}^{-1}\beta_{1}\alpha_{2}^{-1}\), which in turn implies that \((x,z\gamma_{2}\beta_{2})\in\alpha_{1}^{-1}\beta_{1}\alpha_{2}^{-1}\beta_{2}\). Continuing in this way, we obtain
\[(x,y)=(x,z\varphi)=(x,z\gamma_{k}\beta_{k})\in\alpha_{1}^{-1}\beta_{1}\ldots \alpha_{k}^{-1}\beta_{k}=\sigma,\]
as required.
**Lemma 4.3**.: _Let \(S\) be an \(\aleph_{0}\)-transitive subsemigroup of \(\mathcal{T}_{X}\) satisfying the following condition: for any finite subset \(U\subset S\) there are infinitely many \(x\in X\) such that for each \(\sigma\in\Sigma^{\prime}(U)\) the set \(X\setminus x\sigma\) is infinite. Then \(\omega_{S}^{l}\) is not finitely generated._
Proof.: Suppose for a contradiction that \(\omega_{S}^{l}\) is generated by a finite subset \(U\subset S,\) and let \(\Sigma=\Sigma^{\prime}(U).\) Let \(X^{\prime}\) be the (infinite) set of all \(x\in X\) such that for each \(\sigma\in\Sigma\) the set \(X\!\setminus\!x\sigma\) is infinite. Choose a set of distinct elements \(\{x_{\sigma}:\sigma\in\Sigma\}\subseteq X^{\prime}\) such that \(|X\!\setminus\!\{x_{\sigma}:\sigma\in\Sigma\}|=|X|.\) Since the set \(X\!\setminus\!x_{\sigma}\sigma\) is infinite for each \(\sigma\in\Sigma,\) we may choose a set of distinct elements \(\{y_{\sigma}:\sigma\in\Sigma\}\) such that \(|X\!\setminus\!\{y_{\sigma}:\sigma\in\Sigma\}|=|X|\) and \((x_{\sigma},y_{\sigma})\notin\sigma\) for each \(\sigma\in\Sigma.\) Since \(\Sigma\) is countable and \(S\) is \(\aleph_{0}\)-transitive, there exist \(\theta,\varphi\in S\) such that \(x_{\sigma}\theta=x_{\sigma}\) and \(x_{\sigma}\varphi=y_{\sigma}\) for all \(\sigma\in\Sigma.\) Then \((x_{\sigma},y_{\sigma})\in\theta^{-1}\varphi\) for all \(\sigma\in\Sigma.\) Now, by Proposition 4.2, there exists \(\sigma\in\Sigma\) with \(\theta^{-1}\varphi\subseteq\sigma\). But then \((x_{\sigma},y_{\sigma})\in\sigma,\) and we have a contradiction.
We can now show that none of the subsemigroups of \(\mathcal{H}_{X}\) appearing in Table 1 has a finitely generated universal left congruence.
**Theorem 4.4**.: _If \(S\) is an \(\aleph_{0}\)-transitive subsemigroup of \(\mathcal{H}_{X}\), then \(\omega_{S}^{l}\) is not finitely generated. In particular, the universal left congruence on each of the following semigroups is not finitely generated: \(\mathcal{S}_{X}\); \(\mathcal{F}_{X}\); \(\mathcal{I}\!n\!j_{X}\); \(\mathcal{BL}_{X,p}\) where \(\aleph_{0}\leq p\leq|X|\); \(\mathcal{BL}_{X}^{1}\); \(\mathcal{S}_{X}\cup\mathcal{BL}_{X}\); \(\mathcal{DBL}_{X,q}\) where \(\aleph_{0}\leq q<|X|\); \(\mathcal{H}_{X}\); and \(\mathcal{K}_{X}\)._
Proof.: We claim that \(S\) satisfies the condition of Lemma 4.3, and hence \(\omega_{S}^{l}\) is not finitely generated. Indeed, consider any \(x\in X,\) \(U\subseteq S\) and \(\sigma=\alpha_{1}^{-1}\beta_{1}\ldots\alpha_{k}^{-1}\beta_{k}\in\Sigma^{\prime}(U)\) (where \(\alpha_{i},\beta_{i}\in U\)). Define \(\sigma_{i}=\alpha_{1}^{-1}\beta_{1}\ldots\alpha_{i}^{-1}\beta_{i}\) for \(i=0,\ldots,k\) (interpreting \(\sigma_{0}=1_{X}\)). We have \(|x\sigma_{0}|=|\{x\}|<|X|.\) Now let \(i\in\{1,\ldots,k\},\) and assume that \(|x\sigma_{i-1}|<|X|.\) We have \(x\sigma_{i-1}\alpha_{i}^{-1}\alpha_{i}=x\sigma_{i-1}\cap\operatorname{im}\alpha_{i},\) so \(|x\sigma_{i-1}\alpha_{i}^{-1}\alpha_{i}|\leq|x\sigma_{i-1}|<|X|.\) Since \(\alpha_{i}\in\mathcal{H}_{X},\) it follows that \(|x\sigma_{i-1}\alpha_{i}^{-1}|<|X|.\) Therefore,
\[|x\sigma_{i}|=|x\sigma_{i-1}\alpha_{i}^{-1}\beta_{i}|\leq|x\sigma_{i-1}\alpha_ {i}^{-1}|<|X|.\]
Hence, by induction, we have \(|x\sigma|=|x\sigma_{k}|<|X|.\) Thus \(X\!\setminus\!x\sigma\) is infinite, as required.
**Remark 4.5**.:
1. The statements and proofs of Lemma 4.3 and Theorem 4.4 would still hold if we replaced 'finitely generated' with 'countably generated'.
2. The monoid \(\mathcal{I}\!n\!j_{X}\) coincides with the \(\mathcal{R}\)-class of the identity of \(\mathcal{T}_{X},\) so every subsemigroup of \(\mathcal{I}\!n\!j_{X}\) is \(\mathcal{R}^{*}\)-simple. Thus, by the left-right dual of Proposition 2.6, the universal left congruence \(\omega_{S}^{l}\) on any uncountable subsemigroup \(S\) of \(\mathcal{I}\!n\!j_{X}\) is not finitely generated.
The remaining semigroups to consider are \(\mathcal{T}_{X}\!\setminus\!\mathcal{I}\!n\!j_{X}\), \(\mathcal{DBL}_{X}\), \(\mathcal{DBL}_{X}^{1}\), \(\mathcal{S}_{X}\!\cup\!\mathcal{DBL}_{X}\) and \(\mathcal{S}\!\mathit{urj}_{X}\). We will show that each of these semigroups has a finitely generated universal left congruence and finite left diameter. To this end, we first recall the following mappings, which were introduced in [8] to prove that \(\mathcal{T}_{X}\) and \(\mathcal{PT}_{X}\) have monogenic diagonal left acts.
Choose any bijection \(\widetilde{\nu}:X\to X\times X,\) and let \(\widetilde{\alpha}=\widetilde{\nu}\pi_{1}\) and \(\widetilde{\beta}=\widetilde{\nu}\pi_{2},\) where \(\pi_{1},\pi_{2}:X\times X\to X\) denote the projections onto the first and second coordinates, respectively. Note that \(\widetilde{\alpha},\widetilde{\beta}\in\mathcal{DBL}_{X}\). For each pair \(\theta,\varphi\in\mathcal{T}_{X},\) define a map
\[\widetilde{\gamma}(\theta,\varphi):X\to X,x\mapsto(x\theta,x\varphi)\widetilde{ \nu}^{-1}.\]
Observe that \(\ker\widetilde{\gamma}(\theta,\varphi)=\ker\theta\cap\ker\varphi\), and \(\widetilde{\gamma}(\theta,\varphi)\widetilde{\alpha}=\theta\) and \(\widetilde{\gamma}(\theta,\varphi)\widetilde{\beta}=\varphi.\) It follows immediately that \(\mathcal{T}_{X}\times\mathcal{T}_{X}=\mathcal{T}_{X}(\widetilde{\alpha}, \widetilde{\beta}).\)
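To verify these identities explicitly from the definitions: for each \(x\in X\) we have
\[x\widetilde{\gamma}(\theta,\varphi)\widetilde{\alpha}=\big((x\theta,x\varphi)\widetilde{\nu}^{-1}\big)\widetilde{\nu}\pi_{1}=(x\theta,x\varphi)\pi_{1}=x\theta,\]
and similarly \(x\widetilde{\gamma}(\theta,\varphi)\widetilde{\beta}=x\varphi;\) moreover, since \(\widetilde{\nu}^{-1}\) is injective, \((x,y)\in\ker\widetilde{\gamma}(\theta,\varphi)\) if and only if \((x\theta,x\varphi)=(y\theta,y\varphi),\) that is, if and only if \((x,y)\in\ker\theta\cap\ker\varphi.\)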
We fix the maps \(\widetilde{\nu},\)\(\widetilde{\alpha},\)\(\widetilde{\beta}\) and \(\widetilde{\gamma}(\theta,\varphi)\)\((\theta,\varphi\in\mathcal{T}_{X})\) for the remainder of this section.
**Definition 4.6**.: Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\) such that \(\widetilde{\alpha},\widetilde{\beta}\in S,\) let \(\theta,\varphi\in S,\) and let \(k\in\mathbb{N}.\) By an \((\widetilde{\alpha},\widetilde{\beta},k)\)_-inducing sequence from \(\theta\) to \(\varphi\)_ (in \(S\)), we mean a sequence
\[\theta=\psi_{1},\psi_{2},\ldots,\psi_{k+1}=\varphi\]
of elements of \(S\) where \(\widetilde{\gamma}(\psi_{i},\psi_{i+1})\in S\) for each \(i\in\{1,\ldots,k\}.\)
The following lemma is an analogue of the (1)\(\Rightarrow\)(2) part of Lemma 3.7, showing that \((\widetilde{\alpha},\widetilde{\beta},k)\)-inducing sequences give rise to \((\widetilde{\alpha},\widetilde{\beta})\)-sequences of length \(k.\)
**Lemma 4.7**.: _Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\) such that \(\widetilde{\alpha},\widetilde{\beta}\in S.\) If there exists an \((\widetilde{\alpha},\widetilde{\beta},k)\)-inducing sequence_
\[\theta=\psi_{1},\psi_{2},\ldots,\psi_{k+1}=\varphi\]
_from \(\theta\) to \(\varphi\) in \(S,\) then there exists an \((\widetilde{\alpha},\widetilde{\beta})\)-sequence_
\[\theta=\gamma_{1}\widetilde{\alpha},\,\gamma_{1}\widetilde{\beta}=\gamma_{2} \widetilde{\alpha},\,\ldots,\,\gamma_{k-1}\widetilde{\beta}=\gamma_{k} \widetilde{\alpha},\,\gamma_{k}\widetilde{\beta}=\varphi\]
_from \(\theta\) to \(\varphi\) of length \(k\) in \(S.\)_
Proof.: By definition we have \(\widetilde{\gamma}(\psi_{i},\psi_{i+1})\in S\) for each \(i\in\{1,\ldots,k\}.\) Letting \(\gamma_{i}=\widetilde{\gamma}(\psi_{i},\psi_{i+1}),\) we have \(\gamma_{i}\widetilde{\alpha}=\psi_{i}\) and \(\gamma_{i}\widetilde{\beta}=\psi_{i+1}.\) Hence, we have an \((\widetilde{\alpha},\widetilde{\beta})\)-sequence
\[\theta=\gamma_{1}\widetilde{\alpha},\,\gamma_{1}\widetilde{\beta}=\gamma_{2} \widetilde{\alpha},\,\ldots,\,\gamma_{k-1}\widetilde{\beta}=\gamma_{k} \widetilde{\alpha},\,\gamma_{k}\widetilde{\beta}=\varphi\]
in \(S,\) as required.
Lemma 4.7 yields the following counterpart of Proposition 3.8.
**Proposition 4.8**.: _Let \(S\) be a subsemigroup of \(\mathcal{T}_{X}\) such that:_
1. \(\widetilde{\alpha},\widetilde{\beta}\in S\)_;_
2. _there exists_ \(n\in\mathbb{N}\) _such that for any pair_ \(\theta,\varphi\in S\) _there is an_ \((\widetilde{\alpha},\widetilde{\beta},k)\)_-inducing sequence from_ \(\theta\) _to_ \(\varphi\) _in_ \(S\) _for some_ \(k\leq n.\)__
_Then \(\omega_{S}^{l}\) is generated by the pair \((\widetilde{\alpha},\widetilde{\beta})\) and \(D_{l}(S)\leq n.\) Furthermore, if \(n=1\) (so that \(\widetilde{\gamma}(\theta,\varphi)\in S\) for any \(\theta,\varphi\in S\)), then the diagonal left \(S\)-act is generated by \((\widetilde{\alpha},\widetilde{\beta})\) (and is hence monogenic)._
We now consider \(\mathcal{T}_{X}\backslash\mathcal{I}\!n\!j_{X}.\)
**Theorem 4.9**.: _The semigroup \(\mathcal{T}_{X}\backslash\mathcal{I}\!n\!j_{X}\) has left diameter 2._
Proof.: Let \(S=\mathcal{T}_{X}\backslash\mathcal{I}\!n\!j_{X}.\) By Table 2 and Proposition 2.10, \(S\) does not have left diameter 1. Using Proposition 4.8, we show that \(\omega_{S}^{l}=\langle(\widetilde{\alpha},\widetilde{\beta})\rangle\) with \(D_{l}(S)\leq 2,\) and hence \(D_{l}(S)=2.\)
It is clear that \(\widetilde{\alpha},\widetilde{\beta}\in S.\) Fix any \(x\in X,\) and consider arbitrary \(\theta,\varphi\in S.\) Certainly \(c_{x}\in S.\) We have
\[\ker\widetilde{\gamma}(\theta,c_{x})=\ker\theta\cap\ker c_{x}=\ker\theta\cap(X \times X)=\ker\theta.\]
Therefore, since \(\theta\) is not injective, it follows that \(\widetilde{\gamma}(\theta,c_{x})\) is not injective, i.e. \(\widetilde{\gamma}(\theta,c_{x})\in S.\) Similarly, we have \(\widetilde{\gamma}(c_{x},\varphi)\in S.\) Thus, there is an \((\widetilde{\alpha},\widetilde{\beta},2)\)-inducing sequence \(\theta,\)\(c_{x},\)\(\varphi,\) as required.
We now turn our attention to the dual Baer-Levi semigroup \(\mathcal{DBL}_{X}.\)
We call a partition of an infinite set \(Y\) a \(\mathcal{DBL}\)_-partition_ of \(Y\) if it is of the form \(\{A_{y}:y\in Y\}\) where \(|A_{y}|=|Y|\) for all \(y\in Y.\) For each \(\alpha\in\mathcal{DBL}_{X},\) the set \(\{x\alpha^{-1}:x\in X\}\) of kernel classes of \(\alpha\) forms a \(\mathcal{DBL}\)-partition of \(X.\)
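For instance, the kernel classes of the map \(\widetilde{\alpha}\) fixed above give an explicit example: \(y\widetilde{\alpha}^{-1}=(\{y\}\times X)\widetilde{\nu}^{-1}\) for each \(y\in X,\) and these sets all have cardinality \(|X|\) because \(\widetilde{\nu}\) is a bijection and \(|\{y\}\times X|=|X|.\)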
The following technical lemma concerning \(\mathcal{DBL}\)-partitions will be crucial in determining the left diameter of \(\mathcal{DBL}_{X}.\)
**Lemma 4.10**.: _Let \(\{A_{x}:x\in X\}\) and \(\{B_{x}:x\in X\}\) be a pair of \(\mathcal{DBL}\)-partitions of \(X.\) Then there exists a third \(\mathcal{DBL}\)-partition \(\{C_{x}:x\in X\}\) of \(X\) such that for each \(x\in X\) the set \(\{A_{x}\cap C_{y}:y\in X\}\) is a \(\mathcal{DBL}\)-partition of \(A_{x}\), and \(\{B_{x}\cap C_{y}:y\in X\}\) is a \(\mathcal{DBL}\)-partition of \(B_{x}.\)_
Proof.: Let \(|X|=\kappa,\) and for convenience assume that \(X=\kappa,\) where as usual the cardinal \(\kappa\) is identified with the set of all ordinals \(\lambda<\kappa.\) So, consider a pair of \(\mathcal{DBL}\)-partitions \(\{A_{\lambda}:\lambda\in\kappa\}\) and \(\{B_{\lambda}:\lambda\in\kappa\}.\) We begin by defining a sequence \((x_{\lambda})_{\lambda\in\kappa}\) of distinct elements of \(X\) by transfinite induction, as follows. First, we define the set
\[T=\kappa^{3}\times\{0,1\}=\{(\alpha,\beta,\gamma,n):\alpha,\beta,\gamma\in \kappa,\ n\in\{0,1\}\}.\]
Since \(|T|=\kappa,\) we may fix a bijection \(\kappa\to T,\lambda\mapsto t_{\lambda}\). Now let \(\lambda\in\kappa,\) and suppose that we have defined the elements \(x_{\mu}\) for all \(\mu<\lambda\). Also write \(t_{\lambda}=(\alpha,\beta,\gamma,n),\) and define \(Y=\{x_{\mu}:\mu<\lambda\}.\) Since \(|Y|=|\lambda|<\kappa\) (as \(\kappa\) is a cardinal), we can define \(x_{\lambda}\) to be any element of \(A_{\alpha}\setminus Y\) if \(n=0,\) or any element of \(B_{\alpha}\setminus Y\) if \(n=1.\)
Now that we have defined the sequence \((x_{\lambda})_{\lambda\in\kappa},\) for each \(\lambda\in\kappa\) we define
\[T_{\lambda}=\kappa\times\kappa\times\{\lambda\}\times\{0,1\}=\{(\alpha,\beta, \lambda,n):\alpha,\beta\in\kappa,\ n\in\{0,1\}\}.\]
Finally, we set
\[C_{\lambda}=\begin{cases}\{x_{\mu}:t_{\mu}\in T_{\lambda}\}&\text{if $\lambda\geq 1$},\\ \{x_{\mu}:t_{\mu}\in T_{0}\}\cup\big(X\setminus\{x_{\nu}:\nu\in\kappa\}\big)&\text{if $\lambda=0$}.\end{cases}\]
Then \(\{C_{\lambda}:\lambda\in\kappa\}\) is a \(\mathcal{DBL}\)-partition of \(X\) because \(\{T_{\lambda}:\lambda\in\kappa\}\) is a \(\mathcal{DBL}\)-partition of \(T.\) Also, for any \(\lambda,\mu\in\kappa,\) the set \(C_{\lambda}\) contains \(\kappa\) elements of the form \(x_{\nu}\) where \(t_{\nu}\in\{\mu\}\times\kappa\times\{\lambda\}\times\{0\},\) each of which belongs to \(A_{\mu}\) by definition. This shows that \(|A_{\mu}\cap C_{\lambda}|=\kappa\) for all \(\lambda,\mu\in\kappa.\) Similarly, we have \(|B_{\mu}\cap C_{\lambda}|=\kappa\) for all \(\lambda,\mu\in\kappa.\) This completes the proof.
We are now in a position to compute the left diameter of \(\mathcal{DBL}_{X}.\)
**Theorem 4.11**.: _The dual Baer-Levi semigroup \(\mathcal{DBL}_{X}\) has left diameter 2._
Proof.: Let \(S=\mathcal{DBL}_{X}\,.\) By Table 2 and Proposition 2.10, \(S\) does not have left diameter 1.
To prove the inequality \(D_{l}(S)\leq 2\) we use Proposition 4.8. We have already noted that \(\widetilde{\alpha},\widetilde{\beta}\in S.\) Consider \(\theta,\varphi\in S.\) For each \(x\in X,\) let \(A_{x}=x\theta^{-1}\) and \(B_{x}=x\varphi^{-1}.\) Then
\(\{A_{x}:x\in X\}\) and \(\{B_{x}:x\in X\}\) are \(\mathcal{DBL}\)-partitions of \(X.\) By Lemma 4.10, there exists a \(\mathcal{DBL}\)-partition \(\{C_{x}:x\in X\}\) of \(X\) such that for each \(x\in X\) the set \(\{A_{x}\cap C_{y}:y\in X\}\) is a \(\mathcal{DBL}\)-partition of \(A_{x},\) and \(\{B_{x}\cap C_{y}:y\in X\}\) is a \(\mathcal{DBL}\)-partition of \(B_{x}\). Let \(\lambda\in S\) be given by \(x\lambda^{-1}=C_{x}\) for all \(x\in X.\) We claim that
\[\theta,\,\lambda,\,\varphi\]
is an \((\widetilde{\alpha},\widetilde{\beta},2)\)-inducing sequence from \(\theta\) to \(\varphi.\) Letting \(\gamma_{1}=\widetilde{\gamma}(\theta,\lambda)\) and \(\gamma_{2}=\widetilde{\gamma}(\lambda,\varphi),\) we need to show that \(\gamma_{1},\gamma_{2}\in S.\) Indeed, for each \(y\in X\) we have
\[y\gamma_{1}^{-1} =\{x\in X:(x\theta,x\lambda)=y\widetilde{\nu}\}=\{x\in X:x\theta =y\widetilde{\alpha}\text{ and }x\lambda=y\widetilde{\beta}\}\] \[=\{x\in X:x\in(y\widetilde{\alpha})\theta^{-1}\text{ and }x\in(y \widetilde{\beta})\lambda^{-1}\}=\{x\in X:x\in A_{y\widetilde{\alpha}}\text{ and }x\in C_{y\widetilde{\beta}}\}\] \[=A_{y\widetilde{\alpha}}\cap C_{y\widetilde{\beta}},\]
and similarly \(y\gamma_{2}^{-1}=C_{y\widetilde{\alpha}}\cap B_{y\widetilde{\beta}}.\) Thus, for each \(y\in X\) we have
\[|y\gamma_{1}^{-1}|=|A_{y\widetilde{\alpha}}\cap C_{y\widetilde{\beta}}|=|X| \quad\text{and}\quad|y\gamma_{2}^{-1}|=|C_{y\widetilde{\alpha}}\cap B_{y \widetilde{\beta}}|=|X|,\]
so that \(\gamma_{1},\gamma_{2}\in S,\) as required.
Next, we establish a technical lemma, and then employ it to show that submonoids of \(\mathcal{S}\!\mathit{urj}_{X}\) containing \(\mathcal{DBL}_{X}^{1}\) have left diameter either \(3\) or \(4\). In this lemma and what follows, a subset \(Y\subset X\) is _colarge_ (in \(X\)) if \(|X\!\setminus\!Y|=|X|.\)
**Lemma 4.12**.: _Let \(\{\alpha_{1},\ldots,\alpha_{n}\}\) be a finite subset of \(\mathcal{T}_{X}\), and let \(x_{1},\ldots,x_{n-1}\) be (not necessarily distinct) elements of \(X.\) If the set \(Y=\bigcup_{1\leq i\leq n-1}x_{i}\alpha_{i}^{-1}\) is colarge in \(X,\) then there exists at most one element \(x\in X\) such that \(Y\cup x\alpha_{n}^{-1}\) is not colarge in \(X.\)_
Proof.: Suppose that \(Y\cup x\alpha_{n}^{-1}\) is not colarge. Since \(X\!\setminus\!Y=\left(X\!\setminus\!(Y\cup x\alpha_{n}^{-1})\right)\cup(x \alpha_{n}^{-1}\!\setminus\!Y),\) and \(|X\!\setminus\!Y|=|X|,\) it follows that \(|x\alpha_{n}^{-1}\!\setminus\!Y|=|X|.\) But then for any \(y\in X\!\setminus\!\{x\}\) we have
\[X\!\setminus\!(Y\cup y\alpha_{n}^{-1})=(X\!\setminus\!Y)\cap(X\!\setminus\!y \alpha_{n}^{-1})\supseteq x\alpha_{n}^{-1}\!\setminus\!Y,\]
and hence \(Y\cup y\alpha_{n}^{-1}\) is colarge.
**Proposition 4.13**.: _If \(S\) is a monoid such that \(\mathcal{DBL}_{X}^{1}\leq S\leq\mathcal{S}\!\mathit{urj}_{X}\), then \(\omega_{S}^{l}\) is finitely generated and \(D_{l}(S)\in\{3,4\}.\)_
Proof.: Since \(\mathcal{DBL}_{X}\) is an ideal of \(S,\) it follows from Theorem 4.11 and the dual of Lemma 2.4 that \(\omega_{S}^{l}\) is finitely generated with \(D_{l}(S)\leq 4.\)
Now suppose for a contradiction that \(D_{l}(S,U)\leq 2\) for some finite set \(U\subset S\). Let
\[V=\{\mu^{-1}\lambda:\mu\in U\cap\mathcal{S}_{X},\lambda\in U\},\]
and note that \(V\) is finite. By an easy induction argument, using Lemma 4.12, we may fix elements \(y_{\varphi}\in X\) (\(\varphi\in V\)) such that \(A=\bigcup_{\varphi\in V}y_{\varphi}\varphi^{-1}\) is colarge. For each pair \(\alpha,\beta\in U\) and each \(\varphi\in V,\) we fix some \(x_{\alpha,\beta,\varphi}\in X\) such that \(x_{\alpha,\beta,\varphi}\alpha^{-1}\cap y_{\varphi}\beta^{-1}\neq\varnothing\) (since \(y_{\varphi}\in X=\operatorname{im}\beta,\) we can pick any \(z\in y_{\varphi}\beta^{-1}\) and define \(x_{\alpha,\beta,\varphi}=z\alpha\)). Now choose any \(\theta\in\mathcal{DBL}_{X}\,(\subseteq S)\)
such that \(\{x_{\alpha,\beta,\varphi}:\alpha,\beta\in U,\varphi\in V\}\theta^{-1}\subseteq X \setminus A\) (such a map exists because \(A\) is colarge). As \(D_{l}(S,U)\leq 2\), there exists a \(U\)-sequence
\[\theta=\gamma\alpha,\,\gamma\beta=\delta\lambda,\,\delta\mu=1_{X}\]
(where \(\alpha,\beta,\lambda,\mu\in U\)). Since \(\mathcal{S}\!\mathit{urj}_{X}\backslash\mathcal{S}_{X}\) is an ideal of \(\mathcal{S}\!\mathit{urj}_{X}\) [9, Proof of Theorem 4.4.2], it follows that \(\delta,\mu\in\mathcal{S}_{X}\) with \(\delta=\mu^{-1}\). Thus, letting \(\varphi=\mu^{-1}\lambda\in V,\) we have \(\theta=\gamma\alpha\) and \(\gamma\beta=\varphi.\) Let \(x=x_{\alpha,\beta,\varphi}\), choose some \(z\in x\alpha^{-1}\cap y_{\varphi}\beta^{-1}\), and then pick some \(u\in z\gamma^{-1}\). Then \(u\theta=u\gamma\alpha=z\alpha=x\), so \(u\in x\theta^{-1}\). On the other hand, we have \(u\varphi=u\gamma\beta=z\beta=y_{\varphi}\), so \(u\in y_{\varphi}\varphi^{-1}\subseteq A\). But this contradicts the fact that \(x\theta^{-1}\subseteq X\setminus A.\) Thus \(D_{l}(S)\geq 3\).
Note that the set \(\mathcal{S}\!\mathit{urj}_{X}\backslash\mathcal{DBL}_{X}\) is not a subsemigroup of \(\mathcal{S}\!\mathit{urj}_{X}\) (in contrast to the situation for \(\mathcal{I}\!nj_{X}\), where \(\mathcal{I}\!nj_{X}\backslash\mathcal{BL}_{X}\)_is_ a subsemigroup). However, for a subsemigroup \(S\) of \(\mathcal{S}\!\mathit{urj}_{X}\), if \(S\backslash\mathcal{DBL}_{X}\) is finite and non-empty then it is a subgroup of \(\mathcal{S}_{X}\).
**Theorem 4.14**.: _For any finite subgroup \(G\) of \(\mathcal{S}_{X}\), the monoid \(G\cup\mathcal{DBL}_{X}\) has left diameter 3. In particular, \(\mathcal{DBL}_{X}^{1}\) has left diameter 3._
Proof.: Let \(S=G\cup\mathcal{DBL}_{X}\). Since \(D_{l}(\mathcal{DBL}_{X},\{\widetilde{\alpha},\widetilde{\beta}\})=2\) (by the proof of Theorem 4.11), it is clear that \(\omega_{S}^{l}\) is generated by the finite set \(U=G\cup\{\widetilde{\alpha},\widetilde{\beta}\}\) and that \(D_{l}(S,U)\leq 3.\) On the other hand, we have \(D_{l}(S)\geq 3\) by Proposition 4.13. Thus \(D_{l}(S)=3\).
We now raise the following question, concerning a natural analogue of Proposition 3.15.
**Open Problem 4.15**.: If \(S\) is a subsemigroup of \(\mathcal{S}\!\mathit{urj}_{X}\) such that \(S\backslash\mathcal{DBL}_{X}\) contains a finitely transitive subsemigroup, and \(\omega_{S}^{l}\) is finitely generated, is \(D_{l}(S)\geq 4\)?
The following result affirmatively answers Open Problem 4.15 in the special case that \(S\) contains \(\mathcal{S}_{X}\).
**Proposition 4.16**.: _Let \(S\) be a monoid such that \(\mathcal{S}_{X}\leq S\leq\mathcal{S}\!\mathit{urj}_{X}\). If \(\omega_{S}^{l}\) is finitely generated, then \(D_{l}(S)\geq 4\)._
Proof.: Suppose for a contradiction that \(D_{l}(S,U)\leq 3\) for some finite set \(U\subset S\). Let \(P\) denote the collection of all tuples \((\alpha_{1},\beta_{1},\alpha_{2},\beta_{2},\alpha_{3},\beta_{3})\in U^{6}\) where \(\alpha_{1},\beta_{3}\in\mathcal{S}_{X}\). Write \(P=\{p_{1},\ldots,p_{n},\ldots,p_{n+r}\}\), where \(p_{i}=(\alpha_{1}^{(i)},\ldots,\beta_{3}^{(i)})\) (\(1\leq i\leq n+r\)) and \(p_{1},\ldots,p_{n}\) are those tuples \(p=(\alpha_{1},\ldots,\beta_{3})\in P\) for which there exists an element \(w_{p}\in X\) such that the set \(\bigcup_{z\neq w_{p}}z\beta_{2}^{-1}\alpha_{2}\) is finite. For \(i\in\{1,\ldots,n\}\), write \(w_{p_{i}}\) as \(w_{i}\), and let \(W=\{w_{1},\ldots,w_{n}\}\). Also, let \(L=\bigcup_{1\leq i\leq n}\bigcup_{z\neq w_{i}}z(\beta_{2}^{(i)})^{-1}\alpha_{2} ^{(i)}\), and note that \(L\) is finite. We now establish the following claim.
**Claim**.: _(1) There exist \(u_{i},v_{i},x_{i},y_{i}\in X\) (\(1\leq i\leq n\)) such that_
\[u_{i}\notin W,\;v_{i}\notin L,\;x_{i}\in u_{i}(\alpha_{3}^{(i)})^{-1}\beta_{3} ^{(i)}\;\;\text{and}\;\;y_{i}\in v_{i}(\beta_{1}^{(i)})^{-1}\alpha_{1}^{(i)},\]
_with \(x_{i}\neq x_{i^{\prime}}\) and \(y_{i}\neq y_{i^{\prime}}\) for distinct pairs \(i,i^{\prime}\in\{1,\ldots,n\}\). (2) Let \(y_{i}\) (\(1\leq i\leq n\)) be as given in (1), and let_
\[K=\{y_{i}(\alpha_{1}^{(n+j)})^{-1}\beta_{1}^{(n+j)}:1\leq i\leq n,1\leq j\leq r\}.\]
_For each \(1\leq j\leq r,\) there exist \(a_{j},b_{j}\in X\) with \(b_{j}\in a_{j}(\beta_{2}^{(n+j)})^{-1}\alpha_{2}^{(n+j)}\setminus K\) such that_
\[\bigcup_{1\leq q\leq j}a_{q}(\alpha_{3}^{(n+q)})^{-1}\beta_{3}^{(n+q)}\quad\text {and}\quad\bigcup_{1\leq q\leq j}b_{q}(\beta_{1}^{(n+q)})^{-1}\alpha_{1}^{(n+q)}\]
_are colarge._
Proof.: We prove both (1) and (2) by induction.
(1) For the base case, pick any \(u_{1}\notin W\) and \(v_{1}\notin L,\) and then choose \(x_{1}\in u_{1}(\alpha_{3}^{(1)})^{-1}\beta_{3}^{(1)}\) and \(y_{1}\in v_{1}(\beta_{1}^{(1)})^{-1}\alpha_{1}^{(1)}.\)
Now let \(k\in\{2,\ldots,n\},\) and assume that \(u_{i},v_{i},x_{i},y_{i}\in X\) (\(1\leq i\leq k-1\)) have been chosen such that
\[u_{i}\notin W,\;v_{i}\notin L,\;x_{i}\in u_{i}(\alpha_{3}^{(i)})^{-1}\beta_{3} ^{(i)}\;\text{ and }\;y_{i}\in v_{i}(\beta_{1}^{(i)})^{-1}\alpha_{1}^{(i)},\]
with \(x_{i}\neq x_{i^{\prime}}\) and \(y_{i}\neq y_{i^{\prime}}\) for distinct pairs \(i,i^{\prime}\in\{1,\ldots,k-1\}\). Since the map \((\beta_{3}^{(k)})^{-1}\alpha_{3}^{(k)}\) is surjective, its set of kernel classes \(\{x(\alpha_{3}^{(k)})^{-1}\beta_{3}^{(k)}:x\in X\}\) is infinite. Therefore, we may choose \(u_{k}\notin W\) such that
\[u_{k}(\alpha_{3}^{(k)})^{-1}\beta_{3}^{(k)}\cap\{x_{1},\ldots,x_{k-1}\}=\varnothing.\]
Similarly, we may choose \(v_{k}\notin L\) such that
\[v_{k}(\beta_{1}^{(k)})^{-1}\alpha_{1}^{(k)}\cap\{y_{1},\ldots,y_{k-1}\}=\varnothing.\]
Take any \(x_{k}\in u_{k}(\alpha_{3}^{(k)})^{-1}\beta_{3}^{(k)}\) and \(y_{k}\in v_{k}(\beta_{1}^{(k)})^{-1}\alpha_{1}^{(k)}\). This completes the inductive step.
(2) We first note that \(K\) is finite. Now, for the base case, let \(a_{1}\in X\) be such that \(a_{1}(\beta_{2}^{(n+1)})^{-1}\alpha_{2}^{(n+1)}\) is not contained in \(K\) (which is possible by surjectivity), and let \(b_{1}\in a_{1}(\beta_{2}^{(n+1)})^{-1}\alpha_{2}^{(n+1)}\setminus K.\) The sets \(a_{1}(\alpha_{3}^{(n+1)})^{-1}\beta_{3}^{(n+1)}\) and \(b_{1}(\beta_{1}^{(n+1)})^{-1}\alpha_{1}^{(n+1)}\) are colarge since the sets \(a_{1}(\alpha_{3}^{(n+1)})^{-1}\) and \(b_{1}(\beta_{1}^{(n+1)})^{-1}\) are colarge and the maps \(\beta_{3}^{(n+1)}\) and \(\alpha_{1}^{(n+1)}\) are bijections.
Now let \(j\in\{2,\ldots,r\},\) and assume that we have chosen \(a_{q},b_{q}\in X\) (\(1\leq q\leq j-1\)) with \(b_{q}\in a_{q}(\beta_{2}^{(n+q)})^{-1}\alpha_{2}^{(n+q)}\setminus K\) such that
\[A_{j-1}=\bigcup_{1\leq q\leq j-1}a_{q}(\alpha_{3}^{(n+q)})^{-1}\beta_{3}^{(n+q)}\quad\text{and}\quad B_{j-1}=\bigcup_{1\leq q\leq j-1}b_{q}(\beta_{1}^{(n+q)})^{-1}\alpha_{1}^{(n+q)}\]
are colarge. Observing that for any \(\alpha,\beta\in\mathcal{B}_{X}\) we have \(\alpha^{-1}\beta=(\beta^{-1}\alpha)^{-1}\), by Lemma 4.12 there exists at most one element \(c_{j}\) such that the set \(A_{j-1}\cup c_{j}(\alpha_{3}^{(n+j)})^{-1}\beta_{3}^{(n+j)}\) is _not_ colarge, and at most one element \(d_{j}\) such that \(B_{j-1}\cup d_{j}(\beta_{1}^{(n+j)})^{-1}\alpha_{1}^{(n+j)}\) is not colarge. Let \(K^{\prime}=K\cup\{d_{j}\}\) if \(d_{j}\) exists; otherwise, let \(K^{\prime}=K.\) Now, if \(c_{j}\) were the only element of \(X\) such that
\[c_{j}(\beta_{2}^{(n+j)})^{-1}\alpha_{2}^{(n+j)}\not\subseteq K^{\prime},\]
then we would have \(\bigcup_{z\neq c_{j}}z(\beta_{2}^{(n+j)})^{-1}\alpha_{2}^{(n+j)}\subseteq K^{\prime},\) which is finite, contradicting the fact that \(p_{n+j}\) is not one of the tuples \(p_{1},\ldots,p_{n}\). Hence, we may pick \(a_{j}\in X\) (with \(a_{j}\neq c_{j}\) if
\(c_{j}\) exists) such that
\[A_{j-1}\cup a_{j}(\alpha_{3}^{(n+j)})^{-1}\beta_{3}^{(n+j)}=\bigcup_{1\leq q\leq j }a_{q}(\alpha_{3}^{(n+q)})^{-1}\beta_{3}^{(n+q)}\]
is colarge, and \(a_{j}(\beta_{2}^{(n+j)})^{-1}\alpha_{2}^{(n+j)}\) possesses an element \(b_{j}\notin K^{\prime}\). Then
\[B_{j-1}\cup b_{j}(\beta_{1}^{(n+j)})^{-1}\alpha_{1}^{(n+j)}=\bigcup_{1\leq q \leq j}b_{q}(\beta_{1}^{(n+q)})^{-1}\alpha_{1}^{(n+q)}\]
is colarge. This completes the inductive step.
We fix the elements \(u_{i},v_{i},x_{i},y_{i},a_{j},b_{j}\in X\) (\(1\leq i\leq n\), \(1\leq j\leq r\)) and the set \(K\), as given in the above claim, for the remainder of this proof. Let
\[A=\bigcup_{1\leq j\leq r}a_{j}(\alpha_{3}^{(n+j)})^{-1}\beta_{3}^{(n+j)}\qquad \text{and}\qquad B=\bigcup_{1\leq j\leq r}b_{j}(\beta_{1}^{(n+j)})^{-1}\alpha_ {1}^{(n+j)}.\]
Choose \(\theta\in\mathcal{S}_{X}\) such that
\[x_{i}\theta=y_{i}\ \ (1\leq i\leq n)\quad\text{and}\quad(A\backslash\{x_{i}:1 \leq i\leq n\})\theta\subseteq X\backslash B.\]
(Such a bijection exists since the elements \(x_{1},\ldots,x_{n}\) are distinct, the elements \(y_{1},\ldots,y_{n}\) are distinct, and the sets \(A\backslash\{x_{i}:1\leq i\leq n\}\) and \(B\) are both colarge.) Since \(D_{l}(S,U)\leq 3\), there exists a \(U\)-sequence
\[\theta=\gamma_{1}\alpha_{1},\ \gamma_{1}\beta_{1}=\gamma_{2}\alpha_{2},\ \gamma_{2}\beta_{2}=\gamma_{3}\alpha_{3},\ \gamma_{3}\beta_{3}=1_{X}.\]
As \(\mathcal{S}\textit{urj}_{X}\backslash\mathcal{S}_{X}\) is an ideal of \(\mathcal{S}\textit{urj}_{X}\), we have \(\gamma_{1},\alpha_{1},\gamma_{3},\beta_{3}\in\mathcal{S}_{X}\) with \(\gamma_{1}=\theta\alpha_{1}^{-1}\) and \(\gamma_{3}=\beta_{3}^{-1}\). Thus \(p=(\alpha_{1},\beta_{1},\alpha_{2},\beta_{2},\alpha_{3},\beta_{3})\in P,\) and we have
\[\theta\alpha_{1}^{-1}\beta_{1}=\gamma_{2}\alpha_{2},\ \gamma_{2}\beta_{2}=\beta_{3}^{-1} \alpha_{3}.\]
Now, for any \(z\in X\),
\[y\in z\beta_{2}^{-1}\alpha_{2} \Leftrightarrow\text{there exists $x\in X$ such that $x\beta_{2}=z,y=x\alpha_{2}$}\] \[\Leftrightarrow\text{there exists $x^{\prime}\in X$ such that $x^{\prime}\gamma_{2}\beta_{2}=z,y=x^{\prime}\gamma_{2}\alpha_{2}$ (since $\gamma_{2}$ is surjective)}\] \[\Leftrightarrow y\in z(\gamma_{2}\beta_{2})^{-1}(\gamma_{2}\alpha_{2})\] \[\Leftrightarrow y\in z(\beta_{3}^{-1}\alpha_{3})^{-1}(\theta\alpha_{1}^{-1} \beta_{1})\] \[\Leftrightarrow y\in z\alpha_{3}^{-1}\beta_{3}\theta\alpha_{1}^{-1} \beta_{1}.\]
Thus, for each \(z\in X\) we have
\[z\beta_{2}^{-1}\alpha_{2}=z\alpha_{3}^{-1}\beta_{3}\theta\alpha_{1}^{-1}\beta _{1}.\] ( \[*\] )
Suppose first that \(p=p_{i}\) where \(i\in\{1,\ldots,n\}\) (so \(\alpha_{k}=\alpha_{k}^{(i)}\), \(\beta_{k}=\beta_{k}^{(i)}\) for \(k=1,2,3\)). Then \(u_{i}\notin W,\) so \(u_{i}\beta_{2}^{-1}\alpha_{2}\subseteq L.\) But then we have
\[v_{i}=y_{i}\alpha_{1}^{-1}\beta_{1}=x_{i}\theta\alpha_{1}^{-1}\beta_{1}\in u_ {i}\alpha_{3}^{-1}\beta_{3}\theta\alpha_{1}^{-1}\beta_{1}=u_{i}\beta_{2}^{-1} \alpha_{2}\subseteq L,\]
contradicting the choice of \(v_{i}\).
Now suppose that \(p=p_{n+j}\) where \(j\in\{1,\ldots,r\}\) (so \(\alpha_{k}=\alpha_{k}^{(n+j)}\), \(\beta_{k}=\beta_{k}^{(n+j)}\) for \(k=1,2,3\)). Then we have
\[b_{j}\in a_{j}\beta_{2}^{-1}\alpha_{2}=a_{j}\alpha_{3}^{-1}\beta _{3}\theta\alpha_{1}^{-1}\beta_{1} \subseteq A\theta\alpha_{1}^{-1}\beta_{1}\] \[\subseteq\big{(}(A\setminus\{x_{i}:1\leq i\leq n\})\cup\{x_{i}:1 \leq i\leq n\}\big{)}\theta\alpha_{1}^{-1}\beta_{1}\] \[\subseteq(X\setminus B)\alpha_{1}^{-1}\beta_{1}\cup\{y_{i}\alpha _{1}^{-1}\beta_{1}:1\leq i\leq n\}\] \[\subseteq(X\setminus b_{j}\beta_{1}^{-1}\alpha_{1})\alpha_{1}^{-1 }\beta_{1}\cup K\] \[\subseteq(X\setminus\{b_{j}\})\cup K=X\setminus\{b_{j}\},\]
where the first equality is due to \((*)\) and the final equality is due to the fact that \(b_{j}\notin K.\) Again, we have a contradiction. Thus \(D_{l}(S)\geq 4\).
By Propositions 4.13 and 4.16, we have:
**Theorem 4.17**.: _If \(S\) is a monoid such that \(\mathcal{S}_{X}\cup\mathcal{DBL}_{X}\leq S\leq\mathcal{S}\!\mathit{urj}_{X},\) then \(D_{l}(S)=4.\) In particular, the monoids \(\mathcal{S}_{X}\cup\mathcal{DBL}_{X}\) and \(\mathcal{S}\!\mathit{urj}_{X}\) have left diameter 4._
Unfortunately, we have not obtained an analogue of Theorem 3.17, classifying those subsemigroups of \(\mathcal{S}\!\mathit{urj}_{X}\) containing \(\mathcal{S}_{X}\) that have left diameter 4. We conclude this section by considering a potential such classification.
First, for a map \(\alpha\in\mathcal{T}_{X}\) define
\[K(\alpha)=\{x\in X:|x\alpha^{-1}|=|X|\},\]
and let \(k(\alpha)=|K(\alpha)|.\) The set \(K(\alpha)\) and parameter \(k(\alpha)\) were introduced in [13]. Note that \(\mathcal{DBL}_{X}=\{\alpha\in\mathcal{S}\!\mathit{urj}_{X}:K(\alpha)=X\}.\)
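To illustrate these notions, take \(X=\mathbb{N}\) (with \(0\in\mathbb{N}\)) and consider the surjection \(\alpha:\mathbb{N}\to\mathbb{N}\) defined by \((2n)\alpha=0\) and \((2n+1)\alpha=n\) for all \(n\in\mathbb{N}.\) Then \(0\alpha^{-1}\) is infinite, while \(m\alpha^{-1}=\{2m+1\}\) for every \(m\geq 1,\) so \(K(\alpha)=\{0\}\) and \(k(\alpha)=1;\) in particular, \(\alpha\in\mathcal{S}\!\mathit{urj}_{\mathbb{N}}\backslash\mathcal{DBL}_{\mathbb{N}}.\)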
**Proposition 4.18**.: _Let \(S\) be a monoid such that \(\mathcal{S}_{X}\leq S\leq\mathcal{S}\!\mathit{urj}_{X}\). Then \(S\) contains \(\mathcal{DBL}_{X}\) if and only if there exists some \(\alpha\in S\) such that \(k(\alpha)=|X|.\)_
Proof.: We have already observed that \(K(\alpha)=X\) for any \(\alpha\in\mathcal{DBL}_{X}\), so the forward direction clearly holds.
For the reverse implication, let \(Y\) be any subset of \(K(\alpha)\) such that \(|Y|=|X\setminus Y|=|X|\), and fix a bijection \(X\to Y,x\mapsto y_{x}\). For each \(x\in X\) fix some \(a_{x}\in x\alpha^{-1}\), and note that the set \(A=\{a_{x}:x\in X\}\) satisfies \(|A|=|X\setminus A|=|X|,\) as \(K(\alpha)\neq\varnothing\). The map \(Y\to A,y_{x}\mapsto a_{x}\) can therefore be extended to a bijection \(\pi\in\mathcal{S}_{X}\). We then see that \(\beta=\alpha\pi\alpha\in S\) belongs to \(\mathcal{DBL}_{X}\). Indeed, for any \(x\in X\) we have \(x\beta^{-1}=x\alpha^{-1}\pi^{-1}\alpha^{-1}\supseteq a_{x}\pi^{-1}\alpha^{-1}= y_{x}\alpha^{-1},\) and \(|y_{x}\alpha^{-1}|=|X|\) as \(y_{x}\in Y\subseteq K(\alpha).\)
Finally, let \(\gamma\in\mathcal{DBL}_{X}\) be arbitrary. Take any bijection \(\psi\in\mathcal{S}_{X}\subseteq S\) such that \((x\gamma^{-1})\psi=x\beta^{-1}\) for each \(x\in X\). Then, for each \(x\in X\), we have \(x\psi\in\big((x\gamma)\gamma^{-1}\big)\psi=(x\gamma)\beta^{-1},\) so that \(x\gamma=(x\psi)\beta=x(\psi\beta).\) Thus \(\gamma=\psi\beta\in S,\) and hence \(\mathcal{DBL}_{X}\subseteq S.\)
**Open Problem 4.19**.: For a monoid \(S\) such that \(\mathcal{S}_{X}\leq S\leq\mathcal{S}\!\mathit{urj}_{X},\) are the following equivalent?
1. \(D_{l}(S)=4;\)
2. \(\omega_{S}^{l}\) is finitely generated;
3. there exists \(\alpha\in S\) such that \(k(\alpha)=|X|;\)
4. \(S\) contains \(\mathcal{DBL}_{X}.\)
(1)\(\Rightarrow\)(2) certainly holds, we have (3)\(\Leftrightarrow\)(4) by Proposition 4.18, and (4)\(\Rightarrow\)(1) follows from Theorem 4.17. Thus, to answer this question in the affirmative, it would suffice to prove that (2) implies (3).
## 5. Monoids of Partitions
In this section we consider the partition monoid \(\mathcal{P}_{X}\) and the partial Brauer monoid \(\mathcal{PB}_{X}\) (where \(X\) is an arbitrary infinite set).
The _partition monoid_\(\mathcal{P}_{X}\) consists of all set partitions of \(X\cup X^{\prime},\) where \(X^{\prime}=\{x^{\prime}:x\in X\}\) is a disjoint copy of \(X.\) So, an element of \(\mathcal{P}_{X}\) is of the form \(\alpha=\{A_{i}:i\in I\},\) where the \(A_{i}\) are non-empty, pairwise disjoint subsets of \(X\cup X^{\prime}\) such that \(X\cup X^{\prime}=\bigcup_{i\in I}A_{i};\) the \(A_{i}\) are called the _blocks_ of \(\alpha.\) An element of \(\mathcal{P}_{X}\) may be represented as a graph with vertices \(X\cup X^{\prime}\) whose connected components are the blocks of the partition; when depicting such a graph, vertices from \(X\) and \(X^{\prime}\) are displayed on upper and lower rows, respectively. It is from this graph-theoretic viewpoint that we define the product in \(\mathcal{P}_{X}.\)
Let \(\alpha,\beta\in\mathcal{P}_{X}\). Introduce another copy \(X^{\prime\prime}=\{x^{\prime\prime}:x\in X\}\) of \(X,\) disjoint from \(X\cup X^{\prime}.\) Denote by \(\alpha_{\downarrow}\) the graph obtained from \(\alpha\) by replacing every \(x^{\prime}\) with \(x^{\prime\prime},\) and denote by \(\beta^{\uparrow}\) the graph obtained from \(\beta\) by replacing every \(x\) with \(x^{\prime\prime}\). The product graph \(\Pi(\alpha,\beta)\) is the graph with vertex set \(X\cup X^{\prime\prime}\cup X^{\prime}\) and edge set the union of the edge sets of \(\alpha_{\downarrow}\) and \(\beta^{\uparrow}\). The graph \(\Pi(\alpha,\beta)\) is drawn with vertices from \(X^{\prime\prime}\) displayed in a new middle row. We define \(\alpha\beta\) to be the partition of \(X\cup X^{\prime}\) such that \(u,v\in X\cup X^{\prime}\) belong to the same block if and only if \(u\) and \(v\) belong to the same connected component of \(\Pi(\alpha,\beta)\). We illustrate this product in Figure 1 (using elements from a finite partition monoid). Under this product, \(\mathcal{P}_{X}\) is a monoid with identity \(\big{\{}\{x,x^{\prime}\}:x\in X\big{\}}.\)
The _partial Brauer monoid_\(\mathcal{PB}_{X}\) is the submonoid of \(\mathcal{P}_{X}\) consisting of all partitions whose blocks have size at most 2. (Thus, the partition \(\beta\) in Figure 1 in fact belongs to \(\mathcal{PB}_{6}.\))
It turns out that both \(\mathcal{P}_{X}\) and \(\mathcal{PB}_{X}\) have right diameter 1 and left diameter 1, as a consequence of the following stronger result.
**Theorem 5.1**.: _The diagonal right act and diagonal left act of both \(\mathcal{P}_{X}\) and \(\mathcal{PB}_{X}\) are monogenic. Consequently, both \(\mathcal{P}_{X}\) and \(\mathcal{PB}_{X}\) have right diameter 1 and left diameter 1._
Proof.: The partition monoid \(\mathcal{P}_{X}\) has an involution \(*:\mathcal{P}_{X}\to\mathcal{P}_{X}\), \(\alpha\mapsto\alpha^{*}\), where \(\alpha^{*}\) is obtained from \(\alpha\) by swapping dashed and undashed vertices (pictorially, \(\alpha^{*}\) is '\(\alpha\) upside-down'), and this map restricts to an involution on \(\mathcal{PB}_{X}\). Since \((\alpha\beta)^{*}=\beta^{*}\alpha^{*}\) for all \(\alpha,\beta\in\mathcal{P}_{X}\), it follows that for all partitions \(\theta,\varphi,\alpha,\beta,\gamma\in\mathcal{P}_{X}\) (or \(\mathcal{PB}_{X}\)) we have
\[(\theta,\varphi)=(\alpha,\beta)\gamma\;\Leftrightarrow\;(\theta^{*},\varphi^{ *})=\gamma^{*}(\alpha^{*},\beta^{*}),\]
and hence the diagonal right act of \(\mathcal{P}_{X}\) (resp. \(\mathcal{PB}_{X}\)) is monogenic if and only if the diagonal left act of \(\mathcal{P}_{X}\) (resp. \(\mathcal{PB}_{X}\)) is monogenic. Thus, it suffices to show that the diagonal right acts of \(\mathcal{P}_{X}\) and \(\mathcal{PB}_{X}\) are monogenic.
Divide \(X\) into five subsets as follows:
\[X=A\sqcup B\sqcup C\sqcup D\sqcup E\qquad\text{where}\qquad|A|=|B|=|C|=|D|=|E|= |X|.\]
Write
\[A=\{a_{x}:x\in X\},\quad B=\{b_{x}:x\in X\},\quad\text{etc.}\]
Define (and fix) \(\alpha,\beta\in\mathcal{P}_{X}\) as follows:
\[\alpha=\big{\{}\{x,a^{\prime}_{x}\}:x\in X\}\cup\big{\{}\{b^{ \prime}_{x},c^{\prime}_{x}\}:x\in X\big{\}}\cup\big{\{}\{d^{\prime}_{x}\}:x \in X\big{\}}\cup\big{\{}\{e^{\prime}_{x}\}:x\in X\big{\}};\] \[\beta=\big{\{}\{x,e^{\prime}_{x}\}:x\in X\}\cup\big{\{}\{c^{\prime }_{x},d^{\prime}_{x}\}:x\in X\big{\}}\cup\big{\{}\{a^{\prime}_{x}\}:x\in X\big{\}} \cup\big{\{}\{b^{\prime}_{x}\}:x\in X\big{\}}.\]
See Figure 2, and note that in fact \(\alpha\) and \(\beta\) belong to \(\mathcal{PB}_{X}\).
For each set \(T=U\cup V^{\prime}\) where \(U,V\subseteq X,\) define
\[\lambda(T)=\{a_{u}:u\in U\}\cup\{b_{v}:v\in V\}\qquad\text{and}\qquad\rho(T)= \{e_{u}:u\in U\}\cup\{d_{v}:v\in V\}.\]
Consider \(\theta,\varphi\in\mathcal{P}_{X}\). Set
\[\lambda(\theta)=\big{\{}\lambda(T):T\in\theta\big{\}}\qquad\text{and}\qquad \rho(\varphi)=\big{\{}\rho(T):T\in\varphi\big{\}}.\]
Note that \(\lambda(\theta)\) and \(\rho(\varphi)\) are partitions of \(A\cup B\) and \(D\cup E\), respectively. Now define
\[\gamma=\gamma(\theta,\varphi)=\lambda(\theta)\cup\big{\{}\{c_{x},x^{\prime}\} :x\in X\big{\}}\cup\rho(\varphi).\]
Then \(\gamma\in\mathcal{P}_{X}\) and \((\theta,\varphi)=(\alpha,\beta)\gamma\); see Figures 3 and 4. Moreover, if \(\theta,\varphi\in\mathcal{PB}_{X}\) then \(\gamma\in\mathcal{PB}_{X}\). Thus, we have \(\mathcal{P}_{X}\times\mathcal{P}_{X}=(\alpha,\beta)\mathcal{P}_{X}\) and \(\mathcal{PB}_{X}\times\mathcal{PB}_{X}=(\alpha,\beta)\,\mathcal{PB}_{X}\).
## Acknowledgements
This work was supported by the Engineering and Physical Sciences Research Council [EP/V002953/1] and the Australian Research Council [FT190100632]. We thank John Truss for his help in proving Lemma 4.10.
|
2310.16980 | First frequency-domain phenomenological model of the multipole asymmetry
in gravitational-wave signals from binary-black-hole coalescence | Gravitational-wave signals from binaries that contain spinning black holes in
general include an asymmetry between the $+m$ and $-m$ multipoles that is not
included in most signal models used in LIGO-Virgo-KAGRA (LVK) analysis to date.
This asymmetry manifests itself in out-of-plane recoil of the final black hole
and its inclusion in signal models is necessary both to measure this recoil,
but also to accurately measure the full spin information of each black hole. We
present the first model of the anti-symmetric contribution to the dominant
co-precessing-frame signal multipole throughout inspiral, merger and ringdown.
We model the anti-symmetric contribution in the frequency domain, and take
advantage of the approximations that the anti-symmetric amplitude can be
modelled as a ratio of the (already modelled) symmetric amplitude, and analytic
relationships between the symmetric and anti-symmetric phases during the
inspiral and ringdown. The model is tuned to single-spin numerical-relativity
simulations up to mass-ratio 8 and spin magnitudes of 0.8, and has been
implemented in a recent phenomenological model for use in the fourth LVK
observing run. However, the procedure described here can be easily applied to
other time- or frequency-domain models. | Shrobana Ghosh, Panagiota Kolitsidou, Mark Hannam | 2023-10-25T20:31:50Z | http://arxiv.org/abs/2310.16980v1 | First frequency-domain phenomenological model of the multipole asymmetry in gravitational-wave signals from binary-black-hole coalescence
###### Abstract
Gravitational-wave signals from binaries that contain spinning black holes in general include an asymmetry between the \(+m\) and \(-m\) multipoles that is not included in most signal models used in LIGO-Virgo-KAGRA (LVK) analysis to date. This asymmetry manifests itself in out-of-plane recoil of the final black hole and its inclusion in signal models is necessary both to measure this recoil, but also to accurately measure the full spin information of each black hole. We present the first model of the anti-symmetric contribution to the dominant co-precessing-frame signal multipole throughout inspiral, merger and ringdown. We model the anti-symmetric contribution in the frequency domain, and take advantage of the approximations that the anti-symmetric amplitude can be modelled as a ratio of the (already modelled) symmetric amplitude, and analytic relationships between the symmetric and anti-symmetric phases during the inspiral and ringdown. The model is tuned to single-spin numerical-relativity simulations up to mass-ratio 8 and spin magnitudes of 0.8, and has been implemented in a recent phenomenological model for use in the fourth LVK observing run. However, the procedure described here can be easily applied to other time- or frequency-domain models.
## I Introduction
The LIGO-Virgo-KAGRA (LVK) collaboration has published \(\sim\)90 gravitational-wave (GW) observations [1; 2; 3] since the first detection in 2015 [4]. The majority of these have been from binary black holes (BBHs), from which we are beginning to infer the astrophysical distribution of black-hole masses and spins [5; 6; 7; 8] and references therein. So far population inference has had to rely on limited spin information from each binary; to measure the magnitude and orientation of both spins we typically require louder signals than in most of those observed so far [9; 10]. We also require sufficiently accurate and physically complete theoretical waveform models. One physical effect that is necessary to measure the full spin information is an asymmetry in the signals' multipolar structure that is not included in the standard full inspiral-merger-ringdown (IMR) models of PHENOM[11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22] or SEOBNR[23; 24; 25; 26; 27; 28; 29] family used in current LVK analyses.
A non-eccentric BBH is characterised by the black-hole masses \(m_{1}\) and \(m_{2}\), and each black hole's angular momentum, \(\mathbf{S}_{i}\), which are usually represented in geometric units as the dimensionless vectors \(\boldsymbol{\chi}_{i}=\mathbf{S}_{i}/m_{i}^{2}\). The dominant effect of the spins on the GW signal is due to the spin components aligned with the binary's orbital angular momentum, which affect the rate of inspiral, and can therefore be measured through their effect on the signal's phase. The remaining (in-plane) spin components have little effect on the inspiral rate. They instead induce orbital and spin precession, which lead to modulations in the signal's amplitude and phase [30]. In most cases this is a weaker contribution to the signal and more difficult to measure, in turn making it difficult to measure the full spin information of the binary. Spin misalignment also leads to an asymmetry in the power emitted above and below the orbital plane, and can lead to large out-of-plane recoils of the final black hole [31]. Most signals to date have been too weak to observe precession and recoil (with the notable exception of several analyses of the signal GW200129_065458, which we refer to as GW200129 in the rest of the text) [3; 32; 33], but more signals with measurable spin misalignment are expected as detector sensitivities improve.
Most current generic-binary models separately consist of a model of the signal in a non-inertial frame that tracks the precession of the orbital plane (a "co-precessing" frame), and a model of the time- or frequency-dependent precession angles. If the signal is decomposed into spin-weighted spherical harmonics, the dominant contributions in the co-precessing frame are the \((\ell=2,|m|=2)\) multipoles. As discussed in more detail in Sec. II, current Phenom and EOB models assume the symmetry \(h_{22}^{\rm CP}=h_{2-2}^{\rm CP}\) in the co-precessing frame. Precessing binaries also include an anti-symmetric contribution. There have been indications for some time that neglecting this contribution could lead to measurement biases [34; 35], and more recently explicit examples of such biases have been found [36]. One model that _does_ include the anti-symmetric contribution is the NR surrogate model NRSur7dq4 [37], and this likely plays an important role in being able to accurately infer the primary spin in GW200129 [32; 33], demonstrating the need to include the anti-symmetric contribution in Phenom and EOB models.
In this paper we present a simple method to model the anti-symmetric contribution to the \((\ell=2,|m|=2)\) co-precessing-frame multipoles, taking into account the phenomenology of how the anti-symmetric contribution depends on the in-plane spin direction and relates to the symmetric contribution. Note that all of the examples shown in this paper are constructed using either numerical relativity waveforms or post-Newtonian estimates and that the model introduced here is versatile in that it can be integrated into any frequency-domain approximant.
To motivate our focus on only the anti-symmetric contribution to the dominant multipoles, Fig. 1 shows the frequency-domain amplitude of the co-precessing-frame multipoles for a signal with mass-ratio \(q=m_{1}/m_{2}=2\), spin on the larger black hole of \(\chi=0.8\), and spin misalignment with the orbital angular momentum of \(\theta_{\rm LS}=90^{\circ}\), i.e., the spin initially lies entirely in the orbital plane. (This is case CF_38 in Ref. [38].) We see that the anti-symmetric (2,2) amplitude is of comparable strength to the symmetric \((3,3)\); since the \((3,3)\) extends to higher frequencies, it will contribute more power than the anti-symmetric \((2,2)\) at high masses, and comparable power
in low-mass systems. The next-strongest anti-symmetric contribution is to the \((3,3)\), and we see that this is significantly weaker than the symmetric \((4,4)\). This suggests that any model that includes symmetric contributions up to \(\ell\leq 4\) need only include the dominant \((2,2)\) anti-symmetric contribution. If we wish to accurately model the signal to the level of the symmetric \((5,5)\) contribution, then we must also include the anti-symmetric \((3,3)\). Current Phenom models include symmetric multipoles up to \(\ell=4\), and so we limit our attention to only the dominant anti-symmetric contribution. (Note that the anti-symmetric (3,3) multipole is also weaker than the symmetric (2,1) and (3,2) in this configuration.)
We find that we can model the \((2,2)\) anti-symmetric contribution using numerical relativity (NR) simulations that cover only the reduced parameter space of the binary's mass ratio, the larger black hole's spin magnitude, and its misalignment angle; to a first approximation we _do not_ need to sample the in-plane spin direction, which can be treated analytically. A more complete model that includes sub-dominant in-plane-spin-direction effects, and two-spin effects, is left to future work.
In Sec. II we explain the motivation behind our modelling approach. We describe the preparation of the NR data that we used to calibrate our model in Sec. III. In Sec. IV we construct a model of the ratio of the anti-symmetric and symmetric amplitudes, and in Sec. V, we present our method to construct the anti-symmetric phase from the symmetric phase and the precession angle, \(\alpha\). We discuss the accuracy of our prescription in Sec. VI.
## II Asymmetry background
An aligned-spin binary is invariant under reflection across the orbital plane. If we choose a coordinate system where the orbital plane is the \(x\)-\(y\) plane and perform a decomposition of the gravitational-wave signal into spin-weighted spherical harmonics, then this symmetry arises in the signal multipoles as
\[h_{\ell,m}(t)=(-1)^{\ell}h^{*}_{\ell,-m}(t). \tag{1}\]
This relationship is useful when constructing a model of the multipoles of an aligned-spin binary: we need only explicitly model the positive-\(m\) multipoles, and the negative-\(m\) multipoles follow from Eq. (1).
When the spins are not aligned with the orbital angular momentum, both the spins and the orbital plane may precess [30; 39]. In systems with mis-aligned spins Eq. (1) no longer holds, even if there is no orbital precession. The simplest illustration of this is the "superkick" configuration [31; 40; 41]: here the black holes are of equal mass and of equal spin, and the spins lie in the orbital plane but point in opposite directions. The symmetry of this system implies that the orbital plane does not precess, and although the spins will precess _in the orbital plane_, they both precess at the same rate and so remain oppositely directed to each other, so that the vector sum of the two spins is zero at all times. Although the direction of the orbital plane remains fixed, emission of linear momentum perpendicular to the orbital plane causes the entire system to move up and down. This linear momentum emission and "bobbing" [42] manifests itself in the gravitational-wave signal as an asymmetry in the positive- and negative-\(m\) multipoles, i.e., a violation of Eq. (1) [31].
The symmetry of Eq. (1) will remain broken regardless of any rotations performed on the multipoles [43]. This point becomes important when constructing signal models, where we regularly make use of a "co-precessing frame". In this frame the signal during the inspiral can be approximated as that of a non-precessing binary [44] and so many current waveform models are split into a model for aligned-spin binaries and a model for the precession dynamics, and the precession dynamics are then used to "twist up" a non-precessing-binary waveform to produce the complete precessing-binary waveform [13; 15; 16; 19; 24; 25; 28; 45; 46]. However, since the non-precessing-binary waveform respects the symmetry in Eq. (1), the model cannot reproduce the asymmetry that should be present in the true precessing-binary signal.
Several studies have considered the impact of neglecting these multipole asymmetries. Ref. [35] compares the multipoles from precessing-binary waveforms with those from nominally equivalent non-precessing binaries, to test a number of assumptions that go into the construction of many commonly used waveform models, including neglecting the multipole asymmetry. Ref. [34] compares NR waveforms from configurations with different in-plane spin directions and magnitudes, and argues that neglecting the multipole asymmetry may lead to parameter biases even at moderate SNRs, and that including the multipole asymmetry will be necessary to clearly measure in-plane spins and identify precessing systems. Finally, Ref. [36] uses the surrogate model NRSur7dq4 to identify the level of bias in binary measurement examples, and confirms that neglecting the multipole asymmetry leads to biases in measuring in-plane spins (but the masses and effective aligned spin \(\chi_{\rm eff}\) are less affected). They also confirm the importance of the multipole asymmetry in precession measurements, showing that it had a significant impact on measurement of the properties of the signal GW200129 [32; 33].
In the next section we will summarise the leading-order PN contribution to the asymmetry, which provides some insight into the phenomenology of the multipole asymmetry, and also
Figure 1: Frequency-domain amplitude of co-precessing-frame symmetric and anti-symmetric contributions for a (\(q=2,\chi=0.8,\theta_{\rm LS}=90^{\circ}\)) configuration. The figure shows the symmetric contributions to \((2,2)\), \((3,3)\), \((4,4)\) and \((5,5)\), and the anti-symmetric contributions to \((2,2)\) and \((3,3)\).
motivate our modelling procedure. Although the multipole asymmetry has been known for some time, and indeed is included in the standard PN expressions that we use here, and is also discussed in detail in Ref. [43], we are not aware of any prior treatment that discusses the amplitude and phasing of the anti-symmetric (2,2) contribution in relation to the symmetric contribution, or notes the simple dependence of the relative phase between different in-plane spin directions, which is a key feature of the asymmetry that we exploit in constructing our model.
### Inspiral
To gain insight into the phenomenology of the multipole asymmetry during the inspiral, we consider the leading-order post-Newtonian contributions to a binary where only one black hole is spinning and the spin lies entirely in the orbital plane. The binary consists of two black holes with masses \(m_{1}\) and \(m_{2}\) and the dimensionless spin on the primary is \(\chi=S_{1}/m_{1}^{2}\), where \(S_{1}\) is the magnitude of the black hole's angular momentum. We use the post-Newtonian expressions from Ref. [47], where in this single-spin case the symmetric and anti-symmetric spin contributions are \(\chi_{s}=\chi_{a}=\chi/2\). The in-plane spin components incline the total angular momentum \(\mathbf{J}\) with respect to the normal to the orbital plane (and direction of the Newtonian orbital angular momentum, \(\mathbf{L}\)) by an angle \(\iota\), and the azimuthal precession angle of \(\mathbf{L}\) around \(\mathbf{J}\) is \(\alpha\); this is also the azimuthal angle of the total in-plane spin. As such, if we choose the instantaneous orbital plane to coincide with the \(x\)-\(y\) plane, then the entirely in-plane spin can be written as \(\boldsymbol{\chi}=\chi(\cos\alpha,\sin\alpha,0)\).
We start with the multipole \(h_{22}\) as given in Eq. (16) in Ref. [47]. Requiring symmetry under exchange of the two black holes (see the discussion prior to Eq. (4.15) in the same paper) leads to the relation \(h_{\ell m}(\Phi)=(-1)^{\ell}h^{*}_{\ell,-m}(\Phi+\pi)\), where \(\Phi\) is the orbital phase. We can enter the instantaneous orbital plane by setting \(\iota=\alpha=0\), and from our choice of spin we can then substitute \(\chi_{s,x}=\chi_{a,x}=\chi\cos(\alpha)/2\) and \(\chi_{s,y}=\chi_{a,y}=\chi\sin(\alpha)/2\); see Sec. VI.B of Ref. [48] for details. We then have an approximation to the symmetric and anti-symmetric contributions to the signal in the co-precessing frame, \(h_{22}^{\mathrm{CP},s}=(h_{22}^{\mathrm{CP}}+h_{2-2}^{\mathrm{CP}*})/2\) and \(h_{22}^{\mathrm{CP},a}=(h_{22}^{\mathrm{CP}}-h_{2-2}^{\mathrm{CP}*})/2\), up to \(O(v^{4})\),
\[h_{22}^{\mathrm{CP},s} = A\left(1+\frac{(55\eta-107)v^{2}}{42}\right)e^{-2i\Phi}, \tag{2}\] \[h_{22}^{\mathrm{CP},a} = A\,\frac{(1+\delta)\chi\,v^{2}}{4}\,e^{-i(\Phi+\alpha)}, \tag{3}\]
where the overall amplitude is \(A=\sqrt{64\pi/5}\,M\eta v^{2}/D_{L}\), \(M\) is the total mass, \(\eta=m_{1}m_{2}/M^{2}\) is the symmetric mass ratio, \(\delta=(m_{1}-m_{2})/M\), \(v\) is the relative speed of the two black holes, and \(D_{L}\) is the luminosity distance to the source. Note that in Ref. [47] the symbol \(\Phi\) denotes the orbital phase in the instantaneous orbital plane, but here we use it to denote the total orbital phase that enters in the waveform.
We may immediately note several important features from Eqs. (2) and (3). The spin does not enter the amplitude of the symmetric contribution at this order. The anti-symmetric contribution enters at \(O(v^{2})\) relative to the leading symmetric contribution. We see that we may also consider the anti-symmetric amplitude \(|h_{22}^{\mathrm{CP},a}|\) as a simple rescaling of the symmetric amplitude, \(|h_{22}^{\mathrm{CP},s}|\).
The in-plane spin direction, \(\alpha\), does not enter into the amplitude of the anti-symmetric contribution, but does modify the phase. The physical interpretation is that the phase of the anti-symmetric contribution depends on the direction of the in-plane spin relative to the separation vector of the two black holes. This will vary with the orbital phase, \(\Phi\), but also the (slower) precession rotation of the spin, given by \(\alpha\). This is also consistent with the observation in studies of out-of-plane recoil, that the recoil amplitude depends sinusoidally on the initial direction of the in-plane spin [31].
Finally, we note a key observation for the model that we will produce: if we modify the initial in-plane spin direction by \(\Delta\alpha\), this will induce a simple overall phase shift in the anti-symmetric contribution, \(h_{22}^{\mathrm{CP},a}\). This suggests that, given a set of single-spin numerical-relativity (NR) waveforms that cover the parameter space of mass ratio, aligned-spin magnitude and in-plane spin magnitude, we will have enough information to build a model of the anti-symmetric contribution to single-spin waveforms _without the need to also sample multiple initial in-plane spin directions_. We have just such a set of waveforms to hand, as used to construct the first NR-tuned full inspiral-merger-ringdown model of (the symmetric contribution to) precessing-binary waveforms [48], and discussed in Ref. [38].
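The following short numerical sketch (our own illustration, not code from the paper; geometric units with \(D_L=1\)) evaluates Eqs. (2)-(3) and checks the two observations above: the anti-symmetric amplitude is a simple rescaling of the symmetric one, and rotating the in-plane spin by \(\Delta\alpha\) only shifts the anti-symmetric phase by \(\Delta\alpha\).

```python
import numpy as np

def h22_leading(v, Phi, alpha, M, eta, delta, chi):
    """Leading-order co-precessing-frame symmetric / anti-symmetric (2,2) parts, Eqs. (2)-(3)."""
    A = np.sqrt(64 * np.pi / 5) * M * eta * v**2
    h_s = A * (1 + (55 * eta - 107) * v**2 / 42) * np.exp(-2j * Phi)     # Eq. (2)
    h_a = A * (1 + delta) * chi * v**2 / 4 * np.exp(-1j * (Phi + alpha))  # Eq. (3)
    return h_s, h_a

m1, m2, chi = 2.0, 1.0, 0.8                      # q = 2, in-plane spin magnitude 0.8
M = m1 + m2
eta, delta = m1 * m2 / M**2, (m1 - m2) / M

h_s, h_a = h22_leading(v=0.3, Phi=1.7, alpha=0.4, M=M, eta=eta, delta=delta, chi=chi)
print("leading-order amplitude ratio:", abs(h_a) / abs(h_s))

d_alpha = 0.9                                    # rotate the initial in-plane spin
_, h_a2 = h22_leading(0.3, 1.7, 0.4 + d_alpha, M, eta, delta, chi)
assert np.isclose(h_a2, h_a * np.exp(-1j * d_alpha))   # pure phase shift of the asymmetry
```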
### Merger and ringdown
Before proceeding to construct a model, we consider the phenomenology of the anti-symmetric contribution through merger and ringdown, and inspect which inspiral features hold for the entire waveform.
One of the main features of the anti-symmetric contribution that we see in the leading-order inspiral single-spin expressions (2) and (3) is that an in-plane rotation of the spin by an angle \(\Delta\alpha\) results in a corresponding shift in the anti-symmetric (2,2) phase by \(\Delta\alpha\). This is evident from Fig. 2; the two configurations considered here correspond to the superkick configuration described earlier. \(\vec{S}_{1}^{\perp}\) denotes the initial in-plane spin vector of the primary and \(\vec{r}_{12}\) is the initial separation vector pointing from the primary to the secondary. It is clear that the asymmetry phase for a configuration with \(\vec{S}_{1}^{\perp}\perp\vec{r}_{12}\) can be easily produced by applying a phase shift of \(\pi/2\) to the anti-symmetric waveform of a configuration with \(\vec{S}_{1}^{\perp}\parallel\vec{r}_{12}\). We note that the simple phase relationship does not appear to hold as well through merger and ringdown, but the deviation is small enough that this could be due to numerical error, and requires more detailed study in future.
A second key feature of the anti-symmetric contribution is that its frequency is roughly half that of the symmetric contribution (plus a small correction from the spin-precession rate \(\dot{\alpha}\)). Fig. 3 shows the time-domain GW frequency of the symmetric and anti-symmetric contributions for a configuration where only the larger black hole is spinning, with the spin (\(\chi=0.7\)) entirely in-plane. We see that during the inspiral the anti-symmetric frequency is approximately half that of the symmetric, as we expect.
During ringdown the two frequencies are equal. This is consistent with our expectations from perturbation theory. In the ringdown, where perturbation theory results are applicable, the (\(\ell=2,m=\pm 2\)) multipoles will have the same (constant) frequency, and will decay at the
same rate. We therefore expect both the symmetric and antisymmetric combinations of \(h_{22}\) and \(h_{2,-2}\) to have the same frequency, and for the ratio of the symmetric and anti-symmetric amplitudes to be constant throughout the ringdown.
The third property we took from the inspiral expressions (2) and (3) was that the anti-symmetric amplitude can be considered as a rescaling of the symmetric amplitude. Fig. 4 illustrates that this holds for the entire waveform. It shows the frequency-domain amplitude of the symmetric and anti-symmetric (2,2) contributions for a configuration with \((q=1,\chi=0.4,\theta_{\mathrm{LS}}=60^{\circ})\), case CF_7 in Ref. [38]. We see in particular that the turnover to ringdown occurs at the same frequency (the ringdown frequency is at \(fM\sim 0.09\) for this configuration). We also see that, as discussed above, the symmetric and anti-symmetric contributions decay at the same rate.
We will now exploit these features to construct a model for the anti-symmetric amplitude and phase.
### Structure of the model
Based on the observations in the previous section we construct an approximate model of the anti-symmetric contribution to the (2,2) multipole in the co-precessing frame as follows. We start with a model for the symmetric contribution, which provides us with the symmetric amplitude, \(A_{s}(f)\), and symmetric phase, \(\phi_{s}(f)\). In the examples we consider here the symmetric contribution is calculated from NR waveforms. In a full model, we would start with the symmetric contribution from an already existing model. An explicit example is the multi-mode precessing-binary model described in Ref. [49], but the anti-symmetric model described in this paper can be applied to any existing frequency-domain precessing-binary model that separately provides the symmetric amplitude, \(A_{s}(f)\), the symmetric phase, \(\phi_{s}(f)\), and the precession angle \(\alpha(f)\), as we describe below.
To construct the anti-symmetric amplitude, \(A_{a}(f)\), we model the ratio \(\kappa(f)=A_{a}(f)/A_{s}(f)\). In the inspiral we use a post-Newtonian estimate of the amplitude ratio, and find that we can model the amplitude ratio accurately through to the ringdown by adding only one higher-order term, which we fit to our NR data. In the ringdown we treat the amplitude ratio as a constant, as motivated in the previous section. The amplitude model is presented in Sec. IV.
To construct the anti-symmetric phase, during the inspiral we combine the symmetric phase, \(\phi_{s}(f)\), and the precession angle \(\alpha(f)\), as prescribed by Eq. (3), i.e., \(\phi_{a}(f)\sim\phi_{s}(f)/2+\alpha(f)\). In the merger-ringdown the anti-symmetric phase will behave as \(\phi_{a}(f)\sim\phi_{s}(f)\). We apply an overall time and phase shift to smoothly connect these two functional forms at some transition frequency. We note that we find
Figure 4: The amplitude of the \((l=2,m=2)\) symmetric and anti-symmetric waveform components in the frequency-domain co-precessing frame for a \((q=1,\chi_{1}=0.4,\theta_{LS}=60^{\circ})\) NR simulation.
Figure 3: The symmetric and anti-symmetric frequencies obtained from NR data of a \((q=2,\chi=0.7,\theta_{\mathrm{LS}}=90^{\circ})\) configuration. During inspiral the anti-symmetric frequency, \(\omega_{a}\), is about half of the symmetric frequency, \(\omega_{s}\) (or nearly equal to the orbital frequency), and close to merger quickly catches up with the symmetric frequency. As expected from perturbation theory, \(\omega_{s}=\omega_{a}\) during ringdown.
Figure 2: Anti-symmetric waveform constructed from NR waveforms for two superkick configurations in the time-domain, \(\vec{S}_{1}^{\perp}\parallel\hat{r}_{12}\) in dashed black and \(\vec{S}_{1}^{\perp}\perp\hat{r}_{12}\) in solid grey, shows a constant phase offset of \(\pi/2\). _Inset_: Dashed blue line shows that the anti-symmetric waveform for \(\vec{S}_{1}^{\perp}\perp\hat{r}_{12}\) can be constructed by just applying a \(\pi/2\) phase shift to the \(\vec{S}_{1}^{\perp}\parallel\hat{r}_{12}\) waveform even in the strong-field regime close to merger (\(t_{\mathrm{merger}}=1784M\)).
that it is possible to produce an accurate model of the anti-symmetric phase \(\phi_{a}(f)\) using the same transition function for any binary, i.e., the parameters of the transition function do not need to be fit across the binary parameter space. The phase model is presented in Sec. V.
## III Numerical Relativity Data
The model is tuned to 80 single-spin NR waveforms generated using the BAM code [50; 51]. They cover mass ratios \(q=\{1,2,4,8\}\), spin magnitudes on the larger black holes of \(\chi_{1}=\{0.2,0.4,0.6,0.8\}\), and angles of spin misalignment with the orbital angular momentum of \(\theta_{\rm LS}=(30^{\circ},60^{\circ},90^{\circ},120^{\circ},150^{\circ})\). In labelling the configurations, the cases are ordered according to the mass ratios, then the spin magnitudes, then the misalignment angles; for example, case 57 corresponds to (\(q=4,\chi_{1}=0.8,\theta_{\rm LS}=60^{\circ}\)). This is the same indexing as in Ref. [38], which provides full details on the catalog of simulations. To motivate the model and test our modelling assumptions we have also used families of simulations that consider variations in the initial in-plane-spin direction, based on those in Ref. [34]. One notable addition to this set were simulations of superkick configurations where the black holes were given an additional out-of-plane momentum, to remove a secular drift in the centre-of-mass.
Several processing steps are performed to prepare the NR data for modelling, using the tools in Ref. [52; 53]. We start with the waveform data for the \(\ell=2\) multipoles of the Weyl scalar, \(\psi_{4,2m}\), in an inertial frame. We apply a Hann window to remove "junk" radiation from the beginning of the waveforms (a non-physical artefact of the initial data), and to remove numerical noise in the post-ringdown waveform. Furthermore, to ensure that the frequency-domain step-size is sufficiently small, the time-domain data are padded with zeros to the right. More details are given in Ref. [38].
We then transform to a frame that tracks \(\hat{J}(t)\); as described in Ref. [48], this retains the approximation that the direction of \(\hat{J}(t)\) is constant throughout the waveform. Modelling deviations from this approximation are left to future work, and are discussed further in Ref. [48], along with the procedure to track \(\hat{J}(t)\) and perform this transformation. At the level of approximation and accuracy of the anti-symmetric model presented here, we do not expect this approximation to have any appreciable effect on the final model. We then transform the \(\psi_{4,2\pm 2}\) multipoles to a co-precessing frame, the "quadrupole-aligned" (QA) [54] or "optimal emission direction" [55; 56] frame. In this frame the multipoles are significantly simplified, with the \((l=2,|m|=2)\) multipoles having the strongest amplitude and the precession-induced modulations minimised.
The \((2,\pm 2)\) multipoles of time-domain co-precessing-frame \(\Psi_{4}\) are now transformed to the frequency domain,
\[\tilde{\psi}^{CP}_{4,2\pm 2}(f)=\int\psi^{CP}_{4,2\pm 2}(t)\,e^{-2\pi ift}\,dt. \tag{4}\]
To compute the strain, we note that \(\Psi_{4}(t)=-\ddot{h}(t)\), so in the frequency domain we may write,
\[\tilde{h}^{CP}_{2,\pm 2}(f)=-\frac{\tilde{\psi}^{CP}_{4,2\pm 2}(f)}{\omega^{2}}, \tag{5}\]
where \(\omega=2\pi f\). The anti-symmetric and symmetric components of the waveform in the QA frame are computed from
\[\tilde{h}^{NR}_{s}(f) = \frac{1}{2}(\tilde{h}^{CP}_{2,2}+\tilde{h}^{*CP}_{2,-2}), \tag{6}\] \[\tilde{h}^{NR}_{a}(f) = \frac{1}{2}(\tilde{h}^{CP}_{2,2}-\tilde{h}^{*CP}_{2,-2}). \tag{7}\]
The symmetric and anti-symmetric strains are complex quantities that can be written as
\[\tilde{h}^{NR}_{s}(f) = A^{NR}_{s}(f)e^{i\phi^{NR}_{s}(f)}, \tag{8}\] \[\tilde{h}^{NR}_{a}(f) = A^{NR}_{a}(f)e^{i\phi^{NR}_{a}(f)} \tag{9}\]
and we can easily calculate their amplitude, \(A^{NR}\), and phase, \(\phi^{NR}\), as their absolute value and argument, respectively. We denote the ratio between the anti-symmetric and symmetric amplitudes as \(\kappa_{\rm NR}(f)=A^{NR}_{a}(f)/A^{NR}_{s}(f)\).
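A minimal sketch of this decomposition, Eqs. (5)-(9), is given below. It assumes the frequency-domain co-precessing-frame \(\psi_{4}\) (2,±2) multipoles are already available as numpy arrays on a grid of strictly positive frequencies; the array names are hypothetical.

```python
import numpy as np

def strain_sym_antisym(psi4_22, psi4_2m2, freqs):
    """Return (A_s, phi_s, A_a, phi_a) from frequency-domain psi4 (2,2) and (2,-2)."""
    omega = 2 * np.pi * freqs                 # freqs must be strictly positive
    h22  = -psi4_22  / omega**2               # Eq. (5)
    h2m2 = -psi4_2m2 / omega**2
    h_s = 0.5 * (h22 + np.conj(h2m2))         # Eq. (6)
    h_a = 0.5 * (h22 - np.conj(h2m2))         # Eq. (7)
    return (np.abs(h_s), np.unwrap(np.angle(h_s)),   # Eq. (8): amplitude and phase
            np.abs(h_a), np.unwrap(np.angle(h_a)))   # Eq. (9)
```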
## IV Model of the Amplitude Ratio
We model the amplitude of the anti-symmetric contribution \(A_{a}(f)\) as a ratio of the symmetric contribution \(A_{s}(f)\), i.e., \(A_{a}(f)=\kappa(f)A_{s}(f)\).
Our first step in constructing the ratio model is to compute the ratio in the framework of PN theory as a PN expansion in terms of \(v/c\), where \(v\) is the relative velocity of the two black holes and \(c\) is the speed of light, and we choose geometric units where \(c=1\). We again restrict our analysis to single-spin binaries.
To compute the PN ratio, we follow a procedure similar to the illustrative calculation in Sec. II. We obtain from Ref. [47] the complex 1.5PN expressions of the \((\ell=2,|m|=2)\) multipoles, \(h^{PN}_{2,\pm 2}\), for spinning, precessing binaries with generic inclination angle \(\iota\) moving on nearly circular orbits. The strains of the \(\ell=|m|=2\) multipoles can then be transformed to a co-precessing frame that follows the instantaneous orbital plane. To achieve this, we choose a simple co-precessing frame where we set to zero the inclination angle of the orbital angular momentum relative to the total angular momentum, \(\iota=0\), and also set the precession angle, \(\alpha=0\). We then use an approximate reduction to a single-spin system [48],
\[\chi_{s/a,x} = -\chi\sin(\theta_{LS}-\iota)\cos(\alpha)/2, \tag{10}\] \[\chi_{s/a,y} = -\chi\sin(\theta_{LS}-\iota)\sin(\alpha)/2, \tag{11}\] \[\chi_{s/a,z} = \chi\cos(\theta_{LS}-\iota)/2, \tag{12}\]
where \(\chi_{s}=(\chi_{1}+\chi_{2})/2\), \(\chi_{a}=(\chi_{1}-\chi_{2})/2\) are the symmetric and anti-symmetric spins defined in Ref. [47] and \(\theta_{LS}\) is the inclination of the spin from the orbital angular momentum vector.
We then find that the symmetric and anti-symmetric amplitudes are,
\[A^{PN}_{s}(f) = -\frac{4M}{21D_{L}}\sqrt{\frac{\pi}{5}}\,v^{2}\eta\left(42+84\pi v^{3}+v^{2}(-107+55\eta)-28v^{3}(1+\delta-\eta)\chi\cos\theta_{LS}\right), \tag{13}\] \[A^{PN}_{a}(f) = -\frac{2M}{D_{L}}\sqrt{\frac{\pi}{5}}\,v^{4}(1+\delta)\eta\chi\sin\theta_{LS}. \tag{14}\]
The ratio of the two amplitudes is then given by,
\[\kappa_{\text{PN}}(f)=\frac{21v^{2}(1+\delta)\chi\sin\theta_{LS}}{2(42+84\pi v^{3} +v^{2}(-107+55\eta)-28v^{3}(1+\delta-\eta)\chi\cos\theta_{LS})}, \tag{15}\]
where \(\eta=m_{1}m_{2}/M^{2}\) is the symmetric mass ratio and \(\delta=(m_{1}-m_{2})/M>0\) is a fractional mass difference. The expression of the PN ratio of the anti-symmetric amplitude over the symmetric amplitude depends on the symmetric mass ratio, \(\eta\), the spin magnitude, \(\chi\), and the angle \(\theta_{LS}\) of the system. Consequently, Eq. (15) can be used to compute the PN ratio of any configuration as a function of frequency since \(v=(\pi fM)^{1/3}\).
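For reference, a minimal implementation of Eq. (15) (our own sketch, geometric units with total mass \(M=1\), \(\theta_{LS}\) in radians) is:

```python
import numpy as np

def kappa_PN(f, eta, chi, theta_LS, M=1.0):
    """PN ratio of anti-symmetric over symmetric (2,2) amplitude, Eq. (15)."""
    v = (np.pi * M * f) ** (1.0 / 3.0)
    delta = np.sqrt(1.0 - 4.0 * eta)          # (m1 - m2)/M for m1 >= m2
    num = 21 * v**2 * (1 + delta) * chi * np.sin(theta_LS)
    den = 2 * (42 + 84 * np.pi * v**3 + v**2 * (-107 + 55 * eta)
               - 28 * v**3 * (1 + delta - eta) * chi * np.cos(theta_LS))
    return num / den
```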
We cannot expect the PN estimate of the amplitude ratio, Eq. (15), to be accurate through merger and ringdown. To capture this behaviour we investigated the addition of unknown higher-order terms and fit their coefficients to the NR data. The simplest approach is to add only one additional term, for example,
\[\kappa(f)=\kappa_{\text{PN}}(f)\,(1+bv^{n}), \tag{16}\]
where \(b\) is fit to the NR data. To do this it was necessary to choose an appropriate frequency range over which to perform the fit. The NR data are noisy in the early inspiral and in the post-ringdown frequencies, so we first identified a consistent definition of a frequency range that could be used for all 80 NR simulations, based on the ringdown frequency, \(f_{RD}\) of each NR configuration. The frequency range that we used in the final fits was given by \(Mf_{min}=\left(Mf_{RD}-0.01\right)/5\) and \(Mf_{max}=Mf_{RD}-0.002\).
We investigated fits to the amplitude ratio of the form Eq. (16) with \(n=3,5,7\). Fig. 5 shows the results for one configuration, and illustrates that \(n=5\) provides the most accurate fit. In fact, we find that \(n=5\) consistently provides the most accurate fit across all 80 NR simulations.
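A sketch of this one-coefficient fit (assuming hypothetical NR arrays `freqs` and `kappa_NR`, the ringdown frequency `f_RD`, and the `kappa_PN` function sketched above) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_b(freqs, kappa_NR, f_RD, eta, chi, theta_LS, n=5):
    # frequency window (Mf_RD - 0.01)/5 <= Mf <= Mf_RD - 0.002, with M = 1
    sel = (freqs >= (f_RD - 0.01) / 5.0) & (freqs <= f_RD - 0.002)

    def model(f, b):                                   # Eq. (16)
        v = (np.pi * f) ** (1.0 / 3.0)
        return kappa_PN(f, eta, chi, theta_LS) * (1 + b * v**n)

    (b_fit,), _ = curve_fit(model, freqs[sel], kappa_NR[sel], p0=[0.0])
    return b_fit
```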
It is also clear from Fig. 5 that our fit is not accurate beyond the ringdown frequency. Beyond this point the amplitude ratio appears to plateau. This is consistent with our expectations from perturbation theory, as discussed in Sec. II.2, which predicts that the amplitude ratio will be constant throughout ringdown. To include this feature in our model, we fix \(\kappa(f)\) to the value \(\kappa(f_{RD})\) at frequencies \(f>f_{RD}\). To avoid a sharp transition we use a moving average algorithm such that,
\[\kappa_{n}(f)=\frac{1}{(2k+1)}\sum_{i=n-k}^{n+k}\kappa(f_{i}),\ \ n\in[k,N-k]. \tag{17}\]
We use a symmetric window of an equal number of points (\(k=40\)) on either side of the frequency \(f\) to calculate the moving average. Here \(N\) is the length of the frequency series and the algorithm updates \(\kappa(f)\) for \(40\leq n\leq N-40\).
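A short sketch of this ringdown treatment (array names are illustrative; `freqs` is assumed increasing) is:

```python
import numpy as np

def plateau_and_smooth(freqs, kappa, f_RD, k=40):
    out = np.array(kappa, dtype=float)
    out[freqs > f_RD] = np.interp(f_RD, freqs, kappa)   # hold kappa constant past f_RD
    smoothed = out.copy()
    for n in range(k, len(out) - k):
        smoothed[n] = out[n - k : n + k + 1].mean()     # symmetric (2k+1)-point window, Eq. (17)
    return smoothed
```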
We fit the coefficient \(b\) in Eq. (16) (with \(n=5\)) to each of the 80 single-spin NR waveforms from the NR catalogue in Ref. [38]. Fig. 6 shows the resulting fit for the amplitude ratio for the (\(q=1,\chi=0.4,\theta_{LS}=60^{\circ}\)) configuration. We see that the final fit, including the levelling off of the amplitude ratio through ringdown, agrees well with the NR data. The lower panel of the figure shows the resulting estimate for the anti-symmetric amplitude. We see that this agrees well with the NR data up to the point where the NR amplitude becomes dominated by noise.
The fit cannot reproduce the NR amplitude ratio in all cases. In many cases the NR amplitude ratio oscillates during the inspiral. Fig. 7 shows an extreme example of this feature, from the (\(q=1,\chi=0.8,\theta_{LS}=30^{\circ}\)) configuration. It is not clear what causes these oscillations. Oscillations in the co-precessing-frame amplitude ratio during the inspiral can be due to the choice of co-precessing frame, as we will discuss later. However, if that were the cause of the oscillations in the NR data, we would expect there to be some correlation with the degree of precession in the configuration. Instead we find no such correlation: the oscillations are negligible in some cases (as in Fig. 6), while in others (like Fig. 7) they lead to significant variations in the amplitude ratio. We do find, though, that our model passes through a reasonable estimate of the mean of the oscillations. The largest impact is on the constant value of the amplitude that the model settles on for the ringdown regime; in the example in Fig. 7 the model's estimate of the anti-symmetric amplitude during the ringdown is roughly 20% below the NR value.
Finally, in some cases we also found that the amplitude ratio in the NR data did not level off during the ringdown. Fig. 8 is an example of this. We have not been able to determine the reason for this. As noted previously, we expect that, since the \((2,2)\) and \((2,-2)\) multipoles decay at the same rate, the ratio between their amplitudes should remain constant during the ringdown. It is possible that this effect is obscured by numerical noise. Regardless of the cause, and in the absence of any compelling explanation for alternative behaviour, we have chosen to impose the behaviour that we expect from perturbation theory.
The coefficient \(b\) is shown as a function of symmetric mass ratio and misalignment angle in Fig. 9; the values for all four spin magnitudes are shown together. This plot illustrates the general trend of \(b\) across the parameter space. We find that there is no general trend with respect to the spin magnitude, and this is illustrated more clearly in Fig. 10, which shows \(b\) as
Figure 5: Amplitude ratio for the (\(q=1,\chi=0.4,\theta_{LS}=60^{\circ}\)) configuration, for the NR data, the PN ratio \(\kappa_{\text{PN}}\), and fits to higher-order corrections as in Eq. (16), with \(n=3,5,7\). The vertical dashed line indicates the ringdown frequency.
a function of symmetric mass ratio, for each spin magnitude, for the subset of cases with \(\theta_{\rm LS}=90^{\circ}\). The PN amplitude ratio already includes a linear dependence on the spin magnitude, and given the uncertainty in our fits, we do not attempt to identify a higher-order spin dependence. We motivate this point further in Sec. VI (Fig. 14). We then include all simulations into a two-dimensional parameter-space fit of the form,
\[b(\eta,\theta_{\rm LS})=b_{0}+b_{1}\eta+b_{2}\theta_{LS}+b_{3}\eta\theta_{LS} \tag{18}\]
where we find \(b_{0}=18.0387\), \(b_{1}=15.4509\), \(b_{2}=55.1140\) and \(b_{3}=-203.6290\). From Eq. (18), we notice that \(b\) does not go to zero when \(\theta_{LS}\) is \(0^{\circ}\) or \(180^{\circ}\). However, the presence of the \(\sin\theta_{LS}\) term in the numerator of the ratio ansatz in Eq. (15) ensures that the multipole asymmetry goes to zero at these points. The amplitude as predicted by this fit is shown on each of our figures and labelled as "final model".
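The surface fit of Eq. (18) is a linear least-squares problem; a minimal sketch (assuming hypothetical arrays of the 80 per-simulation values, with the angle convention matching that used for the quoted coefficients) is:

```python
import numpy as np

def fit_b_surface(eta, theta_LS, b):
    # design matrix for b0 + b1*eta + b2*theta_LS + b3*eta*theta_LS, Eq. (18)
    design = np.column_stack([np.ones_like(eta), eta, theta_LS, eta * theta_LS])
    coeffs, *_ = np.linalg.lstsq(design, b, rcond=None)
    return coeffs            # (b0, b1, b2, b3)
```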
Two caveats to this approach are worth noting. One is the choice of co-precessing frame. Previous work has shown that the symmetric (2,2) contribution takes a simple form in the QA co-precessing frame; indeed, the amplitude evolution can be approximated by the (monotonic) amplitude of an equivalent aligned-spin configuration. This is not necessarily the case for the anti-symmetric contribution, and this is one possible cause of the oscillations that we see (although, as noted previously, it does not show a clear correlation with the strength of precession). Conversely, we found that the amplitude evolution of the anti-symmetric (2,2) multipole was monotonic for PN waveforms if we choose \(\iota=\alpha=0\) in their construction (which is equivalent to choosing a co-precessing frame that tracks the Newtonian orbital angular momentum, i.e., the normal to the orbital plane), but if we consider the PN waveforms in the QA frame then the anti-symmetric (2,2) amplitude shows strong modulations. This illustrates that the anti-symmetric component can depend strongly on the choice of co-precessing frame, and although we do not expect this to be a significant issue at the level of accuracy or approximation of the current model, it may require better understanding in future refinements of anti-symmetric models.
The second point is that our model is constructed based on the phenomenology of single-spin binaries. If the model is to be used for generic binaries, one must choose a method to treat two-spin configurations. One option, which is employed in the PhenomX implementation [48; 49], is as follows.
We can describe the spin of an equivalent single-spin system using Eqs. (16) and (17) in Ref. [48], but diverge slightly
Figure 6: The amplitude ratio (top) and amplitude (bottom) for the \((q=1,\chi=0.4,\theta_{LS}=60^{\circ})\) configuration. The vertical line indicates the ringdown frequency for the \((\ell=2,|m|=2)\) multipoles.
Figure 7: The amplitude ratio (top) and amplitude (bottom) for the \((q=1,\chi=0.8,\theta_{LS}=30^{\circ})\) configuration. The vertical line indicates the ringdown frequency for the \((\ell=2,|m|=2)\) multipoles.
in the definition of the in-plane spin as follows:
\[\chi^{\perp}=\begin{cases}\chi_{\text{as}}\cos^{2}(\theta_{q})+\chi_{\text{P}}\sin ^{2}(\theta_{q}),&1\leq q\leq 1.5\\ \chi_{\text{P}},&q>1.5\end{cases} \tag{19}\]
where \(\chi_{\text{P}}\) is the effective precession spin as defined in Ref. [57]. The anti-symmetric amplitude for an equal-mass binary with both spins equal in magnitude, entirely in-plane and pointing in the same direction, must drop to zero. The symmetric in-plane spin combination of Eq. (15) in Ref. [48] cannot account for this effect in superkick configuration. Therefore, we can instead use an anti-symmetric in-plane spin combination,
\[\chi_{\text{as}}=\frac{|\mathbf{S}_{\mathbf{1}}^{\perp}-\mathbf{S}_{\mathbf{2 }}^{\perp}|}{m_{1}^{2}}, \tag{20}\]
to appropriately map two-spin to single-spin systems for generating the anti-symmetric waveform.
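A short sketch of this mapping, Eqs. (19)-(20), is given below; \(\chi_{\rm P}\) (Ref. [57]) and the angle \(\theta_{q}\) (Ref. [48]) are taken as inputs, since their definitions are not restated in this paper.

```python
import numpy as np

def effective_inplane_spin(S1_perp, S2_perp, m1, q, chi_p, theta_q):
    chi_as = np.linalg.norm(np.asarray(S1_perp) - np.asarray(S2_perp)) / m1**2   # Eq. (20)
    if 1.0 <= q <= 1.5:
        return chi_as * np.cos(theta_q)**2 + chi_p * np.sin(theta_q)**2           # Eq. (19)
    return chi_p
```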
## V Phase model
GW phases for chirp signals in the frequency domain are typically quite featureless and therefore not conducive to modeling. Following a suite of models for the symmetric phase [11; 12; 17], we focus first on the anti-symmetric phase derivative. Fig. 11 demonstrates that the anti-symmetric phase derivative (i.e., frequency) behaviour in the time-domain as discussed in Sec. II.2 is preserved in the frequency domain as well. Remarkably, we find that it is possible to construct a map of the symmetric phase derivative to the anti-symmetric phase derivative that is independent of the binary's parameters. Therefore, we do not need to produce any parametric fits for the map over the intrinsic parameter space of BH binaries, which makes this model extremely simple.
Our model of the anti-symmetric phase derivative is defined
Figure 8: The amplitude ratio (top) and amplitude (bottom) for the \((q=2,\chi=0.4,\theta_{LS}=90^{\circ})\) configuration. The vertical line indicates the ringdown frequency for the \((\ell=2,|m|=2)\) multipoles.
Figure 10: The \(b\) coefficient as a function of the symmetric mass ratio, \(\eta\), for a selected angle \(\theta_{LS}=90^{o}\) and all the available spin values, \(\chi=[0.2,0.4,0.6,0.8]\). The grey line shows the surface fit, \(b(\eta,90^{\circ})\) from Eq. (18).
Figure 9: Surface \(b(\eta,\theta_{LS})=b_{0}+b_{1}\eta+b_{2}\theta_{LS}+b_{3}\eta\theta_{LS}\) fit of the model’s coefficient, \(b\), to the two-dimensional parameter space \(\eta,\theta_{LS}\). The red points denote the 80 computed \(b\) coefficients of the multipole asymmetry amplitude model.
by the piecewise function,
\[\phi_{a}^{\prime}(f)=\begin{cases}\frac{1}{2}\phi_{s}^{\prime}(f)+\alpha^{\prime}(f)+A_{0},&f\leq f_{T}-f_{w}\\ \phi_{\text{im}}^{\prime}(f),&f_{T}-f_{w}<f<f_{T}+f_{w}\\ \phi_{s}^{\prime}(f),&f_{T}+f_{w}\leq f<0.12,\end{cases} \tag{21}\]
where the phase derivative in the intermediate region is given by,
\[\phi_{\text{im}}^{\prime}(f)=\frac{1}{2}\Big{[}1-\frac{3}{2f_{w}}\Big{(}(f-f_{T})-\frac{(f-f_{T})^{3}}{3f_{w}^{2}}\Big{)}\Big{]}\Big{(}\frac{\phi_{s}^{\prime}}{2}+\alpha^{\prime}+A_{0}\Big{)}+\frac{1}{2}\Big{[}1+\frac{3}{2f_{w}}\Big{(}(f-f_{T})-\frac{(f-f_{T})^{3}}{3f_{w}^{2}}\Big{)}\Big{]}\big{(}\phi_{s}^{\prime}+B_{0}\big{)} \tag{22}\]
As is evident from Fig. 11, the functional form of region I transitions to the functional form of region II in the intermediate region. To ensure smooth transition in the intermediate region an obvious choice would be a _tanh_ windowing function. Noting that a _tanh_ function can be computationally inefficient during model evaluation, we instead use the Taylor expansion of the _tanh_ function up to second order with appropriate normalization to construct the phase derivative functional form of Eq. (22).
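A minimal sketch of the map of Eqs. (21)-(22) is shown below (our own illustration); `dphi_s` and `dalpha` are callables returning the symmetric phase derivative and the derivative of the precession angle \(\alpha\), and the polynomial step is the truncated tanh expansion written in terms of \(s=(f-f_{T})/f_{w}\), assumed to saturate at \(f_{T}\pm f_{w}\).

```python
import numpy as np

def dphi_antisym(f, dphi_s, dalpha, A0, B0, f_RD, f_w=0.005):
    f_T = 0.85 * f_RD
    inspiral  = 0.5 * dphi_s(f) + dalpha(f) + A0         # region I
    merger_rd = dphi_s(f) + B0                           # region II
    s = np.clip((f - f_T) / f_w, -1.0, 1.0)              # clamp to [f_T - f_w, f_T + f_w]
    w = 0.5 * (1 + 1.5 * s - 0.5 * s**3)                 # smooth step: 0 below, 1 above
    return (1 - w) * inspiral + w * merger_rd            # Eq. (22) inside the window
```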
The phase derivative ansatz was calibrated to NR simulations by treating the transition frequency, \(f_{T}\), the width of the transition window, \(f_{w}\), and the phase coefficients (\(A_{0}\) and \(B_{0}\)) as free parameters. The window parameters were not particularly sensitive to the tuning and the minor variations could be attributed to the noise in the anti-symmetric phase derivatives; the values \(f_{T}=0.85f_{RD}\) and \(f_{w}=0.005\) were found to be optimal across the entire single-spin parameter space.
Once the parameters of the window function were fixed, we investigated the impact of fixing the phase coefficients. A best fit to data for \(B_{0}=0\) was found to be consistent across the parameter space; \(A_{0}\) on the other hand showed some variation, but no specific trend. Furthermore, choosing the algebraic mean of \(A_{0}\) for the set of 80 simulations did not significantly impact the quality of the fit. This shows (1) the fitting algorithm tries to find the best \(A_{0}\) for continuity of the phase derivative at \(f_{T}\), and (2) the variation in optimal \(A_{0}\) across the parameter space is more likely due to noisy data and not a function of intrinsic parameters of the binary.
Applying a shift to the phase derivative is equivalent to an overall time shift of the waveform. We exploit this freedom by fixing the minimum of the symmetric phase derivative to be 0 at \(f_{RD}\). This imposes,
\[A_{0}=\frac{1}{2}\phi_{s}^{\prime}(f_{T})-\alpha^{\prime}(f_{T}). \tag{23}\]
Figure 12 shows that \(A_{0}\) obtained from the fitting algorithm and from NR data using Eq. (23) agree reasonably well for a majority of the cases (cf. _y-axis_ on Fig. 11). The anti-symmetric model will be used with a phenomenological symmetric waveform model, so an \(A_{0}\) derived from the symmetric waveform model makes the phase construction self-consistent and robust. Furthermore, Fig. 12 highlights that the gain in accuracy by making a model to capture the near-stochastic behaviour of \(A_{0}\) may be outweighed by errors introduced by over-modelling. As such, our model of the anti-symmetric phase is NR-informed but, noting that the data for the anti-symmetric waveform is often close to numerical noise, we prioritised our understanding of the physics rather than model optimization.
The phase of the anti-symmetric waveform is obtained by integrating the two pieces,
\[\phi_{a}(f)=\begin{cases}\frac{1}{2}\phi_{s}(f)+\alpha(f)+A_{0}f+\phi_{A0},& f\leq f_{T}\\ \phi_{s}(f)+\phi_{B0},&f_{T}\leq f<0.12,\end{cases} \tag{24}\]
where the integration constant \(\phi_{B0}\) is determined by continuity at \(f_{T}\)
\[\phi_{B0}=\alpha(f_{T})-\phi_{s}(f_{T})+A_{0}f_{T}. \tag{25}\]
Figure 11: The anti-symmetric phase derivative (thick black line), the symmetric phase derivative (grey dashed line) and the combination of half of the symmetric phase derivative with the derivative of the precession angle \(\alpha\) (thin grey line). The specific configuration shown here is for a single-spin binary with (\(q=1,\chi=0.2\), \(\theta_{LS}=30^{\circ}\)).
Figure 12: Comparison of phase coefficient \(A_{0}\), obtained from fitting the phase ansatz to NR data (light purple triangles) across the parameter space and from NR data using Eq. (23) (blue circles), for cases with \(\chi=0.4\) and \(0.8\). The algebraic mean of the set of coefficients are shown by horizontal lines in corresponding colors. The case index is as described in Sec. III.
Finally, the phase of the asymmetry is modulated by the in-plane spin direction \(\alpha\); therefore, the initial phase, \(\phi_{A0}\), is the value of \(\alpha\) at a reference frequency, i.e., \(\phi_{A0}=\alpha(f_{\rm ref})\).
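A minimal sketch assembling the anti-symmetric phase of Eq. (24) is given below (our own illustration). Here \(A_{0}\) follows Eq. (23), \(\phi_{A0}=\alpha(f_{\rm ref})\), and the offset \(\phi_{B0}\) is fixed numerically by imposing continuity at \(f_{T}\); the callables `phi_s`, `alpha`, `dphi_s`, `dalpha` stand for an underlying symmetric model.

```python
import numpy as np

def phi_antisym(f, phi_s, alpha, dphi_s, dalpha, f_RD, f_ref):
    f_T = 0.85 * f_RD
    A0 = 0.5 * dphi_s(f_T) - dalpha(f_T)                           # Eq. (23)
    phi_A0 = alpha(f_ref)
    inspiral = 0.5 * phi_s(f) + alpha(f) + A0 * f + phi_A0          # Eq. (24), f <= f_T
    inspiral_at_T = 0.5 * phi_s(f_T) + alpha(f_T) + A0 * f_T + phi_A0
    phi_B0 = inspiral_at_T - phi_s(f_T)                             # continuity at f_T
    return np.where(f <= f_T, inspiral, phi_s(f) + phi_B0)          # Eq. (24), f > f_T
```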
## VI Model accuracy
A standard measure of waveform accuracy used extensively in the literature is the _match_ of the waveform model with NR waveforms, defined as,
\[M(h_{NR},h_{M})=4\,\mathrm{Re}\int\limits_{0}^{\infty}\frac{h_{\rm NR}(f)\,h_{\rm M}^{*}(f)}{S_{\rm n}(f)}\,df, \tag{26}\]
where \(h(f)=h_{+}(f)-ih_{\times}(f)\) is a complex frequency sequence constructed from the two polarizations of the waveform. As such, calculating matches of just the anti-symmetric waveform is not physically meaningful. Furthermore, a true measure of performance of precessing waveforms in data analysis can only be obtained by calculating matches of the full waveform in the inertial frame, making considerations for precession as well as extrinsic parameters. Therefore, matches of just the anti-symmetric waveform in the co-precessing frame provide some indication of the accuracy of one ingredient in the full waveform, but do not indicate the overall accuracy of the corresponding precessing waveform.
However, an inner product like that in Eq. (26) is a useful measure of agreement between two complex frequency series. Since a match-like calculation for the anti-symmetric contribution in the co-precessing frame cannot be interpreted in terms of either signal detection efficiency or parameter measurement accuracy, there is no reason to include the detector sensitivity, and so we will use a simpler inner product of the form,
\[\langle h_{NR}|h_{M}\rangle=\mathrm{Re}\int\limits_{f_{1}}^{f_{2}}h_{\rm NR}(f)\,h_{\rm M}^{*}(f)\,df, \tag{27}\]
where \(f_{1}\) is the starting frequency of the NR waveform in geometric units, \(Mf_{1}=0.02\), and \(Mf_{2}=0.15\), after which point the amplitude of the NR waveform is below the noise floor of the data. We consider normalised waveforms, \(\hat{h}=h/\sqrt{\langle h|h\rangle}\), so that the maximum value of the inner product is one. We used the standard implementation of this inner product in pycbc [58], a python software package for GW data analysis, for our match computations. We then consider the "mismatch" between the two waveforms,
\[\mathcal{M}=1-\frac{\langle h_{NR}|h_{M}\rangle}{\sqrt{\langle h_{NR}|h_{NR} \rangle\langle h_{M}|h_{M}\rangle}}. \tag{28}\]
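A plain discretised version of Eqs. (27)-(28) for two complex co-precessing-frame frequency series is sketched below (our own illustration, without the pycbc implementation used in the paper and without any optimisation over relative time or phase shifts); the frequency-bin width cancels after normalisation.

```python
import numpy as np

def mismatch(h1, h2, freqs, Mf1=0.02, Mf2=0.15):
    sel = (freqs >= Mf1) & (freqs <= Mf2)               # restrict to [Mf1, Mf2]
    a, b = h1[sel], h2[sel]
    inner = lambda x, y: np.real(np.sum(x * np.conj(y)))  # Eq. (27), up to a constant df
    return 1.0 - inner(a, b) / np.sqrt(inner(a, a) * inner(b, b))   # Eq. (28)
```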
In Fig. 13 we show the mismatches of the anti-symmetric waveform constructed from our model with the 80 NR waveforms that were used to calibrate the model. To determine the accuracy of the individual components, we also computed matches of the amplitude (phase) model complemented by phase (amplitude) constructed from NR data, with the full anti-symmetric waveform constructed from NR data. The overall accuracy of both models is comparable. We note that although the model was verified using the same waveforms as used for modelling, since the NR tuning was relatively simple -- i.e., a single coefficient in Eq. (16) fit to the four-parameter ansatz Eq. (18) across a two-dimensional parameter space -- using a much smaller subset of waveforms would have produced a model of similar accuracy, and the simplicity of this model and the single-spin parameter space obviate any concerns about over-fitting or unexplored corners of parameter space.
To investigate the quality of the surface fit in Eq. (18), we also computed mismatches for the amplitude model using fit coefficients \(b\) of Eq. (16). As is evident from Fig. 14, for most cases the performance is unchanged and for the handful of cases where the mismatch changes, the difference is not very significant. This further illustrates that capturing the nonlinear dependence on spin magnitude is unlikely to make significant improvement to the amplitude model. In addition to the argument made for using Eq. (23) to calculate \(A_{0}\) from the symmetric waveform and precession angle, \(\alpha\), we calculated mismatches for the different choices of \(A_{0}\) in the phase model - i.e., \(A_{0}^{fit}\) and \(A_{0}^{outermost}\) in Fig. 12 - to confirm that there was no reduction in model accuracy.
Note that the anti-symmetric waveform model is downstream from the symmetric waveform model as well as the precession angle models. Therefore, enhancement in performance of the overall model due to the addition of an asymmetry model must always be discussed in the context of the underlying symmetric, precessing waveform model. This is beyond the scope of present work and will be discussed in the context of the current generation frequency-domain precessing-binary PhenomXPNR model [49].
## VII Conclusion
We have presented a method to model the anti-symmetric contribution to the \((\ell=2,|m|=2)\) multipoles in the co-precessing frame. This is a general approach that can be applied to any frequency-domain model that provides the symmetric contribution to the (2,2) amplitude, phase and precession angle, \(\alpha\). We expect that the main insights of this model can be used to easily generalise the procedure to the time-domain and be useful in including multipole asymmetry in current generation time-domain models [45; 46]. We summarise the key insights here.
For the amplitude, we observe that the anti-symmetric amplitude can be easily modeled as a rescaling of the symmetric amplitude. As shown in Fig. 4, both amplitudes have the same basic structure, in particular the same ringdown frequency and decay rate. In the inspiral our model of the amplitude ratio is based on a PN expression for single-spin systems, and in the late inspiral and merger a higher-order correction to the PN expression is calibrated to 80 NR simulations of single-spin binaries [38]. In the ringdown we make use of the prediction from perturbation theory that the ratio of the symmetric and anti-symmetric amplitudes will be constant. The amplitude model ratio is presented in Eqs. (15), (16) and the fit to the higher-order contribution given in Eq. (18).
For the phase, we use the facts that in the inspiral the frequency of the anti-symmetric contribution equals the orbital frequency plus the spin precession frequency, and that during the ringdown the symmetric and anti-symmetric frequencies are the same. We are able to construct a mapping from the symmetric phase and precession angle, \(\phi_{s}\) and \(\alpha\), to the anti-symmetric phase, \(\phi_{a}\), motivated by the 80 single-spin simula
tions, but without the need of any explicit tuning. The model of the phase is given in Eqs. (23), (24) and (25). An additional crucial observation is that a rotation of the initial in-plane spin direction of \(\Delta\alpha\) introduces a corresponding shift of \(\Delta\alpha\) into the anti-symmetric phase. It is this observation that allows us to model the dependence on in-plane-spin direction, even though we do not have a set of NR simulations that span this subspace.
The procedure presented here is not in itself a signal model. As already noted, one must also provide the symmetric amplitude, phase and precession angle. In addition, having constructed a model for the anti-symmetric contribution to the (2,2) multipoles in the co-precessing frame, one must then "twist them up" to produce the multipoles in the inertial frame. This has been done for the recent extension of the multi-mode frequency-domain precessing-binary IMRPhenomXPNR model [49].
This is the first phenomenological full inspiral-merger-ringdown model of the anti-symmetric multipole contributions, and there are many directions for improvement and issues to be resolved. The first limitation of the model is that it is based only on single-spin binaries. We expect that generic two-spin systems can in most cases be modelled to a good approximation by single-spin systems, but given that the anti-symmetric contribution is itself a weak effect, it is unclear how well this approximation can be used. It would be useful to study the applicability of the single-spin approximation for the anti-symmetric contribution, and, indeed, to extend the model to two-spin systems. The anti-symmetric model is also limited to the (\(\ell=2,|m|=2\)) multipoles. We argue in Fig. 1 that the (2,2) contribution will be sufficient for most applications, since the anti-symmetric amplitude is far weaker than the symmetric, but for high signal-to-noise-ratio (SNR) systems, higher-order anti-symmetric multipoles will ultimately be required.
When decomposing the signal into a co-precessing frame and corresponding time- or frequency-dependent angles, one will find (depending on the co-precessing frame chosen) oscillations in the amplitude and/or phase of the anti-symmetric contribution; recall that the co-precessing-frame waveform maps exactly to a non-precessing aligned-spin waveform only for the leading-order quadrupole terms [44; 48; 49]. In this work we have removed oscillations in the inspiral amplitude by simply setting the inclination angle \(\iota\) and precession angle \(\alpha\) to zero in the PN expressions we have used for the signal multipoles [47]; oscillations remain in the co-precessing-frame signals for the NR waveforms, but our model consists of only a monotonic fit through these oscillations. A better understanding of these oscillations, or at least a model that captures them, is a necessary next step in modelling the multipole asymmetries.
We have assumed that the only effect of the initial in-plane spin direction is to introduce an overall offset in the anti-symmetric phase. This approximation will not be exact during the merger-ringdown. In particular, in the ringdown we have made the approximation that the amplitude ratio will be independent of the initial in-plane spin direction. However, the amplitude ratio will be determined by the relative phase of the
Figure 13: _Mismatches_ of the anti-symmetric waveform model in the co-precessing frame with NR data. The black triangles show the mismatches for the combined amplitude and phase model while the magenta squares (cyan circles) show the mismatches for just the amplitude (phase) model with the phase (amplitude) constructed from NR data. The dashed magenta and cyan lines show the average mismatch for the amplitude and phase model, respectively; on average they are of comparable accuracy. The black solid line shows the average mismatch for the overall model. The _x-axis_ denotes the case index of the NR simulation as usual, i.e., five different \(\theta_{\rm LS}\) values for each spin magnitude shown in the figure, for \(q=1\), 2, 4 and 8.
symmetric and anti-symmetric contributions when ringdown begins, and we have not attempted to model this effect.
These limitations aside, we find that our method to construct the anti-symmetric contribution has an average mismatch error better than 0.03. Given that the anti-symmetric contribution is in general less than 10% of the total SNR, we expect that its contribution to the total mismatch uncertainty of a model will be less than \(3\times 10^{-3}\), which is below the average mismatch error of current Phenom and SEOBNR precessing-binary models [46; 49]. With the addition of this first anti-symmetric model, we expect that current models will be able to make more precise measurements of black-hole spins and gravitational recoil. The accuracy of these measurements will depend significantly on the underlying symmetric model, as well as where a signal lies in the binary parameter space, and we leave such accuracy studies with individual models to future work.
## VIII Acknowledgements
We thank Jonathan Thompson, Eleanor Hamilton and Lionel London for many thought-provoking discussions on generic precessing binaries. We are indebted to Lionel London for sharing his numerical relativity data utility package with us, which was used to extract the anti-symmetric waveform data in the co-precessing frame. We thank Frank Ohme and Jannik Mielke for insightful discussions on multipole asymmetry, and Chinmay Kalaghagi for discussions on in-plane spin rotation in superkick configurations.
S.G. thanks Sebastian Khan for stimulating discussions on frequency-domain waveform modeling, Duncan Macleod for excellent inputs on computing practices and Edward Fauchon-Jones for support in using supercomputing facilities.
The authors were supported in part by Science and Technology Facilities Council (STFC) grant ST/V00154X/1 and European Research Council (ERC) Consolidator Grant 647839. S.G. acknowledges support from the Max Planck Society's Independent Research Group program. P.K. was also supported by the GW consolidated grant: STFC grant ST/V005677/1.
Simulations used in this work were performed on the DiRAC@Durham facility, managed by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/P002293/1 and ST/R002371/1, Durham University and STFC operations grant ST/R000832/1. In addition, several of the simulations used in this work were performed as part of an allocation graciously provided by Oracle to explore the use of our code on the Oracle Cloud Infrastructure.
This research also used the supercomputing facilities at Cardiff University operated by Advanced Research Computing at Cardiff (ARCCA) on behalf of the Cardiff Supercomputing Facility and the HPC Wales and Supercomputing Wales (SCW) projects. We acknowledge the support of the latter, which is part-funded by the European Regional Development Fund (ERDF) via the Welsh Government. In part the computational resources at Cardiff University were also supported by STFC grant ST/I006285/1.
Various plots and analyses in this paper were made using the Python software packages LALSuite [59], PyCBC [58], gwsurrogate [60], Matplotlib [61], Numpy [62], lmfit and Scipy [63].
Figure 14: _Mismatches_ (same as Fig. 13) showing comparison of the amplitude model constructed using the spin magnitude-independent surface fit of Eq. (16) (_blue squares_) with the amplitude model constructed from the true fit coefficients (_green circles_). |
2303.10211 | SITReg: Multi-resolution architecture for symmetric, inverse consistent,
and topology preserving image registration | Deep learning has emerged as a strong alternative for classical iterative
methods for deformable medical image registration, where the goal is to find a
mapping between the coordinate systems of two images. Popular classical image
registration methods enforce the useful inductive biases of symmetricity,
inverse consistency, and topology preservation by construct. However, while
many deep learning registration methods encourage these properties via loss
functions, no earlier methods enforce all of them by construct. Here, we
propose a novel registration architecture based on extracting multi-resolution
feature representations which is by construct symmetric, inverse consistent,
and topology preserving. We also develop an implicit layer for memory efficient
inversion of the deformation fields. Our method achieves state-of-the-art
registration accuracy on two datasets. | Joel Honkamaa, Pekka Marttinen | 2023-03-17T19:00:24Z | http://arxiv.org/abs/2303.10211v4 | SITReg: Multi-resolution architecture for symmetric, inverse consistent, and topology preserving image registration using deformation inversion layers
###### Abstract
Deep learning based deformable medical image registration methods have emerged as a strong alternative for classical iterative registration methods. Since image registration is in general an ill-defined problem, the usefulness of inductive biases of symmetricity, inverse consistency and topology preservation has been widely accepted by the research community. However, while many deep learning registration methods enforce these properties via loss functions, no prior deep learning registration method fulfills all of these properties by construct. Here, we propose a novel multi-resolution registration architecture which is by construct symmetric, inverse consistent, and topology preserving. We also develop an implicit layer for memory efficient inversion of the deformation fields. The proposed method achieves state-of-the-art registration accuracy on two datasets.
## 1 Introduction
We study deformable medical image registration using deep learning, where the goal is to find a mapping between coordinate systems of two images to match the corresponding anatomical parts. The predicted coordinate mappings are called deformations. Typically, deep learning is applied by training a registration network which outputs a candidate deformation for two input images. We focus on unsupervised intra-modality registration, where there is no ground truth deformation and the images are of the same modality, which is useful, e.g., when deforming brain MRI images from different patients to an atlas or analyzing a patient's breathing cycle using multiple images. Since medical image registration is challenging, various properties are often assumed to improve the registration quality: _inverse consistency_, _symmetry_, and _topology preservation_ are widely seen as useful by the research community (Sotiras et al., 2013). Another property is diffeomorphism, which however overlaps with topology preservation, and hence we consider them as one, as explained later. Although these properties do not guarantee successful registration, they serve as inductive biases to narrow down the search space. As terminology in the literature can be ambiguous, we begin by precisely defining each property, and we provide further clarifications of them in Appendix X.
We define a _registration method_ as a function \(f\) that takes two images, \(x_{1}\) and \(x_{2}\), and produces a deformation. Some methods can output the deformation in both directions, and we use subscripts to indicate the direction. For example, \(f_{1\to 2}\) produces a deformation that aligns the image of the first argument to the image of the second argument. As a result, a registration method may predict up to four different deformations for any given input pair: \(f_{1\to 2}(x_{1},x_{2})\), \(f_{2\to 1}(x_{1},x_{2})\), \(f_{1\to 2}(x_{2},x_{1})\), and \(f_{2\to 1}(x_{2},x_{1})\). Some methods predict deformations in one direction only, resulting in two possible outputs: \(f_{1\to 2}(x_{1},x_{2})\) and \(f_{1\to 2}(x_{2},x_{1})\), in which case we might omit the subscript.
_Inverse consistent_ registration methods ensure that \(f_{1\to 2}(x_{1},x_{2})\) is an accurate inverse of \(f_{2\to 1}(x_{1},x_{2})\), which we quantify using the _inverse consistency error_: \(||f_{1\to 2}(x_{1},x_{2})\circ f_{2\to 1}(x_{1},x_{2})-\mathcal{I}||^{2}\), where \(\circ\) is the composition operator and \(\mathcal{I}\) is the identity deformation. Originally inverse consistency was achieved via variational losses (Christensen et al., 1995) but later algorithms were _inverse consistent by construct_, e.g., classical methods DARTEL (Ashburner, 2007) and SyN (Avants et al., 2008). However, due to a limited spatial resolution of the predicted deformations, even for these methods the inverse consistency error is not exactly zero. Some deep learning methods enforce inverse consistency via a penalty (Zhang, 2018; Kim et al., 2019; Estienne et al., 2021). A popular stationary velocity field (SVF) formulation Arsigny et al. (2006) achieves inverse consistency by construct and has been used by many works, e.g. Dalca et al. (2018); Krebs et al. (2018, 2019); Niethammer et al. (2019); Shen et al. (2019, 2019); Mok and Chung (2020).
In _symmetric registration_, the registration outcome does not depend on the order of the inputs, i.e., \(f_{1\to 2}(x_{1},x_{2})\) equals \(f_{2\to 1}(x_{2},x_{1})\). Since anatomical correspondence trivially does not depend on the input order, enforcing the property is very natural. Unlike with inverse consistency, \(f_{1\to 2}(x_{1},x_{2})\) can equal \(f_{2\to 1}(x_{2},x_{1})\) exactly for some methods (Avants et al., 2008; Estienne et al., 2021), which we call _symmetric by construct_. A related property, cycle consistency, can be assessed using the _cycle consistency error_ \(||f(x_{1},x_{2})\circ f(x_{2},x_{1})-\mathcal{I}||^{2}\). It can be computed for any method since it does not require the method to predict deformations in both directions. If the method is symmetric by construct, inverse consistency error equals cycle consistency error. Some existing deep learning registration methods enforce cycle consistency (Mahapatra and Ge, 2019; Gu et al., 2020; Zheng et al., 2021) via a penalty. The method by Estienne et al. (2021) is symmetric by construct but only for a single component of their multi-step formulation, and it is also not inverse consistent by construct, making the symmetry less powerful. Very recently in parallel to us, Iglesias (2023) proposed a by construct symmetric and inverse consistent registration method within the SVF framework. That work was developed independently of ours and achieves the goal in a different way.
The third property, _topology preservation_ of predicted deformations, we define similarly to Christensen et al. (1995). From the real-world point of view it refers to the preservation of anatomical structures. Mathematically we want the deformations to be homeomorphisms, i.e., invertible and continuous. In registration literature it is common to talk about diffeomorphims which are additionally differentiable. In practice we want a deformation not to fold on top of itself which we measure by estimating the local Jacobian determinants of the predicted deformations and checking whether they are positive. Most commonly in deep learning applications topology preservation is achieved using the diffeomorphic SVF formulation (Arsigny et al., 2006). It does not completely prevent the deformation from folding but such voxels are usually limited to just a handful in the whole volume of millions of voxels, which is usually sufficient in practice. Topology preservation can also be encouraged using a specific loss, e.g. by penalizing negative determinants (Mok and Chung, 2020).
Our main contributions can be summed up as follows:
* We propose a multi-resolution deep learning registration architecture which is by construct inverse consistent and symmetric, and preserves topology. The properties are fulfilled for the whole multi-resolution pipeline, not just separately for each resolution. Apart from the parallel work (Iglesias, 2023), we are not aware of other deep learning registration methods which are by construct both symmetric and inverse consistent, and ours is the first such method with a multi-resolution deep learning architecture. For motivation of the multi-resolution approach, see Section 2.2.
* As a component in our architecture, we propose an _implicit_ neural network layer, which we call _deformation inversion layer_, based on a well-known fixed point iteration formula (Chen et al., 2008) and recent advances in Deep Equilibrium models (Bai et al., 2019; Duvenaud et al., 2020). The layer allows memory efficient inversion of deformation fields.
* We show that the method achieves state-of-the-art results on two popular benchmark data sets in terms of registration accuracy and deformation regularity. The accuracy of the inverses generated by our method is also very good and similar to that of the state-of-the-art SVF framework.
We name the method _SITReg_ after its symmetricity, inverse consistency and topology preservation properties.
## 2 Preliminaries
### Topology preserving registration
The LDDMM method of Cao et al. (2005) is a classical registration method that can generate diffeomorphic deformations which preserve topology, but it has not been used much in deep learning due to computational cost. Instead, a simpler stationary velocity field (SVF) method (Arsigny et al., 2006) has been popular (Krebs et al., 2018, 2019; Niethammer et al., 2019; Shen et al., 2019, 2019; Mok and Chung, 2020). In SVF the final deformation is obtained by integrating a stationary velocity field over itself over a unit time, which under mild continuity constraints for the velocity field results in a diffeomorphism. Another classical method by Choi and Lee (2000); Rueckert et al. (2006) uses a different approach to generate invertible deformations. The idea is to constrain a deformation to be diffeomorphic but small, and to form the final deformation as a composition of multiple such small deformations. Since diffeomorphisms form a group under composition, the final deformation is also diffeomorphic. Note that this is actually close to a practical implementation of the SVF, where the velocity field is usually integrated by first scaling the velocity field down by a power of two and interpreting the result as a small deformation, which is then repeatedly composed with itself for the final deformation. The idea is hence similar: a composition of small deformations.
In this work we build topology preserving deformations using the same strategy: as composition of small topology preserving deformations.
### Multi-resolution registration
Multi-resolution registration methods learn the deformation by first estimating it at a low resolution and then incrementally improving it while moving towards a higher resolution. For each resolution one feeds the inputs deformed with the deformation learned thus far, and incrementally composes the full deformation. The approach has been around already for a few decades (Rueckert et al., 1999; Oliveira and Tavares, 2014) and has been used by many methods, including the top-performing classical and deep learning registration methods (Avants et al., 2008; Klein et al., 2009; Mok and Chung, 2020, 2021; Hering et al., 2022).
In this work we propose the first multi-resolution deep learning registration architecture that is by construct symmetric, inverse consistent, and topology preserving.
### Symmetric registration formulations
Symmetric registration does not assign a moving or fixed identity to either image but instead considers them equally. The classical symmetric normalization (SyN) method of Avants et al. (2008) learns two separate transformations: one for deforming the first image half-way toward the second image and the other for deforming the second image half-way toward the first image. The images are then matched in the intermediate coordinates and the full deformation can be obtained as a composition of the half-way deformations (of which either one is inverted). The idea of matching the images in intermediate coordinates has later been used by other methods such as the deep learning method SYMNet (Mok and Chung, 2020a). However, SYMNet does not guarantee symmetry by construct for individual predictions.
Figure 1: **Example deformation produced by the method. Composition of the forward and the inverse deformations is shown on the right to demonstrate the accuracy of the inverse. Only one 2D slice is shown of the 3D deformation. The visualized deformation is from the LPBA40 experiment.**
In our architecture we use the intuition of deforming the images half-way towards each other to achieve inverse consistency and symmetry throughout our multi-resolution architecture.
### Deep equilibrium networks
Deep equilibrium networks use _implicit_ fixed point iteration layers, which have emerged as an alternative to the common _explicit_ layers in deep learning (Bai et al., 2019, 2020; Duvenaud et al., 2020). Unlike explicit layers, which produce output via an exact sequence of operations, the output of an implicit layer is defined indirectly as a solution to a fixed point equation, which in turn is defined using a fixed point mapping. In the simplest case the fixed point mapping takes two arguments, one of which is the input. For example, let \(g:B\times A\to B\) be a fixed point mapping defining an implicit layer. Then, for some input \(a\in A\), the output of the layer is a solution \(z\in B\) of the equation
\[z=g(z,a). \tag{1}\]
Such an equation is called a fixed point equation and the solution is called a fixed point solution. If \(g\) has suitable properties, the equation can be solved iteratively by starting with an initial guess and repeatedly feeding the output as the next input to \(g\). More advanced iteration methods have also been developed for solving fixed point equations, such as Anderson acceleration (Walker and Ni, 2011).
The main mathematical innovation related to deep equilibrium networks is that the derivative of such a layer with respect to its inputs can be calculated based solely on a fixed point solution, i.e., no intermediate iteration values need to be stored for back-propagation. Now, given some solution \((a_{0},z_{0})\) such that \(z_{0}=g(z_{0},a_{0})\), and assuming certain local invertibility properties for \(g\), the implicit function theorem ensures the existence of a solution mapping in the neighborhood of the solution \((a_{0},z_{0})\), which for another input outputs another solution to the fixed point equation. Let us denote the solution mapping as \(z^{*}\). The solution mapping can be seen as the theoretical explicit layer corresponding to the implicit layer. To find the derivatives of the implicit layer we need to find the Jacobian of \(z^{*}\) at the point \(a_{0}\), which can be obtained using implicit differentiation as
\[\partial z^{*}(a_{0})=[I-\partial_{0}g(z_{0},a_{0})]^{-1}\,\partial_{1}g(z_{0},a_{0}).\]
The vector-Jacobian product of \(z^{*}\) needed for neural network back-propagation can be calculated from this using another fixed point equation, without fully computing the Jacobians; see, e.g., Duvenaud et al. (2020). As a result, both the forward and backward passes of the fixed point iteration layer can be computed as fixed point iterations.
We use these ideas to develop a neural network layer for inverting deformations based on the fixed point equation proposed for that by Chen et al. (2008). The resulting layer is very memory efficient as only the fixed point solution needs to be stored for the backward pass.
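To make the forward and backward structure of such a layer concrete, the following is a minimal PyTorch sketch of a generic implicit fixed point layer (illustrative only, not our actual implementation): the forward pass solves \(z=g(z,a)\) by plain Picard iteration (Anderson acceleration would replace the loop), and the backward pass solves the adjoint fixed point equation \(w=v+\partial_{0}g(z_{0},a_{0})^{\top}w\), so that only the fixed point itself needs to be stored.

```python
import torch

class FixedPointLayer(torch.autograd.Function):
    """Implicit layer returning z* with z* = g(z*, a); memory-light backward."""

    @staticmethod
    def forward(ctx, a, g, iters=50):
        with torch.no_grad():
            z = torch.zeros_like(a)
            for _ in range(iters):          # plain Picard iteration; Anderson
                z = g(z, a)                 # acceleration would go here instead
        ctx.save_for_backward(z, a)
        ctx.g, ctx.iters = g, iters
        return z

    @staticmethod
    def backward(ctx, v):
        z, a = ctx.saved_tensors
        z = z.detach().requires_grad_(True)
        a = a.detach().requires_grad_(True)
        with torch.enable_grad():
            gz = ctx.g(z, a)
            w = v
            for _ in range(ctx.iters):      # solve w = v + (dg/dz)^T w
                w = v + torch.autograd.grad(gz, z, w, retain_graph=True)[0]
            grad_a = torch.autograd.grad(gz, a, w)[0]   # (dg/da)^T w
        return grad_a, None, None


# Toy usage with a contractive mapping, so both iterations converge.
g = lambda z, a: 0.5 * torch.tanh(z) + a
a = torch.randn(4, requires_grad=True)
z_star = FixedPointLayer.apply(a, g)
z_star.sum().backward()                     # gradients reach `a` through the layer
```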
## 3 Methods
In deformable image registration the goal is to find a mapping from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{n}\), connecting the coordinate systems of two non-aligned images \(x_{1},x_{2}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{k}\), often called a deformation. Here \(n\) is the dimensionality of the image, e.g. \(n=3\) for three dimensional medical images, and \(k\) is the number of channels, e.g. \(k=3\) for an RGB-image. Mathematically a deformation applied to an image can be represented as a composition of the image with the deformation, denoted by \(\circ\), and in practice we use linear interpolation to represent images in continuous coordinates. Similarly to many works in image registration, we want to learn a _neural network_\(f\) that can take two images as input and output the deformation connecting the image coordinates. That is, we would like in some sense that \(x_{1}\circ f(x_{1},x_{2})\approx x_{2}\), which in the medical context refers to anatomical correspondence.
### Symmetric formulation
As discussed in the introduction Section 1, we want our method to be symmetric. To achieve this, we propose to define the network \(f\) using an auxiliary network \(u\), which also predicts deformations, as
\[f(x_{1},x_{2}):=u(x_{1},x_{2})\circ u(x_{2},x_{1})^{-1} \tag{2}\]
As a result, it holds that \(f(x_{1},x_{2})=f(x_{2},x_{1})^{-1}\) apart from errors introduced by the composition and inversion, meaning that the cycle-consistency error should be very small. An additional benefit
is that \(f(x_{1},x_{1})\) should equal the identity transformation, again apart from numerical inaccuracies, which is a very natural property for a registration method. Applying the formulation in Equation 2 naively would double the computational cost. Hence we propose to encode features from both the inputs separately before feeding them to the deformation extraction network following Equation 2. Extracting features separately has been used in recent registration methods (Estienne et al., 2021; Young et al., 2022). Denoting the feature extraction network by \(h\), the modified formulation is
\[f(x_{1},x_{2}):=u(h(x_{1}),h(x_{2}))\circ u(h(x_{2}),h(x_{1}))^{-1}. \tag{3}\]
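A schematic PyTorch sketch of this formulation is given below. Deformations are represented as dense displacement fields in normalized \([-1,1]\) coordinates with channels in grid_sample's \((x,y,z)\) order; warp and compose are standard resampling helpers, and invert stands for the deformation inversion layer described in Section 3.3. All names are illustrative rather than the actual implementation.

```python
import torch
import torch.nn.functional as F

def identity_grid(spatial_shape, device=None):
    # Normalized identity coordinates, shape (1, D, H, W, 3), last dim in (x, y, z) order.
    axes = [torch.linspace(-1.0, 1.0, s, device=device) for s in spatial_shape]
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij")[::-1], dim=-1)
    return grid.unsqueeze(0)

def warp(volume, disp):
    # volume: (B, C, D, H, W); disp: (B, 3, D, H, W) displacement field.
    grid = identity_grid(disp.shape[2:], disp.device) + disp.permute(0, 2, 3, 4, 1)
    return F.grid_sample(volume, grid, align_corners=True, padding_mode="border")

def compose(d1, d2):
    # Displacement field of the composition phi_1 o phi_2, i.e. x -> phi_1(phi_2(x)).
    return d2 + warp(d1, d2)

def symmetric_deformation(x1, x2, h, u, invert):
    # Eq. (3): f(x_1, x_2) = u(h(x_1), h(x_2)) o u(h(x_2), h(x_1))^{-1}.
    f1, f2 = h(x1), h(x2)
    d_12 = u(f1, f2)                  # auxiliary deformation, one input order
    d_21 = u(f2, f1)                  # the same network with the inputs swapped
    return compose(d_12, invert(d_21))
```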
### Multi-resolution architecture
As the overarching architecture, we propose a novel symmetric and inverse consistent multi-resolution coarse-to-fine approach. For motivation, see Section 2.2. An overview of the whole architecture is visualized in Figure 2. First, we extract image feature representations \(h^{(k)}(x_{1}),h^{(k)}(x_{2})\) at different resolutions \(k\in\{0,\dots,K-1\}\). Index \(k=0\) is the original resolution and increasing \(k\) by one halves the spatial resolution. In practice \(h\) is a ResNet (He et al., 2016) style convolutional network. Starting from the lowest resolution \(k=K-1\), we recursively build the final deformation between the inputs using the extracted feature representations. To ensure symmetry, we build two deformations: one for deforming the first image half-way towards the second image, and the other for deforming the second image half-way towards the first image (see Section 2.3). The full deformation is then composed of these at the final stage. Let us denote the half-way deformations extracted at resolution \(k\) as \(f^{(k)}_{1\to 1.5}(x_{1},x_{2})\) and \(f^{(k)}_{2\to 1.5}(x_{1},x_{2})\). Initially, at level \(k=K\), these are identity deformations. Then, at each \(k=K-1,\dots,0\), the half-way deformations are updated by composing them with a predicted update deformation. In detail, the update at level \(k\) consists of three steps (visualized in Figure 3; a schematic code sketch is given after Equation 8 below):
1. Deform the feature representations \(h^{(k)}(x_{1}),h^{(k)}(x_{2})\) of level \(k\) towards each other by the half-way deformations from the previous level \(k+1\): \[\left\{\begin{aligned} z^{(k)}_{1}&:=h^{(k)}(x_{1}) \circ f^{(k+1)}_{1\to 1.5}(x_{1},x_{2})\\ z^{(k)}_{2}&:=h^{(k)}(x_{2})\circ f^{(k+1)}_{2\to 1.5}(x_{1},x_{2}) \end{aligned}\right.\] (4)
2. Define an update deformation \(U^{(k)}\), using the idea from Equation 3 and the half-way deformed feature representations \(z^{(k)}_{1}\) and \(z^{(k)}_{2}\): \[U^{(k)}:=u^{(k)}(z^{(k)}_{1},z^{(k)}_{2})\circ u^{(k)}(z^{(k)}_{2},z^{(k)}_{1} )^{-1}.\] (5)
Figure 2: **Overview of the proposed architecture. Multi-resolution features are first extracted from the inputs \(x_{1}\) and \(x_{2}\) using convolutional encoder \(h\). Output deformations \(f_{1\to 2}(x_{1},x_{2})\) and \(f_{2\to 1}(x_{1},x_{2})\) are built recursively from the multi-resolution features using the symmetric deformation updates described in Section 3.2 and visualized in Figure 3.2. The architecture is symmetric and inverse consistent with respect to the inputs and the final deformation is obtained in both directions. The brain images are from the OASIS dataset (Marcus et al., 2007)**
Here, \(u^{(k)}\) is a trainable convolutional neural network (details in Appendix I) predicting an auxiliary deformation. The intuition here is that the symmetrically predicted update deformation \(U^{(k)}\) should learn to adjust for whatever differences in the image features remain after deforming them half-way towards each other in Step 1 with deformations \(f^{(k+1)}\) from the previous resolution.
3. Obtain the updated half-way deformation \(f^{(k)}_{1\to 1.5}(x_{1},x_{2})\) by composing the earlier half-way deformation of level \(k+1\) with the update deformation \(U^{(k)}\) \[f^{(k)}_{1\to 1.5}(x_{1},x_{2})=f^{(k+1)}_{1\to 1.5}(x_{1},x_{2})\ \circ\ U^{(k)}.\] (6) For the other direction \(f^{(k)}_{2\to 1.5}(x_{1},x_{2})\), we use the inverse of the deformation update \(\left(U^{(k)}\right)^{-1}\) which can be obtained simply by reversing \(z^{(k)}_{1}\) and \(z^{(k)}_{2}\) in Equation 5: \[f^{(k)}_{2\to 1.5}(x_{1},x_{2})=f^{(k+1)}_{2\to 1.5}(x_{1},x_{2})\ \circ\ \left(U^{(k)}\right)^{-1}.\] (7) The inverses \(f^{(k)}_{1\to 1.5}(x_{1},x_{2})^{-1}\) and \(f^{(k)}_{2\to 1.5}(x_{1},x_{2})^{-1}\) are updated similarly.
The full deformations are obtained at stage \(k=0\) as:
\[\left\{\begin{array}{ll}f_{1\to 2}(x_{1},x_{2})&=f^{(0)}_{1\to 1.5}(x_{1},x_{2}) \circ f^{(0)}_{2\to 1.5}(x_{1},x_{2})^{-1}\\ f_{2\to 1}(x_{1},x_{2})&=f^{(0)}_{2\to 1.5}(x_{1},x_{2})\circ f^{(0)}_{1 \to 1.5}(x_{1},x_{2})^{-1}\end{array}\right. \tag{8}\]
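Structurally, the recursion in steps 1-3 and Equation 8 can be sketched as follows. The sketch assumes the warp and compose helpers from the Section 3.1 sketch and an invert function implementing the deformation inversion layer of Section 3.3; resampling of the half-way deformations to the feature resolution of each level is omitted for clarity, and all names are illustrative.

```python
def sitreg_deformations(features1, features2, u_nets, invert, zero_disp):
    """features1/2: feature maps per level, index 0 (finest) ... K-1 (coarsest);
    u_nets[k]: update network at level k; zero_disp: zero displacement field."""
    d1 = d2 = d1_inv = d2_inv = zero_disp            # identity half-way deformations
    for k in reversed(range(len(u_nets))):           # coarse-to-fine recursion
        z1 = warp(features1[k], d1)                  # step 1: deform features half-way
        z2 = warp(features2[k], d2)                  #   (per-level resampling omitted)
        u_fwd, u_bwd = u_nets[k](z1, z2), u_nets[k](z2, z1)
        U = compose(u_fwd, invert(u_bwd))            # step 2: symmetric update, Eq. (5)
        U_inv = compose(u_bwd, invert(u_fwd))        #   its inverse, inputs swapped
        d1, d1_inv = compose(d1, U), compose(U_inv, d1_inv)      # step 3: Eq. (6)
        d2, d2_inv = compose(d2, U_inv), compose(U, d2_inv)      #   and Eq. (7)
    return compose(d1, d2_inv), compose(d2, d1_inv)  # f_{1->2}, f_{2->1}, Eq. (8)
```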
The motivation for using half-way deformations is that if we instead learned full deformations at each stage, we would have to choose which image's coordinates to deform the feature representations of the next stage to, which would break the symmetry of the overall architecture. Instead, we deform the feature representations of both inputs by the symmetrically predicted half-way deformations, which ensures that the updated deformations from each stage are separately invariant to the input order.
**Proposition 3.1**.: _The proposed multi-resolution architecture is by construct symmetric and inverse consistent._
Proof.: See Appendix IV, which also includes a discussion of the numerical errors arising from compositions and inversions.
Figure 3: **Recursive multi-resolution deformation update.** Figure visualizes the deformation update at resolution \(k\), described in Section 3.2. The update takes as input the half-way deformations \(f^{(k+1)}_{1\to 1.5}(x_{1},x_{2})\) and \(f^{(k+1)}_{2\to 1.5}(x_{1},x_{2})\) from the previous resolution, and updates them through a composition with an update deformation \(U^{(k)}\). \(U^{(k)}\) is calculated symmetrically from image features \(z^{(k)}_{1}\) and \(z^{(k)}_{2}\) at resolution \(k\) (deformed mid-way towards each other with the previous half-way deformations), using a neural network \(u^{(k)}\) according to Equation 5. The deformation inversion layer is used to invert the auxiliary deformations predicted by \(u^{(k)}\) and it is described in Section 3.3.
### Implicit deformation inversion layer
Implementing the architecture requires inverting the deformations predicted by \(u^{(k)}\) in Equation 5. This could be done, e.g., with the SVF framework, but we propose an approach which requires storing \(5\) times less data for the backward pass compared to the standard SVF, resulting in significant memory savings due to the large number of inversions required. For details on memory usage see Appendix VI.
As shown by Chen et al. (2008), deformations can be inverted in certain cases by a fixed point iteration. Consequently, we propose to use the deep equilibrium network framework from Section 2.4 for inverting deformations, and label the resulting layer _deformation inversion layer_. The fixed point equation proposed by Chen et al. (2008) is
\[g(z,a):=-(a-\mathcal{I})\circ z+\mathcal{I},\]
where \(a\) is the deformation to be inverted, \(z\) is the candidate for the inverse of \(a\), and \(\mathcal{I}\) is the identity deformation. It is easy to see that feeding \(a^{-1}\) for \(z\) yields \(a^{-1}\) as the output. We use Anderson acceleration (Walker and Ni, 2011) for solving the fixed point equation and use the memory-efficient back-propagation (Bai et al., 2019; Duvenaud et al., 2020) strategy discussed in Section 2.4.
A Lipschitz condition is sufficient but not necessary for the fixed point algorithm to converge (Chen et al., 2008). As the Lipschitz condition is not enforced by our method, we do not derive a theoretical proof of convergence, but empirically demonstrate that the iteration always converges, usually in \(3\) to \(6\) iterations. At most \(7\) iterations were needed for a maximum inversion error of less than \(10^{-2}\) voxels over the whole volume (Appendix V). The good convergence follows from limiting the individual deformations predicted by \(u^{(k)}\) to be small via a hard constraint, which also ensures topology preservation; see details in Appendix I. Also, as demonstrated by the main experiments, our method produces topology preserving deformations almost everywhere.
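The forward pass of the layer then amounts to the following fixed point iteration on displacement fields (a self-contained sketch; the actual layer uses Anderson acceleration and the memory-efficient implicit backward pass of Section 2.4 instead of this plain loop):

```python
import torch
import torch.nn.functional as F

def warp_disp(d, sample_disp):
    # Sample the displacement field d (B, 3, D, H, W) at the points x + sample_disp(x),
    # with both fields given in normalized [-1, 1] coordinates.
    axes = [torch.linspace(-1.0, 1.0, s, device=d.device) for s in d.shape[2:]]
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij")[::-1], dim=-1).unsqueeze(0)
    grid = grid + sample_disp.permute(0, 2, 3, 4, 1)
    return F.grid_sample(d, grid, align_corners=True, padding_mode="border")

def invert_disp(d, iters=10):
    """Fixed point iteration of Chen et al. (2008) for phi = id + d: the inverse
    displacement satisfies d_inv(x) = -d(x + d_inv(x)), so iterate
    d_inv <- -warp_disp(d, d_inv) starting from the zero field."""
    d_inv = torch.zeros_like(d)
    for _ in range(iters):
        d_inv = -warp_disp(d, d_inv)
    return d_inv
```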
### Training and implementation
We train the model in an unsupervised end-to-end manner similarly to most other unsupervised registration methods, by using similarity and deformation regularization losses. The similarity loss encourages deformed images to be similar to the target images, and the regularity loss encourages desirable properties, such as smoothness, on the predicted deformations. For similarity we use local normalized cross-correlation with window width 7 and for regularization we use \(L^{2}\) gradient penalty on the displacement fields, identically to VoxelMorph (Balakrishnan et al., 2019). We apply the losses in both directions to maintain symmetry. One could apply the losses in the intermediate coordinates and avoid building the full deformations during training. However, we do not do this since we expect that applying the losses in the original image coordinates yields the best results. The final loss is:
\[\mathcal{L}=\mathrm{NCC}(x_{1}\circ\phi,\;x_{2})+\mathrm{NCC}(x_{2}\circ\phi^ {-1},\;x_{1})+\lambda*\left[\mathrm{Grad}(d(\phi))+\mathrm{Grad}(d(\phi^{-1}) )\right], \tag{9}\]
where \(\phi\) is a deformation, \(d(\cdot)\) the displacement field, \(\mathrm{NCC}\) the local normalized cross-correlation loss, and \(\mathrm{Grad}\) the gradient loss. For details on hyperparameter selection, see Appendix II. Our implementation is in PyTorch (Paszke et al., 2019), and can be found at [https://github.com/honkami/SITReg](https://github.com/honkami/SITReg). The evaluation methods and the preprocessing done by us (see Section 4) are also included.
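A minimal sketch of the two loss terms and their symmetric combination is given below (window-7 local normalized cross-correlation and the \(L^{2}\) gradient penalty on displacement fields); border handling and window normalization are simplified compared to the actual implementation, and the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def local_ncc(a, b, win=7, eps=1e-5):
    # a, b: (B, 1, D, H, W); negated squared local normalized cross-correlation.
    pool = lambda t: F.avg_pool3d(t, win, stride=1, padding=win // 2)
    mu_a, mu_b = pool(a), pool(b)
    var_a = pool(a * a) - mu_a * mu_a
    var_b = pool(b * b) - mu_b * mu_b
    cov = pool(a * b) - mu_a * mu_b
    return -((cov * cov) / (var_a * var_b + eps)).mean()

def grad_penalty(disp):
    # L2 penalty on finite differences of the displacement field (B, 3, D, H, W).
    dz = disp[:, :, 1:] - disp[:, :, :-1]
    dy = disp[:, :, :, 1:] - disp[:, :, :, :-1]
    dx = disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]
    return (dz ** 2).mean() + (dy ** 2).mean() + (dx ** 2).mean()

def total_loss(x1_warped, x2_warped, x1, x2, disp_fwd, disp_bwd, lam):
    # Eq. (9): similarity and regularization applied symmetrically in both directions.
    sim = local_ncc(x1_warped, x2) + local_ncc(x2_warped, x1)
    reg = grad_penalty(disp_fwd) + grad_penalty(disp_bwd)
    return sim + lam * reg
```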
## 4 Experimental setup
**Datasets:** We use two subject-to-subject registration datasets: _OASIS_ brains dataset with 414 T1-weighted brain MRI images (Marcus et al., 2007) as pre-processed for Learn2Reg challenge (Hoopes et al., 2021; Hering et al., 2022) (data use agreement website 1), and _LPBA40_ dataset from University of California Laboratory of Neuro Imaging (USC LONI) with 40 brain MRI images (Shattuck et al., 2008) (LONI Research License, Version 3.02). Pre-processing for both datasets includes bias field correction, normalization, and cropping. For OASIS dataset we use affinely pre-aligned images and for LPBA40 dataset we use rigidly pre-aligned images. Additionally we train our model without any pre-alignment on OASIS data (_OASIS raw_) to test our method with larger initial displacements. Voxel sizes of the affinely aligned and raw datasets are the same but volume sizes differ. Details of the split into training, validation, and test sets, and cropping and resolution can be found in Appendix VIII.
**Evaluation metrics:** We evaluate the _registration accuracy_ using segmentations included in the datasets: automatic segmentations of \(35\) brain structures for OASIS and manual segmentations of \(56\) brain structures for LPBA40. We use two metrics: Dice score (Dice) and the 95% quantile of the Hausdorff distances (HD95) between the segmentations of each structure, similarly to the Learn2Reg challenge (Hering et al., 2022). Dice score measures overlap of the segmentations and Hausdorff distance measures the distance between the surfaces of the segmentations. We compare the segmentations of the source images deformed by the method and the segmentations of the target images. However, evaluating registration methods solely based on overlap of anatomic regions has its limitations (Pluim et al., 2016; Rohlfing, 2011), and hence also _deformation regularity_ should be measured. As is common, we evaluate the regularity of the generated deformations by metrics based on the local Jacobian determinant. We measure topology preservation by counting the number of voxels with a negative determinant (\(|J_{\phi}|_{\leq 0}\)), and smoothness by computing the standard deviation of the determinant (\(\mathrm{std}(|J_{\phi}|)\)). A negative determinant means the deformation is not topology preserving at that location. Additionally, we measure inverse and cycle _consistency_ errors, introduced in Section 1.
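For concreteness, the deformation-regularity metrics can be computed from a dense displacement field with finite differences roughly as follows (a sketch; boundary handling and voxel-spacing conventions may differ from the exact evaluation code):

```python
import numpy as np

def jacobian_determinants(disp):
    """disp: (3, D, H, W) displacement field in voxel units.
    Returns the local Jacobian determinant of phi = id + disp at every voxel."""
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]  # d disp_i / d x_j
    jac = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)   # add the identity
    return np.linalg.det(jac)

def regularity_metrics(disp):
    dets = jacobian_determinants(disp)
    return {"folding_voxels": int((dets <= 0).sum()), "det_std": float(dets.std())}
```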
**Baselines:** We compare against _VoxelMorph_(Balakrishnan et al., 2019), _SYMNet_(Mok and Chung, 2020a), and conditional LapIRN (_cLapIRN_) (Mok and Chung, 2020b, 2021). VoxelMorph is a standard baseline in deep learning based unsupervised registration. With SYMNet we are interested in how well our method preserves topology and how accurate the generated inverse deformations are compared to SVF based methods. Additionally, since SYMNet is symmetric from the loss point of view, it is interesting to see how symmetric the predictions it produces are in practice. cLapIRN was the best method on the OASIS dataset in the Learn2Reg 2021 challenge (Hering et al., 2022). We used the official implementations\({}^{3,4,5}\) adjusted to our datasets. SYMNet uses an anti-folding loss to penalize negative determinants. Since this loss is a separate component that could be easily used with any method, we also train SYMNet without it (_SYMNet, no anti-fold_). This provides a comparison of how well the vanilla SVF framework can generate invertible deformations in comparison to our method. For details on hyperparameter selection for baseline models, see Appendix III.
Footnote 3: [https://github.com/voxelmorph/voxelmorph](https://github.com/voxelmorph/voxelmorph)
Footnote 4: [https://github.com/cwmok/Fast-Symmetric-Diffeomorphic-Image-Registration-with-Convolutional-Neural-Networks](https://github.com/cwmok/Fast-Symmetric-Diffeomorphic-Image-Registration-with-Convolutional-Neural-Networks)
Footnote 5: [https://github.com/cwmok/Conditional_LapIRN/](https://github.com/cwmok/Conditional_LapIRN/)
## 5 Results
The results for the OASIS dataset are in Table 1 and for LPBA40 in Table 2. Tissue overlap results on individual anatomical regions can be found in Appendix VII. The proposed method performs very well on both datasets in terms of registration accuracy: Dice score and HD95 are equal to or better than those of any baseline. In addition, the generated deformations have very little folding, similar to the diffeomorphic SVF-based SYMNet, which justifies calling our algorithm topology preserving. The number of folding voxels for both our method and SYMNet is extremely small, only a few in the whole volume of millions of voxels, whereas cLapIRN and VoxelMorph have significantly more folding voxels. In terms of cycle consistency, our method outperforms all the baseline models by
Table 1: Results for the OASIS experiment. Mean and standard deviation of each metric are computed on the test set for SYMNet (original and no anti-fold), VoxelMorph, cLapIRN, and SITReg; the reported metrics are accuracy (Dice \(\uparrow\), HD95 \(\downarrow\)), deformation regularity (\(|J_{\phi}|_{\leq 0}\) \(\downarrow\), \(\mathrm{std}(|J_{\phi}|)\) \(\downarrow\)), and consistency (Cycle \(\downarrow\), Inverse \(\downarrow\)).
a significant margin, and in terms of inverse consistency our method is similar to the SVF based SYMNet, which performs very well, showcasing that our method is indeed inverse consistent by construct. Interestingly, the model trained with non-affinely-aligned OASIS data performs as well as the model trained with affinely aligned data, which demonstrates that the method is capable of accurately registering images even with large initial misalignments.
The assessment of registration performance should not be based on a single metric, e.g., segmentation based metrics, but instead on the overall performance with respect to different metrics, similarly to, e.g., the Learn2reg challenge (Hering et al., 2022). In the overall comparison, our method has better registration performance than the baselines since no baseline clearly outperforms our method on any metric, but our method outperforms each baseline clearly on at least two metrics: While cLapIRN performs similarly to our method in terms of tissue overlap metrics, our method achieves that with far fewer folding voxels and better cycle consistency, and while SYMNet performs similarly or slightly better in terms of deformation regularity and inverse consistency, our method performs significantly better on tissue overlap and cycle consistency metrics.
A comparison of the methods' efficiencies is in Table 3. Inference time of our method is slightly larger than that of the compared methods, but unlike VoxelMorph and cLapIRN, it produces deformations in both directions immediately. Also, half a second runtime is still very fast and restrictive only in the most time-critical use cases. In terms of memory usage our method is very competitive.
## 6 Conclusions
We proposed a novel image registration architecture inbuilt with the desirable inductive biases of symmetry, inverse consistency, and topology preservation. The multi-resolution formulation was capable of accurately registering images even with large initial misalignments. As part of our method, we developed a new neural network component, the _deformation inversion layer_. The model is easily end-to-end trainable and does not require tedious multi-stage training strategies. In the experiments the method demonstrated state-of-the-art registration performance. The main limitations are the somewhat heavier computational cost compared with other methods, and the lack of theoretical convergence guarantees, especially for larger deformations, although in practice good performance was observed.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Accuracy} & \multicolumn{2}{c}{Deformation regularity} & \multicolumn{2}{c}{Consistency} \\ \cline{2-7} Model & Dice \(\uparrow\) & HD95 \(\downarrow\) & \(|J_{\phi}|_{\leq 0}\downarrow\) & std(\(|J_{\phi}|\)) \(\downarrow\) & Cycle \(\downarrow\) & Inverse \(\downarrow\) \\ \hline SYMNet (original) & \(0.669(0.033)\) & \(6.79(0.70)\) & \(\mathbf{0.69(2.2)}\) & \(0.34(0.049)\) & \(2.7e-1(6.1e-2)\) & \(\mathbf{2.1e-3(4.3e-4)}\) \\ SYMNet (no anti-fold) & \(0.664(0.034)\) & \(6.88(0.73)\) & \(14.(1.0)\) & \(0.36(0.052)\) & \(2.8e-1(5.8e-2)\) & \(2.9e-3(6.7e-4)\) \\ VoxelMorph & \(0.676(0.032)\) & \(6.72(0.68)\) & \(7.26(3.84e)\) & \(0.34(0.035)\) & \(3.1e-1(1.1e-1)\) & - \\ cLapIRN & \(0.714(0.019)\) & \(5.93(0.43)\) & \(3.063(1.1e3)\) & \(\mathbf{0.26(0.019)}\) & \(5.6e-1(1.8e-1)\) & - \\ \hline SITReg & \(\mathbf{0.716(0.017)}\) & \(\mathbf{5.91(0.41)}\) & \(1.8(2.1)\) & \(0.29(0.029)\) & \(\mathbf{2.7e-3(4.1e-4)^{*}}\) & \(2.7e-3(4.1e-4)\) \\ \hline \hline \end{tabular}
* Statistically significant (\(p<0.05\)) improvement compared to the baselines, for details see Appendix IX.
\end{table}
Table 2: **Results for the LPBA40 experiment. Mean and standard deviation of each metric are computed on the test set. VoxelMorph and cLapIRN do not predict inverse deformations and hence the inverse-consistency error is not shown.**
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & Inference Time (s) \(\downarrow\) & Inference Memory (GB) \(\downarrow\) & \# parameters (M) \(\downarrow\) \\ \hline SYMNet (original) & \(\mathbf{0.10}\) (\(0.0010\)) & \(\mathbf{1.6}\) & \(\mathbf{0.9}\) \\ SYMNet (simple) & \(\mathbf{0.10}\) (\(0.0011\)) & \(\mathbf{1.6}\) & \(\mathbf{0.9}\) \\ VoxelMorph & \(0.17\) (\(0.00078\)) & \(5.4\) & \(1.3\) \\ cLapIRN & \(0.11\) (\(0.00085\)) & \(4.1\) & \(1.2\) \\ \hline SITReg & \(0.47\) (\(0.032\)) & \(2.9\) & \(1.2\) \\ SITReg (raw data) & \(2.0\) (\(0.022\)) & \(11.1\) & \(2.5\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Computational efficiency on the OASIS dataset. Mean and standard deviation values are shown. Inference time and inference memory usage were measured on an NVIDIA GeForce RTX 3090. The images in the raw dataset without pre-alignment have \(3.8\) times more voxels, resulting in significantly larger inference time and memory usage for the SITReg model trained with it.**
## Acknowledgments and Disclosure of Funding
This work was supported by the Academy of Finland (Flagship programme: Finnish Center for Artificial Intelligence FCAI, and grants 336033, 352986) and EU (H2020 grant 101016775 and NextGenerationEU).
Data were provided in part by OASIS-1: Cross-Sectional: Principal Investigators: D. Marcus, R. Buckner, J. Csernansky, J. Morris; P50 AG05681, P01 AG03991, P01 AG026276, R01 AG021910, P20 MH071616, U24 RR021382.
We also acknowledge the computational resources provided by the Aalto Science-IT Project.
## References
* Arsigny et al. (2006) V. Arsigny, O. Commowick, X. Pennec, and N. Ayache. A log-euclidean framework for statistics on diffeomorphisms. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_, pages 924-931. Springer, 2006.
* Ashburner (2007) J. Ashburner. A fast diffeomorphic image registration algorithm. _Neuroimage_, 38(1):95-113, 2007.
* Avants et al. (2008) B. B. Avants, C. L. Epstein, M. Grossman, and J. C. Gee. Symmetric diffeomorphic image registration with cross-correlation: evaluating automated labeling of elderly and neurodegenerative brain. _Medical image analysis_, 12(1):26-41, 2008.
* Bai et al. (2019) S. Bai, J. Z. Kolter, and V. Koltun. Deep equilibrium models. _Advances in Neural Information Processing Systems_, 32, 2019.
* Bai et al. (2020) S. Bai, V. Koltun, and J. Z. Kolter. Multiscale deep equilibrium models. _Advances in Neural Information Processing Systems_, 33:5238-5250, 2020.
* Balakrishnan et al. (2019) G. Balakrishnan, A. Zhao, M. R. Sabuncu, J. Guttag, and A. V. Dalca. VoxelMorph: A learning framework for deformable medical image registration. _IEEE transactions on medical imaging_, 38(8):1788-1800, 2019.
* Cao et al. (2005) Y. Cao, M. I. Miller, R. L. Winslow, and L. Younes. Large deformation diffeomorphic metric mapping of vector fields. _IEEE transactions on medical imaging_, 24(9):1216-1230, 2005.
* Chen et al. (2008) M. Chen, W. Lu, Q. Chen, K. J. Ruchala, and G. H. Olivera. A simple fixed-point approach to invert a deformation field. _Medical physics_, 35(1):81-88, 2008.
* Choi and Lee (2000) Y. Choi and S. Lee. Injectivity conditions of 2D and 3D uniform cubic B-spline functions. _Graphical models_, 62(6):411-427, 2000.
* Christensen et al. (1995) G. E. Christensen, R. D. Rabbitt, M. I. Miller, S. C. Joshi, U. Grenander, T. A. Coogan, and D. C. Van Essen. Topological properties of smooth anatomic maps. In _Information processing in medical imaging_, volume 3, page 101. Springer Science & Business Media, 1995.
* Dalca et al. (2018) A. V. Dalca, G. Balakrishnan, J. Guttag, and M. R. Sabuncu. Unsupervised learning for fast probabilistic diffeomorphic registration. In _Medical Image Computing and Computer Assisted Intervention-MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part I_, pages 729-738. Springer, 2018.
* Duvenaud et al. (2020) D. Duvenaud, J. Z. Kolter, and M. Johnson. Deep implicit layers tutorial: neural ODEs, deep equilibrium models, and beyond. _Neural Information Processing Systems Tutorial_, 2020.
* Estienne et al. (2021) T. Estienne, M. Vakalopoulou, E. Battistella, T. Henry, M. Lerousseau, A. Leroy, N. Paragios, and E. Deutsch. MICS: Multi-steps, inverse consistency and symmetric deep learning registration network. _arXiv preprint arXiv:2111.12123_, 2021.
* Gu et al. (2020) D. Gu, X. Cao, S. Ma, L. Chen, G. Liu, D. Shen, and Z. Xue. Pair-wise and group-wise deformation consistency in deep registration network. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_, pages 171-180. Springer, 2020.
* He et al. (2016) K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 770-778, 2016.
* Hering et al. (2022) A. Hering, L. Hansen, T. C. Mok, A. C. Chung, H. Siebert, S. Hager, A. Lange, S. Kuckertz, S. Heldmann, W. Shao, et al. Learn2Reg: Comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning. _IEEE Transactions on Medical Imaging_, 2022.
* Hoopes et al. (2021) A. Hoopes, M. Hoffmann, B. Fischl, J. Guttag, and A. V. Dalca. Hypermorph: Amortized hyperparameter learning for image registration. In _International Conference on Information Processing in Medical Imaging_, pages 3-17. Springer, 2021.
* Iglesias (2023) J. E. Iglesias. A ready-to-use machine learning tool for symmetric multi-modality registration of brain mri. _Scientific Reports_, 13(1):6657, 2023.
* Kim et al. (2019)
B. Kim, J. Kim, J.-G. Lee, D. H. Kim, S. H. Park, and J. C. Ye. Unsupervised deformable image registration using cycle-consistent CNN. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_, pages 166-174. Springer, 2019.
* Klein et al. [2009] A. Klein, J. Andersson, B. A. Ardekani, J. Ashburner, B. Avants, M.-C. Chiang, G. E. Christensen, D. L. Collins, J. Gee, P. Hellier, et al. Evaluation of 14 nonlinear deformation algorithms applied to human brain mri registration. _Neuroimage_, 46(3):786-802, 2009.
* Krebs et al. [2018] J. Krebs, T. Mansi, B. Mailhe, N. Ayache, and H. Delingette. Unsupervised probabilistic deformation modeling for robust diffeomorphic registration. In _Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support_, pages 101-109. Springer, 2018.
* Krebs et al. [2019] J. Krebs, H. Delingette, B. Mailhe, N. Ayache, and T. Mansi. Learning a probabilistic model for diffeomorphic registration. _IEEE transactions on medical imaging_, 38(9):2165-2176, 2019.
* Mahapatra and Ge [2019] D. Mahapatra and Z. Ge. Training data independent image registration with GANs using transfer learning and segmentation information. In _2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)_, pages 709-713. IEEE, 2019.
* Marcus et al. [2007] D. S. Marcus, T. H. Wang, J. Parker, J. G. Csernansky, J. C. Morris, and R. L. Buckner. Open Access Series of Imaging Studies (OASIS): Cross-sectional MRI data in young, middle aged, nondemented, and demented older adults. _Journal of cognitive neuroscience_, 19(9):1498-1507, 2007.
* Mok and Chung [2020a] T. C. Mok and A. Chung. Fast symmetric diffeomorphic image registration with convolutional neural networks. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 4644-4653, 2020a.
* Mok and Chung [2020b] T. C. Mok and A. Chung. Large deformation diffeomorphic image registration with laplacian pyramid networks. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_, pages 211-221. Springer, 2020b.
* Mok and Chung [2021] T. C. Mok and A. Chung. Conditional deformable image registration with convolutional neural network. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_, pages 35-45. Springer, 2021.
* Niethammer et al. [2019] M. Niethammer, R. Kwitt, and F.-X. Vialard. Metric learning for image registration. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 8463-8472, 2019.
* Oliveira and Tavares [2014] F. P. Oliveira and J. M. R. Tavares. Medical image registration: a review. _Computer methods in biomechanics and biomedical engineering_, 17(2):73-93, 2014.
* Paszke et al. [2019] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style, high-performance deep learning library. In _Advances in Neural Information Processing Systems 32_, pages 8024-8035. Curran Associates, Inc., 2019. URL [http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf](http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf).
* Pluim et al. [2016] J. P. Pluim, S. E. Muenzing, K. A. Eppenhof, and K. Murphy. The truth is hard to make: Validation of medical image registration. In _2016 23rd International Conference on Pattern Recognition (ICPR)_, pages 2294-2300. IEEE, 2016.
* Rohlfing [2011] T. Rohlfing. Image similarity and tissue overlaps as surrogates for image registration accuracy: widely used but unreliable. _IEEE transactions on medical imaging_, 31(2):153-163, 2011.
* Rueckert et al. [1999] D. Rueckert, L. I. Sonoda, C. Hayes, D. L. Hill, M. O. Leach, and D. J. Hawkes. Nonrigid registration using free-form deformations: Application to breast MR images. _IEEE Transactions on Medical Imaging_, 18(8):712-721, 1999.
* Rueckert et al. [2006] D. Rueckert, P. Aljabar, R. A. Heckemann, J. V. Hajnal, and A. Hammers. Diffeomorphic registration using B-splines. In _International Conference on Medical Image Computing and Computer-Assisted Intervention_, pages 702-709. Springer, 2006.
* Shattuck et al. (2008)
D. W. Shattuck, M. Mirza, V. Adisetiyo, C. Hojatkashani, G. Salamon, K. L. Narr, R. A. Poldrack, R. M. Bilder, and A. W. Toga. Construction of a 3D probabilistic atlas of human cortical structures. _Neuroimage_, 39(3):1064-1080, 2008.
* Shen et al. (2019a) Z. Shen, X. Han, Z. Xu, and M. Niethammer. Networks for joint affine and non-parametric image registration. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 4224-4233, 2019a.
* Shen et al. (2019b) Z. Shen, F.-X. Vialard, and M. Niethammer. Region-specific diffeomorphic metric mapping. _Advances in Neural Information Processing Systems_, 32, 2019b.
* Sotiras et al. (2013) A. Sotiras, C. Davatzikos, and N. Paragios. Deformable medical image registration: A survey. _IEEE transactions on medical imaging_, 32(7):1153-1190, 2013.
* Walker and Ni (2011) H. F. Walker and P. Ni. Anderson acceleration for fixed-point iterations. _SIAM Journal on Numerical Analysis_, 49(4):1715-1735, 2011.
* Young et al. (2022) S. I. Young, Y. Balbastre, A. V. Dalca, W. M. Wells, J. E. Iglesias, and B. Fischl. SuperWarp: Supervised learning and warping on U-Net for invariant subvoxel-precise registration. _arXiv preprint arXiv:2205.07399_, 2022.
* Zhang (2018) J. Zhang. Inverse-consistent deep networks for unsupervised deformable image registration. _arXiv preprint arXiv:1809.03443_, 2018.
* Zheng et al. (2021) Y. Zheng, X. Sui, Y. Jiang, T. Che, S. Zhang, J. Yang, and H. Li. Symreg-gan: symmetric image registration with generative adversarial networks. _IEEE transactions on pattern analysis and machine intelligence_, 44(9):5631-5646, 2021.
**Appendices**
## Appendix I Architectural details
The neural networks \(u^{(k)}\) used for predicting deformations (in Equation 5) consist of the following components, in order (a schematic sketch in code is given after the list):
1. Concatenation of the two inputs along the channel dimension. Before concatenation we reparametrize the features as subtraction of the inputs and the sum of the inputs as suggested by Young et al. (2022).
2. Two convolutions with some non-linear activation after each of the convolutions.
3. Convolution with kernel of spatial size \(1\)
4. \(\gamma\times\mathrm{Tanh}\) function, where \(\gamma\in\mathbb{R}^{+}\) is a scaling factor defining the maximum displacement. We used the value \(\gamma=0.15\) (voxels) in the experiments. The displacement is expressed in voxels of each resolution level (lower resolution levels have larger voxels).
5. Upsampling with prefiltered cubic spline interpolation (Ruijters and Thevenaz, 2012) to the full resolution. Spline interpolation can be implemented efficiently using transposed convolutions (De Vos et al., 2019). The deformation is upsampled since it needs to be inverted, and otherwise the generated inverse would not be accurate at the full resolution.
We equate the resulting displacement field with the deformation. Limiting the range of values with the scaled \(\mathrm{Tanh}\) activation is important since by that we ensure that individual predicted deformation are invertible, which in turn ensures topology preservation and convergence of the fixed point iteration. The chosen constraint was concluded to provide enough freedom while resulting in almost everywhere invertible final deformations (the performance is similar to the SVF (Arsigny et al., 2006) based methods). We also experimented with value \(\gamma=0.3\) which also performed well but chose the \(\gamma=0.15\) for the final experiments. More details on this can be found in Appendix II. Note that since the displacements are in the voxel size of each resolution level, the displacements at lower resolutions correspond to larger displacements at the full resolution (scaling is performed during the upsampling).
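A schematic PyTorch module corresponding to these components might look as follows; channel counts are placeholders, and trilinear upsampling stands in for the prefiltered cubic spline upsampling of the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformationUpdate(nn.Module):
    """Predicts a small displacement field from two feature maps (cf. the list above)."""

    def __init__(self, in_channels, hidden=32, gamma=0.15, upsample_factor=1):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(2 * in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv3d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv3d(hidden, 3, 1),                 # convolution with spatial size 1
        )
        self.gamma = gamma                           # max displacement in this level's voxels
        self.upsample_factor = upsample_factor

    def forward(self, z1, z2):
        x = torch.cat([z1 - z2, z1 + z2], dim=1)     # reparametrized concatenation
        disp = self.gamma * torch.tanh(self.conv(x))
        if self.upsample_factor > 1:                 # to full resolution; the actual
            disp = self.upsample_factor * F.interpolate(   # implementation uses splines
                disp, scale_factor=self.upsample_factor,
                mode="trilinear", align_corners=True)
        return disp
```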
## Appendix II Hyperparameter selection details
We experimented on the validation set with different hyperparameters during development. While the final results on the test set are computed only for one chosen configuration, the results on the validation set might still be of interest to the reader. Results of these experiments for the OASIS dataset are shown in Table A1 and for the LPBA40 dataset in Table A2.
For the OASIS dataset we experimented with two configurations of number of resolution levels \(K\) and maximum displacements \(\gamma\). With both of these configurations we tested three different values for the regularization weight \(\lambda\).
For the LPBA40 dataset we experimented with 8 configurations of number of resolution levels \(K\), maximum displacements \(\gamma\), and whether to predict an affine transformation, but used the regularization weight value \(\lambda=1.0\) for all of them.
The smaller maximum displacement was chosen for both experiments since the resulting deformations had better properties with only a small decrease in accuracy.
With the raw OASIS dataset without pre-alignment we used \(6\) resolution levels, together with an affine transformation prediction stage before the other deformation updates. We omitted the predicted affine transformation from the deformation regularization.
## Appendix III Hyperparameter selection details for baselines
For cLapIRN baseline we used the regularization parameter value \(\overline{\lambda}=0.05\) for the OASIS dataset and value \(\overline{\lambda}=0.1\) for the LPBA40 dataset where \(\overline{\lambda}\) is used as in the paper presenting the method (Mok and Chung, 2021). The values were chosen based on the validation set results shown in Tables A3 and A4.
We trained VoxelMorph with losses and regularization weight identical to our method, and for SYMNet we used hyperparameters directly provided by Mok and Chung (2020). We used the default number of convolution features for the baselines, except for VoxelMorph, where we doubled the number of features, as suggested for subject-to-subject registration by Balakrishnan et al. (2019).
## Appendix IV Proof of Proposition 3.1
### Inverse consistent by construct
Proof.: Inverse consistency by construct follows directly from Equation 8:
\[f_{1\to 2}(x_{1},x_{2}) =f_{1\to 1.5}^{(0)}(x_{1},x_{2})\circ f_{2\to 1.5}^{(0)}(x_{1},x_{2})^{-1}\] \[=\left(f_{2\to 1.5}^{(0)}(x_{1},x_{2})\circ f_{1\to 1.5}^{(0)}(x_{1},x_{2})^{-1} \right)^{-1}\] \[=f_{2\to 1}(x_{1},x_{2})^{-1}\]
As discussed in Section 1, the inverse consistency error is not exactly zero even for earlier methods guaranteeing inverse consistency by construct, and the same is true here. In principle the remaining error can come from three sources: limited spatial sampling resolution, the deformation inversion layer not converging, or lack of invertibility of the deformations. However, as shown in Appendix V, the error caused by the fixed point iteration not converging is tiny. Lack of invertibility should also not be a large source of error since, as shown by the main experiments, the deformations are almost everywhere invertible. Hence the remaining inverse consistency error should be mostly caused by limited spatial sampling resolution, a conclusion supported by the main experiments, where an error of similar magnitude to that of the diffeomorphic SVF framework was obtained.
### Symmetric by construct
Proof.: We use induction. Assume that for any \(x_{1}\) and \(x_{2}\) at level \(k+1\) the following holds: \(f_{1\to 1.5}^{(k+1)}(x_{1},x_{2})=f_{2\to 1.5}^{(k+1)}(x_{2},x_{1})\). For level \(K\) it holds trivially since \(f_{1\to 1.5}^{(K)}(x_{1},x_{2})\) and \(f_{2\to 1.5}^{(K)}(x_{1},x_{2})\) are defined as identity deformations. For the proof we view \(z_{1}^{(k)}\), \(z_{2}^{(k)}\), and \(U^{(k)}\) as functions of the input images, although not explicitly marked in the main paper. Using the induction assumption we have at level \(k\):
\[z_{1}^{(k)}(x_{1},x_{2})=h^{(k)}(x_{1})\circ f_{1\to 1.5}^{(k+1)}(x_{1},x_{2})=h^{(k)}(x_{1})\circ f_{2\to 1.5}^{(k+1)}(x_{2},x_{1})=z_{2}^{(k)}(x_{2},x_{1})\]
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Hyperparameters & Accuracy & \multicolumn{2}{c}{Deformation regularity} & Consistency \\ \hline \(\overline{\lambda}\) & Dice \(\uparrow\) & HD95\(\downarrow\) & \(|J_{\phi}|_{\leq 0}\downarrow\) & std(\(|J_{\phi}|\)) \(\downarrow\) & Cycle \(\downarrow\) \\ \hline
0.01 & \(0.714(0.014)\) & \(5.86(0.35)\) & \(3.74(5.8e3)\) & \(0.43(0.017)\) & \(9.9e{-1}(2.2e{-1})\) \\
0.05 & \(0.715(0.014)\) & \(5.87(0.35)\) & \(1.2e4(2.6e3)\) & \(0.32(0.013)\) & \(8.0e{-1}(2.1e{-1})\) \\
0.1 & \(0.714(0.014)\) & \(5.88(0.36)\) & \(2.65(8.7e2)\) & \(0.25(0.010)\) & \(6.6e{-1}(2.1e{-1})\) \\
0.2 & \(0.709(0.015)\) & \(5.92(0.38)\) & \(1.3e{-2}(7.4e1)\) & \(0.18(0.0085)\) & \(4.9e{-1}(1.9e{-1})\) \\
0.4 & \(0.698(0.017)\) & \(6.03(0.42)\) & \(5.0e{-2}(2.2e{-1})\) & \(0.13(0.0070)\) & \(3.6e{-1}(1.9e{-1})\) \\
0.8 & \(0.678(0.019)\) & \(6.23(0.48)\) & \(0.0e{0.0}(0.0e0.0)\) & \(0.085(0.0062)\) & \(3.0e{-1}(1.9e{-1})\) \\
1.0 & \(0.671(0.021)\) & \(6.31(0.51)\) & \(0.0e{0.0}(0.0e0.0)\) & \(0.073(0.0061)\) & \(3.0e{-1}(1.9e{-1})\) \\ \hline \hline \end{tabular}
\end{table}
Table A3: Regularization parameter optimization results for cLapIRN calculated on the OASIS validation set. Here \(\overline{\lambda}\) refers to the normalized regularization weight of the gradient loss of cLapIRN and should be in range \([0,\ 1]\). Value \(\overline{\lambda}=0.05\) was chosen since it resulted in clearly the highest Dice score. HD95 metric is not included due to relatively high computational cost.
Then also:
\[U^{(k)}(x_{1},x_{2}) =u^{(k)}(z_{1}^{(k)}(x_{1},x_{2}),z_{2}^{(k)}(x_{1},x_{2}))\circ u^{(k )}(z_{2}^{(k)}(x_{1},x_{2}),z_{1}^{(k)}(x_{1},x_{2}))^{-1}\] \[=u^{(k)}(z_{2}^{(k)}(x_{2},x_{1}),z_{1}^{(k)}(x_{2},x_{1}))\circ u^ {(k)}(z_{1}^{(k)}(x_{2},x_{1}),z_{2}^{(k)}(x_{2},x_{1}))^{-1}\] \[=\Big{[}u^{(k)}(z_{1}^{(k)}(x_{2},x_{1}),z_{2}^{(k)}(x_{2},x_{1})) \circ u^{(k)}(z_{2}^{(k)}(x_{2},x_{1}),z_{1}^{(k)}(x_{2},x_{1}))^{-1}\Big{]}^{-1}\] \[=U^{(k)}(x_{2},x_{1})^{-1}\]
Then we can finalize the induction step:
\[f_{1\to 1.5}^{(k)}(x_{1},x_{2}) =f_{1\to 1.5}^{(k+1)}(x_{1},x_{2})\circ U^{(k)}(x_{1},x_{2})\] \[=f_{2\to 1.5}^{(k+1)}(x_{2},x_{1})\circ U^{(k)}(x_{2},x_{1})^{-1}=f _{2\to 1.5}^{(k)}(x_{2},x_{1})\]
It follows that the method is symmetric by construct:
\[f_{1\to 2}(x_{1},x_{2}) =f_{1\to 1.5}^{(0)}(x_{1},x_{2})\circ f_{2\to 1.5}^{(0)}(x_{1},x_{2})^{-1}\] \[=f_{2\to 1.5}^{(0)}(x_{2},x_{1})\circ f_{1\to 1.5}^{(0)}(x_{2},x_{1})^{-1}= f_{2\to 1}(x_{2},x_{1})\]
Unlike inverse consistency by construct, this relation holds exactly.
## Appendix V Fixed point iteration convergence in deformation inversion layers
We conducted an experiment on the fixed point iteration convergence in the deformation inversion layers with the model trained on OASIS dataset. The results can be seen in Figure A1. The main result was that in the whole OASIS test set of 9591 pairs not a single deformation required more than 7 iterations for convergence. Deformations requiring 7 iterations were only \(0.13\%\) of all the deformations and a significant majority of the deformations (\(96\%\)) required \(3\) to \(5\) iterations. In all the experiments, including this one, the stopping criterion for the iterations was maximum displacement error within the whole volume reaching below one hundredth of a voxel, which is a very small error. In conclusion, lack of convergence of the fixed point iteration in the deformation inversion layer does not introduce practically relevant error.
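As a rough illustration of the kind of fixed point iteration analyzed here, the following is a minimal sketch of inverting a dense displacement field. The function name, the linear interpolation, and the use of the change between iterates as a stopping proxy are our own assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def invert_displacement(u, max_iter=50, tol=1e-2):
    """Fixed point iteration for inverting a displacement field u of shape (ndim, *spatial).

    Seeks v with v(x) = -u(x + v(x)), so that (id + u) o (id + v) is approximately the identity.
    """
    ndim = u.shape[0]
    grid = np.indices(u.shape[1:]).astype(float)
    v = -u.copy()  # initial guess
    for _ in range(max_iter):
        coords = grid + v  # sample u at the displaced coordinates x + v(x)
        u_at_v = np.stack([map_coordinates(u[d], coords, order=1, mode="nearest")
                           for d in range(ndim)])
        v_new = -u_at_v
        # Stop once the update is below one hundredth of a voxel (a proxy for the paper's criterion).
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return v
```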
## Appendix VI Deformation inversion layer memory usage
We conducted an experiment on the memory usage of the deformation inversion layer compared to the stationary velocity field (SVF) framework (Arsigny et al., 2006) since SVF framework could also be used to implement the suggested architecture in practice.
With the SVF framework one could slightly simplify the deformation update Equation 5 to the form
\[U^{(k)}:=\exp(u^{(k)}(z_{1}^{(k)},z_{2}^{(k)})-u^{(k)}(z_{2}^{(k)},z_{1}^{(k)}))\] (A1)
where \(\exp\) is the SVF integration (corresponding to Lie algebra exponentiation), and \(u^{(k)}\) now predicts an auxiliary velocity field. We compared memory usage of this to our implementation, and used the implementation by Dalca et al. (2018) for SVF integration.
The results are shown in Table A5. Our version implemented using the deformation inversion layer requires \(5\) times less data to be stored in memory for the backward pass compared to the SVF integration. The peak memory usage during the inversion is also slightly lower. The memory saving is due to the memory efficient back-propagation through the fixed point iteration layers, which requires only the final inverted volume to be stored for backward pass. Since our architecture requires two such operations for each resolution level (\(U^{(k)}\) and its inverse), the memory saved during training is significant.
NOTE: In the main text we stated a \(29\) times smaller memory usage for our method. This result was based on an initial analysis that did not take into account the simplification in Equation A1. The results reported here were obtained after the main paper submission deadline, and they will be incorporated in the main text in the revised version.
## Appendix VII Additional results
Figures A2 and A3 visualize Dice scores for individual anatomical regions for both the OASIS and LPBA40 datasets. VoxelMorph and SYMNet perform systematically worse than our method, while cLapIRN and our method perform very similarly on most regions.
Figure A4 visualizes how the deformation is being gradually updated during the multi-resolution architecture.
Figure A2: **Individual brain structure dice scores for the OASIS experiment.** Boxplot shows performance of each of the compared methods on each of the brain structures in the OASIS dataset. Algorithms from left to right in each group: SITReg, cLapIRN, VoxelMorph, SYMNet (original)
Figure A4: **Visualization of deformation being gradually updated.** Each \(f_{1\to 2}^{(k)}(x_{1},x_{2})\) corresponds to the full deformation after resolution level \(k\). The example is from the OASIS experiment.
## Appendix VIII Dataset details
We split the OASIS dataset into \(255\), \(20\) and \(139\) images for training, validation, and testing. The split differs from the Learn2Reg challenge since its test set is not available, but the sizes correspond to the splits used by Mok and Chung (2020, 2021). We used all image pairs for testing and validation, yielding \(9591\) test and \(190\) validation pairs. For the affinely-aligned OASIS experiment we cropped the images to \(144\times 192\times 160\) resolution. Images in the raw OASIS dataset have resolution \(256\times 256\times 256\) and we did not crop them.
We split the LPBA40 into \(25\), \(5\) and \(10\) images for training, validation, and testing. This leaves us with \(10\) pairs for validation and \(45\) for testing. We cropped the LPBA40 images to \(160\times 192\times 160\) resolution.
## Appendix IX Details on statistical significance
We computed the statistical significance of the results by comparing the test set predictions of the trained models with each other. We measured statistical significance using a permutation test, in practice sampling \(10000\) permutations. In Figures 1 and 2 all the improvements denoted with an asterisk (*) obtained a very small p-value, with not a single permutation (out of the \(10000\)) giving a larger mean difference than the one observed.
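A minimal sketch of such a paired permutation test is given below; the function name and the two-sided formulation are our own assumptions (the procedure described above may have been one-sided).

```python
import numpy as np

def paired_permutation_test(scores_a, scores_b, n_perm=10000, seed=0):
    """Paired permutation test on per-pair metric differences.

    Randomly flips the sign of each pairwise difference and counts how often the
    permuted mean difference is at least as large as the observed one.
    """
    rng = np.random.default_rng(seed)
    diff = np.asarray(scores_a) - np.asarray(scores_b)
    observed = abs(diff.mean())
    count = 0
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=diff.shape)
        if abs((signs * diff).mean()) >= observed:
            count += 1
    return count / n_perm  # permutation p-value
```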
To establish for certain the relative performance of the methods with respect to the tight metrics, one should train multiple models per method with different random seeds. However, our claim is not that the developed method improves the results with respect to a single tight metric but rather that the overall performance is better by a clear margin (see Section 5).
## Appendix X Clarifications on symmetry, inverse consistency, and topology preservation
Here we provide examples of symmetry, inverse consistency and lack of topology preservation to further clarify how the terms are used in the paper.
Since symmetry and inverse consistency are quite similar properties, their exact difference might be unclear. Examples of registration methods that are _inverse consistent by construct but not symmetric_ are the many deep learning frameworks applying the stationary velocity field (Arsigny et al., 2006) approach, e.g., (Dalca et al., 2018; Krebs et al., 2018, 2019; Mok and Chung, 2020). All of them use a neural network to predict a velocity field for an ordered pair of input images. The final deformation is then produced via Lie algebra exponentiation of the velocity field, that is, by integrating the velocity field over itself over unit time. The details of the exponentiation are not important here, but the operation has an interesting property: by negating the velocity field to be exponentiated, the exponentiation results in the inverse deformation. Denoting the Lie algebra exponential by \(\exp\), and using notation from Section 1, we can define such methods as
\[\left\{\begin{array}{ll}f_{1\to 2}(x_{1},x_{2})&:=\exp(g(x_{1},x_{2}))\\ f_{2\to 1}(x_{2},x_{1})&:=\exp(-g(x_{1},x_{2}))\end{array}\right.\] (A2)
where \(g\) is the learned neural network predicting the velocity field. As a result, the methods are inverse consistent since \(\exp(g(x_{1},x_{2}))=\exp(-g(x_{1},x_{2}))^{-1}\) (accuracy is limited by spatial sampling resolution). However, by changing the order of inputs to \(g\), there is no guarantee that \(g(x_{1},x_{2})=-g(x_{2},x_{1})\) and hence such methods are not symmetric by construct.
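For readers unfamiliar with the exponentiation referred to above, the following is a minimal sketch of scaling-and-squaring integration of a stationary velocity field, represented here as a displacement array; the function names, the linear interpolation, and the fixed number of squaring steps are our own illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(u1, u2):
    """Displacement of the composition (id + u1) o (id + u2): warp u1 by (id + u2) and add u2."""
    grid = np.indices(u1.shape[1:]).astype(float)
    coords = grid + u2
    u1_warped = np.stack([map_coordinates(u1[d], coords, order=1, mode="nearest")
                          for d in range(u1.shape[0])])
    return u1_warped + u2

def svf_exp(v, n_steps=6):
    """Scaling and squaring: integrate a stationary velocity field v over unit time."""
    u = v / (2 ** n_steps)   # scale so the initial step is small
    for _ in range(n_steps):
        u = compose(u, u)    # repeated self-composition doubles the integration time
    return u
```

Exponentiating the negated field, \(\exp(-v)\), then yields (up to sampling error) the inverse of \(\exp(v)\), which is what makes these methods inverse consistent by construct.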
MICS (Estienne et al., 2021) is an example of a method which is _symmetric by construct but not inverse consistent_. MICS is composed of two components: encoder, say \(E\), and decoder, say \(D\), both of which are learned. The method can be defined as
\[\left\{\begin{array}{ll}f_{1\to 2}(x_{1},x_{2})&:=D(E(x_{1},x_{2})-E(x_{2},x_{1})) \\ f_{2\to 1}(x_{1},x_{2})&:=D(E(x_{2},x_{1})-E(x_{1},x_{2})).\end{array}\right.\] (A3)
As a result, the method is symmetric by construct since \(f_{1\to 2}(x_{1},x_{2})=f_{2\to 1}(x_{2},x_{1})\) holds exactly. However, there is no architectural guarantee that \(f_{1\to 2}(x_{1},x_{2})\) and \(f_{2\to 1}(x_{2},x_{1})\) are inverses of each other, and the paper proposes to encourage that using a loss function. In the paper such components are used in a multi-step manner, and as a result the overall architecture is no longer symmetric.
Lack of topology preservation means in practice that the predicted deformation folds on top of itself. An example of such deformation is shown in Figure A5.
Figure A5: **Visualization of a 2D deformation which is not topology preserving.** The deformation can be seen folding on top of itself. |
2302.07444 | A Case Study on Designing Evaluations of ML Explanations with Simulated
User Studies | When conducting user studies to ascertain the usefulness of model
explanations in aiding human decision-making, it is important to use real-world
use cases, data, and users. However, this process can be resource-intensive,
allowing only a limited number of explanation methods to be evaluated.
Simulated user evaluations (SimEvals), which use machine learning models as a
proxy for human users, have been proposed as an intermediate step to select
promising explanation methods. In this work, we conduct the first SimEvals on a
real-world use case to evaluate whether explanations can better support
ML-assisted decision-making in e-commerce fraud detection. We study whether
SimEvals can corroborate findings from a user study conducted in this fraud
detection context. In particular, we find that SimEvals suggest that all
considered explainers are equally performant, and none beat a baseline without
explanations -- this matches the conclusions of the original user study. Such
correspondences between our results and the original user study provide initial
evidence in favor of using SimEvals before running user studies. We also
explore the use of SimEvals as a cheap proxy to explore an alternative user
study set-up. We hope that this work motivates further study of when and how
SimEvals should be used to aid in the design of real-world evaluations. | Ada Martin, Valerie Chen, Sérgio Jesus, Pedro Saleiro | 2023-02-15T03:27:55Z | http://arxiv.org/abs/2302.07444v2 | # A Case Study on Designing Evaluations of ML Explanations with Simulated User Studies
###### Abstract
When conducting user studies to ascertain the usefulness of model explanations in aiding human decision-making, it is important to use real-world use cases, data, and users. However, this process can be resource-intensive, allowing only a limited number of explanation methods to be evaluated. Simulated user evaluations (SimEvals), which use machine learning models as a proxy for human users, have been proposed as an intermediate step to select promising explanation methods. In this work, we conduct the first SimEvals on a real-world use case to evaluate whether explanations can better support ML-assisted decision-making in e-commerce fraud detection. We study whether SimEvals can corroborate findings from a user study conducted in this fraud detection context. In particular, we find that SimEvals suggest that all considered explainers are equally performant, and none beat a baseline without explanations - this matches the conclusions of the user study. Such correspondences between our results and the original user study provide initial evidence in favor of using SimEvals before running user studies. We also explore the use of SimEvals as a cheap proxy to explore an alternative user study set-up. We hope that this work motivates further study of when and how SimEvals should be used to aid in the design of real-world evaluations.
## 1 Introduction
The field of interpretable machine learning has proposed a large and diverse number of techniques to explain model behavior. However, it is difficult to anticipate exactly which explanations may help humans with a particular use case (Chen et al., 2022b; Davis et al., 2020). There have been calls for more human-centered approaches Wortman Vaughan & Wallach (2021); Liao & Varshney (2021) to investigate how humans benefit from explanations in specific use cases, particularly through user studies (Doshi-Velez & Kim, 2017). Ideally, these user studies would utilize real users, tasks, and data to maximize the applicability of the study's findings (Amarasinghe et al., 2020). Since real-world user studies can be resource-intensive to conduct and thus typically only evaluate a limited number of explanation methods (or explainers), simulated user evaluations (SimEvals) have been proposed as a way to identify candidate explanation methods for user studies using machine learning models (Chen et al., 2022a). While the original work by Chen et al. (2022a) performed a cursory evaluation of SimEvals, it is unclear whether this approach would generalize to real-world use cases of explanations.
In this work, we focus on a real-world decision support use case where professional fraud analysts review e-commerce transactions to determine whether a transaction is fraudulent. We conduct the first SimEvals on a real-world task and data and compare the results to the findings from a user study with real-world users conducted by Amarasinghe et al. (2022) as shown in Figure 1. We instantiate SimEvals to study whether any of these explanations contained predictive information about whether a transaction was fraudulent and find no statistical difference in SimEval performance between the three explanation methods and a baseline SimEval without explanations. The results of this SimEval trial closely match the findings of Amarasinghe et al. (2022). Our results suggest that
SimEvals could have helped to select better candidate explainers in the original user study, reducing its cost and improving its chance of locating a successful explainer. They also provide evidence that SimEval performance is associated with human performance across different explainers.
We also explore the use of SimEvals to cheaply identify an alternative study design beyond the canonical set-up where analysts are provided both the transaction and the explanation as shown in Figure 3. Our preliminary findings suggest that a subset of the explainers considered in the original study can be used as a human-centric dimensionality reduction technique (i.e., there is not statistically less signal in only presenting the explanation on its own) to reduce the time cost of processing a full transaction. To get an initial signal on the validity of this proposed design, we conduct short interviews with multiple fraud analysts and evaluate whether the information the analysts typically look for in a full transaction is present in the explanations used in the alternative study design.
In summary, this work explores two ways to utilize SimEvals in a real-world context. We believe that our comparative investigation of real and simulated user studies will serve as an example of using SimEvals more effectively.
## 2 Can SimEvals corroborate findings from real-world user study?
While initial results from prior work suggest that SimEvals can be used to identify promising explanations for user studies, there has been limited evaluation of its utility in real-world contexts. We conduct such an evaluation to see whether SimEvals confirm user study findings in a fraud detection use case. We first summarize the findings from the original study by Amarasinghe et al. (2022) in this use case and then discuss how to instantiate and train a SimEval for each explanation method in the study. Figure 1 summarizes the findings of this section.
Figure 1: We compare real and simulated user studies of explanations (SimEvals) for the real-world use case of fraud detection decision support, shown in the top and bottom rows respectively. Top: a user study designed to compare three different study arms, each with a different explanation method, in order of performance. Bottom: a corresponding SimEval study that compares the same three explanation methods but in which fraud analysts are replaced with ML models. We compare the relative performance of SimEvals across the candidate explainers against the results across comparable arms of Amarasinghe et al. (2022) and we find that SimEvals support the negative result in the user study (i.e., none of the explainers outperform the no-explanation condition).
### Prior findings from real-world user study
The user study by Amarasinghe et al. (2022) investigated the real-world application of detecting financial fraud.1 In the fraud detection use case, a machine learning model \(f\) is trained to estimate \(\hat{y}\), the probability that a given transaction is actually fraudulent, from the transaction data \(x\). The user study introduced an explanation \(E(x,f)\) of the model \(f\) for a given transaction \(x\) to the fraud analysts, hypothesizing that this additional information could improve decision outcomes and speed. Specifically, each \(E(x,f)\) is a sparse vector which contains the feature importances of the top-6 highest-magnitude features for a given transaction \(x\) and model \(f\), and is 0 otherwise. To decide whether \(x\) was fraudulent, analysts were given the transaction \(x\) (which had 112 features), the explanation \(E(x,f)\), and the model score \(\hat{y}=f(x)\). In addition, there were two baseline arms in which the analysts were given only \(x\) or \((x,\hat{y})\) to predict fraud.
Footnote 1: Note that this use case was studied by both Amarasinghe et al. (2022) and an earlier study by Jesus et al. (2021). We chose to compare our SimEvals with findings from the more recent user study because it improved the experimental design in multiple ways to be more representative of the real-world use case.
Analysts in the study were shown 500 transactions for each of three different explainers (LIME (Ribeiro et al., 2016), TreeSHAP (Lundberg and Lee, 2017), TreeInterpreter (Saabas, 2015)) and for each baseline arm. Amarasinghe et al. (2022) proposed a metric called _Percent Dollar Regret_ (PDR) to better reflect operational goals. PDR measures the amount of revenue lost due to incorrect decisions relative to what would be realized if all the reviewed transactions were perfectly classified:
\[PDR=1-\frac{\text{Realized~{}Revenue}}{\text{Possible~{}Revenue}} \tag{1}\]
The more detailed equation is found in Amarasinghe et al. (2022). Given this set-up, the main findings of the experiment were: (1) No explanation improved analyst performance in terms of PDR over the baseline of showing analysts the model score only; (2) There was no statistical difference in analyst performance between the three explanation methods. In this work, we evaluate SimEvals for both claims to determine whether there was predictive information in any of the explanations that did not translate to improved analyst performance.
### Setting up SimEvals to reflect user study
SimEvals are ML models trained to predict the ground truth label (e.g., whether a transaction is fraud) given the same information that would be presented in a user study. Specifically, the information in the user study can be represented by the tuple \((x,\hat{y},E(x,f))\), where the explainer \(E(\cdot)\) was either TreeInterpreter, LIME, TreeSHAP, or in the baseline case, no explanation, and \(\hat{y}\) is the probability of fraud predicted by \(f\). Each SimEval model corresponds to one candidate explanation method. Validation set PDR is used to evaluate SimEvals. As noted in Chen et al. (2022a), SimEvals do not aim to replicate a user's decision-making process and their results should be interpreted as measures of the predictive power of their given explanations.
Each SimEval was trained and evaluated on \(n=1500\) total transactions, which were split into \(1000\) train and \(500\) validation transactions. The transactions in each split were chosen to match the ones shown to the analysts in the original user study to reduce the impact of the validation set choice on final conclusions. Note that this means the validation split is not the same across different explainers because different transactions were shown to the analysts across different arms of the experiment to avoid showing repeated transactions, as shown in Figure 2. However, we verified that the different training and validation splits followed roughly the same distribution. To select a family of SimEval models, we ran a hyperparameter grid search over the parameters in Table 5. We found that the best validation performance was achieved using a Random Forest model with a minimum of 5 samples per leaf node. We use this as the base model family for SimEvals in the remaining experiments.
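A minimal sketch of training one such SimEval model is shown below. The function name, feature layout, and number of trees are our own assumptions; only the Random Forest model family and the minimum of 5 samples per leaf node are taken from the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_simeval(x_train, yhat_train, e_train, y_train):
    """Train one SimEval: a proxy 'user' predicting the ground-truth fraud label
    from the same information a study participant would see (x, model score, explanation)."""
    features = np.hstack([x_train, np.asarray(yhat_train).reshape(-1, 1), e_train])
    model = RandomForestClassifier(n_estimators=100, min_samples_leaf=5, random_state=0)
    model.fit(features, y_train)
    return model
```

For the baseline arm without explanations, `e_train` would simply be dropped from the feature matrix; a variant excluding `x_train` corresponds to the alternative set-up explored in Section 3.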
### Checking for parroting
One unique challenge of decision support use cases like the fraud detection one we consider is the similarity of information provided to users (e.g., the \(\hat{y}\), the model prediction for \(x\)) and the output that the analysts aimed to predict. An effective degenerate strategy emerges for the corresponding SimEval model to simply apply a threshold to \(\hat{y}\). A SimEval model which parrots \(\hat{y}\) in this manner
will disregard the explainers entirely. When we compare the Original Model results with the SimEvals in Table 1, we find that for every explainer the SimEval achieves lower PDR than the Original Model, suggesting that the SimEvals could not have relied on \(\hat{y}\) alone when making predictions.
### Comparing SimEvals to user study findings
To evaluate whether SimEvals can corroborate user study findings, we test whether or not the inclusion of explanations generated by any explainer yields higher SimEval PDR as compared to a baseline SimEval trained without explanations. We present both the aggregate SimEval PDR scores across the validation set for each explainer, which is equivalent to the metric from the user study, as well as a transaction-based analysis where we compare SimEval predictions on individual instances with analyst predictions. A high correspondence on individual transactions would suggest that SimEvals make decisions in a way that is similar to the analysts.
**Comparison of aggregate performance.** In Table 1, we compare SimEvals to the aggregate analyst performance reported in Amarasinghe et al. (2022). The SimEval results show that the predictive information actually provided to the analysts is roughly the same across explanations (noting the error bars in the SimEvals column), which supports the general finding from Amarasinghe et al. (2022) that all explainers lead to performance comparable to the baseline (noting the error bars in the Analyst column). Additionally, while not statistically significant, we do observe that both TreeInterpreter and TreeSHAP allowed _both_ SimEvals and analysts to achieve lower PDR compared to LIME.2
\begin{table}
\begin{tabular}{l|l|l|l} & Original Model & SimEvals & Analyst \\ \hline TreeInterpreter & 0.102 (0.075, 0.121) & 0.071 (0.054, 0.087) & 0.100 (0.081, 0.119) \\ LIME & 0.164 (0.099, 0.226) & 0.103 (0.061, 0.131) & 0.116 (0.087, 0.145) \\ TreeSHAP & 0.109 (0.078, 0.142) & 0.081 (0.052, 0.105) & 0.097 (0.078, 0.116) \\ Model Score & 0.133 (0.082, 0.172) & 0.097 (0.047, 0.127) & 0.092 (0.068, 0.112) \\ \end{tabular}
\end{table}
Table 1: Performance measured using PDR (lower is better) of the original fraud detection model, SimEvals and actual analysts across explainers. Parentheses contain 90 percent CIs. CIs were obtained by bootstrapping test samples to generate a pivotal confidence interval – we share CIs rather than standard errors as the CIs are not symmetric. Note that the Original Model column does not depend on the explanations, but the PDR differs due to the different validation split associated with each explainer.
Figure 2: A diagram illustrating the train/test split used for each SimEval experiment. As we did not have explanations available for the 500 transactions used in the ‘no explanation’ arm of the original user study, we perform the above data split. The above data split ensures that the validation dataset associated with each explanation matches the transactions used in the original user study. It also ensures that each SimEval receives the same dataset size (1000 train, 500 validation).
**Comparison on individual transactions.** We performed an analysis of the association between analyst and SimEval predictions on individual transactions. Table 2 shows the ROC AUC when using SimEval output as an estimate of the probability that an analyst predicted a given transaction to be fraud. This analysis yielded results which were significantly above 0.5, indicating some association. However, this association was not significantly stronger than when using predictions from the fraud model (\(\hat{y}\)) as an estimate for analyst predictions directly, implying SimEvals may not be as informative at the individual transaction level.
### Discussion & Limitations
Since SimEvals corroborate findings from the user study, we believe the original study could have benefited from running SimEvals to potentially select better explanation methods before conducting a full user study. However, we emphasize that it is not a replacement for running actual user studies. The aggregate analysis only provides an estimate of which explainers have the highest performance with no guarantee of how large the difference will actually be in a user study. In particular, we might expect more divergence between human and SimEval behavior for a few potential reasons: outside domain knowledge is especially important and the analysts lack the time to carefully examine each piece of information as a model would. This divergence is reflected in the modest association between human and SimEval predictions shown in Table 2. Although SimEvals are intended to find predictive information in explanations, it is possible that our choice of base model family or learning procedure may fail to extract this information. We also note that only the aggregate analysis is possible to conduct before running a user study, whereas the transaction-level comparison is only possible after a user study has already been run.
## 3 Using SimEvals to guide new hypotheses
Once SimEvals are set up for a use case, it is easy to vary the parameters of the set-up, which include the choice of inputs. In particular, we explore whether SimEvals would perform as well when \(x\) was excluded from the input (i.e., we train SimEval models in the same way as described
\begin{table}
\begin{tabular}{l c c} \hline \hline & Original Model & SimEvals \\ \hline TreeInterpreter & 0.734 & 0.727 \\ LIME & 0.695 & 0.675 \\ TreeSHAP & 0.703 & 0.731 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The ROC AUC (higher is better) achieved when using either \(\hat{y}\) or SimEvals output to predict the analyst predictions, separated by the type of explainer. We see that SimEvals and the Original Model are about equally predictive of analyst predictions.
Figure 3: We use SimEvals to explore a new study design which would only provide the explanation without the input transaction and find that for two explanations there is minimal difference in performance between the two SimEval variants. We also conduct short interviews with fraud analysts to get preliminary signal on this study design.
in Section 2.2 using \((\hat{y},E(x,f))\) as the inputs). Since \(\hat{y}\) is computed from \(x\), we hypothesized that it may be redundant to include \(x\). When comparing the canonical SimEval models with ones that _exclude_ \(x\), we find that for two of the three explainers (TreeInterpreter and TreeSHAP) including \(x\) gave only a small performance boost, while for LIME the performance gap was statistically significant, as shown in Table 3. This finding suggests providing analysts only \((\hat{y},E(x,f))\) for the TreeInterpreter and TreeSHAP explainers could reduce the time an analyst spends on each transaction with minimal loss in information content. We summarize this line of analysis in Figure 3. Recall that the visualization of each explanation only shows the top 6 features out of 112.
To evaluate whether a set-up in which analysts are only shown \((\hat{y},E(x,f))\) may be justified, we perform some initial verification with the analysts. In particular, we investigate whether the features used in \(E(x,f)\) across different transactions \(x\) have reasonable alignment with features that analysts think are important because it may be unnatural for the analysts to see only explanations consisting of features which they would not typically use.
### Identifying which features are important to the analysts
To obtain analysts' perceived feature importances, we conducted brief interviews with analysts from the original user study by Amarasinghe et al. (2022). In the interview, we asked analysts to fill out a spreadsheet in which they ranked the importance of each feature in a transaction; the spreadsheet contained a row for each feature. There was also a column for each potential "transaction reason", where each reason could be considered as a fraudulent or a legitimate concept (e.g., a suspicious address is a justification for fraud). For each "transaction reason", analysts were asked to rank the importance of each feature on a scale of 0-4, where 0 corresponded to the feature being unimportant and 4 corresponded to the feature being most important. To compute feature alignment, we average over all of the analyst scores. For each analyst indexed by \(i\), we refer to their provided importances as \(\text{score}_{i}\), which maps a transaction \(x\) and the \(j\)th feature of an explanation \(E(x,f)_{j}\) to a value ranging from 0 to 4. If a transaction \(x\) had multiple reasons labeled to it, we would select the reason that gave it the maximum score. We use the following formula to compute the average feature alignment (AVG FA) for a given explainer \(E\):
\[\text{AVG FA}(E)=\mathbb{E}_{(x,y)\sim\mathcal{D}}[\frac{1}{n|A|}\sum_{i=1}^ {|A|}\sum_{j=1}^{n}\text{score}_{i}(x,E(x,f)_{j})] \tag{2}\]
where in our setting, the number of features in an explanation \(n=6\), which is the number of non-zero features in the sparse explanation, and the number of analysts \(|A|=3\). An explainer with a higher AVG FA value means that it uses features that align more with analyst priors.
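A minimal sketch of this computation is given below; the function name and data layout are our own assumptions, and the selection of the maximum-scoring reason per transaction is assumed to be folded into the precomputed analyst scores.

```python
import numpy as np

def average_feature_alignment(transaction_ids, top_features, analyst_scores):
    """Average feature alignment (sketch of Eq. 2).

    top_features: dict mapping transaction id -> the n=6 non-zero explanation features.
    analyst_scores: list with one dict per analyst mapping (transaction id, feature) -> score in 0..4.
    """
    per_transaction = []
    for t in transaction_ids:
        feats = top_features[t]
        # Mean analyst-importance score of the explainer's selected features, averaged over analysts.
        per_analyst = [np.mean([scores[(t, f)] for f in feats]) for scores in analyst_scores]
        per_transaction.append(np.mean(per_analyst))
    return float(np.mean(per_transaction))
```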
### Feature Alignment Results
As shown in Table 4, we find that particularly for _Fraudulent Concepts_, there is higher feature alignment, AVG FA, for both TreeInterpreter and TreeSHAP compared to LIME. This aligns with
\begin{table}
\begin{tabular}{l|l|l} & Fraudulent Concepts & Legitimate Concepts \\ \hline TreeInterpreter & 2.21 (2.09, 2.37) & 1.28 (1.26, 1.29) \\ LIME & 2.05 (1.95, 2.15) & 1.22 (1.21, 1.23) \\ TreeSHAP & 2.15 (2.02, 2.32) & 1.22 (1.20, 1.23) \\ \end{tabular}
\end{table}
Table 4: Average feature alignment (higher is better) for each explainer across fraudulent and legitimate transactions. Parentheses contain 90 percent CIs. Legitimate transactions score lower due to analysts considering all features similarly important when fraud is not detected.
\begin{table}
\begin{tabular}{l|l|l} & SimEval excluding \(x\) & SimEvals \\ \hline TreeInterpreter & 0.072 (0.049, 0.089) & 0.071 (0.054, 0.087) \\ LIME & 0.152 (0.124, 0.197) & 0.103 (0.061, 0.131) \\ TreeSHAP & 0.097 (0.076, 0.120) & 0.081 (0.052, 0.105) \\ \end{tabular}
\end{table}
Table 3: Performance of canonical SimEvals compared with a variant which excludes \(x\) as part of the input, measured in PDR (lower is better). Parentheses contain 90 percent CIs. CIs for simulated users were obtained by bootstrapping test samples to generate a pivotal confidence interval.
Table 3, where we found that TreeInterpreter and TreeSHAP outperformed LIME for SimEvals excluding \(x\). One potential reason why LIME has higher PDR and lower AVG FA is that the same feature appears more often in the LIME explanations compared to the two other explainers (as shown in Table 7). Given the explanations are sparsely populated, an explanation which consistently ranks one feature as being important would likely be less useful in distinguishing fraudulent from legitimate transactions. These trends do not hold for _Legitimate Concepts_. Interviews with analysts revealed that explainers' AVG FA scores for legitimate transactions were significantly lower across the board because all features could be considered to be similarly important when the transaction is legitimate. The fact that analysts consider all features somewhat important for legitimate transactions might mean that any drastic dimensionality reduction may be unnatural. However, it is possible that analysts can adapt to this set-up over time. These results provide mixed evidence for the use of explainers as a dimensionality reduction technique, though a user study would be necessary to evaluate its benefits and drawbacks.
## 4 Conclusion
We conduct the first comparison of SimEvals against existing user study findings for the real-world use case of decision support for fraud detection. We find that the SimEval results generally agreed with the findings from the user study by Amarasinghe et al. (2022), namely that there was no statistical difference in predictiveness of fraud between the three explanation methods considered, despite limited statistical power due to sample size. This finding suggests that SimEvals could have been used to identify better choices of explainers for the use case and provides additional evidence in favor of using SimEvals before running expensive user studies. Furthermore, we use SimEvals to evaluate new hypotheses and find promising evidence in favor of using explanations as a dimensionality reduction technique. We hope this work serves as a guideline to illustrate the potential uses of SimEvals in real-world contexts, both as a way to verify whether candidate explanation methods are predictive of a use case and to explore experimental design set-ups.
2310.17446 | Bockstein cohomology of Maximal Cohen-Macaulay modules over Gorenstein
isolated singularities | Let $(A,\mathfrak{m})$ be an excellent equi-charateristic Gorenstein isolated
singularity of dimension $d \geq 2$. Assume the residue field of $A$ is
perfect. Let $I$ be any $\mathfrak{m}$-primary ideal. Let $G_I(A) =
\bigoplus_{n \geq 0}I^n/I^{n+1}$ be the associated graded ring of $A$ with
respect to $I$ and let $\mathcal{R}_I(A) = \bigoplus_{n \in \mathbb{Z}}I^n$ be
the extended Rees algebra of $A$ with respect to $I$. Let $M$ be a finitely
generated $A$-module.
Let $G_I(M) = \bigoplus_{n \geq 0}I^nM/I^{n+1}M$ be the associated graded
ring of $M$ with respect to $I$ (considered as a $G_I(A)$-module). Let
$BH^i(G_I(M))$ be the $i^{th}$-Bockstein cohomology of $G_I(M)$ with respect to
$\mathcal{R}_I(A)_+$-torsion functor.
We show there exists $a \geq 1$ depending only on $A$ such that if $I$ is any
$\mathfrak{m}$-primary ideal with $I \subseteq \mathfrak{m}^a$ and $G_I(A) $
generalized Cohen-Macaulay then the Bockstein cohomology $BH^i(G_I(M))$ has
finite length for $i = 0, \ldots, d-1$ for any maximal Cohen-Macaulay
$A$-module $M$. | Tony J. Puthenpurakal | 2023-10-26T14:56:38Z | http://arxiv.org/abs/2310.17446v1 | # Bockstein cohomology of maximal Cohen-Macaulay modules over Gorenstein isolated singularities
###### Abstract.
Let \((A,\mathfrak{m})\) be an excellent equi-characteristic Gorenstein isolated singularity of dimension \(d\geq 2\). Assume the residue field of \(A\) is perfect. Let \(I\) be any \(\mathfrak{m}\)-primary ideal. Let \(G_{I}(A)=\bigoplus_{n\geq 0}I^{n}/I^{n+1}\) be the associated graded ring of \(A\) with respect to \(I\) and let \(\mathcal{R}_{I}(A)=\bigoplus_{n\in\mathbb{Z}}I^{n}\) be the extended Rees algebra of \(A\) with respect to \(I\). Let \(M\) be a finitely generated \(A\)-module. Let \(G_{I}(M)=\bigoplus_{n\geq 0}I^{n}M/I^{n+1}M\) be the associated graded ring of \(M\) with respect to \(I\) (considered as a \(G_{I}(A)\)-module). Let \(BH^{i}(G_{I}(M))\) be the \(i^{th}\)-Bockstein cohomology of \(G_{I}(M)\) with respect to \(\mathcal{R}_{I}(A)_{+}\)-torsion functor. We show there exists \(a\geq 1\) depending only on \(A\) such that if \(I\) is any \(\mathfrak{m}\)-primary ideal with \(I\subseteq\mathfrak{m}^{a}\) and \(G_{I}(A)\) generalized Cohen-Macaulay then the Bockstein cohomology \(BH^{i}(G_{I}(M))\) has finite length for \(i=0,\ldots,d-1\) for any maximal Cohen-Macaulay \(A\)-module \(M\).
Key words and phrases:Associated graded rings, Rees Algebras, Local cohomology, isolated singularities 2020 Mathematics Subject Classification: Primary 13A30; Secondary 13D40, 13D07
## 1. introduction
Let \((A,\mathfrak{m})\) be a Cohen-Macaulay local ring of dimension \(d\) and let \(I\) be an \(\mathfrak{m}\)-primary ideal. Let \(G_{I}(A)=\bigoplus_{n\geq 0}I^{n}/I^{n+1}\) be the associated graded ring of \(A\) with respect to \(I\) and let \(\mathcal{R}_{I}(A)=\bigoplus_{n\in\mathbb{Z}}I^{n}\) be the extended Rees algebra of \(A\) with respect to \(I\). Let \(M\) be a Cohen-Macaulay \(A\)-module. Let \(G_{I}(M)=\bigoplus_{n\geq 0}I^{n}M/I^{n+1}M\) be the associated graded ring of \(M\) with respect to \(I\) (considered as a \(G_{I}(A)\)-module) and let \(\mathcal{R}_{I}(M)=\bigoplus_{n\in\mathbb{Z}}I^{n}M\) be the extended Rees module of \(M\) with respect to \(I\).
The _Hilbert function_ of \(M\) with respect to \(I\) is \(H^{I}(M,n)=\ell(I^{n}M/I^{n+1}M)\). Here \(\ell(-)\) denotes length as an \(A\)-module. A fruitful area of research has been to study the interplay between Hilbert functions and properties of \(G_{I}(M)\) and \(\mathcal{R}_{I}(M)\). See the texts [16, Section 6] and [17, Chapter 5] for nice surveys on this subject (when \(M=A\)). Traditionally only the case \(M=A\) was considered. However recently associated graded modules have been studied, see [13].
Graded local cohomology has played an important role in this subject. For various applications see [2, 4.4.3], [14], [7], [1], [6], [15] and [4]. Let \(H^{i}(G_{I}(M))\) denote the \(i^{th}\) local cohomology module of \(G_{I}(M)\) with respect to \(G_{I}(A)_{+}=\bigoplus_{n>0}I^{n}/I^{n+1}\). A line of inquiry in this subject is to find conditions on \(I\) such that \(G_{I}(A)\) (or \(G_{I^{n}}(A)\) for all \(n\gg 0\)) has high depth. This is equivalent to showing that \(H^{i}(G_{I}(A))\) (or \(H^{i}(G_{I^{n}}(A))\) for all \(n\gg 0\)) vanishes for some \(i<d\).
We note that \(t^{-1}\) is \(\mathcal{R}_{I}(M)\)-regular and \(\mathcal{R}_{I}(M)/t^{-1}\mathcal{R}_{I}(M)=G_{I}(M)\). So we have naturally defined Bockstein operators \(\beta^{i}\colon H^{i}(G_{I}(M))(-1)\to H^{i+1}(G_{I}(M))\) for \(i\geq 0\) (with respect to \(\mathcal{R}_{I}(A)_{+}\)-torsion functor). Since \(\beta^{i+1}(+1)\circ\beta^{i}=0\) we have _Bockstein cohomology_ modules \(BH^{i}(G_{I}(M))\) for \(i=0,\ldots,d\). Despite being natural, Bockstein cohomology groups of associated graded rings have not been investigated much. In [11] we studied some basic properties of Bockstein cohomology. We also showed that in some respects Bockstein cohomology behaves better than the usual local cohomology.
Maximal Cohen-Macaulay (MCM) modules encode a lot of information of the ring. In fact if \(A\) is Gorenstein then the stable category of MCM \(A\)-modules is isomorphic to the singularity category of \(A\), see [3, 4.4.1]. A particularly important case is when \(A\) is an isolated singularity (i.e., \(A_{P}\) is regular for all primes \(P\neq\mathfrak{m}\)). In this paper we take the view that information on \(G_{I}(A)\) yields structural information on \(G_{I}(M)\) when \(M\) is MCM.
The main result of this paper is:
**Theorem 1.1**.: _Let \((A,\mathfrak{m})\) be an excellent equi-characteristic Cohen-Macaulay isolated singularity of dimension \(d\geq 2\). Assume the residue field of \(A\) is perfect. There exists \(a\geq 1\) depending only on \(A\) such that if \(I\) is any \(\mathfrak{m}\)-primary ideal with \(I\subseteq\mathfrak{m}^{a}\) and \(H^{i}(G_{I}(A))\) has finite length for \(i<r\) then the Bockstein cohomology \(BH^{i}(G_{I}(M))\) has finite length for \(i<r\) for any MCM \(A\)-module \(M\) such that \(M=\operatorname{Syz}_{1}^{A}(L)\) where \(L\) is a MCM \(A\)-module._
We note that if \(A\) is Gorenstein then any MCM \(A\)-module is the syzygy of a MCM \(A\)-module. So we obtain as an easy corollary the result stated in the abstract.
**Corollary 1.2**.: _Let \((A,\mathfrak{m})\) be an excellent equi-characteristic Gorenstein isolated singularity of dimension \(d\geq 2\). Assume the residue field of \(A\) is perfect. There exists \(a\geq 1\) depending only on \(A\) such that if \(I\) is any \(\mathfrak{m}\)-primary ideal with \(I\subseteq\mathfrak{m}^{a}\) and \(G_{I}(A)\) generalized Cohen-Macaulay then the Bockstein cohomology \(BH^{i}(G_{I}(M))\) has finite length for \(i<d\) for any MCM \(A\)-module \(M\)._
_Techniques used to prove our result:_ It is elementary that we can reduce the proof of Theorem 1.1 to the case when \(A\) is a complete isolated singularity with infinite perfect residue field, see 3.1. The main technique used is the notion of cohomological annihilators, i.e., there exists \(a\geq 1\) such that \(\mathfrak{m}^{a}\operatorname{Ext}_{A}^{1}(X,Y)=0\) for any MCM \(A\)-modules \(X,Y\), see [18, 6.10] (also see [8, 15.14]). Let \(\underline{\operatorname{Hom}}_{A}(M,M)\) denote the stable Hom. If \(M=\operatorname{Syz}_{1}^{A}(L)\) where \(L\) is a MCM \(A\)-module then \(\mathfrak{m}^{a}\underline{\operatorname{Hom}}_{A}(M,M)=0\). In particular if \(I\subseteq\mathfrak{m}^{a}\) then for any \(x\in I\), the multiplication map \(\mu_{x}\colon M\to M\) factors through a free \(A\)-module. The assumptions on \(G_{I}(A)\) also yield conditions on the local cohomology of \(\mathcal{R}_{I}(A)\) with respect to its maximal homogeneous ideal. We then choose \(x\in I\) sufficiently general to conclude.
We now describe in brief the contents of this paper. In section two we discuss some preliminaries on Bockstein cohomology that we need. In section three we discuss some preliminaries on excellent isolated Cohen-Macaulay local rings. We also show that we may reduce to the case when \(A\) is complete with infinite perfect residue field. In section four we prove some results on the local cohomology of the extended Rees module with respect to the maximal homogeneous ideal of \(\mathcal{R}_{I}(A)\). Finally in section five we prove Theorem 1.1.
## 2. Bockstein Cohomology
In this paper all rings are commutative Noetherian and all modules are assumed to be finitely generated unless specified otherwise. In this section we first recall a very general construction of Bockstein cohomology. We then specialize to the case of associated graded modules.
**2.1**.: _General construction of Bockstein Cohomology_.
Let \(R\) be a ring, \(M\) an \(R\)-module and \(x\) a regular element on \(M\). We have a natural exact sequence
\[0\to\frac{M}{xM}\xrightarrow{\alpha}\frac{M}{x^{2}M}\xrightarrow{\pi}\frac{M }{xM}\to 0.\]
Here \(\pi\) is the natural projection map and \(\alpha(m+xM)=xm+x^{2}M\).
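For completeness (this verification is ours, using only that \(x\) is a non-zerodivisor on \(M\)), exactness of the above sequence can be checked directly:

\[\ker(\alpha)=\{m+xM\mid xm\in x^{2}M\}=\{m+xM\mid m\in xM\}=0,\qquad\operatorname{image}(\alpha)=xM/x^{2}M=\ker(\pi),\]

and \(\pi\) is clearly surjective. (Indeed, if \(xm=x^{2}m^{\prime}\) then \(x(m-xm^{\prime})=0\), so \(m=xm^{\prime}\in xM\).)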
Let \(F\colon Mod(R)\to Mod(R)\) be any left exact functor. Then note that we have connecting homomorphisms
\[\beta^{i}\colon RF^{i}(M/xM)\to RF^{i+1}(M/xM).\]
We call \(\beta^{i}\) the \(i^{th}\)_Bockstein operator_ on \(M/xM\) with respect to \(F\).
**2.2**.: Consider the natural exact sequence
\[0\to M\xrightarrow{x}M\xrightarrow{\rho}M/xM\to 0.\]
So we have an exact sequence
\[(\dagger)\ \ \to RF^{i}(M/xM)\xrightarrow{\delta^{i}}RF^{i+1}(M)\to RF^{i+1}(M) \xrightarrow{RF^{i+1}(\rho)}RF^{i+1}(M/xM)\to\]
It can be easily shown that \(\beta^{i}=RF^{i+1}(\rho)\circ\delta^{i}\). Since \(\delta^{i+1}\circ RF^{i+1}(\rho)=0\) we get that \(\beta^{i+1}\circ\beta^{i}=0\) for all \(i\geq 0\). Thus we have a complex
\[\cdots\xrightarrow{\beta^{i-1}}RF^{i}(M/xM)\xrightarrow{\beta^{i}}RF^{i+1}( M/xM)\xrightarrow{\beta^{i+1}}RF^{i+2}(M/xM)\cdots\]
The cohomology of this complex is denoted by \(BF^{*}(M/xM)\) and is called the _Bockstein cohomology_ of \(M/xM\) with respect to \(F\).
**2.3**.: _Bockstein Cohomology of Associated graded modules_
Let \(\mathcal{R}_{I}(A)=\bigoplus_{n\in\mathbb{Z}}I^{n}\) be the extended Rees-ring of \(A\) with respect to \(I\). Here \(I^{n}=A\) for all \(n\leq 0\) and \(\mathcal{R}_{I}(A)\) is considered as a subring of \(A[t,t^{-1}]\). Let \(\mathcal{R}(I)_{+}\) be the ideal in \(\mathcal{R}(I)\) generated by \(\bigoplus_{n>0}I^{n}\). Let \(M\) be an \(A\)-module. Let \(\mathcal{R}_{I}(M)=\bigoplus_{n\in\mathbb{Z}}I^{n}M\) be the extended Rees-module of \(M\) with respect to \(I\).
Clearly \(t^{-1}\) is a non-zero divisor on \(\mathcal{R}_{I}(M)\). Note \(\mathcal{R}_{I}(M)/t^{-1}\mathcal{R}_{I}(M)=G_{I}(M)\). We have an exact sequence (after a shift)
\[0\to G_{I}(M)\to\mathcal{R}_{I}(M)/t^{-2}\mathcal{R}_{I}(M)(-1)\to G_{I}(M)(- 1)\to 0.\]
Here
\[\frac{\mathcal{R}_{I}(M)}{t^{-2}\mathcal{R}_{I}(M)}=M/IM\oplus M/I^{2}M\oplus IM /I^{3}M\oplus I^{2}M/I^{4}M\oplus\cdots\oplus I^{n-1}M/I^{n+1}M\cdots,\]
with \(M/IM\) sitting in degree \(-1\).
**2.4**.: Let \(\Gamma_{\mathcal{R}_{I}(A)_{+}}\colon Mod(\mathcal{R}_{I}(A))\to Mod( \mathcal{R}_{I}(A))\) be the \(\mathcal{R}_{I}(A)_{+}\)-torsion functor. Set \(G=G_{I}(A)\). So by the general theory we have Bockstein homomorphisms
\[\beta^{i}\colon H^{i}_{G_{+}}(G_{I}(M))(-1)\to H^{i+1}_{G_{+}}(G_{I}(M)),\]
and we have Bockstein cohomology modules
\[BH^{i}_{G_{+}}(G_{I}(M))=\ker(\beta^{i}(+1))/\operatorname{image}(\beta^{i-1}) \quad\text{ for all }i\geq 0.\]
Set \(\beta^{i}_{I}(M)=\beta^{i}(G_{I}(M))\).
**2.5**.: Assume \(I\) is \(\mathfrak{m}\)-primary. Let \(\mathfrak{M}\) be the maximal homogeneous ideal of \(\mathcal{R}_{I}(A)\). We may consider the Bockstein cohomology modules with respect to \(\mathfrak{M}\)-torsion functor. However we get the same maps since the natural map
\(H^{i}_{\mathfrak{M}}(E)\to H^{i}_{\mathcal{R}_{I}(A)_{+}}(E)\) is an isomorphisms when \(E=G_{I}(A)\) or
\(E=\mathcal{R}_{I}(M)/t^{-2}\mathcal{R}_{I}(M)(-1)\). However this is convenient as in 2.2 (equation (\(\dagger\))) we may work with \(H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(M))\) which is a \(*\)-Artinian \(\mathcal{R}_{I}(A)\)-module.
**Remark 2.6**.: Let \((A,\mathfrak{m})\to(A^{\prime},\mathfrak{m}^{\prime})\) be a flat extension with \(\mathfrak{m}A^{\prime}=\mathfrak{m}^{\prime}\). Set \(I^{\prime}=IA^{\prime}\) and \(M^{\prime}=M\otimes_{A}A^{\prime}\). Then it is clear that
\[\beta^{i}_{I^{\prime}}(M^{\prime})=\beta^{i}_{I}(M)\otimes_{A}A^{\prime}.\]
It follows that for all \(i\geq 0\) we have
\[BH^{i}_{G^{\prime}_{+}}(G_{I^{\prime}}(A^{\prime}))\cong BH^{i}_{G_{+}}(G_{I}( A))\otimes_{A}A^{\prime}.\]
## 3. Some preliminaries on excellent isolated Cohen-Macaulay local rings
In our arguments we need to assume that \(A\) is complete with infinite residue field (which is perfect). We show how to reduce our Theorem to this case. We also give consequences of assuming that \(A\) is a complete equi-characteristic Cohen-Macaulay isolated singularity in terms of annihilators of \(\operatorname{Ext}^{1}_{A}(X,Y)\) with \(X,Y\) MCM \(A\)-modules.
**3.1**.: Let \(A\) be an excellent equi-characteristic Cohen-Macaulay isolated singularity. Assume the residue field of \(A\) is perfect. We wish to consider a flat local extension \((A,\mathfrak{m})\to(B,\mathfrak{n})\) with \(\mathfrak{m}B=\mathfrak{n}\) such that \(B\) is a complete Cohen-Macaulay isolated singularity with infinite perfect residue field. Notice that Bockstein cohomology behaves well under such extensions; see 2.6.
Note \(\widehat{A}\) is a Cohen-Macaulay isolated singularity containing a field isomorphic to \(k=A/\mathfrak{m}\). If \(k\) is infinite then we do not have to do anything further and set \(B=\widehat{A}\).
If \(k\) is finite then note that we _cannot_ use the standard technique of replacing \(A\) with \(B=A[X]_{\mathfrak{m}A[X]}\), as the residue field of \(B\) is \(k(X)\), which is NOT perfect. So when \(k\) is finite we do the following construction (from [12, section 2]):
First complete \(A\) (and so we assume \(A\) is complete). Note \(A\) will contain a field isomorphic to \(k\). For convenience denote it by \(k\) too. Fix an algebraic closure \(\overline{k}\) of \(k\). Let
\[\mathcal{C}=\{\ell\mid\ell\text{ is a finite extension of }k\text{ and contained in }\overline{k}\}.\]
For \(\ell\in\mathcal{C}\) let \(A_{\ell}=A\otimes_{k}\ell\). Then \(A_{\ell}\) is local with maximal ideal \(\mathfrak{m}_{\ell}=\mathfrak{m}A_{\ell}\) and residue field \(\ell\). Note that \(\mathcal{C}\) is a directed set (ordered by inclusion). So we have a direct system of rings \(\{A_{\ell}\}_{\ell\in\mathcal{C}}\). Let \(T\) be the direct limit. Then \(T\) is a Cohen-Macaulay local ring with maximal ideal \(\mathfrak{m}_{T}=\mathfrak{m}T\) and residue field \(\overline{k}\). If \(A\) is Gorenstein then so is \(T\), see [12, 3.4]. Furthermore \(T\) is excellent, see [12, 3.3]. By [8, 10.7], \(T\) is also an isolated singularity. Set \(B=\widehat{T}\).
**3.2**.: Assume \(A\) is complete equicharacteristic Cohen-Macaulay local ring which is an isolated singularity. Assume \(k=A/\mathfrak{m}\) is perfect. Then by [18, 6.10] (also see [8, 15.14]) it follows there exists \(a\geq 1\) such that \(\mathfrak{m}^{a}\operatorname{Ext}^{1}_{A}(X,Y)=0\) for all maximal Cohen-Macaulay \(A\)-modules.
**3.3**.: Let \(M,N\) be \(A\)-modules. Let \(\theta(M,N)\) be the \(A\)-submodule of \(\operatorname{Hom}_{A}(M,N)\) consisting of all maps \(f\colon M\to N\) which factor through a free \(A\)-module. Set \(\underline{\operatorname{Hom}}_{A}(M,N)=\operatorname{Hom}_{A}(M,N)/\theta(M,N)\).
With these preliminaries we have
**Lemma 3.4**.: _(with hypotheses as in 3.2) Let \(M,N\) be MCM \(A\)-modules. Assume \(M=\operatorname{Syz}^{A}_{1}(L)\) where \(L\) is a MCM \(A\)-module. Then \(\mathfrak{m}^{a}\underline{\operatorname{Hom}}_{A}(M,N)=0\)._
Proof.: Consider an exact sequence \(0\to M\to F\to L\to 0\) where \(F\) is a free \(A\)-module. So we have an exact sequence
\[0\to\operatorname{Hom}_{A}(L,N)\to\operatorname{Hom}_{A}(F,N)\xrightarrow{ \epsilon}\operatorname{Hom}_{A}(M,N)\to\operatorname{Ext}^{1}_{A}(L,N)\to 0.\]
It follows that \(\operatorname{Hom}_{A}(M,N)/\operatorname{image}(\epsilon)\cong\operatorname{ Ext}^{1}_{A}(L,N)\). The result follows as
\(\underline{\operatorname{Hom}}_{A}(M,N)\) is a quotient of \(\operatorname{Hom}_{A}(M,N)/\operatorname{image}(\epsilon)\).
## 4. Local cohomology of extended Rees module
The aim of this section is to prove the following result which we need to prove Theorem 1.1.
**Theorem 4.1**.: _Let \((A,\mathfrak{m})\) be a complete Cohen-Macaulay local ring of dimension \(d\geq 2\). Let \(I\) be an \(\mathfrak{m}\)-primary ideal in \(A\). Let \(\mathcal{R}_{I}(A)\) be the extended Rees ring of \(A\) with respect to \(I\) and let \(\mathfrak{M}\) be its maximal homogeneous ideal. Let \(N\) be a maximal Cohen-Macaulay \(A\)-module. Then_
1. _The natural map_ \(H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(N))\to H^{i}_{\mathcal{R}_{I}(A)_{+}}( \mathcal{R}_{I}(N))\) _is an isomorphism for_ \(i<d\)_._
2. _The natural map_ \(H^{d}_{\mathfrak{M}}(\mathcal{R}_{I}(N))\to H^{d}_{\mathcal{R}_{I}(A)_{+}}( \mathcal{R}_{I}(N))\) _is an inclusion._
3. _For all_ \(i\leq d\) _we have_ \(H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(N))_{n}=0\) _for_ \(n\gg 0\)_. Furthermore_ \(H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(N))_{n}\) _has finite length for all_ \(n\in\mathbb{Z}\) _and all_ \(i\leq d\)_._
4. _If the residue field_ \(k\) _of_ \(A\) _is infinite then there exists_ \(x\in I\) _which is_ \(N\oplus A\)_-superficial such that the map_ \(H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(N))(-1)\xrightarrow{xt}H^{i}_{ \mathfrak{M}}(\mathcal{R}_{I}(N))\) _has finite length co-kernel (for all_ \(i\leq d\)_)._
5. _If_ \(H^{i}_{G_{I}(A)_{+}}(G_{I}(N))\) _has finite length for_ \(i<r\) _then_ \(H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(N))\) _has finite length for_ \(i<r+1\)_._
Before proving this theorem we need to discuss a few preliminaries that we need.
**4.2**.: The following \(\mathcal{R}_{I}(A)\)-module (introduced in [9]) is very convenient to work with.
\[L^{I}(N)=\bigoplus_{n\geq 0}\frac{N}{I^{n+1}N}.\]
To see that \(L^{I}(N)\) is an \(\mathcal{R}_{I}(A)\)-module, note that we have an exact sequence
\[0\to\mathcal{R}_{I}(N)\to N[t,t^{-1}]\to L^{I}(N)(-1)\to 0.\]
By this exact sequence we give \(L^{I}(N)\) the structure of an \(\mathcal{R}_{I}(A)\)-module. Note that \(L^{I}(N)\) is _not_ finitely generated as an \(\mathcal{R}_{I}(A)\)-module.
**4.3**.: Let \(N\) be a MCM \(A\)-module. Let \(x_{1},\ldots,x_{d}\) be an \(A\)-regular sequence. Then it is also an \(N\)-regular sequence. Notice \(x_{1}t,\ldots,x_{d}t\in\mathcal{R}_{I}(A)_{+}\) is an \(N[t,t^{-1}]\)-regular sequence. Thus \(H^{i}_{\mathcal{R}_{I}(A)_{+}}(N[t,t^{-1}])=0\) for \(i<d\). It follows that \(H^{i}_{\mathcal{R}_{I}(A)_{+}}(\mathcal{R}_{I}(N))=H^{i-1}_{\mathcal{R}_{I}(A)_{+}}(L^{I}(N))(-1)\) for \(i<d\).
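For clarity, the last step can be spelled out: applying \(H^{*}_{\mathcal{R}_{I}(A)_{+}}(-)\) to the exact sequence in 4.2 yields

\[\cdots\to H^{i-1}_{\mathcal{R}_{I}(A)_{+}}(N[t,t^{-1}])\to H^{i-1}_{\mathcal{R}_{I}(A)_{+}}(L^{I}(N)(-1))\to H^{i}_{\mathcal{R}_{I}(A)_{+}}(\mathcal{R}_{I}(N))\to H^{i}_{\mathcal{R}_{I}(A)_{+}}(N[t,t^{-1}])\to\cdots,\]

and for \(i<d\) both outer terms vanish, giving \(H^{i}_{\mathcal{R}_{I}(A)_{+}}(\mathcal{R}_{I}(N))\cong H^{i-1}_{\mathcal{R}_{I}(A)_{+}}(L^{I}(N))(-1)\).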
**4.4**.: In [11, 3.1] we proved that the natural map \(H^{i}_{\mathfrak{M}}(L^{I}(N))\to H^{i}_{\mathcal{R}_{I}(A)_{+}}(L^{I}(N))\) is an isomorphism for all \(i\).
The following lemma is useful.
**Lemma 4.5**.: _(with hypotheses as in 4.1) Assume the residue field \(k\) of \(A\) is infinite. Let \(E=\bigoplus_{n\in\mathbb{Z}}E_{n}\) be a finitely generated \(\mathcal{R}_{I}(A)\)-module with \(E_{n}=0\) for \(n\ll 0\). Then \(\ell_{A}(E_{n})\) is finite for all \(n\in\mathbb{Z}\). Furthermore there exists a non-empty Zariski open dense subset \(U\) of \(I/\mathfrak{m}I\) such that if the image of \(x\in I\) is in \(U\) then the map \(E(-1)\xrightarrow{xt}E\) has finite length kernel._
Proof.: Let \(E\) be generated by homogeneous elements \(e_{1},\ldots,e_{s}\). Then there exists \(a\) with \(t^{-a}e_{i}=0\) for all \(i\). So \(t^{-a}E=0\). Thus \(E\) is a \(S=\mathcal{R}_{I}(A)/t^{-a}\mathcal{R}_{I}(A)\)-module. As \(\ell_{A}(S_{n})\) is finite for all \(n\in\mathbb{Z}\) it follows that \(\ell_{A}(E_{n})\) is finite for all \(n\in\mathbb{Z}\).
Let \(L=It\cong I\) and set \(V=L/\mathfrak{m}L\cong I/\mathfrak{m}I\). If \(P\) is a prime ideal in \(\mathcal{R}_{I}(A)\) then set \(P_{1}=P\cap L\) and \(\overline{P_{1}}=(P_{1}+\mathfrak{m}L)/\mathfrak{m}L\). If \(P\nsupseteq\mathcal{R}_{I}(A)_{+}\) then \(P_{1}\neq L\) and so \(\overline{P_{1}}\) is a proper subspace of \(V\). As \(k\) is infinite
\[U=V\setminus\bigcup_{\stackrel{{ P\in\text{Ass}(E)}}{{P\nsupseteq \mathcal{R}_{I}(A)_{+}}}}\overline{P_{1}}\neq\emptyset.\]
Let \(xt\in U\). Let \(K=\ker(E(-1)\xrightarrow{xt}E)\). We prove \(\ell(K)<\infty\).
Claim 1: If \(Q\in\text{Ass}(K)\) then \(Q\supseteq\mathcal{R}_{I}(A)_{+}\).
Suppose if possible that \(Q\nsupseteq\mathcal{R}_{I}(A)_{+}\). Note \(Q\in\text{Ass}(E)\). So \(xt\notin Q\). We have an exact sequence
\[0\to K_{Q}\to E_{Q}\xrightarrow{xt}E_{Q}.\]
As \(xt\notin Q\), it is a unit in \(\mathcal{R}_{I}(A)_{Q}\). It follows that \(K_{Q}=0\), a contradiction.
By claim 1 it follows that if \(w\in K\) then \((\mathcal{R}_{I}(A)_{+})^{m}w=0\) for some \(m>0\). Thus \(K\) is \(\mathcal{R}_{I}(A)_{+}\)-torsion. As \(\ell(K_{n})<\infty\) for all \(n\) it follows that \(K\) is \((\mathfrak{m}t^{0})\)-torsion. Also as \(t^{-a}K=0\) (as \(t^{-a}E=0\)) it follows that \(K\) is \(t^{-1}\)-torsion. So \(K\) is \(\mathfrak{M}\)-torsion. As \(K\) is finitely generated it follows that \(K\) has finite length. The result follows.
We now give
Proof of Theorem 4.1.: As \(I\) is \(\mathfrak{m}\)-primary it follows that \(\sqrt{(t^{-1},\mathcal{R}_{I}(A)_{+})}=\mathfrak{M}\). We have a long exact sequence in cohomology
\[H^{i-1}_{\mathcal{R}_{I}(A)_{+}}(\mathcal{R}_{I}(N))_{t^{-1}}\to H^{i}_{ \mathfrak{M}}(\mathcal{R}_{I}(N))\to H^{i}_{\mathcal{R}_{I}(A)_{+}}(\mathcal{R}_ {I}(N))\to H^{i}_{\mathcal{R}_{I}(A)_{+}}(\mathcal{R}_{I}(N))_{t^{-1}}\]
We note that for \(i<d\), by 4.3, we have an isomorphism \(H^{i}_{\mathcal{R}_{I}(A)_{+}}(\mathcal{R}_{I}(N))\cong H^{i-1}_{\mathcal{R}_{I}(A)_{+}}(L^{I}(N))(-1)\). But \(L^{I}(N)_{t^{-1}}=0\). Thus the assertions (1), (2) follow.
(3) By an argument similar to [1, 3.8] it follows that
\(H^{i}_{\mathcal{R}_{I}(A)_{+}}(\mathcal{R}_{I}(N))_{n}=0\) for all \(n\gg 0\). Furthermore by an argument similar to [1, 4.1] it follows that \(H^{i}_{\mathcal{R}_{I}(A)_{+}}(\mathcal{R}_{I}(N))_{n}\) has finite length for all \(n\in\mathbb{Z}\). The result follows from (1), (2).
(4) Let \((-)^{\vee}\) denote the Matlis dual functor. Take \(E=G_{I}(N)\oplus G_{I}(A)\oplus\big(\bigoplus_{i\leq d}H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(N))^{\vee}\big)\). Apply Lemma 4.5 to conclude.
(5) Note \(r\leq d\). We first consider the case when \(r\leq d-1\). Then by [10, 5.2], \(H^{i}_{\mathfrak{M}}(L^{I}(N))\) has finite length for \(i<r\). The result follows from 4.3, 4.4 and (1). Finally we consider the case when \(r=d\). Set \(E\) to be the Matlis dual of \(H^{d}_{\mathfrak{M}}(\mathcal{R}_{I}(N))\). Taking cohomology of the exact sequence \(0\to\mathcal{R}_{I}(N)(+1)\xrightarrow{t^{-1}}\mathcal{R}_{I}(N)\to G_{I}(N)\to 0\), and as \(H^{d-1}_{\mathfrak{M}}(G_{I}(N))\) has finite length, it follows that we have an exact sequence
\[E(+1)\xrightarrow{t^{-1}}E\to C\to 0\quad\text{with }\ell(C)<\infty.\]
Let \(P\) be a homogeneous prime ideal in the support of \(E\). As \(E\) is \(t^{-1}\)-torsion it follows that \(t^{-1}\in P\). If \(P\neq\mathfrak{M}\) then, localizing the above exact sequence at \(P\) and applying Nakayama's Lemma, we get \(E_{P}=0\), a contradiction. So \(P=\mathfrak{M}\) and thus \(E\) has finite length.
## 5. Proof of Theorem 1.1
In this section we give:
Proof of Theorem 1.1.: We may assume that \(A\) is complete with infinite residue field (see 3.1). By 3.4 there exists \(a\geq 1\) such that \(\mathfrak{m}^{a}\,\underline{\operatorname{Hom}}_{A}(M,M)=0\) where \(M=\operatorname{Syz}_{1}^{A}(L)\) with \(L\) an MCM \(A\)-module.
Let \(I\) be an \(\mathfrak{m}\)-primary ideal of \(A\) with \(I\subseteq\mathfrak{m}^{a}\). Note we are assuming that \(H^{i}(G_{I}(A))\) has finite length for \(i<r\). We have nothing to prove if \(r\leq 1\), so assume that \(r\geq 2\). By 4.1 we have that \(H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(A))\) has finite length for \(i<r+1\).
Let \(M=\operatorname{Syz}_{1}^{A}(L)\) with \(L\) an MCM \(A\)-module. By 4.1 we may choose \(x\in I\)-superficial with respect to \(M\) such that the map
\[H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(M))(-1)\xrightarrow{xt}H^{i}_{\mathfrak{ M}}(\mathcal{R}_{I}(M))\]
has finite length cokernel for all \(i\leq d\).
As \(I\subseteq\mathfrak{m}^{a}\) we get that \(x\,\underline{\operatorname{Hom}}_{A}(M,M)=0\). So the multiplication map \(\mu_{x}\colon M\to M\) factors through a free \(A\)-module \(F\).
It follows that the multiplication map by \(xt^{0}\) on \(\mathcal{R}_{I}(M)\) factors as \(\mathcal{R}_{I}(M)\to\mathcal{R}_{I}(F)\to\mathcal{R}_{I}(M)\).
So for all \(i<r+1\) and for all \(n\ll 0\) the multiplication map by \(x\) on \(H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(M))_{n}\) is zero. We have a commutative diagram
This induces a commutative diagram in cohomology
By construction of \(x\) the top-left map above is surjective for all \(n\ll 0\). It follows the top-right map (multiplication by \(t^{-1}\)) is zero on all components \(H^{i}_{\mathfrak{M}}(\mathcal{R}_{I}(M))_{n+1}\) for \(n\ll 0\) (with \(i<r+1\)).
We prove \(BH^{i}_{\mathfrak{M}}(G_{I}(M))_{n}=0\) for \(n\ll 0\) and all \(i<r\). Note for \(i=0\) we have \(BH^{0}_{\mathfrak{M}}(G_{I}(M))_{n}\subseteq H^{0}_{\mathfrak{M}}(G_{I}(M))_{n}=0\) for \(n<0\). Now assume \(i>0\). We set \(H^{i}(-)=H^{i}_{\mathfrak{M}}(-)\) on \(\mathcal{R}_{I}(A)\)-modules. Set \(G=G_{I}(M)\), \(\mathcal{R}=\mathcal{R}_{I}(M)\). Note we have an exact sequence
\[H^{i}(\mathcal{R})_{n}\xrightarrow{\gamma^{i}_{n}}H^{i}(G)_{n}\xrightarrow{ \delta^{i}_{n}}H^{i+1}(\mathcal{R})_{n+1}\xrightarrow{t^{-1}}H^{i+1}( \mathcal{R})_{n}\xrightarrow{\gamma^{i+1}_{n}}H^{i+1}(G)_{n}\]
Recall \(\beta^{i}_{n}=\gamma^{i+1}_{n+1}\circ\delta^{i}_{n}\). We also have that the map induced by \(t^{-1}\) is zero for \(n\ll 0\).
We first consider the case when \(i=1\). Then note that \(H^{1}(\mathcal{R})_{n}=0\) for \(n<0\) (see 4.3). So \(\delta^{1}_{n}\) is injective for \(n<0\). We also have \(\gamma^{2}_{n}\) is injective for \(n\ll 0\). So \(\beta^{1}_{n}\) is injective for all \(n\ll 0\). So \(BH^{1}(G)_{n}=0\) for \(n\ll 0\).
Now assume \(2\leq i<r\). Assume the map \(H^{j+1}(\mathcal{R})_{n+1}\xrightarrow{t^{-1}}H^{j+1}(\mathcal{R})_{n}\) is zero for all \(n\leq n_{0}\) and for \(j=i-1,i\). So \(\gamma^{i+1}_{n}\) is injective for all \(n\leq n_{0}\). We also have that \(\delta^{j}_{n}\) is surjective for all \(n\leq n_{0}\) and \(j=i-1,i\). Fix \(n\leq n_{0}-1\). Let \(u\in H^{i}(G)_{n}\) be such that \(\beta^{i}_{n}(u)=0\). As \(\gamma^{i+1}_{n+1}\) is injective we have \(u\in\ker\delta^{i}_{n}=\operatorname{image}\gamma^{i}_{n}\). Say \(u=\gamma^{i}_{n}(v)\) where \(v\in H^{i}(\mathcal{R})_{n}\). However \(\delta^{i-1}_{n-1}\) is surjective. So \(v=\delta^{i-1}_{n-1}(w)\) for some \(w\in H^{i-1}(G)_{n-1}\). It follows that \(u=\beta^{i-1}_{n-1}(w)\). Thus \(u\) is a boundary in the Bockstein complex. It follows that \(BH^{i}(G)_{n}=0\) for all \(n\leq n_{0}-1\).
|
2303.16660 | Price Optimization Combining Conjoint Data and Purchase History: A
Causal Modeling Approach | Pricing decisions of companies require an understanding of the causal effect
of a price change on the demand. When real-life pricing experiments are
infeasible, data-driven decision-making must be based on alternative data
sources such as purchase history (sales data) and conjoint studies where a
group of customers is asked to make imaginary purchases in an artificial setup.
We present an approach for price optimization that combines population
statistics, purchase history and conjoint data in a systematic way. We build on
the recent advances in causal inference to identify and quantify the effect of
price on the purchase probability at the customer level. The identification
task is a transportability problem whose solution requires a parametric
assumption on the differences between the conjoint study and real purchases.
The causal effect is estimated using Bayesian methods that take into account
the uncertainty of the data sources. The pricing decision is made by comparing
the estimated posterior distributions of gross profit for different prices. The
approach is demonstrated with simulated data resembling the features of
real-world data. | Lauri Valkonen, Santtu Tikka, Jouni Helske, Juha Karvanen | 2023-03-29T13:10:56Z | http://arxiv.org/abs/2303.16660v2 | # Price Optimization Combining Conjoint Data and Purchase History: A Causal Modeling Approach
###### Abstract
Pricing decisions of companies require an understanding of the causal effect of a price change on the demand. When real-life pricing experiments are infeasible, data-driven decision-making must be based on alternative data sources such as purchase history (sales data) and conjoint studies where a group of customers is asked to make imaginary purchases in an artificial setup. We present an approach for price optimization that combines population statistics, purchase history and conjoint data in a systematic way. We build on the recent advances in causal inference to identify and quantify the effect of price on the purchase probability at the customer level. The identification task is a transportability problem whose solution requires a parametric assumption on the differences between the conjoint study and real purchases. The causal effect is estimated using Bayesian methods that take into account the uncertainty of the data sources. The pricing decision is made by comparing the estimated posterior distributions of gross profit for different prices. The approach is demonstrated with simulated data resembling the features of real-world data.
_Keywords:_ Pricing, Bayesian model, Causal inference, Data-fusion, Demand estimation
Introduction
Pricing decisions are vital for any business seeking profitability. In price optimization (Phillips, 2021), a company has to estimate the impact of a price change on the behavior of both current customers and potential new customers. This estimation task is essentially a problem of causal inference (Pearl, 2009) where the goal is to quantify the causal effect of price on the demand (Guelman and Guillen, 2014).
Price elasticity of demand is a traditional approach for measuring the effect of price change in demand (Bijmolt et al., 2005). However, elasticity is only the derivative of demand at one price point and may not correctly describe the change of demand for large price changes. A more general approach is to consider the price-response function (Phillips, 2021) that specifies the expected demand as a function of price. When customer-level data are available, the purchase probabilities of each customer can be modeled as a function of price and customer characteristics.
Historical data on sales and prices are readily available for most companies. If the product is a subscription-based digital service, purchases can be identified on the customer level and additional personal data on demographics and usage habits can be often collected via registration forms and browser cookies. Historical sales data alone are usually insufficient for price optimization. The price may have remained the same for a relatively long time and there may be confounders that have affected both the changes in pricing and the changes in sales (Tian and Feinberg, 2020).
In a pricing experiment, also known as an A/B test (Kohavi and Longbotham, 2017), customers are randomized into two or more groups and a different price is offered for each group. Ideally, a pricing experiment is the most reliable way to estimate the causal effect of price on the purchase. However, some challenges may render real-life pricing experiments impractical. For instance, browser cookies may not have a one-to-one correspondence with customers, which complicates the technical implementation of randomization in digital channels. In addition to the lack of full controllability, offering different prices may cause confusion among customers, affect their behavior, and even have negative effects on the brand.
A conjoint study (see e.g., Rao et al., 2014) is an experimental setting, where a customer is asked to choose among different variations of a product with modifications to the features,
such as price. A conjoint study as a well-designed randomized experiment can bring valuable information about the customers' sensitivity to prices and enables testing of many different prices at once. However, the behavior of the participants in the artificial setup may differ from their behavior in a real purchase situation (see e.g., Natter and Feurstein, 2002).
Combining data collected under various potentially heterogeneous conditions is known as the data-fusion problem (Bareinboim and Pearl, 2016). Such heterogeneity can be a result of for example inherent differences between the populations or the sampling methods used. Data-fusion is often not straightforward, as in the case of combining data from pricing experiments and conjoint studies due to the difference in participant behavior. This scenario is an example of a transportability problem where the goal is to use data from several source populations to make inferences about a target population of interest.
In this paper, we propose an approach for combining different data sources to estimate the causal effect of pricing on a purchase in the absence of real-life pricing experiments. We consider a subscription-based service, such as video or music streaming, an audiobook service, or a digital newspaper. The proposed work is motivated by a real business case that we carried out together with a company that offers subscription-based digital services. As publishing the results based on the real data would be against the business interest of the company, we use simulated data that aim to capture the most important features of real data.
The proposed approach consists of four steps: 1) The causal relations of the purchase process are described in the form of a directed acyclic graph (DAG). 2) The causal effect of price on purchases is identified from both observational and experimental data sources presented in a symbolic form. 3) A hierarchical Bayesian model is fitted to estimate the causal effect according to the obtained identifying functionals. 4) The posterior distribution of the optimal price is found by maximizing the expected gross profit defined as a function of the price and the purchase probabilities estimated in step 3.
The rest of the paper is organized as follows. We begin by describing the subscription-based business model, the data sources and the transportability problem in Section 2. The details of the simulation are described in Section 3. The optimization procedure and its results are presented in Section 4 and compared with the approach where only sales data is used for estimation and optimization. We conclude the paper with an evaluation of
our method and discuss further research possibilities in Section 5. Table 1 in Appendix summarizes the notation of this paper. Code and data to reproduce the results and figures are available at [https://github.com/lvalkone/Proptim](https://github.com/lvalkone/Proptim).
## 2 Causal model for a subscription-based service
### Problem definition and causal diagram
We consider a company that offers a subscription-based service for consumers. The subscription is automatically renewed monthly unless the customer cancels it. The total revenue of the company is the product of the price and the number of subscriptions, and the total profit is the difference of the revenue and the costs which can be divided into variable costs and fixed costs. The total profit is maximized when the gross profit, defined as the difference of the revenue and the variable costs, is maximized.
In addition to the price, the purchase behavior of a customer depends on background variables such as age, gender, and location. It is also clear that the probability of retention (an existing customer continues to subscribe) is higher than the probability of acquisition (a new customer subscribes) (Reinartz et al., 2005). In our model, we assume that each customer has a latent personal reference price (see e.g., Winer, 1986; Mazumdar et al., 2005; Cao et al., 2019) that they implicitly or explicitly compare with the actual price. When the price of the product is changed, the subscription probability will change as well, and the impact will be different for new and existing subscriptions because the distribution of reference prices differs between these groups due to selection. Estimating the demand after the price change is essential for the optimization of the profit.
Figure 1 shows the DAG that represents the causal relations of the variables of interest in our scenario. At the time \(t\), the purchase \(Y_{t}\) is affected directly by the price of the service \(X_{t}\). We also assume that the purchase \(Y_{t}\) is affected by a latent variable \(Q\) that represents the customer's reference price. The larger the difference between the reference price and the product price is, the more likely the customer is willing to buy. The reference price is also affected by background variables (age, gender, location) denoted commonly by cluster \(B\)(Tikka et al., 2021a), which for simplicity of the exposition are assumed to be constant in time. Accumulated subscription history is assumed to have a positive effect
on repurchasing, and thus we also assume that \(Y_{t}\) is affected by the number of consecutive subscription periods \(S_{t}\).
The DAG also contains transportability nodes that describe differences between populations in the underlying causal model (Bareinboim et al., 2013). The transportability node \(H\) describes that the distribution of the background variables \(B\) may differ between subscribers and non-subscribers. The transportability node \(C\) (presented separately for time points \(t\) and \(t+1\)) describes that the purchase probability of a customer may be different between a conjoint scenario and a real purchase scenario.
### Data sources
The company gathers data on the customers, purchases \(Y_{t}\), and the purchase history \(S_{t}\). The customers register for the service and provide information on their age (\(A\)), gender (\(G\)), and location (\(L\)), commonly denoted by \(B\). We denote the distributions related to the subscriber population (\(H=1\)) as \(P^{(1)}\) and to the non-subscriber population (\(H=0\)) as \(P^{(0)}\).
The company conducts a conjoint study (Rao et al., 2014) to increase its understanding on the expected consequences of a price change. This allows the company to test a variety
Figure 1: Cross-section of the data generating process for purchase decision at time points \(t\) and \(t+1\). Circles denote observed variables, triangles denote transportability nodes, and squares denote variables targeted by interventions.
of prices without interfering with the actual business. The conjoint is targeted at both current and earlier subscribers as well those who have never subscribed.
In a symbolic form, the purchase history data is denoted by \(P^{(1)}(Y_{t},S_{t}\,|\,\mathrm{do}(X_{t}),Q)\). The conjoint data contains both subscribers and non-subscribers and thus provides information from two domains via two distributions: \(P^{(1)}(Y_{t},S_{t}\,|\,\mathrm{do}(X_{t}),Q,C)\) and \(P^{(0)}(Y_{t},S_{t}\,|\,\mathrm{do}(X_{t}),Q,C)\), respectively. The purchase history and the conjoint data cannot be combined in a straightforward manner because the behavior of customers is expected to differ between the real purchase situation and the artificial purchase situation in the conjoint study. Formally, we can show that our causal effects of interest, i.e., the effect of price on the purchase in the context of a real purchase scenario for subscribers and non-subscribers, respectively, cannot be identified from the conjoint data alone, because the transportability node \(C\) directly affects the purchase decision \(Y\). Therefore, the conjoint data cannot be used without further assumptions. We assume here that the difference in the purchase probability between the conjoint scenario and the real purchase scenario can be parametrized via a level shift in the reference price:
\[P^{(i)}(Y_{t},S_{t}\,|\,\mathrm{do}(X_{t}),Q=q)=P^{(i)}(Y_{t},S_{t}\,|\, \mathrm{do}(X_{t}),Q=q+\kappa,C),\quad i=0,1, \tag{1}\]
where unknown parameter \(\kappa\) models the conjoint effect.
Population level data on the background variables are needed when potential new customers are included in the analysis. These data are available in an aggregated form from official statistics and they provide information on the distribution \(P^{(0)}(B)\). Data on background variables of the subscribers is available from the price history data as \(P^{(1)}(B)\). We assume that the reference price is a latent variable and that the parametric form of its distribution \(P^{(0)}(Q|B)\) is known. Given the background variables, the distribution of the reference price is same across domains, i.e., \(P^{(0)}(Q|B)=P^{(1)}(Q|B)\), which we simply denote by \(P(Q|B)\).
### Identifying the causal effect and formulating the model
Our goal is to identify the causal effect of price on the purchase probability for non-subscribers and subscribers, i.e., \(P^{(0)}(Y_{t}\,|\,\mathrm{do}(X_{t}))\) and \(P^{(1)}(Y_{t}\,|\,\mathrm{do}(X_{t}))\) from the available data sources in the DAG of Figure 1. Due to the challenging nature of the task, we apply
the do-search algorithm (Tikka et al., 2021b) to identify the effects. Do-search accepts arbitrary data sources (in a symbolic form) as input and provides the identifying functional when the effect in question is identifiable. The DAG of Figure 1 and the data sources \(P^{(0)}(Y_{t},S_{t}\,|\,\mathrm{do}(X_{t}),Q),\ P^{(1)}(Y_{t},S_{t}\,|\,\mathrm{do }(X_{t}),Q),\ P^{(0)}(B),\ P^{(1)}(B),\) and \(P(Q\,|\,B)\) are given as input to the algorithm. We note that the conjoint data also provides information on \(P^{(0)}(Y_{t},S_{t}\,|\,\mathrm{do}(X_{t}),Q)\) and \(P^{(1)}(Y_{t},S_{t}\,|\,\mathrm{do}(X_{t}),Q)\) due to our assumption in (1). Do-search returns the following identifying functionals for our causal effects of interest:
\[P^{(0)}(Y_{t}\,|\,\mathrm{do}(X_{t})) =\sum_{B,S_{t}}\int_{Q}P^{(0)}(Y_{t}\,|\,\mathrm{do}(X_{t}),Q,S_{t })P^{(0)}(S_{t}\,|\,\mathrm{do}(X_{t}),Q)P(Q|B)P^{(0)}(B),\] \[P^{(1)}(Y_{t}\,|\,\mathrm{do}(X_{t})) =\sum_{B,S_{t}}\int_{Q}P^{(1)}(Y_{t}\,|\,\mathrm{do}(X_{t}),Q,S_{t })P^{(1)}(S_{t}\,|\,\mathrm{do}(X_{t}),Q)P(Q|B)P^{(1)}(B).\]
We need to model the conditional distribution \(P(Q|B)\) of the latent reference price \(Q\). As the reference price is assumed to vary between individuals, we fit a log-normal model for each individual \(i\). Our model for the reference price is then
\[Q_{i}=\exp(\beta_{0}+\beta_{1,A_{i}}+\beta_{2,G_{i}}+\beta_{3,L_{i}}+u_{i}),\]
where \(\beta_{j,k}\) refers to the \(k\)th category of the \(j\)th predictor, and \(u_{i}\) is an individual normally distributed random effect. The purchase choice is modeled via a Bernoulli distribution using logit-link as
\[\mathrm{logit}(\pi_{i,t})=\alpha_{1}(Q_{i}+I(C_{i}=1)\kappa-X_{i,t})+\alpha_{ 2}\log(S_{i,t}+1)+\alpha_{3}I(S_{i,t}=0),\]
where parameter \(\alpha_{1}\) describes the impact of the difference between the reference price \(Q_{i}\) (adjusted by the conjoint effect \(\kappa\) when \(C_{i}=1\)) and the price \(X_{i,t}\), parameter \(\alpha_{2}\) is the effect of consecutive subscription periods, and parameter \(\alpha_{3}\) describes the impact of the difference between customers who are just starting their subscription and those who are simply continuing their earlier subscription.
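To make the two model equations concrete, the following is a minimal Python sketch of how an individual reference price and purchase probability could be computed. The function names and all parameter values below are illustrative assumptions; the actual implementation (in R) is available in the repository linked above.

```python
import numpy as np

rng = np.random.default_rng(1)

def reference_price(beta0, beta_age, beta_gender, beta_loc, age, gender, loc, tau):
    """Log-normal reference price Q_i = exp(b0 + b_{1,A_i} + b_{2,G_i} + b_{3,L_i} + u_i)."""
    u = rng.normal(0.0, tau)  # individual random effect u_i ~ N(0, tau^2)
    return np.exp(beta0 + beta_age[age] + beta_gender[gender] + beta_loc[loc] + u)

def purchase_probability(q, x, s, conjoint, alpha1, alpha2, alpha3, kappa):
    """logit(pi) = a1*(Q + kappa*I(C=1) - X) + a2*log(S + 1) + a3*I(S = 0)."""
    logit = (alpha1 * (q + kappa * conjoint - x)
             + alpha2 * np.log(s + 1)
             + alpha3 * (s == 0))
    return 1.0 / (1.0 + np.exp(-logit))

# Illustrative call with made-up coefficient values (not those used in the simulation):
q_i = reference_price(2.7, {"31-45": 0.1}, {"female": 0.05}, {"urban": 0.0},
                      "31-45", "female", "urban", tau=0.2)
pi_i = purchase_probability(q_i, x=16.0, s=3, conjoint=0,
                            alpha1=0.8, alpha2=0.5, alpha3=-1.0, kappa=0.75)
```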
## 3 Data simulation
We simulate our data based on the structure of the DAG in Figure 1. The simulations are implemented in the R environment (R Core Team, 2021) with the R6Causal package
(Karvanen, 2021). The statistics related to the choice of parameters are described in Table 2 of the Appendix.
We use real data from Statistics Finland (Statistics Finland, 2021) as a basis for the population demographics, covering a joint distribution of age, gender, and location. The variables are categorized for our study such that the age covers four groups: 18-30, 31-45, 46-60, and 61-75. The two other variables consist of dichotomous classifications as male and female in gender, and urban and rural in location. According to the data above, the company's target market is here limited to Finland and individuals who are between 18-75 years old. The market size as the population of our study in 2020 is thus \(N=3\,956\,294\). Other variables of the DAG are simulated according to the flowchart presented in Figure 2. The chosen parameters defining the functional forms of the causal relationships used in the simulation are presented in Table 2 of the Appendix.
At the time \(t=1\), the company launches the product to the market, and every time point \(t\), a share of the population signs into the service of which a subset makes a subscription. We simulate the subscription history of two years (\(t=1,2,...,24\)). The company launches the product for 16 euros. The price is then raised by 0.50 euros two times during the two years: after 6 months and after 18 months, i.e. at the times \(t=7\), and \(t=19\).
The customer choice (purchase or not) is modeled by a Bernoulli distribution with purchase probability \(\pi_{i,t}\). Probability \(\pi_{i,t}\) depends on the difference between the product price and the customer's reference price which is affected by background variables \(B\) (see the DAG in Figure 1). Besides this, the number of earlier subscription periods \(S_{i,t}\) directly affects the choice.
Figure 2: Algorithmic description of the data simulation: The first block describes the elements needed to be initialized for the purchase history simulation in the second block. The third block covers the simulation of conjoint studies after \(t=24\), and the fourth block returns to the second block for future purchase simulations of \(t=25\).
Using the sales data alone for finding the optimal price is unreliable since it contains information on the effect of the price for only three points: 16, 16.5, and 17 euros. Thus, after two years in the market, the company implements a conjoint study to explore the price sensitivity of customers. The participants of conjoint studies include current and earlier customers as well as those who have never subscribed.
In the conjoint study setting, 10 separate tasks of product alternatives are shown to a customer, who is asked to choose whether to purchase the option offered. The product modifications comprise varying prices ranging between 12 and 22 euros in steps of 0.50 euros. With this price range, 10 out of the 21 possible tasks are randomly sampled without replacement to be shown to a participant. The conjoint effect \(\kappa\) of Equation 1 is set to a moderate value of 0.75 euros.
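As an illustration, the price tasks shown to one participant could be drawn as in the following Python sketch (the actual simulation code, written in R, is in the linked repository):

```python
import numpy as np

rng = np.random.default_rng(42)
price_grid = np.arange(12.0, 22.5, 0.5)   # the 21 candidate prices 12, 12.5, ..., 22 euros

def conjoint_tasks(n_tasks=10):
    """Sample the prices shown to one participant without replacement."""
    return np.sort(rng.choice(price_grid, size=n_tasks, replace=False))
```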
## 4 Optimizing the price
### Estimating the model
We estimate the joint Bayesian (see e.g., Gelman et al., 2014) model for the reference prices and purchases using Markov chain Monte Carlo with the R package NIMBLE (de Valpine et al., 2017). Prior distributions for the regression coefficients are defined as \(N(0,0.5)\) distributions. Individual random effects \(u_{i}\) are assumed to be distributed as \(N(0,\tau^{2})\), and for the standard deviation \(\tau\) we set a prior \(\tau\sim\text{Gamma}(2,0.2)\) using the shape-scale parameterization.
### Maximizing the expected gross profit
Next, we describe how to optimize the price of the product by using the Bayesian model defined in Section 2.3. We aim to maximize the expected gross profit at the time \(t\), defined as
\[f(x,\mu^{0}(x,t),\mu^{1}(x,t)\left|N^{0}(t),N^{1}(t),\mathcal{V}\right)=N^{0}( t)\mu^{0}(x,t)(x-\mathcal{V})+N^{1}(t)\mu^{1}(x,t)(x-\mathcal{V}), \tag{2}\]
where \(x\) is the price of the product, and \(\mu^{0}(x,t)\) and \(\mu^{1}(x,t)\) are the purchase probabilities of potential and current customers, respectively. \(N^{0}(t)\) is the number of potential customers making a purchase decision at time \(t\), whereas \(N^{1}(t)\) is the number of current customers
i.e. those who made a purchase at time \(t-1\) and are now making a decision to continue or cancel the subscription. The cost structure related to gross profit includes the variable costs, which are denoted as \(\mathcal{V}\).
As we optimize the expected gross profit by the price at the following time point \(t=25\), we can formalize the optimization problem as
\[\max_{x}\left(f(x,\mu^{0}(x,t),\mu^{1}(x,t)\,|\,N^{0}(25),N^{1}(2 5),\mathcal{V})\right)\] \[= \max_{x}\left(N^{0}(25)\mu^{0}(x,25)(x-\mathcal{V})+N^{1}(25)\mu ^{1}(x,25)(x-\mathcal{V})\right).\]
We assume that every month a constant number of potential customers \(N^{0}(t)=1000\) make a purchase choice. On the other hand, the number of current customers \(N^{1}(t)\) can be calculated directly from the historical sales data, which is \(1148\) for \(t=25\). In addition, we assume that the variable costs \(\mathcal{V}\) are constant over time, such that \(\mathcal{V}=5\).
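A minimal sketch of the gross profit computation in Equation (2), using the constants above and hypothetical purchase probabilities, could read as follows:

```python
def expected_gross_profit(x, mu0, mu1, n0=1000, n1=1148, variable_cost=5.0):
    """Equation (2): N^0(t)*mu^0(x,t)*(x - V) + N^1(t)*mu^1(x,t)*(x - V)."""
    return n0 * mu0 * (x - variable_cost) + n1 * mu1 * (x - variable_cost)

# Purchase probabilities here are hypothetical and for illustration only:
profit = expected_gross_profit(x=15.0, mu0=0.05, mu1=0.85)
```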
In Equation (2), the pricing affects both current and potential customers. Therefore, we divide the upcoming customer base data \(\mathcal{D}\) into two sets: The data set \(\mathcal{D}^{0}\) includes the customers that either canceled the subscription at \(t\leq 24\) or have never purchased. This data set of customers can be obtained by random sampling from the population with a size of \(N^{0}(25)=1000\). The data set \(\mathcal{D}^{1}\) consists of those customers who purchased at \(t=24\), so they are automatically about to choose at time \(t=25\). This customer base in \(\mathcal{D}^{1}\) can be obtained directly from the historical sales data.
We simulate \(1\,000\,000\) samples from the posterior distributions of the model parameters with a burn-in period of \(500\,000\) and a thinning interval of \(500\), resulting in a final sample size of \(1000\). We denote the model parameters commonly as \(\mathbf{\Theta}\). To study the effect of pricing by inspecting the posterior predictive distributions, we repeat the following for a set of different prices: For a given posterior sample, we calculate the reference prices \(Q_{i}^{\mathcal{D}^{0}}\) and \(Q_{j}^{\mathcal{D}^{1}}\) and purchase probabilities \(\pi_{i}^{\mathcal{D}^{0}}\) and \(\pi_{j}^{\mathcal{D}^{1}}\) for all \(i=1,...,N^{0}(25)\) and \(j=1,...,N^{1}(25)\). For customers in \(\mathcal{D}^{1}\) without an estimated individual effect \(u_{j}\), we generate the value from the \(N(0,\tau^{2})\) distribution. The expected purchase probability for decisions \(Y^{0}\) and
\(Y^{1}\) for \(k\)th posterior samples \(\mathbf{\Theta}^{k}\) are then estimated as
\[\mu^{0,k} =E(Y^{0}\,|\,\mathrm{do}(X),C=0,\mathbf{\Theta}^{k})\approx\frac{1}{N^ {0}(25)}\sum_{i=1}^{N^{0}(25)}\pi_{i}^{\mathcal{D}^{0},k}, \tag{3}\] \[\mu^{1,k} =E(Y^{1}\,|\,\mathrm{do}(X),C=0,\mathbf{\Theta}^{k})\approx\frac{1}{ N^{1}(25)}\sum_{j=1}^{N^{1}(25)}\pi_{j}^{\mathcal{D}^{1},k}. \tag{4}\]
The estimates from (3) and (4) are then input into (2), and finally, the mean and 95% quantiles are calculated over the posterior samples to obtain the estimated expected gross profits and their credible intervals.
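The resulting grid search over candidate prices can be summarized by the following Python sketch. It assumes that the per-customer posterior purchase probabilities are already stored in arrays `pi0` and `pi1` (indexed by posterior sample, candidate price, and customer); the variable names are illustrative, and the actual analysis is carried out with the R code in the linked repository.

```python
import numpy as np

def optimize_price(prices, pi0, pi1, n0=1000, n1=1148, variable_cost=5.0):
    """prices: 1-D array of candidate prices; pi0: (K, P, N0), pi1: (K, P, N1)."""
    mu0 = pi0.mean(axis=2)                          # Equation (3): average over D^0 customers
    mu1 = pi1.mean(axis=2)                          # Equation (4): average over D^1 customers
    profit = (n0 * mu0 + n1 * mu1) * (prices[None, :] - variable_cost)   # Equation (2)
    best = profit.argmax(axis=1)                    # optimal price index per posterior sample
    return {
        "mean_profit": profit.mean(axis=0),
        "ci": np.quantile(profit, [0.025, 0.975], axis=0),
        "p_optimal": np.bincount(best, minlength=len(prices)) / len(best),
    }

prices = np.arange(14.0, 18.25, 0.25)               # candidate prices 14, 14.25, ..., 18 euros
```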
### Optimization results
Figure 3 considers the price optimization in the case of combining the purchase history and conjoint data. The left panel in Figure 3 shows the estimated expected gross profits and their 95% credible intervals against the different price interventions (from 14 to 18 euros in increments of 0.25 euros), together with the true expected gross profits, which can be calculated from (2), where \(\mu^{0}\) and \(\mu^{1}\) are obtained from the purchase history simulation for \(t=25\). By calculating which price yields the highest expected gross profit in each posterior sample, we can also obtain a probability distribution of the prices that maximize the expected gross profit, which is presented in the right panel in Figure 3. The price with the highest probability, i.e., the optimal price, is found at \(X=15\), with an expected gross profit of 14 417 euros and a 95% credible interval of (14 236, 14 600).
We attempted to estimate the model and optimize the price without the conjoint data. As expected, we encountered severe convergence issues. This may indicate that the model is not identifiable from the purchase history data alone, when the number of different prices is small.
## 5 Discussion
We proposed an approach for optimizing the price of a subscription-based service when real-life pricing experiments are unavailable. We described the customers' behavior in the form of a causal graph and identified the causal effect of the price by combining population statistics, purchase history and conjoint data. The causal effect was estimated by Bayesian methods and as a result we obtained a posterior distribution for the optimal price that maximizes the expected gross profit.
The strengths of the proposed approach are related to the use of causal inference to guide the fusion of data sources. Instead of mixing the historical observational data with experimental data from the conjoint studies in an ad hoc fashion, we illustrated how such data sources can be combined in a theoretically sound fashion using causal inference. While we relied on simulated data due to the confidentiality of the real data, the causal graph used in the simulation and subsequent analysis was designed to closely emulate the assumed causal relations of the motivating real scenario.
We demonstrated that combining data sources is often necessary for price optimization.
Figure 3: Price optimization for time \(t=25\). In the left panel, the estimated expected gross profits with 95% credible intervals for different price interventions and the true expected gross profits are presented. The right panel depicts the probability of each price being the optimal price.
In our scenario, we were unable to estimate the model when the conjoint data were not available due to identifiability issues. Purchase history alone was insufficient for price optimization as the variation in the historical prices was low.
The proposed approach requires a good understanding of the causal mechanisms of customer behavior. These mechanisms can be communicated in the form of a graph as illustrated in Figure 1. The most critical assumptions are those related to unobserved confounders because the causal effect of price on the purchase may not be identifiable in the presence of unobserved confounders.
Another important assumption is the stability of market conditions and customer behavior in time. Without this assumption, we could not use past data to make decisions about the future. In practice, this assumption means that the data used for the analysis must be recent, not decades old.
For the clarity of the presentation, the presented scenario included some simplifying assumptions that could be extended in real use cases. For instance, the number of background variables could be higher and customer-level data on the service usage could be available. In some cases, it may be necessary to model the seasonal variation in the demand and the number of upcoming customers. The design of the conjoint study could also be more refined than the described setting. If a real-life pricing experiment is available, it should naturally be included as one of the data sources. The proposed approach can be easily adapted to these more complicated settings.
A possible direction for future research includes extension to multiple products. In the case of a subscription-based business model, there could be alternative subscription plans with different coverage, quality, and price. The model would be expanded to incorporate product-specific reference prices that are correlated on the customer level. Along similar lines, we could take into account the price changes implemented by the competitors. As the competitors are expected to react to price changes by the company in focus, dynamical modeling, and game theoretic considerations come into play.
Our work is one of the first applications of causality-driven data-fusion in the business context. We anticipate that the proposed methodology can be applied to various other causal inference and price optimization problems.
## Declaration of interests
ST and JK have applied the proposed method in consulting.
|
2305.12788 | GraphCare: Enhancing Healthcare Predictions with Personalized Knowledge
Graphs | Clinical predictive models often rely on patients' electronic health records
(EHR), but integrating medical knowledge to enhance predictions and
decision-making is challenging. This is because personalized predictions
require personalized knowledge graphs (KGs), which are difficult to generate
from patient EHR data. To address this, we propose \textsc{GraphCare}, an
open-world framework that uses external KGs to improve EHR-based predictions.
Our method extracts knowledge from large language models (LLMs) and external
biomedical KGs to build patient-specific KGs, which are then used to train our
proposed Bi-attention AugmenTed (BAT) graph neural network (GNN) for healthcare
predictions. On two public datasets, MIMIC-III and MIMIC-IV, \textsc{GraphCare}
surpasses baselines in four vital healthcare prediction tasks: mortality,
readmission, length of stay (LOS), and drug recommendation. On MIMIC-III, it
boosts AUROC by 17.6\% and 6.6\% for mortality and readmission, and F1-score by
7.9\% and 10.8\% for LOS and drug recommendation, respectively. Notably,
\textsc{GraphCare} demonstrates a substantial edge in scenarios with limited
data availability. Our findings highlight the potential of using external KGs
in healthcare prediction tasks and demonstrate the promise of
\textsc{GraphCare} in generating personalized KGs for promoting personalized
medicine. | Pengcheng Jiang, Cao Xiao, Adam Cross, Jimeng Sun | 2023-05-22T07:35:43Z | http://arxiv.org/abs/2305.12788v3 | # GraphCare: Enhancing Healthcare Predictions with Open-World Personalized Knowledge Graphs
###### Abstract
Clinical predictive models often rely on patients' electronic health records (EHR), but integrating medical knowledge to enhance predictions and decision-making is challenging. This is because personalized predictions require personalized knowledge graphs (KGs), which are difficult to generate from patient EHR data. To address this, we propose GraphCare, an open-world framework that leverages external KGs to improve EHR-based predictions. Our method extracts knowledge from large language models (LLMs) and external biomedical KGs to generate patient-specific KGs, which are then used to train our proposed Bi-attention AugmenTed (BAT) graph neural network (GNN) for healthcare predictions. We evaluate GraphCare on two public datasets: MIMIC-III and MIMIC-IV. Our method outperforms baseline models in four vital healthcare prediction tasks: mortality, readmission, length-of-stay, and drug recommendation, improving AUROC on MIMIC-III by average margins of 10.4%, 3.8%, 2.0%, and 1.5%, respectively. Notably, GraphCare demonstrates a substantial edge in scenarios with limited data availability. Our findings highlight the potential of using external KGs in healthcare prediction tasks and demonstrate the promise of GraphCare in generating personalized KGs for promoting personalized medicine.
## 1 Introduction
The digitization of healthcare systems has led to the accumulation of vast amounts of electronic health record (EHR) data that encode valuable information about patients, treatments, etc. Machine learning models have been developed on these data and demonstrated huge potential for enhancing patient care and resource allocation via predictive tasks, including mortality prediction [1; 2], length-of-stay estimation [3; 4], readmission prediction [5; 6], and drug recommendations [7; 8].
To improve predictive performance and integrate expert knowledge with data insights, clinical knowledge graphs (KGs) were adopted to complement EHR modeling [9; 10; 11]. These KGs represent medical concepts (e.g., diagnoses, procedures, drugs) and their relationships, enabling effective learning of patterns and dependencies. However, existing approaches mainly focus on simple hierarchical relations [12; 13; 10] rather than leveraging comprehensive relationships among biomedical entities despite incorporating valuable contextual information from established biomedical knowledge bases (e.g., UMLS [14]) could enhance predictions. Moreover, large language models (LLMs) such as GPT [15; 16; 17; 18] pre-trained on web-scale biomedical literature could serve as alternative resources for extracting clinical knowledge given their remarkable reasoning abilities on open-world data. There is a substantial body of existing research demonstrating their potential use as knowledge bases [19; 20; 21].
To fill the gap in personalized medical KGs, we propose to leverage the exceptional reasoning abilities of LLMs to extract and integrate personalized KG from open-world data. Our proposed method
GraphCare (Personalized **Graph**-based Health**Care** Prediction) is a framework designed to generate patient-specific KGs by effectively harnessing the wealth of clinical knowledge. As shown in Figure 1, our patient KG generation module first takes medical codes as input and generates code-wise KGs by prompting LLMs or retrieving subgraphs from existing graphs. It then clusters nodes and edges to create a more aggregated KG for each medical code. Next, it constructs a personalized KG for each patient by merging their associated medical code KGs and incorporating temporal information from their sequential visit data. These patient-specific graphs are then fed into our **Bi**-attention **A**ugmen**T**ed (BAT) graph neural network (GNN) for diverse downstream prediction tasks.
We evaluated the effectiveness of GraphCare using two widely-used EHR datasets, MIMIC-III [22] and MIMIC-IV [23]. Through extensive experimentation, we found that GraphCare outperforms several baselines, while BAT outperforms state-of-the-art GNN models [24; 25] on four common healthcare prediction tasks: mortality prediction, readmission prediction, LOS prediction, and drug recommendation. Our experimental results demonstrate that GraphCare, equipped with the BAT, achieves average AUROC improvements of 10.4%, 3.8%, 2.0%, 1.5% and 4.3%, 1.3%, 1.2%, 1.5% over all baselines on MIMIC-III and MIMIC-IV, respectively. Furthermore, our approach requires significantly fewer patient records to achieve comparable results, providing compelling evidence for the benefits of integrating open-world knowledge into healthcare predictions.
## 2 Related Works
**Clinical Predictive Models.** EHR data has become increasingly recognized as a valuable resource in the medical domain, with numerous predictive tasks utilizing this data [5; 7; 1; 3]. A multitude of deep learning models have been designed to cater to this specific type of data, leveraging its rich, structured nature to achieve enhanced performance [26; 27; 28; 29; 30; 8; 31; 10; 32; 33; 34; 35; 36]. Among these models, some employ a graph structure to improve prediction accuracy, effectively capturing underlying relationships among medical entities [10; 37; 38; 39; 40; 41; 36; 8]. However, a limitation of these existing works is that they do not link the local graph to an external knowledge base, which contains a large amount of valuable relational information [42; 43]. We propose to
Figure 1: **Overview of GraphCare.**_Above_: Given a patient record consisting of conditions, procedures and medications, we generate a concept-specific KG for each medical code, either by relational knowledge prompting from a large language model or by subgraph sampling from an existing KG; and we perform node and edge clustering among all graphs (§3.1). _Below_: For each patient, we compose a patient-specific graph by combining the medical concept KGs associated with them and attach the sequential data labeling the temporal information across visits (§3.2). To utilize the patient graph for predictions, we employ a bi-attention augmented graph neural network (GNN) model, which highlights essential visits and nodes with attention weights (§3.3). With three types of patient representations (patient-node, patient-graph, and joint embeddings), GraphCare is able to handle a variety of healthcare predictions (§3.4).
create a customized knowledge graph for each medical concept in an open-world setting by probing relational knowledge from either LLMs or KGs, enhancing its predictive capabilities for healthcare.
**Personalized Knowledge Graphs.** Personalized KGs have emerged as promising tools for enhancing clinical decision-making and healthcare predictions [44; 45; 46; 47; 48]. Previous approaches such as GRAM [12] and its successors [49; 50; 51; 52] incorporated external hierarchical graphs to improve predictions of deep learning-based models; however, they primarily focus on simple parent-child relationships, overlooking the rich complexities found in large knowledge bases. MedML [53] employs graph data for COVID-19 related prediction. However, the KG in this work has a limited scope and relies heavily on curated features. To address the aforementioned gaps, we propose two solutions for automatically constructing comprehensive personalized KGs from open sources: utilizing prompting techniques [54] derived from LLMs to generate KGs tailored to healthcare predictive tasks, or employing random sampling from existing KGs [14] to ensure a diverse and representative knowledge base.
**Attention-augmented GNNs.** Attention mechanisms [55] have been widely utilized in GNNs to capture the most relevant information from the graph structure for various tasks [24; 56; 57; 58; 59; 60]. The incorporation of attention mechanisms in GNNs allows for enhanced graph representation learning, which is particularly useful in the context of EHR data analysis [10]. In the GraphCare framework, we introduce a new GNN BAT which leverages both visit-level and node-level attention, along with edge weights, for predictions with personalized KGs in the healthcare domain.
## 3 Personalized Graph-based HealthCare Prediction (GraphCare)
In this section, we present GraphCare, a comprehensive framework designed to generate personalized KGs and utilize them for enhanced healthcare predictions.
**Overview.** The GraphCare framework operates through three general steps.
_Step 1:_ we generate a concept-specific KG for each medical code in the dataset, either by prompting LLMs or by subsampling from existing KGs. Once gathered, we conduct node and edge clustering across these concept-specific KGs.
_Step 2:_ we construct a personalized KG for each patient. This is done by merging the concept-specific KGs associated with their individual medical codes.
_Step 3:_ we apply our novel Bi-attention Augmented (BAT) Graph Neural Network (GNN) to make predictions based on the personalized KGs.
### Step 1: Concept-Specific Knowledge Graph Generation.
To create a KG for each unique medical code in the provided coding system, we employ one of two approaches: (1) We extract knowledge from LLMs using a prompting method, or (2) We sample a subgraph from an existing biomedical KG. Denote a medical code as \(e\in\{\mathbf{c},\mathbf{p},\mathbf{d}\}\), where \(\mathbf{c}=(c_{1},c_{2},...,c_{|\mathbf{c}|})\), \(\mathbf{p}=(p_{1},p_{2},...,p_{|\mathbf{p}|})\), and \(\mathbf{d}=(d_{1},d_{2},...,d_{|\mathbf{d}|})\) correspond to sets of medical codes for conditions, procedures, and drugs, with sizes of \(|\mathbf{c}|\), \(|\mathbf{p}|\), and \(|\mathbf{d}|\), respectively. For each medical code \(e\), we extract a knowledge graph \(G^{e}=(\mathcal{V}^{e},\mathcal{E}^{e})\), where \(\mathcal{V}^{e}\) represents nodes, and \(\mathcal{E}^{e}\) denotes edges in the graph.
**(1) LLM-based KG extraction via prompting** involves a template with three components: instruction, example, and prompt. For example, with an instruction "Given a prompt, extrapolate as many relationships as possible of it and provide a list of updates", an example "prompt: systemic lupus erythematosus. updates: [systemic lupus erythematosus is, treated with, steroids]..." and a prompt "prompt: tuberculosis updates:<placeholder>", the LLM would respond with a list of KG triples such as "[tuberculosis, may be treated with, antibiotics], [tuberculosis, affects, lungs]..." where each triple contains a head entity, a relation, and a tail entity. We showcase our carefully designed prompts in Appendix C.1. For each medical code, we run the prompting \(\chi\) times and aggregate the multiple outputs into one in order to compose a more comprehensive KG for each code, which is then parsed to construct \(G^{e}_{\mathrm{LLM}(\chi)}=(\mathcal{V}^{e}_{\mathrm{LLM}(\chi)},\mathcal{E}^{e}_{\mathrm{LLM}(\chi)})\).
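A simplified Python sketch of this extraction loop is given below. The prompt template is an abbreviated stand-in for the prompts in Appendix C.1, and `query_llm` is an assumed wrapper around whichever LLM API is used.

```python
import re
from typing import Callable

TEMPLATE = (
    "Given a prompt, extrapolate as many relationships as possible of it and "
    "provide a list of updates.\n"
    "prompt: systemic lupus erythematosus. "
    "updates: [systemic lupus erythematosus is, treated with, steroids]\n"
    "prompt: {code_name}. updates:"
)

def extract_concept_kg(code_name: str, query_llm: Callable[[str], str], chi: int = 3):
    """Prompt the LLM chi times and aggregate the parsed (head, relation, tail) triples."""
    triples = set()
    for _ in range(chi):
        response = query_llm(TEMPLATE.format(code_name=code_name))
        # Parse anything of the form [head, relation, tail] from the raw response text.
        for head, rel, tail in re.findall(r"\[([^,\[\]]+),([^,\[\]]+),([^,\[\]]+)\]", response):
            triples.add((head.strip(), rel.strip(), tail.strip()))
    return triples  # edge set of the concept-specific KG

# Illustrative use with a stub in place of a real LLM call:
stub = lambda prompt: "[tuberculosis, may be treated with, antibiotics] [tuberculosis, affects, lungs]"
concept_kg = extract_concept_kg("tuberculosis", stub, chi=2)
```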
**(2) Subgraph sampling from existing KGs.** To leverage existing biomedical knowledge graphs [61; 14; 62], we also extract the concept-specific graph for a medical code through subgraph sampling,
which involves selecting a subset of nodes and edges from the original KG relevant to the medical code. To perform subgraph sampling, we first identify the entity in the existing biomedical KG corresponding to the medical code \(e\). Then, we randomly sample a \(\kappa\)-hop subgraph originating from the entity. We describe more details of this in C.2. The output is \(G^{e}_{\mathrm{sub}(\kappa)}=(\mathcal{V}^{e}_{\mathrm{sub}(\kappa)},\mathcal{ E}^{e}_{\mathrm{sub}(\kappa)})\).
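A possible realization of the sampling step with `networkx` is sketched below; the breadth-first expansion with a per-node cap on sampled neighbours is an assumption for illustration, and the exact procedure is described in Appendix C.2.

```python
import random
import networkx as nx

def sample_khop_subgraph(kg: nx.Graph, seed_entity, kappa: int = 2, max_neighbors: int = 10):
    """Randomly sample a kappa-hop subgraph around the entity matched to a medical code."""
    frontier, visited = {seed_entity}, {seed_entity}
    for _ in range(kappa):
        next_frontier = set()
        for node in frontier:
            nbrs = list(kg.neighbors(node))
            random.shuffle(nbrs)
            next_frontier.update(nbrs[:max_neighbors])  # cap the sampled neighbours per node
        frontier = next_frontier - visited
        visited |= next_frontier
    return kg.subgraph(visited).copy()

# Tiny illustrative knowledge graph:
toy = nx.Graph([("tuberculosis", "lungs"), ("tuberculosis", "antibiotics"), ("lungs", "breathing")])
subgraph = sample_khop_subgraph(toy, "tuberculosis", kappa=2)
```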
**Node and Edge Clustering.** Next, we perform node and edge clustering based on the similarity of their word embeddings, which are retrieved from LLMs. The similarity between two nodes or edges is computed using the cosine similarity between their word embeddings. We apply the agglomerative clustering algorithm [63] on the cosine similarity with a distance threshold \(\delta\), to group similar nodes and edges in the global graph \(G=(G^{e_{1}},G^{e_{2}},\ldots,G^{e_{(|\mathbf{c}|+|\mathbf{p}|+|\mathbf{d}|)}})\) of all concepts. After the clustering process, we obtain \(\mathcal{C}_{\mathcal{V}}:\mathcal{V}\rightarrow\mathcal{V}^{{}^{\prime}}\) and \(\mathcal{C}_{\mathcal{E}}:\mathcal{E}\rightarrow\mathcal{E}^{{}^{\prime}}\) which map the nodes \(\mathcal{V}\) and edges \(\mathcal{E}\) in the original graph \(G\) to new nodes \(\mathcal{V}^{{}^{\prime}}\) and edges \(\mathcal{E}^{{}^{\prime}}\), respectively. With these two mappings, we obtain a new global graph \(G^{{}^{\prime}}=(\mathcal{V}^{{}^{\prime}},\mathcal{E}^{{}^{\prime}})\), and we create a new graph \(G^{{}^{\prime}e}=(\mathcal{V}^{{}^{\prime}e},\mathcal{E}^{{}^{\prime}e})\subset G^{{}^{\prime}}\) for each medical code. The node embedding \(\mathbf{H}^{\mathcal{V}}\in\mathbb{R}^{|\mathcal{C}_{\mathcal{V}}|\times w}\) and the edge embedding \(\mathbf{H}^{\mathcal{R}}\in\mathbb{R}^{|\mathcal{C}_{\mathcal{E}}|\times w}\) are initialized by the averaged word embedding in each cluster, where \(w\) denotes the dimension of the word embedding.
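The clustering step can be sketched with scikit-learn as follows. Retrieval of the word embeddings from the LLM is stubbed out, the threshold value is illustrative, and older scikit-learn versions name the `metric` argument `affinity`.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_entities(names, embeddings, delta=0.15):
    """Agglomerative clustering of node (or edge) names by cosine distance of their
    word embeddings; returns the cluster mapping and the averaged cluster embeddings."""
    emb = np.asarray(embeddings, dtype=float)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)      # normalise for cosine distance
    labels = AgglomerativeClustering(
        n_clusters=None, metric="cosine", linkage="average", distance_threshold=delta
    ).fit_predict(emb)
    mapping = {name: int(lab) for name, lab in zip(names, labels)}   # C_V or C_E
    centroids = np.stack([emb[labels == c].mean(axis=0) for c in np.unique(labels)])
    return mapping, centroids
```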
### Step 2: Personalized Knowledge Graph Composition
For each patient, we compose their personalized KG by merging the clustered knowledge graphs of their medical codes. We create a patient node (\(\mathcal{P}\)) and connect it to its direct EHR nodes in the graph. The personalized KG for a patient can be represented as \(G_{\mathrm{pat}}=(\mathcal{V}_{\mathrm{pat}},\mathcal{E}_{\mathrm{pat}})\), where \(\mathcal{V}_{\mathrm{pat}}=\mathcal{P}\cup\{\mathcal{V}^{{}^{\prime}e_{1}}, \mathcal{V}^{{}^{\prime}e_{2}},...,\mathcal{V}^{{}^{\prime}e_{w}}\}\) and \(\mathcal{E}_{\mathrm{pat}}=\mathcal{E}^{\mathcal{P}}\cup\{\mathcal{E}^{{}^{ \prime}e_{1}},\mathcal{E}^{{}^{\prime}e_{2}},...,\mathcal{E}^{{}^{\prime}e_{w }}\}\), with \(\{e_{1},e_{2},...,e_{\omega}\}\) being the medical codes directly associated with the patient with \(\omega\) being the number of medical codes for this patient. Further, as a patient is represented as a sequence of \(J\) visits [29], the _visit-subgraphs_ for patient \(i\) can be represented as \(G_{\mathrm{pat}(i)}=\{G_{i,1},G_{i,2},...,G_{i,J}\}=\{(\mathcal{V}_{i,1}, \mathcal{E}_{i,1}),(\mathcal{V}_{i,2},\mathcal{E}_{i,2}),...,(\mathcal{V}_{i,J },\mathcal{E}_{i,J})\}\) for visits \(\{x_{1},x_{2},...,x_{J}\}\) where \(\mathcal{V}_{i,j}\subseteq\mathcal{V}_{\mathrm{pat}(i)}\) and \(\mathcal{E}_{i,j}\subseteq\mathcal{E}_{\mathrm{pat}(i)}\) for \(1\leq j\leq J\).
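Schematically, the composition of a patient graph from the concept-specific graphs can be written as below (a sketch with `networkx`; function and attribute names are illustrative):

```python
import networkx as nx

def compose_patient_kg(visits, concept_kgs, patient_id="patient"):
    """Merge the concept-specific KGs of all codes in a patient's visits, link the patient
    node to its direct EHR nodes, and record in which visits each node occurs.
    `visits` is a time-ordered list of lists of medical codes."""
    g = nx.Graph()
    g.add_node(patient_id, kind="patient")
    for j, codes in enumerate(visits, start=1):
        for code in codes:
            g = nx.compose(g, concept_kgs[code])   # union with the concept-specific KG
            g.add_edge(patient_id, code)           # patient node -> direct EHR node
            for node in concept_kgs[code].nodes:
                g.nodes[node].setdefault("visits", set()).add(j)
    return g
```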
### Step 3: Bi-attention Augmented Graph Neural Network
Given that each patient's data encompasses multiple visit-subgraphs, it becomes imperative to devise a specialized model capable of managing this temporal graph data. Graph Neural Networks (GNNs), known for their proficiency in this domain, can be generalized as follows:
\[\mathbf{h}^{(l+1)}_{k}=\sigma\left(\mathbf{W}^{(l)}\text{AGGREGATE}^{(l)} \left(\mathbf{h}^{(l)}_{k^{\prime}}|k^{\prime}\in\mathcal{N}(k)\right)+ \mathbf{b}^{(l)}\right), \tag{1}\]
where \(\mathbf{h}^{(l+1)}_{k}\) represents the updated node representation of node \(k\) at the \((l+1)\)-th layer of the GNN. The function \(\text{AGGREGATE}^{(l)}\) aggregates the node representations of all neighbors \(\mathcal{N}(k)\) of node \(k\) at the \(l\)-th layer. \(\mathbf{W}^{(l)}\) and \(\mathbf{b}^{(l)}\) are the learnable weight matrix and bias vector at the \(l\)-th layer, respectively. \(\sigma\) denotes an activation function. Nonetheless, the conventional GNN approach overlooks the temporal characteristics of our patient-specific graphs and misses the intricacies of personalized healthcare. To address this, we propose a Bi-attention Augmented (BAT) GNN that better accommodates temporal graph data and offers more nuanced predictive healthcare insights.
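For concreteness, one common instantiation of Eq. (1), with mean aggregation over a dense adjacency matrix, is sketched below; the dense representation and the toy sizes are for illustration only.

```python
import torch
import torch.nn as nn

class MeanGNNLayer(nn.Module):
    """Plain GNN layer of Eq. (1) with mean aggregation as the AGGREGATE function."""
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)      # W^(l), b^(l)

    def forward(self, h, adj):
        # h: (num_nodes, dim) node states; adj: (num_nodes, num_nodes) 0/1 adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        agg = adj @ h / deg                 # mean over the neighbours N(k)
        return torch.relu(self.lin(agg))    # sigma = ReLU

torch.manual_seed(0)
h = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()
print(MeanGNNLayer(8)(h, adj).shape)        # torch.Size([5, 8])
```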
**Our model.** In GraphCare, we incorporate attention mechanisms to effectively capture relevant information from the personalized KG. We first reduce the dimensionality of the node and edge embeddings from the word-embedding space to a hidden-embedding space to improve the model's efficiency and handle the sparsity problem. The dimension-reduced hidden embeddings are computed as follows:
\[\mathbf{h}_{i,j,k}=\mathbf{W}_{v}\mathbf{h}^{\mathcal{V}}_{(i,j,k)}+\mathbf{b}_{v},\quad\mathbf{h}_{(i,j,k)\leftrightarrow(i,j^{\prime},k^{\prime})}=\mathbf{W}_{r}\mathbf{h}^{\mathcal{R}}_{(i,j,k)\leftrightarrow(i,j^{\prime},k^{\prime})}+\mathbf{b}_{r}, \tag{2}\]
where \(\mathbf{W}_{v},\mathbf{W}_{r}\in\mathbb{R}^{w\times q}\) and \(\mathbf{b}_{v},\mathbf{b}_{r}\in\mathbb{R}^{q}\) are learnable parameters, \(\mathbf{h}^{\mathcal{V}}_{(i,j,k)},\mathbf{h}^{\mathcal{R}}_{(i,j,k)\leftrightarrow(i,j^{\prime},k^{\prime})}\in\mathbb{R}^{w}\) are the input embeddings, and \(\mathbf{h}_{i,j,k},\mathbf{h}_{(i,j,k)\leftrightarrow(i,j^{\prime},k^{\prime})}\in\mathbb{R}^{q}\) are the hidden embedding of the \(k\)-th node in the \(j\)-th visit-subgraph of patient \(i\) and the hidden embedding of the edge between the nodes \(v_{i,j,k}\) and \(v_{i,j^{\prime},k^{\prime}}\), respectively. \(q\) is the size of the hidden embedding.
Subsequently, we compute two sets of attention weights: one set corresponds to the subgraph associated with each visit, and the other pertains to the nodes within each subgraph. The node-level attention weight for the \(k\)-th node in the \(j\)-th visit-subgraph of patient \(i\), denoted as \(\alpha_{i,j,k}\), and the visit-level attention weight for the \(j\)-th visit of patient \(i\), denoted as \(\beta_{i,j}\), are shown as follows:
\[\alpha_{i,j,1},...,\alpha_{i,j,M}=\text{Softmax}(\mathbf{W}_{\alpha} \mathbf{g}_{i,j}+\mathbf{b}_{\alpha}),\] \[\beta_{i,1},...,\beta_{i,N}=\boldsymbol{\lambda}^{\top}\text{Tanh} (\mathbf{w}_{\beta}^{\top}\mathbf{G}_{i}+\mathbf{b}_{\beta}),\quad\text{where} \quad\boldsymbol{\lambda}=[\lambda_{1},...,\lambda_{N}], \tag{3}\]
where \(\mathbf{g}_{i,j}\in\mathbb{R}^{M}\) is a multi-hot vector representation of visit-subgraph \(G_{i,j}\), indicating the nodes that appear in the \(j\)-th visit of patient \(i\), and \(M=|C_{\mathcal{V}}|\) is the number of nodes in the global graph \(G^{{}^{\prime}}\). \(\mathbf{G}_{i}\in\mathbb{R}^{N\times M}\) represents the multi-hot matrix of patient \(i\)'s graph \(G_{i}\), where \(N\) is the maximum number of visits across all patients. \(\mathbf{W}_{\alpha}\in\mathbb{R}^{M\times M}\), \(\mathbf{w}_{\beta}\in\mathbb{R}^{M}\), \(\mathbf{b}_{\alpha}\in\mathbb{R}^{M}\) and \(\mathbf{b}_{\beta}\in\mathbb{R}^{N}\) are learnable parameters. \(\boldsymbol{\lambda}\in\mathbb{R}^{N}\) is the decay coefficient vector whose entry \(\lambda_{j}\) is the coefficient for visit \(j\), defined as \(\lambda_{j}=\exp{(-\gamma(J-j))}\) when \(j\leq J\) and \(0\) otherwise, where \(J\) is the number of visits of patient \(i\) and \(\gamma\) is the decay rate; this assigns higher weights to more recent visits.
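To make Eq. (3) concrete, a small PyTorch sketch is shown below; the dimensions are toy values, and the visit-level attention is implemented as an element-wise product between the decay vector \(\boldsymbol{\lambda}\) and the tanh term, which is one natural reading of the \(\boldsymbol{\lambda}^{\top}\) notation.

```python
import torch

torch.manual_seed(0)
M, N, J, gamma = 6, 4, 3, 0.5              # |C_V| nodes in G', max visits, actual visits, decay rate

g = torch.zeros(N, M)                      # multi-hot visit vectors g_{i,j}; rows beyond J stay empty
g[:J] = (torch.rand(J, M) > 0.5).float()
G_i = g                                    # multi-hot matrix G_i of patient i

W_alpha, b_alpha = torch.randn(M, M), torch.randn(M)
w_beta, b_beta = torch.randn(M), torch.randn(N)

# Node-level attention: softmax over the node dimension of W_alpha g_{i,j} + b_alpha.
alpha = torch.softmax(g @ W_alpha.T + b_alpha, dim=-1)                      # (N, M)

# Visit-level attention with decay lambda_j = exp(-gamma (J - j)) for j <= J, else 0,
# applied element-wise to tanh(G_i w_beta + b_beta).
j_idx = torch.arange(1, N + 1, dtype=torch.float32)
lam = torch.where(j_idx <= J, torch.exp(-gamma * (J - j_idx)), torch.zeros_like(j_idx))
beta = lam * torch.tanh(G_i @ w_beta + b_beta)                              # (N,)

print(alpha.shape, beta.shape)
```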
**Attention initialization**. To further incorporate prior knowledge from LLMs and help the model converge, we initialize \(\mathbf{W}_{\alpha}\) for the node-level attention based on the cosine similarity between the node embedding and the word embedding \(\mathbf{w}_{\text{task}}\) of a specific term for the prediction task at hand (e.g., "death" for mortality prediction). Formally, we first calculate the weights for the nodes in the global graph \(G^{{}^{\prime}}\) by \(w_{m}=(\mathbf{h}_{m}\cdot\mathbf{w}_{\text{task}})/(||\mathbf{h}_{m}||_{2}\cdot||\mathbf{w}_{\text{task}}||_{2})\), where \(\mathbf{h}_{m}\in\mathbf{H}^{\mathcal{V}}\) is the input embedding of the \(m\)-th node in \(G^{{}^{\prime}}\) and \(w_{m}\) is the computed weight. We normalize the weights such that \(0\leq w_{m}\leq 1,\forall 1\leq m\leq M\). We initialize \(\mathbf{W}_{\alpha}=\text{diag}(w_{1},...,w_{M})\) as a diagonal matrix.
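A sketch of this initialisation is given below; the embeddings are random stand-ins, and min-max scaling is only an assumed way of mapping the cosine similarities into \([0,1]\), since the exact normalisation is not specified here.

```python
import torch

torch.manual_seed(0)
M, w = 6, 1536
H_V = torch.randn(M, w)          # input node embeddings of the global graph G' (stand-in values)
w_task = torch.randn(w)          # LLM word embedding of the task term, e.g. "death" for mortality

cos = (H_V @ w_task) / (H_V.norm(dim=1) * w_task.norm())          # w_m: cosine similarity per node
weights = (cos - cos.min()) / (cos.max() - cos.min() + 1e-8)       # map into [0, 1] (assumed min-max)
W_alpha_init = torch.diag(weights)                                 # W_alpha initialised as diag(w_1..w_M)
print(W_alpha_init.shape)        # torch.Size([6, 6])
```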
Next, we update the node embedding by aggregating the neighboring nodes across all visit-subgraphs incorporating the attention weights for visits and nodes computed in Eq (3) and the weights for edges. Based on Eq (1), we design our convolutional layer BAT as follows:
\[\mathbf{h}_{i,j,k}^{(l+1)}=\sigma\left(\mathbf{W}^{(l)}\sum_{j^{ \prime}\in J,k^{\prime}\in\mathcal{N}(k)\cup\{k\}}\left(\underbrace{\alpha_{ i,j^{\prime},k^{\prime}}^{(l)}\beta_{i,j^{\prime}}^{(l)}\mathbf{h}_{i,j^{\prime},k^{ \prime}}^{(l)}}_{\text{Node aggregation term}}+\underbrace{\mathbf{w}_{\mathcal{R}(k,k^{\prime})}^{(l)}\mathbf{h}_{(i,j,k)\leftrightarrow(i,j^{\prime},k^{\prime} )}}_{\text{Edge aggregation term}}\right)+\mathbf{b}^{(l)}\right), \tag{4}\]
where \(\sigma\) is the ReLU function, \(\mathbf{W}^{(l)}\in\mathbb{R}^{q\times q},\mathbf{b}^{(l)}\in\mathbb{R}^{q}\) are learnable parameters, \(\mathbf{w}_{\mathcal{R}}^{(l)}\in\mathbb{R}^{|\mathcal{C}_{\mathcal{E}}|}\) is the edge weight vector at layer \(l\), and \(\mathbf{w}_{\mathcal{R}(k,k^{\prime})}^{(l)}\in\mathbf{w}_{\mathcal{R}}^{(l)}\) is the scalar weight for the edge embedding \(\mathbf{h}_{(i,j,k)\leftrightarrow(i,j^{\prime},k^{\prime})}^{\mathcal{R}}\). In Eq (4), the node aggregation term captures the contribution of the attention-weighted nodes, while the edge aggregation term represents the influence of the edges connecting the nodes. This convolutional layer integrates both node and edge features, allowing the model to learn a rich representation of the patient's EHR data. After several layers of convolution, we obtain the node embeddings \(\mathbf{h}_{i,j,k}^{(L)}\) of the final layer (\(L\)), which are used for the predictions:
\[\mathbf{h}_{i}^{G_{\text{pat}}}=\text{MEAN}(\sum_{j=1}^{J}\sum_{k= 1}^{K_{j}}\mathbf{h}_{i,j,k}^{(L)}),\quad\mathbf{h}_{i}^{\mathcal{P}}=\text{ MEAN}(\sum_{j=1}^{J}\sum_{k=1}^{K_{j}}\mathds{1}_{i,j,k}^{\Delta}\mathbf{h}_{i,j,k}^{(L )}),\] \[\mathbf{z}_{i}^{\text{graph}}=\text{MLP}(\mathbf{h}_{i}^{G_{\text {pat}}}),\quad\mathbf{z}_{i}^{\text{node}}=\text{MLP}(\mathbf{h}_{i}^{ \mathcal{P}})\quad\mathbf{z}_{i}^{\text{joint}}=\text{MLP}(\mathbf{h}_{i}^{G_ {\text{pat}}}\oplus\mathbf{h}_{i}^{\mathcal{P}}), \tag{5}\]
where \(J\) is the number of visits of patient \(i\), \(K_{j}\) is the number of nodes in visit \(j\), and \(\mathbf{h}_{i}^{G_{\text{pat}}}\) denotes the patient graph embedding obtained by averaging the embeddings of all nodes across the visit-subgraphs of patient \(i\). \(\mathbf{h}_{i}^{\mathcal{P}}\) represents the patient node embedding, computed by averaging the node embeddings of the direct medical codes linked to the patient node. \(\mathds{1}_{i,j,k}^{\Delta}\in\{0,1\}\) is a binary label indicating whether a node \(v_{i,j,k}\) corresponds to a direct medical code for patient \(i\). Finally, we apply a multilayer perceptron (MLP) to either \(\mathbf{h}_{i}^{G_{\text{pat}}}\), \(\mathbf{h}_{i}^{\mathcal{P}}\), or the concatenated embedding \((\mathbf{h}_{i}^{G_{\text{pat}}}\oplus\mathbf{h}_{i}^{\mathcal{P}})\) to obtain logits \(\mathbf{z}_{i}^{\text{graph}}\), \(\mathbf{z}_{i}^{\text{node}}\), or \(\mathbf{z}_{i}^{\text{joint}}\), respectively. We discuss more details of the patient representations in Appendix D.
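Putting Eqs. (4) and (5) together, a simplified dense PyTorch sketch of one BAT layer followed by the patient-level readout is shown below; the tensor shapes, the random aggregation mask, and the two-layer MLP head are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BATLayer(nn.Module):
    """Simplified dense form of the BAT convolution in Eq. (4)."""
    def __init__(self, q: int):
        super().__init__()
        self.lin = nn.Linear(q, q)                      # W^(l), b^(l)

    def forward(self, h, alpha, beta, adj, edge_h, edge_w):
        # h: (J,K,q) node states; alpha: (J,K) node attention; beta: (J,) visit attention
        # adj: (J,K,J,K) 0/1 mask of aggregated (visit, node) pairs (self-loops included)
        # edge_h: (J,K,J,K,q) hidden edge embeddings; edge_w: (J,K,J,K) scalar edge weights
        src = (alpha * beta[:, None]).unsqueeze(-1) * h                        # attention-weighted sources
        node_part = torch.einsum('jkab,abq->jkq', adj, src)                    # node aggregation term
        edge_part = torch.einsum('jkab,jkab,jkabq->jkq', adj, edge_w, edge_h)  # edge aggregation term
        return F.relu(self.lin(node_part + edge_part))

torch.manual_seed(0)
J, K, q = 3, 4, 8                                       # visits, nodes per visit, hidden size
layer = BATLayer(q)
h = torch.randn(J, K, q)
alpha, beta = torch.rand(J, K), torch.rand(J)
adj = (torch.rand(J, K, J, K) > 0.7).float()
edge_h, edge_w = torch.randn(J, K, J, K, q), torch.rand(J, K, J, K)
h_L = layer(h, alpha, beta, adj, edge_h, edge_w)        # final-layer node embeddings h^(L)

# Readout of Eq. (5): patient graph embedding, patient node embedding, and joint logits.
is_direct = (torch.rand(J, K) > 0.5).float()            # 1 if the node is a direct EHR node
h_graph = h_L.reshape(-1, q).mean(dim=0)
h_node = (is_direct.unsqueeze(-1) * h_L).sum(dim=(0, 1)) / is_direct.sum().clamp(min=1)
mlp = nn.Sequential(nn.Linear(2 * q, q), nn.ReLU(), nn.Linear(q, 1))   # illustrative head
z_joint = mlp(torch.cat([h_graph, h_node]))
print(h_L.shape, z_joint.shape)
```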
### Training and Prediction
The model can be adapted for a variety of healthcare prediction tasks. Consider a set of samples \(\{(x_{1}),(x_{1},x_{2}),\ldots,(x_{1},x_{2},\ldots,x_{t})\}\) for each patient with \(t\) visits, where each tuple represents a sample consisting of a sequence of consecutive visits.
**Mortality (MT.) prediction** predicts the mortality label of the subsequent visit for each sample, with the last sample dropped. Formally, \(f:(x_{1},x_{2},\ldots,x_{t-1})\to y[x_{t}]\) where \(y[x_{t}]\in\{0,1\}\) is a binary label indicating the patient's survival status recorded in visit \(x_{t}\).
**Readmission (RA.) prediction** predicts if the patient will be readmitted into hospital within \(\sigma\) days. Formally, \(f:(x_{1},x_{2},\ldots,x_{t-1})\to y[\tau(x_{t})-\tau(x_{t-1})],y\in\{0,1\}\) where \(\tau(x_{t})\) denotes the encounter time of visit \(x_{t}\). \(y[\tau(x_{t})-\tau(x_{t-1})]\) equals 1 if \(\tau(x_{t})-\tau(x_{t-1})\leq\sigma\), and 0 otherwise. In our study, we set \(\sigma=15\) days.
**Length-Of-Stay (LOS) prediction**[64] predicts the length of ICU stays for each visit. Formally, \(f:(x_{1},x_{2},\ldots,x_{t})\to y[x_{t}]\) where \(y[x_{t}]\in\mathbb{R}^{1\times C}\) is a one-hot vector indicating its class among \(C\) classes. We set 10 classes \([\mathbf{0},\mathbf{1},\ldots,\mathbf{7},\mathbf{8},\mathbf{9}]\), which signify the stays of length \(<1\) day (\(\mathbf{0}\)), within one week \((\mathbf{1},\ldots,\mathbf{7})\), between one and two weeks (\(\mathbf{8}\)), and longer than two weeks (\(\mathbf{9}\)).
**Drug recommendation** predicts medication labels for each visit. Formally, \(f:(x_{1},x_{2},\ldots,x_{t})\to y[x_{t}]\) where \(y[x_{t}]\in\mathbb{R}^{1\times|\mathbf{d}|}\) is a multi-hot vector where \(|\mathbf{d}|\) denotes the number of all drug types.
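As a small illustration of how the binary readmission label and the 10 LOS classes described above can be derived from raw visit data, consider the following helpers; the date handling and the flooring of partial days are assumptions made for the example.

```python
from datetime import datetime

def readmission_label(t_prev: datetime, t_curr: datetime, sigma: int = 15) -> int:
    """1 if the next visit occurs within `sigma` days of the previous one, else 0."""
    return int((t_curr - t_prev).days <= sigma)

def los_class(days: float) -> int:
    """Map a length of stay in days to one of the 10 classes described above."""
    if days < 1:
        return 0                 # class 0: under one day
    if days <= 7:
        return int(days)         # classes 1..7: within one week (day count floored, assumed)
    if days <= 14:
        return 8                 # class 8: between one and two weeks
    return 9                     # class 9: longer than two weeks

print(readmission_label(datetime(2023, 1, 1), datetime(2023, 1, 10)))   # 1
print([los_class(d) for d in (0.5, 3.2, 7.0, 10, 20)])                  # [0, 3, 7, 8, 9]
```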
The loss functions used to train the model are as follows:
\[\mathcal{L}_{bce}=-\frac{1}{P}\sum_{i=1}^{P}[y_{i}\log(\sigma(z_{i }))+(1-y_{i})\log(1-\sigma(z_{i}))],\ \mathcal{L}_{ce}=-\frac{1}{P}\sum_{i=1}^{P}\sum_{c=1}^{C}y_{i,c}\log(\text{ softmax}(z_{i,c})), \tag{6}\]
where \(\mathcal{L}_{bce}\) is the binary cross-entropy loss, \(\mathcal{L}_{ce}\) is the cross-entropy loss, \(y_{i}\) is the ground truth label for patient \(i\), \(P\) denotes the number of patients, \(C\) is the number of classes, and \(z_{i}\) and \(z_{i,c}\) are logits obtained from Eq (5). The sigmoid function \(\sigma(z_{i})=\frac{1}{1+e^{-z_{i}}}\) is applied for binary (MT. and RA.) and multi-label (Drug.) classification tasks, while the softmax function, \(\text{softmax}(z_{i,c})=\frac{e^{z_{i,c}}}{\sum_{c^{\prime}=1}^{C}e^{z_{i,c^{ \prime}}}}\), is used for multi-class (LOS) classification tasks.
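In practice, these losses correspond to the standard PyTorch criteria, as in the following minimal sketch with random logits and labels:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
P, C = 5, 10
z_bin = torch.randn(P)                      # logits for a binary task (e.g. mortality)
y_bin = torch.randint(0, 2, (P,)).float()
loss_bce = F.binary_cross_entropy_with_logits(z_bin, y_bin)   # applies the sigmoid internally

z_multi = torch.randn(P, C)                 # logits for the multi-class LOS task
y_multi = torch.randint(0, C, (P,))
loss_ce = F.cross_entropy(z_multi, y_multi)                   # applies the softmax internally

print(loss_bce.item(), loss_ce.item())
```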
## 4 Experiments
We evaluate the performance of GraphCare on four healthcare prediction tasks: mortality prediction, LOS prediction, readmission prediction, and drug recommendation.
### Experimental Setting
**Data.** For the EHR data, we use the publicly available MIMIC-III [22] and MIMIC-IV [23] datasets. Table 1 presents statistics for these processed datasets. For the generation of relational knowledge and knowledge graph construction, we utilize GPT-4 [18] as our chosen LLM. Additionally, we utilize the UMLS-KG [14] as an existing large-scale biomedical KG, which contains 300K nodes and 1M edges. We retrieve EHR subgraphs from this KG. To retrieve the word embeddings of the entities and relations within the constructed KGs, we employ the second-generation GPT-3 embedding model.
**Baselines.** Our baselines include GRU [65], Transformer [66], RETAIN [28], GRAM [12], Deepr [67], StageNet [35], AdaCare [34], GRASP [68], SafeDrug [69], MICRON [31], GAMENet [8], and MoleRec [36]. AdaCare and GRASP are evaluated only on binary prediction tasks given their computational demands. For drug recommendation, we also consider task-specific models SafeDrug, MICRON, GAMENet, and MoleRec. Our GraphCare model's performance is examined under three different GNNs: GAT [24], GIN [25], and our BAT. We do not compare to models such as GCT [10] and CGL [41] as they incorporate lab results and clinical notes, which are not used in this study. Implementation details are discussed in the Supplementary Material.
**Evaluation Metrics.** We consider the following metrics: (a) **Accuracy** - the proportion of correctly predicted instances out of the total instances; (b) **F1** - the harmonic mean of precision and recall; (c) **Jaccard score** - the ratio of the intersection to the union of predicted and true labels; (d) **AUPRC** - the area under the precision-recall curve, emphasizing the trade-off between precision and recall; (e) **AUROC** - the area under the receiver operating characteristic curve, capturing the trade-off between the true positive and the false positive rates. (f) **Cohen's Kappa** - measures inter-rater agreement for categorical items, adjusting for the expected level of agreement by chance in multi-class classification.
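For reference, all of these metrics are available in scikit-learn; the snippet below computes them on synthetic binary predictions (AUPRC is approximated here by average precision, a common choice):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, jaccard_score,
                             average_precision_score, roc_auc_score, cohen_kappa_score)

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.6 + rng.random(200) * 0.5, 0, 1)   # synthetic prediction scores
y_pred = (y_score > 0.5).astype(int)

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1:", f1_score(y_true, y_pred))
print("Jaccard:", jaccard_score(y_true, y_pred))
print("AUPRC:", average_precision_score(y_true, y_score))
print("AUROC:", roc_auc_score(y_true, y_score))
print("Kappa:", cohen_kappa_score(y_true, y_pred))
```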

| Dataset | #patients | #visits | #visits/patient | #conditions/patient | #procedures/patient | #drugs/patient |
| --- | --- | --- | --- | --- | --- | --- |
| MIMIC-III | 35,707 | 44,399 | 1.24 | 12.89 | 4.54 | 33.71 |
| MIMIC-IV | 123,488 | 232,263 | 1.88 | 21.74 | 4.70 | 43.89 |

Table 1: Statistics of pre-processed EHR datasets. “#”: “the number of”, “/patient”: “per patient”.
### Experimental Results
As demonstrated in Table 2, the GraphCare framework consistently outperforms existing baselines across all prediction tasks for both MIMIC-III and MIMIC-IV datasets. For example, when combined with BAT, GraphCare exceeds StageNet's best result by \(+8\%\) in AUROC for mortality prediction on MIMIC-III. Within our GraphCare framework, our proposed BAT GNN consistently outperforms GAT and GIN, underlining the effectiveness of the bi-attention mechanism. The improvement is less pronounced on MIMIC-IV than on MIMIC-III (e.g., +2.0% versus +5.3% average AUPRC improvement for the readmission prediction). This smaller margin may be due to MIMIC-IV's richer dataset providing the baselines with more patients and more features per patient for learning, as shown in Table 1. In the following, we will analyze the effects of incorporating the personalized KG and our proposed BAT in detail. The ablation study of bi-attention is placed in Appendix E.
#### 4.2.1 Effect of Personalized Knowledge Graph
**Effect of data size.** To examine the impact of training data volume on model performance, we conduct a comprehensive experiment where the size of the training data fluctuates between 0.1% and 100% of the original training set, while the validation/testing data remain constant. Performance metrics are averaged over 10 runs, each initiated with
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{**Task 1: Mortality Prediction**} & \multicolumn{4}{c}{**Task 2: Readmission Prediction**} \\ \cline{2-9} & **MIMIC-III** & **MIMIC-IV** & **MIMIC-III** & **MIMIC-IV** & **MIMIC-IV** \\ \hline & **AUPRC** & **AUROC** & **AUROC** & **AUROC** & **AUROC** & **AUROC** & **AUROC** \\ \hline GRU & \(0.12_{(0.01)}\) & \(0.61_{(0.01)}\) & \(0.04_{(0.00)}\) & \(0.69_{(0.01)}\) & \(0.68_{(0.00)}\) & \(0.65_{(0.01)}\) & \(0.66_{(0.00)}\) & \(0.66_{(0.00)}\) \\ Transformer & \(0.10_{(0.01)}\) & \(0.57_{(0.01)}\) & \(0.03_{(0.00)}\) & \(0.65_{(0.01)}\) & \(0.67_{(0.01)}\) & \(0.64_{(0.01)}\) & \(0.66_{(0.00)}\) & \(0.65_{(0.00)}\) \\ RETAIN & \(0.10_{(0.01)}\) & \(0.59_{(0.01)}\) & \(0.04_{(0.00)}\) & \(0.65_{(0.02)}\) & \(0.65_{(0.01)}\) & \(0.64_{(0.01)}\) & \(0.66_{(0.00)}\) & \(0.66_{(0.00)}\) \\ GRAM & \(0.11_{(0.01)}\) & \(0.60_{(0.01)}\) & \(0.04_{(0.00)}\) & \(0.67_{(0.01)}\) & \(0.67_{(0.01)}\) & \(0.64_{(0.00)}\) & \(0.66_{(0.00)}\) & \(0.66_{(0.00)}\) \\ Deepr & \(0.13_{(0.01)}\) & \(0.61_{(0.00)}\) & \(0.04_{(0.00)}\) & \(0.69_{(0.01)}\) & \(0.66_{(0.00)}\) & \(0.66_{(0.00)}\) & \(0.65_{(0.00)}\) \\ AdaCare & \(0.11_{(0.00)}\) & \(0.58_{(0.01)}\) & \(\mathbf{0.05_{(0.00)}}\) & \(0.69_{(0.01)}\) & \(0.68_{(0.01)}\) & \(0.66_{(0.00)}\) & \(0.66_{(0.00)}\) & \(0.66_{(0.00)}\) \\ GRASP & \(0.10_{(0.01)}\) & \(0.59_{(0.01)}\) & \(\mathbf{0.05_{(0.00)}}\) & \(0.68_{(0.01)}\) & \(0.69_{(0.00)}\) & \(0.66_{(0.01)}\) & \(0.66_{(0.00)}\) & \(0.66_{(0.00)}\) \\ StageNet & \(0.12_{(0.00)}\) & \(0.62_{(0.01)}\) & \(0.04_{(0.00)}\) & \(0.70_{(0.01)}\) & \(0.69_{(0.01)}\) & \(0.67_{(0.00)}\) & \(0.66_{(0.00)}\) & \(0.66_{(0.00)}\) \\ \hline GraphCare w/ GAT & \(0.14_{(0.01)}\) & \(0.67_{(0.01)}\) & \(0.05_{(0.00)}\) & \(0.71_{(0.01)}\) & \(0.71_{(0.01)}\) & \(0.67_{(0.01)}\) & \(0.67_{(0.00)}\) & \(\mathbf{0.67_{(0.00)}}\) \\ GraphCare w/ GIN & \(0.14_{(0.00)}\) & \(0.66_{(0.01)}\) & \(\mathbf{0.05_{(0.00)}}\) & \(0.71_{(0.01)}\) & \(0.71_{(0.01)}\) & \(0.68_{(0.00)}\) & \(\mathbf{0.68_{(0.00)}}\) & \(\mathbf{0.67_{(0.00)}}\) \\ GraphCare w/ BAT & \(\mathbf{0.16_{(0.00)}}\) & \(\mathbf{0.70_{(0.01)}}\) & \(\mathbf{0.05_{(0.00)}}\) & \(\mathbf{0.72_{(0.01)}}\) & \(\mathbf{0.73_{(0.01)}}\) & \(\mathbf{0.69_{(0.00)}}\) & \(\mathbf{0.68_{(0.00)}}\) & \(\mathbf{0.67_{(0.00)}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Performance comparison of four prediction tasks on MIMIC-III/MIMIC-IV. We report the average performance and the standard deviation (in bracket) of each model over 100 runs for MIMIC-III and 50 runs for MIMIC-IV. The best results are highlighted for both datasets.**
a different random seed. The results, depicted in Figure 2, suggest that GraphCare exhibits a considerable edge over other models when confronted with scarce training data. For instance, GraphCare, despite being trained on a mere 0.1% of the training data (equivalent to 36 patient samples), accomplished an LOS prediction accuracy comparable to the best baseline StageNet that trained on 0.9% of the training data (about 320 patient samples). A similar pattern appears in drug recommendation tasks. Notably, both GAMENet and GRAM also show a certain level of resilience against data limitations, likely due to their use of internal EHR graphs or external ontologies.
**Effect of knowledge graph sizes.** To understand how the size of the KG affects the performance of GraphCare, we randomly sample 10 sub-KGs from the clustered global knowledge graph (i.e., \(G^{{}^{\prime}}\) in §3.1) with different random seeds for each ratio in \(\{0.0,0.1,0.3,0.5,0.7,0.9,1.0\}\) and report the results in Table 3. In all cases, the trend of improved performance with larger KG size is clear, highlighting the value of a more comprehensive knowledge graph in making more accurate predictions across various healthcare tasks. Furthermore, it indicates that our method is scalable and can evolve in tandem with the rapid growth of LLMs. It is also interesting to note that different tasks demonstrate different degrees of improvement from the KG; readmission prediction improvements are more modest than those observed with mortality prediction despite using the same sub-KGs.
#### 4.2.2 Patient Representation Learning
We further discuss the performance of different patient representations in GraphCare, as depicted in Figure 3. We calculate the average over 20 independent runs for each type of patient representation and for each task. Our observations reveal that the patient node embedding presents more stability as it is computed by averaging the direct EHR nodes. These nodes are rich in precise information, thereby reducing noise, but they offer limited global information across the graph. On the other hand, patient graph embedding consistently exhibits the most significant variance, with the largest distance observed between the maximum and minimum outliers. Despite capturing a broader scope of information, the graph embedding performs less effectively due to the increased noise. This is
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{6}{c}{**Feature: Condition + Procedure + Drug**} & \multicolumn{6}{c}{**Feature: Condition + Procedure**} \\ \hline
**KG Ratio** & **0** & **0.1** & **0.3** & **0.5** & **0.7** & **0.9** & **1.0** \\ \hline Nodes/pat. & 242 & 362 & 618 & 883 & 1113 & 1359 & 1479 \\ Edge/pat. & 288 & 485 & 886 & 1296 & 1690 & 2170 & 2374 \\ \hline
**MT.** & AUPRC & 0.13 & 0.13 & 0.14 & 0.15 & 0.15 & 0.15 & **0.16** \\ & AUROC & 0.54 & 0.65 & 0.67 & 0.68 & 0.68 & 0.69 & **0.70** \\
**RA.** & AUPRC & 0.70 & 0.71 & 0.71 & 0.71 & 0.72 & 0.72 & **0.73** \\ & AUROC & 0.67 & 0.68 & 0.68 & 0.68 & **0.69** & **0.69** & **0.69** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **The performance of GraphCare with different KG sizes.** All KGs are randomly sampled from the global KG while keeping the nodes of medical codes fixed. For each ratio, we report the average nodes/edges of the patient’s personalized KG as well as the performance across four tasks on MIMIC-III.
Figure 3: **Performance of healthcare predictions with three types of patient representations (§3.3): (1) graph - patient graph embedding obtained through mean pooling of node embeddings; (2) node - patient node embedding connected to the direct EHR nodes; (3) joint - the concatenation of (1) and (2).**
Figure 2: **Performance by the sizes of training data.** Values on the x-axis indicate % of the entire training data. The dotted lines separate three ranges: [0.1, 1], [1, 10] and [10, 100] (%).
attributed to its derivation method that averages all node embeddings within a patient's personalized KG, inherently incorporating a more diverse and complex set of information. The joint embedding operates as a balanced compromise between the node and graph embeddings. It allows GraphCare to learn from both local and global information. Despite the increased noise, the joint embedding provides an enriched context that improves the model's understanding and prediction capabilities.
#### 4.2.3 Interpretability of GraphCare
Figure 4 showcases an example of a personalized KG for mortality prediction tied to a specific patient (predicted mortality 1), whose outcome was accurately predicted only by our GraphCare method, while the other baselines estimated it incorrectly.
In Figure 4(a), important nodes and edges contributing to mortality prediction, such as "_deadly cancer_", are emphasized with higher importance scores. This demonstrates the effectiveness of our BAT model in identifying relevant nodes and edges. Additionally, Figure 4(b) shows the direct Electronic Health Record (EHR) nodes connected to the patient node, enhancing interpretability of predictions using patient node embedding. Figure 4(c) and (d) show KG triples linked to the direct EHR nodes "_bronchiectasis_" and "_pneumonia_". These nodes are connected to important nodes like "_mortality_", "_respiratory failure_", "_lung cancer_", and "_shortness of breath_", indicating their higher weights. In Figure 4(e), the "_lung cancer_" node serves as a common connector for "_bronchiectasis_" and "_pneumonia_". It is linked to both "_mortality_" and "_deadly cancer_", highlighting its significance. Removing this node had a noticeable impact on the model's performance, indicating its pivotal role in accurate predictions. This emphasizes the value of comprehensive health data and considering all potential health factors, no matter how indirectly connected they may seem.
## 5 Conclusion
We introduced GraphCare, a novel framework that enriches healthcare predictions by leveraging open-world knowledge graphs (KGs) to generate personalized knowledge graphs. The primary innovation of GraphCare lies in its ability to import external relational knowledge linked with medical codes found in a patient's electronic health record (EHR). This information is processed by our proposed bi-attention augmented (BAT) Graph Neural Network (GNN), which captures crucial information from temporal personalized KG to aid healthcare predictions. Our empirical results demonstrate GraphCare's superior performance, outperforming baselines across four prediction tasks on two real-world datasets. GraphCare proves robust with limited training data, highlighting its scalability and real-world applicability. The effectiveness of GraphCare scales with the size of the used KGs, indicating its capabilities will grow with advancements in large language models, our principal source for KG triple retrieval. We therefore assert that GraphCare holds immense potential for widespread applications in the medical domain.
Figure 4: Example showing a patient's personalized KG with importance scores (Appendix F) visualized. For better presentation, we hide the drug nodes. The red node represents the patient node. Nodes with higher scores are drawn larger and more prominently highlighted. Edges with higher scores are drawn thicker and more prominently highlighted.
2307.15846 | Education 5.0: Requirements, Enabling Technologies, and Future
Directions | We are currently in a post-pandemic era in which life has shifted to a
digital world. This has affected many aspects of life, including education and
learning. Education 5.0 refers to the fifth industrial revolution in education
by leveraging digital technologies to eliminate barriers to learning, enhance
learning methods, and promote overall well-being. The concept of Education 5.0
represents a new paradigm in the field of education, one that is focused on
creating a learner-centric environment that leverages the latest technologies
and teaching methods. This paper explores the key requirements of Education 5.0
and the enabling technologies that make it possible, including artificial
intelligence, blockchain, and virtual and augmented reality. We analyze the
potential impact of these technologies on the future of education, including
their ability to improve personalization, increase engagement, and provide
greater access to education. Additionally, we examine the challenges and
ethical considerations associated with Education 5.0 and propose strategies for
addressing these issues. Finally, we offer insights into future directions for
the development of Education 5.0, including the need for ongoing research,
collaboration, and innovation in the field. Overall, this paper provides a
comprehensive overview of Education 5.0, its requirements, enabling
technologies, and future directions, and highlights the potential of this new
paradigm to transform education and improve learning outcomes for students. | Shabir Ahmad, Sabina Umirzakova, Ghulam Mujtaba, Muhammad Sadiq Amin, Taegkeun Whangbo | 2023-07-29T00:31:11Z | http://arxiv.org/abs/2307.15846v1 | # Education 5.0: Requirements, Enabling Technologies, and Future Directions
###### Abstract
We are currently in a post-pandemic era in which life has shifted to a digital world. This has affected many aspects of life, including education and learning. Education 5.0 refers to the fifth industrial revolution in education by leveraging digital technologies to eliminate barriers to learning, enhance learning methods, and promote overall well-being. The concept of Education 5.0 represents a new paradigm in the field of education, one that is focused on creating a learner-centric environment that leverages the latest technologies and teaching methods. This paper explores the key requirements of Education 5.0 and the enabling technologies that make it possible, including artificial intelligence, blockchain, and virtual and augmented reality. We analyze the potential impact of these technologies on the future of education, including their ability to improve personalization, increase engagement, and provide greater access to education. Additionally, we examine the challenges and ethical considerations associated with Education 5.0 and propose strategies for addressing these issues. Finally, we offer insights into future directions for the development of Education 5.0, including the need for ongoing research, collaboration, and innovation in the field. Overall, this paper provides a comprehensive overview of Education 5.0, its requirements, enabling technologies, and future directions, and highlights the potential of this new paradigm to transform education and improve learning outcomes for students.
Education 5.0, personalized learning, adaptive learning, blended learning.
## I Introduction
Education is a basic human right, and delivering knowledge effectively has been a success metric for ages. Education systems have undergone an extensive revamp over the past decade with the rise of the Internet of Things (IoT) and information and communication technology (ICT). The integration of sensors and the processing of data through artificial intelligence (AI) technologies paved the way for next-generation education systems [1]. Education is linked to the industrial revolution, and the fourth industrial revolution has witnessed tremendous work towards the realization of Education 4.0, which aims to enhance the learning experience through the use of ICT and IoT technology [2, 3]. However, despite its many improvements over conventional education systems, the demand for personalized tutoring and education, together with game-based learning, has prompted the vision of a fifth revolution in education.
Education 5.0 [4] is a futuristic term that aims to integrate advanced ICT technologies into the education system to enhance the learning experience and remove barriers to educating an individual. Thus, one of the fundamental goals of Education 5.0 is to promote personalized learning, collaboration, and well-being through the use of digital tools such as AI, virtual reality (VR), and the IoT. Additionally, Education 5.0 focuses on developing 21st-century skills such as critical thinking, creativity, and problem-solving, rather than just rote learning, and adds immersive experiences to classrooms using augmented reality and mixed reality applications [5]. The ultimate goal of Education 5.0 is to create a more efficient, effective, and equitable education system that can adapt to the changing needs of society in the fifth industrial revolution.
### _Market statistics and existing surveys_
According to a report by [6], the global education technology market is expected to grow at a compound annual growth rate of 17% between 2021 and 2025. The report cites the growing demand for personalized learning and the increasing availability of digital content as driving factors behind this growth. Furthermore, a report by [7] predicts that the global e-learning market will reach $374.3 billion by 2026, with a compound annual growth rate of 14.6% from 2021 to 2026. This growth is attributed to the increased adoption of mobile and digital learning, the rising need for skill-based education, and the growing use of artificial intelligence in education. The concept of Education 5.0 is relatively new, so there is limited academic literature specifically focused on this approach to education. However, several articles discuss related topics, such as personalized learning, digital technology, and the use of data in education. For example, a study by Means, Toyama, Murphy, and Baki (2013) [8] found
that technology can support personalized learning by providing students with access to various resources and opportunities for feedback. Another study by [9] examined the use of data analytics in education and how it can be used to support personalized learning and improve student outcomes. A search on popular databases such as Google Scholar and Scopus using the keywords "Education 5.0" returns only 9 records, indicating that this is a relatively new and emerging field. However, a search for related topics such as "personalized learning" and "digital education" returns a significantly large number of articles, as shown in Figure 1 and Figure 2. Therefore, although the ideas at the core of Education 5.0, namely personalized education and collaborative learning, have been around for the last decade, the notion of Education 5.0 as the next wave of education is still a gap in the current state of the art.
There have been several surveys and tutorials published that address various aspects of education, including personalized
learning, digital technology, and data analytics. Table I highlights some of the studies which emphasize the need for one or some of the fundamental components in Education 5.0.
### _Motivation and Scope_
Education 5.0 covers a wide spectrum of attributes, such as using ICT and data analytics to make a more informed educational system. This survey proposes a two-dimensional framework for which the x-axis has different educational goals, such as
personalized learning, collaborative learning and decision-making, while on the y-axis are the technologies needed to pursue a particular goal, such as Big data analysis, IoT, and VR, to name a few. Thus, the scope of this survey is to review the literature on the above two-dimensional pairs and highlight their impact on overall education. Since these attributes are widely dispersed and inaccessible, this survey will make a foundation platform to propose the roadmap for realizing Education 5.0 by combining all these attributes and technologies.
### _Survey organization_
The rest of this survey is organized as follows: Section 2 provides background and foundation and discusses the evolution of Education over the years. Section 3 highlights some of the requirements. Section 4 summarizes the enabling technologies. Section 5 portrays the current state-of-the-art and future roadmap, Section 6 identifies some crucial challenges, and Section 7 concludes the paper.
## II Background and Foundation
In this section, the background of the use of contemporary technologies to aid in elevating the quality of education is discussed, to serve as a foundation for the upcoming sections. The core foundation concepts are elucidated through the gradual evolution of Education from 1.0 to 5.0 in the subsequent sections. The evolution of Education 5.0 results from the advancements in technology and the changing needs of the workforce and society. It can be broken down into several stages, each building on the previous one and incorporating new technologies and approaches. The evolution towards Education 5.0 was a long, incremental process that progressively added new approaches to support and assess the learning process and methodology. The schematic representation of this evolution is shown in Figure 4.
### _Education 1.0_
Education 1.0, also known as traditional classroom teaching, is characterized by the teacher being the primary source of information and students being expected to memorize and repeat information. The primary focus of Education 1.0 was on rote learning, which is the memorization of information without understanding the underlying concepts [18, 19].
In Education 1.0, teaching methods were typically based on lecture-style instruction, and there was little use of technology in the classroom [20, 21]. Students were expected to take notes, memorize information, and complete assignments and exams that were designed to test their ability to recall the information. The emphasis was on the teacher as the authority figure, and the students were expected to be passive learners [18, 19].
Fig. 4: Evolution of Education over the past years. The goal of the latest form of education has always been to use the state-of-the-art technology to aid in the learning process and improve students’ engagement
This type of education was often seen as a one-size-fits-all approach, and it did not take into account individual differences in learning styles and abilities [22, 23]. The lack of technology also limited the resources and materials that were available to teachers and students [20, 21].
Several shortcomings in Education 1.0 led to the development of Education 2.0. Some of these include:
**Rote learning:** The primary focus of Education 1.0 was on rote learning, which is the memorization of information without understanding the underlying concepts [18, 19]. This approach did not foster critical thinking or problem-solving skills, and it did not take into account individual differences in learning styles and abilities [22, 23].
**Lack of technology:** Education 1.0 had little use of technology in the classroom [20, 21], which limited the resources and materials that were available to teachers and students. This made it difficult for teachers to create interactive and engaging learning experiences, and it limited the opportunities for students to learn through new and innovative methods [20, 21].
**One-size-fits-all approach:** The traditional classroom teaching in Education 1.0 was a one-size-fits-all approach [22, 23], which did not take into account the individual needs of students. This made it difficult for teachers to provide personalized instruction and support to students with different learning styles and abilities [22, 23].
**Passive learning:** The emphasis in Education 1.0 was on the teacher as the authority figure, and the students were expected to be passive learners [18, 19]. This approach did not encourage students to take an active role in their own learning, and it did not foster creativity or independent thinking [18, 19].
These shortcomings in Education 1.0 led to the development of Education 2.0, which introduced technology in the classroom and shifted the focus from rote learning to more active and collaborative learning [24]. Education 2.0 also introduced the concept of using technology to enhance the learning experience and make education more accessible to all [20].
### _Education 2.0_
Education 2.0 represents an evolution in the field of education, building upon the shortcomings of Education 1.0. Education 2.0 introduced technology in the classroom, and it shifted the focus from rote learning to more active and collaborative learning. One of the key features of Education 2.0 is the use of technology to create more interactive and engaging learning experiences. For example, the use of computers and the internet allowed for the use of digital resources and materials, such as videos, animations, and interactive simulations, which can be used to supplement traditional teaching methods and make learning more engaging and interactive [25, 26].
Additionally, Education 2.0 introduced the concept of using technology to make education more accessible to all. With the internet and online resources, students can access a wealth of information and educational materials, regardless of their location or socio-economic background [27]. This has helped to remove barriers to education and make education more equitable. Education 2.0 also started to address the shortcomings of Education 1.0 in terms of a one-size-fits-all approach, by incorporating blended learning, and distance learning, which allowed students to learn at their own pace, in their own time and place [28, 29]. Furthermore, it started to provide opportunities for students to take an active role in their own learning and it fostered creativity and independent thinking, by encouraging the use of technology for collaborative and interactive learning [30].
However, despite its many advantages, it still had some shortcomings that led to the development of Education 3.0. One of the main shortcomings of Education 2.0 was that it mainly used technology as a supplement to traditional teaching methods rather than integrating it fully into instruction; it largely retained the one-size-fits-all approach of Education 1.0, with the teacher remaining the primary source of information and students still expected to be passive learners [31]. Furthermore, it did not fully take advantage of the potential for collaborative and teamwork-based learning [32]. Education 3.0 addresses these shortcomings by fully integrating technology into the teaching and learning process and emphasizing active and collaborative learning, personalization, and student-centered learning [33, 34]. It also introduced the concept of the "flipped classroom," which allows for more personalized and student-centered learning [35].
### _Education 3.0_
Education 3.0 represents an evolution in the field of education, building upon the advancements of Education 2.0 and addressing its shortcomings. Education 3.0 fully integrates technology into the teaching and learning process and strongly emphasizes active and collaborative learning. One of the key features of Education 3.0 is the use of the "flipped classroom" approach, where students watch lectures and complete homework assignments at home and then use class time for discussion and interactive activities. This approach allows for more personalized and student-centered learning, allowing students to work at their own pace and on their own level. It also fosters critical thinking, creativity and problem-solving skills, which are important for the 21st century. [36]
Education 3.0 also strongly emphasizes collaboration and teamwork, encouraging students to take an active role in their learning. This allows students to learn from one another and to develop important 21st-century skills such as communication, collaboration, and critical thinking. [37] Additionally, Education 3.0 uses data and analytics to track student progress and identify learning gaps, which allows teachers to tailor instruction to meet the needs of individual students and make data-driven decisions to improve student outcomes. [38]
Despite its many benefits over earlier Education paradigms, it still had some shortcomings that led to the development of Education 4.0. These shortcomings include the limited use of technology, which is primarily used for the delivery of content, limited opportunities for personalization and student-centered learning, lack of focus on well-being, stress, and mental health support for students and teachers, and limited opportunities for collaboration and teamwork. Education 4.0 addresses these shortcomings by fully integrating technology to support student-centered learning, collaboration, and personalization, placing a strong emphasis on well-being, stress, and mental health support for students and teachers, and utilizing advanced technologies such as Artificial Intelligence, Virtual and Augmented Reality, IoT, Cloud Computing, Big Data and Analytics, Blockchain and 5G networks to enhance the teaching and learning process. [39][40][41][42].
### _Education 4.0_
This stage built on the concept of Education 3.0 and focused on using technology to enhance the learning experience. Education 4.0 is a holistic education approach incorporating the latest technologies to enhance the learning experience. It is built on the principles of personalized, student-centered, and adaptive learning and is designed to develop 21st-century skills such as digital literacy, collaboration, communication, and critical thinking. Recent research and technological breakthroughs have enabled the integration of AI and machine learning (ML) in education. AI and ML can be used to personalize the learning experience by analyzing student data and adapting the content and pace of instruction to match the student's individual needs and abilities [43, 44, 45, 46, 47].
Virtual and augmented reality (VR/AR) are also being increasingly used in education to provide immersive and interactive learning experiences. VR/AR can be used to simulate real-world scenarios, allowing students to learn through hands-on experiences in a safe and controlled environment [48, 49, 50, 51, 52]. The IoT is also playing a significant role in Education 4.0, by enabling the integration of smart devices and sensors in the classroom. This allows for real-time monitoring and tracking of student engagement and progress and enables teachers to provide more targeted and personalized instruction [53, 54, 55, 17, 56].
In addition, the use of gamification in education is also on the rise, with games and interactive simulations being used to make learning more engaging and enjoyable. Gamification can also be used to teach complex and abstract concepts and to develop problem-solving and critical-thinking skills. Consequently, Education 4.0 brings significantly more positive changes and deeper integration of technologies than earlier versions, yet it faces some notable shortcomings, which led researchers to start thinking about Education 5.0. One of the main shortcomings of Education 4.0 is the lack of accessibility: it relies heavily on technology and the internet, which can create barriers for students who do not have access to these resources. This reliance can also exacerbate existing educational inequalities, placing students from disadvantaged backgrounds at a further disadvantage. Another shortcoming is limited human interaction, as the heavy use of technology can reduce the human interaction that is essential for developing social and emotional skills. Over-reliance on technology is a further concern, as it can leave students dependent on technology to learn and lacking the skills to learn independently. Education 5.0 aims to address these shortcomings by creating a more inclusive and equitable education system that utilizes technology to enhance human interaction and promote independent learning. It will likely focus on more personalized and adaptive learning, with technology being used to enhance the human experience rather than replace it. Additionally, it will incorporate the latest advancements in technology, such as big data, blockchain, and quantum computing, and it will foster an interdisciplinary approach to learning, integrating different fields of study and encouraging more collaboration between students and teachers.
### _Summary and Insights_
In summary, the current stage, Education 5.0, represents a new level of technology integration in education. It builds on the concepts of the previous stages and incorporates advanced technologies such as Artificial Intelligence, Virtual and Augmented Reality, IoT, Cloud Computing, Big Data and Analytics, Blockchain and 5G networks to create a more efficient, effective, and equitable education system. Education 5.0 aims to promote personalized learning, collaboration, and well-being and to develop 21st-century skills such as critical thinking, creativity, and problem-solving by utilizing modern ICT and related technologies such as blockchain, IoT, AI, ML and the AR/VR-based Metaverse. The evolution discussed in this section leads to some of the crucial requirements for an Education 5.0 system, which are presented in the subsequent section.
## III Requirements
In order to realize the vision of Education 5.0, the core set of requirements is highlighted based on the current state of the art, as shown below.
### _Personalized Learning_
Education 5.0 focuses on providing personalized learning experiences that adapt to the needs and abilities of individual students. This can be achieved using artificial intelligence and machine learning to create personalized learning plans and adjust instruction in real time based on student progress. Personalized Learning is an important aspect of Education 5.0, which aims to tailor the learning experience to each student's individual needs and abilities. Evaluation metrics used to measure the effectiveness of personalized learning may include student engagement, motivation, and achievement, as well as the ability of the instruction to adapt to the student's individual needs and abilities [11, 57, 58]. Another important metric for evaluating personalized learning is the use of student data, such as formative and summative assessments, to inform instruction and track progress [59, 60, 61].
### _Collaboration and Connectedness_
Education 5.0 promotes collaboration and connectedness among students, teachers, and other stakeholders. This can be achieved through the use of technology such as virtual and augmented reality and the IoT, which allow for immersive and interactive learning experiences. Evaluation metrics for collaboration and connectedness in Education 5.0 can include measures of student engagement and participation in group work and collaborative projects, as well as assessments of the quality and effectiveness of these collaborations. Additionally, communication and teamwork skills, such as conflict resolution and problem-solving abilities, can be used to evaluate collaboration and connectedness. The use of technology, such as social media and online collaboration tools, can also be evaluated for its effectiveness in promoting connectedness and collaboration among students and teachers [62, 63, 64].
### _Development of 21st-century skills_
Education 5.0 focuses on developing 21st-century skills such as critical thinking, creativity, and problem-solving rather than just rote learning. This can be achieved through the use of technology such as game-based learning, project-based learning, and other active learning methods. Evaluation metrics for developing 21st-century skills in Education 5.0 could include assessments of students' abilities in digital literacy, collaboration, communication, and critical thinking. These metrics include standardized tests, performance-based assessments, self-reflection, and peer evaluations. Additionally, observation of student engagement and participation in collaborative and technology-rich learning activities can also be used as an evaluation metric. Furthermore, incorporating project-based learning, problem-based learning, and other forms of experiential learning can also provide an opportunity to evaluate the development of 21st-century skills in students. [65]
### _Flexibility and Accessibility_
Education 5.0 aims to make education more flexible and accessible by removing barriers to education, such as geographic and financial constraints. This can be achieved through the use of technology such as cloud computing [66], which allows students to access digital resources and materials from anywhere and at any time. Flexibility and accessibility are two important characteristics of Education 5.0, which refers to the integration of technology in education to enhance the learning experience [4].
Flexibility in education refers to the ability to adapt to different learning styles and needs [4]. A common metric used to evaluate the flexibility of an educational program is the Learner Satisfaction Index (LSI) [67], which measures student satisfaction with the program's flexibility. The LSI can be measured through surveys or interviews with students [68], and it can be used to identify areas of improvement in the program [69].
Accessibility in education refers to the ability to provide education to all students, regardless of their physical, mental, or financial abilities. A commonly used metric to evaluate the accessibility of an educational program is the Universal Design for Learning (UDL) framework [70], which measures the extent to which a program is designed to be accessible to all students. The UDL framework includes three main components: representation, action and expression, and engagement [71]. These components can be evaluated through the use of surveys [72], interviews [73], or observations [74].
### _Data-Driven Decision Making_
Education 5.0 relies on data-driven decision-making [80], using big data and analytics [81] to track student progress, identify learning gaps, and tailor instruction to meet individual needs [58]. Evaluating the success of Data-Driven Decision Making (DDDM) in Education 5.0 involves using a variety of metrics to assess student outcomes, learning analytics, technology adoption, user satisfaction, and return on investment. Student outcome metrics such as grades, test scores, graduation rates, and post-graduation success provide insight into student achievement [75]. Learning analytics involves collecting and analyzing data about student learning, including engagement, interaction, and assessment data [76]. Technology adoption metrics evaluate the extent to which teachers and students are using educational technology [77]. User satisfaction metrics measure the satisfaction of students, teachers, and administrators with the DDDM process [78]. Return on investment metrics assess the financial benefits of DDDM, such as cost savings and increased revenue [79].
### _Security and Privacy_
Education 5.0 requires secure and private handling of student data, using blockchain technology to ensure data privacy and integrity. Ensuring the security and privacy of data in Education 5.0 is critical to successfully implementing this new educational model. The security of the data stored and transmitted in the educational system must be protected from unauthorized access, theft, and cyber attacks. The privacy of student data must be protected, including personal information such as name, address, and social security number, as well as academic data such as grades and test scores [82].
Several metrics can be used to evaluate the security and privacy of the Education 5.0 system. These include data breach incidents, security and privacy audits, and user perceptions of data security and privacy [83]. Data breach incidents can be evaluated by tracking the number of unauthorized accesses, data theft incidents, and cyber attacks on the system [84]. Third-party security experts can perform security and privacy audits to evaluate the strength of the system's security and privacy measures [85]. User perceptions of data security and privacy can be evaluated through surveys or interviews with students, teachers, and administrators to determine their level of trust and confidence in the security and privacy of the system [86].
### _High-speed networks_
High-speed networks are an essential aspect of Education 5.0, as they facilitate real-time delivery of digital resources and materials to students and teachers. The use of high-speed networks in education enables seamless communication and collaboration between students, teachers, and educational institutions. Moreover, high-speed networks ensure that students can access educational materials remotely. To evaluate the performance of high-speed networks in education, several metrics can be used, such as network reliability, network capacity, and network latency.
[87] and [88] have studied the impact of high-speed networks on the delivery of education services and found that high-speed networks have a significant positive impact on the quality of education.
### _Well-being_
Education 5.0 strongly emphasizes student well-being, including physical, mental, and emotional health. This requires using technologies such as the IoT, which can monitor students' engagement and progress and provide teachers with real-time feedback on student performance. A variety of measures can be used to assess various aspects of well-being, such as mental and emotional health, physical health, and academic performance [89]. Some common metrics include student satisfaction surveys [90], academic performance indicators (e.g., grades, test scores) [91], attendance and engagement metrics [92], mental health and wellness assessments [93], and physical health indicators (e.g., fitness test results, sport participation rates) [94]. However, it is important to note that the specific metrics used may vary depending on the context and the individual goals of the educational institution [95].
### _Adaptability_
Education 5.0 is designed to be adaptable to the changing needs of the workforce and society. This requires the use of technologies such as cloud computing and blockchain, which allow for sharing of resources and materials and the secure storage of student data. Assessing the adaptability of Education 5.0 requires evaluating several key metrics. Flexibility, scalability, and interoperability are three important factors to consider. Flexibility refers to the education system's ability to adjust to changing needs and circumstances, such as workforce demand shifts or student population changes. Scalability refers to the capacity of the education system to expand or contract to meet changing needs and demands, such as growth in student enrollment or new educational programs. Interoperability refers to the ability of different education systems, institutions, and platforms to work together seamlessly and share data, resources, and materials. Another important aspect of adaptability is user-centeredness. This refers to the degree to which the education system is designed to meet the needs and preferences of students, educators, and other stakeholders. This can be evaluated through user satisfaction, engagement, and ease of use metrics.
In addition, data privacy and security are critical components of adaptability in Education 5.0. As more data is generated and shared through educational systems, it is important to ensure that sensitive information is protected and not misused. Evaluation metrics for data privacy and security can include measures of data security, data privacy, and regulatory compliance. Finally, the capacity to continuously improve and evolve is an important part of adaptability in Education 5.0. This can be evaluated through metrics such as the frequency and speed of updates, the effectiveness of new features and functions, and the degree to which the education system can incorporate feedback from users and stakeholders.
### _Accessibility_
Education 5.0 aims to remove barriers to education and make education more accessible to all. This requires using technologies such as 5G networks and cloud computing, which can support the high-bandwidth requirements of many new educational technologies and make education more flexible and accessible. Assessing the accessibility of Education 5.0 is critical in evaluating its success. Accessibility refers to the degree to which educational resources and opportunities are available to
individuals regardless of their location or socio-economic status. To evaluate the accessibility of Education 5.0 initiatives, it is important to consider a number of metrics. One metric to consider is the availability of educational resources, such as online courses, textbooks, and instructional materials. This includes not only the quantity of resources available but also their quality and relevance to the needs of students. A study by [96] found that the availability and quality of educational resources are critical factors in the adoption and effectiveness of technology-enhanced learning.
Another important metric is the availability and quality of internet connectivity, which is critical for accessing online resources and participating in virtual learning environments. Physical accessibility, including the availability of transportation and accessible facilities, must also be considered. The demographic representation among individuals with access to educational resources, including gender, race, and socio-economic status, must also be examined. Finally, the cost of educational resources, including tuition and related expenses, can also be a barrier to access.
### _Gamification and Game-based Learning_
Game-based learning is a crucial requirement for Education 5.0 because it can engage students in a way that traditional classroom teaching may not. Game-based learning can help increase motivation, engagement, and learning outcomes. Evaluating the effectiveness of gamification in education is important to determine its impact on student learning outcomes. Several metrics can be used to assess the effectiveness of game-based learning. One important metric is student engagement, which refers to students' motivation, interest, and involvement in learning activities. A study by Papasteriou found that game-based learning can increase student engagement and motivation in computer science education [97]. Another study by [98] showed that game-based learning can improve student engagement and motivation in science education, leading to higher academic achievement.
Another important metric is learning outcomes, which refer to the knowledge and skills that students acquire through game-based learning activities. A study by Cheng et al. found that game-based learning can improve learning outcomes in science education [99]. Another study by Deng et al. found that game-based learning was effective in teaching complex concepts in math education [100].
Student feedback is also an important metric, as it provides insight into the students' perception of game-based learning activities and their effectiveness. A study by Hartt et al. found that student feedback was positive for game-based learning activities, indicating that students valued the interactive and engaging nature of these activities [101].
Finally, the design and implementation of game-based learning activities must be considered, as they can impact the effectiveness of these activities. A study by Ramli et al. found that well-designed game-based learning activities can improve student engagement and learning outcomes [102].
### _Summary and Insights_
The above requirements must be met to realize an Education 5.0 system. For each requirement, we envision the top five parameters that can be used to assess whether it is met. Table II summarizes the requirements and the top five parameters for evaluation. Taken together, these metrics provide valuable information for evaluating an Education 5.0 deployment; for example, by considering the gamification metrics discussed above, educators and researchers can better understand the impact of game-based learning on student motivation, engagement, and learning outcomes.
## IV Enabling Technologies
In this section, we will elucidate the enabling technologies that will act as building blocks for Education 5.0. We envision that the enabling technologies for Education 5.0 include, but are not limited to, Artificial Intelligence (AI), Virtual and Augmented Reality (VR and AR), the Internet of Things (IoT), Cloud Computing, Big Data and Analytics, Blockchain, and 5G Networks. While AI-powered tools can personalize learning experiences, provide real-time feedback, and assist teachers in assessing students' progress, VR and AR can provide immersive and interactive learning experiences, allowing students to explore new environments and subjects in a more engaging way.
Additionally, IoT can be used to connect various devices and sensors to the internet, allowing for real-time monitoring and analysis of student engagement and progress, whereas cloud computing enables access to digital resources and materials from anywhere and at any time, making education more flexible and accessible. Moreover, big data and analytics can be used to track and analyze student progress, identify learning gaps, and tailor instruction to meet individual needs, whereas blockchain can be used to secure and share student data and ensure data privacy and integrity. Finally, 5G networks can support the high-bandwidth requirements of many new educational technologies, such as virtual reality, and help to provide a seamless learning experience. In the following subsections, we will outline the progress being made in these core enabling technologies towards the realization of the vision of Education 5.0. The overall two-way representation of Education 5.0 and its enabling technologies is portrayed in Fig. 5.
Fig. 5: Two-way stack of Education 5.0. The left side depicts the technologies while the right side represents the goals to achieve. For instance, the use of AI and machine learning supports adaptive learning and decision-making. The central part exhibits the processes which form the overall learning methodology.
### _Artificial Intelligence_
Artificial intelligence (AI), as a field of research, together with the inventions and developments that have followed from it, has enabled computers, machines, and other artifacts to exhibit human-like intelligence characterized by cognitive capacities, learning, adaptability, and decision-making capabilities. According to the study, AI has been widely adopted and employed in education in a variety of ways, especially by educational institutions [103]. Computers and computer-related technologies were the first forms of AI, which later evolved into web-based and online intelligent education systems, embedded computer systems, and other technologies [104]. The use of humanoid robots and online chat-bots to carry out the tasks and obligations of instructors, alone or in conjunction with teachers, is an excellent example. These platforms have helped teachers improve the quality of their instructional activities and carry out other administrative tasks, such as reviewing and grading students' assignments, more quickly and effectively [105]. The systems make use of machine learning and flexibility, and the curriculum and content have been customized to meet the needs of the students. This has encouraged uptake and retention, which has enhanced the learning experience for students as a whole [106]. Virtually all researchers in the field of artificial intelligence in education (AIED) are driven by moral considerations, such as enhancing students' opportunities for lifelong learning [107].
One advantage of adaptive AIED systems is that they frequently collect large amounts of data, which can then be processed to dynamically improve the pedagogy and domain models. This method not only tests and improves our comprehension of the processes of teaching and learning, but also provides new information on how to offer support that is more effective, individualized, and contextualized [108]. We concentrate on two groups of AIED software programs that have been built to directly support learning: personal tutors for every learner and intelligent support for collaborative learning.
#### Iv-A1 Personalized Learning based on AI
The tendency toward lifelong learning and self-directed learning has been accelerated by the pandemic, and intelligent tutors have developed into a crucial tool for fostering both individual learning and teacher-student interaction [109]. Intelligent tutor applications use AI to create a personalized learning path for each student, depending on their accomplishments and needs, and to help students understand a specific topic or lesson [110]. AI examines learning patterns to assist students in learning more effectively and acquiring the necessary skills. Intelligent or personalized tutoring is focused on tackling the largest barriers to education [111]. The individualized learning experience, which is accessible anywhere and at any time, has become the norm. The core tenet of personalized learning is the automation of the teaching process and of the function of the human tutor [112]. A tutor is a private educator who offers one-on-one instruction; this kind of teaching has been found to be more successful than the conventional approach of one teacher per classroom [11]. Unfortunately, not everyone is able to afford a private tutor, but an intelligent AI-based tutor is far more affordable. A personalized study program powered by intelligent AI not only fills in knowledge gaps for learners but also gives parents the tools they need to better assist their children's learning. Intelligent AI-based personalized study also helps optimize learning paths and decision-tree progression [113]. Current AIED systems, in contrast, do not rely on pre-determined learning paths, which means they optimize instructional methods in accordance with the performance of each student [114].
To better meet each student's unique learning needs, AIED replicates personal human tutoring, offers focused real-time feedback, and creates tailored learning paths. AIED systems adjust to each learner and, as they gain more knowledge about what additional support each learner needs to progress, encourage learners to focus on their own challenge areas. AIED is thus able to make the learning process personalized, engaging, inclusive, and flexible, leading to improved learning outcomes.
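As an illustration of the kind of adaptivity described above, the following minimal sketch selects the next exercise using a one-parameter logistic (Rasch) model: the chosen item is the one whose predicted success probability for the current learner is closest to a target level. The model choice, item pool, and target value are hypothetical assumptions for illustration and do not describe any specific AIED system cited here.

```python
import math

def p_correct(ability, difficulty):
    """One-parameter logistic (Rasch) probability that the learner answers correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def next_item(ability, item_pool, target=0.7):
    """Pick the item whose predicted success probability is closest to the
    target level, i.e. challenging but still achievable for this learner."""
    return min(item_pool,
               key=lambda it: abs(p_correct(ability, it["difficulty"]) - target))

pool = [
    {"id": "fractions-intro", "difficulty": -1.0},
    {"id": "fractions-mixed", "difficulty": 0.2},
    {"id": "fractions-word-problems", "difficulty": 1.5},
]
print(next_item(ability=0.8, item_pool=pool)["id"])  # -> "fractions-mixed"
```

Real intelligent tutors combine such selection rules with richer learner models and content constraints, but the basic loop of estimating ability and choosing a suitably challenging next task is the same.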
#### Iv-A2 Collaborative Learning based on AI
Collaborative learning based on AI refers to the use of artificial intelligence and machine learning techniques to enhance the process of learning in a collaborative setting [115]. This approach leverages technology to support communication, collaboration, and knowledge sharing between individuals and groups, with the goal of improving the learning experience and outcomes [103]. This can include AI-powered tools such as natural language processing, recommendation systems, and adaptive learning algorithms that can support students in their learning process by providing personalized feedback and guidance [116]. AI techniques can make educational settings accessible to all students worldwide, including those who have hearing or vision impairments [117] or speak various languages [118]. In this approach, AI-powered tools and algorithms are used to facilitate communication, collaboration, and knowledge sharing between students and teachers [119]. This can include features such as personalization, real-time feedback, and adaptive learning, which can help students learn more effectively and efficiently. Additionally, AI-based collaborative learning can also support educators in their teaching by providing insights into student performance and facilitating data-driven decision-making. The goal of this approach is to improve the overall learning experience and outcomes for students.
### _Internet of Things_
Education 5.0 is the next evolution in education, which aims to leverage modern technologies to achieve the goal of lifelong learning. One of the key technologies that plays a pivotal role in Education 5.0 is the Internet of Things (IoT) [53]; [120]. IoT-enabled devices such as wearables, sensors, and smart devices can be used to collect real-time data on student learning and behavior, providing a deeper understanding of individual student needs and abilities [121]; [120].
IoT devices can also be used to facilitate personalized learning, an approach that leverages technology to individualize instruction, by providing adaptive learning systems that adjust the pace and content of instruction based on student performance [53]; [120]. Personalized feedback can also be provided to students through IoT devices, such as wearables that track movements
and provide feedback on posture and movement patterns [120]; [121]. Smart content, which adapts to the individual needs and abilities of each student, can also be created using IoT devices, such as connected textbooks with interactive elements and multimedia resources.
IoT devices can also be used to facilitate collaborative learning, an approach that emphasizes group work and the sharing of knowledge and skills among students, by enabling students to connect and collaborate with each other and with educational resources from anywhere, creating a more dynamic, interactive, and engaging learning environment for students.
Furthermore, IoT devices can be used to create smart classrooms that facilitate collaboration among students and teachers. This can enhance their collaborative skills and improve their learning outcomes.
Overall, the use of IoT devices in education can facilitate the implementation of personalized and collaborative learning, resulting in a more effective and efficient learning experience for students.
#### Iii-B1 Personalized Learning
Personalized learning, an approach that leverages technology to individualize instruction, has gained significant attention in recent years [122]; [123]. IoT devices, such as wearables and sensors, have emerged as powerful tools for implementing personalized learning in the classroom [112]; [11]. By collecting real-time data on student learning and behavior, IoT devices can provide a deeper understanding of individual student needs and abilities [11]; [124]. Furthermore, adaptive learning systems can be implemented using IoT technology, allowing for adjustments in pace and content of instruction based on student performance. Personalized feedback can also be provided to students through IoT devices, such as wearables that track movements and provide feedback on posture and movement patterns. Smart content, which adapts to the individual needs and abilities of each student, can also be created using IoT devices, such as connected textbooks with interactive elements and multimedia resources [123]. Lastly, IoT devices can be used to monitor student engagement and behavior in real time, providing teachers with valuable insights to improve their instruction. Overall, the use of IoT devices in education can facilitate the implementation of personalized learning, resulting in a more effective and efficient learning experience for students [124]; [123].
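A minimal sketch of the real-time monitoring idea is given below. It assumes a toy data layout (student id mapped to heart-rate and idle-time samples) and arbitrary thresholds chosen purely for illustration, so it shows only the shape of such a pipeline rather than any particular commercial IoT platform.

```python
from statistics import mean

def engagement_alerts(readings, hr_range=(55, 110), max_idle_s=180):
    """Flag students whose wearable/sensor stream suggests disengagement.
    `readings` maps a student id to a list of (heart_rate, seconds_idle) samples."""
    alerts = []
    for student, samples in readings.items():
        avg_hr = mean(hr for hr, _ in samples)
        longest_idle = max(idle for _, idle in samples)
        if not (hr_range[0] <= avg_hr <= hr_range[1]) or longest_idle > max_idle_s:
            alerts.append(student)
    return alerts

stream = {
    "s01": [(72, 20), (75, 35)],    # active student
    "s02": [(68, 240), (66, 300)],  # long idle periods -> flagged
}
print(engagement_alerts(stream))  # -> ['s02']
```

A teacher dashboard could consume such alerts to decide where real-time intervention is most needed.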
#### Iii-B2 Collaborative Learning
Collaborative learning, an approach that emphasizes group work and the sharing of knowledge and skills among students, has been shown to be an effective way to improve student learning outcomes [125]; [126]; [127]. The use of Internet of Things (IoT) devices in education can facilitate collaborative learning by enabling students to connect with each other and with educational resources from anywhere [127]; [128].
One of the key aspects of collaborative learning using IoT is the ability to facilitate remote and distributed collaboration
Fig. 6: Illustration of Personalized Learning based on AI.
among students, regardless of their location [128]; [129]. IoT-enabled devices such as smartphones, tablets, and laptops can be used to share documents, communicate, and collaborate on projects in real time [129]. Additionally, virtual and augmented reality (VR/AR) can be used to create immersive experiences that enable students to collaborate and interact with each other in a shared digital space [127]; [129].
Furthermore, IoT-enabled devices can be used to create smart classrooms that facilitate collaboration among students and teachers. For example, smartboards can be used to display and share information, and students can use connected tablets to participate in interactive activities and quizzes.
#### Iv-B3 Remote Learning
Remote learning, also known as distance learning or e-learning, refers to the ability to access educational content and resources from a remote location, typically through the use of technology such as computers, smartphones, or tablets. With the advent of the Internet of Things (IoT), remote learning has become an increasingly important aspect of Education 5.0.
IoT technology can facilitate remote learning by enabling students to access educational content and resources from anywhere, at any time, using a variety of devices such as smartphones, tablets, and laptops [130, 131]. This allows students to learn at their own pace, and to access educational resources that may not be available in their immediate environment.
IoT-enabled devices such as wearables and sensors can also be used to collect real-time data on student learning and behavior, providing a deeper understanding of individual student needs and abilities [131]. This information can be used to create personalized learning experiences that are tailored to the unique needs and abilities of each student.
Furthermore, IoT technology can be used to facilitate remote collaboration among students and teachers, regardless of their location. For example, students can use IoT-enabled devices to share documents, communicate, and collaborate on projects in real-time, while teachers can use IoT-enabled devices to monitor student engagement and provide feedback on student progress.
Virtual and augmented reality (VR/AR) can also be used in remote learning to create immersive experiences that enable students to collaborate and interact with each other in a shared digital space [130, 52]. This can enhance their collaborative skills and improve their learning outcomes.
Overall, the use of IoT technology in remote learning can facilitate the implementation of personalized and collaborative learning, resulting in a more effective and efficient learning experience for students.
### _AR, VR, MR, XR, and Metaverse_
Augmented Reality (AR) [132, 17], Virtual Reality (VR) [52, 105, 130], Mixed Reality (MR) [133, 134], and Extended Reality (XR) [133, 135] are different forms of technology used to create immersive experiences. AR enhances the real world by overlaying digital information on the physical environment. This allows users to see the real world while looking at virtual objects, creating a hybrid environment. On the other hand, VR creates fully simulated environments, allowing users to experience and interact with computer-generated worlds as if they were real. MR is a combination of AR and VR, allowing virtual objects to interact with the physical environment. Meanwhile, Metaverse [133, 134] is a virtual shared space created by the fusion of the physical and virtual worlds, where users can interact with each other and interact with digital objects in a seemingly real environment. XR is an umbrella term that encompasses AR, VR, MR, and other related technologies used to create immersive experiences.
Education 5.0 is a revolutionary approach to education, leveraging the latest technology to provide students with a personalized and adaptive learning experience. The usage of AR, VR, MR, and XR is transforming the way students learn, making education more interactive and engaging. The use of these technologies is explained in the following.
Augmented Reality (AR) technology allows users to layer virtual objects onto their physical environment, making learning more interactive and visually engaging. With Education 5.0, teachers can use AR to provide students with interactive learning materials such as animations, videos, and 3D models. This technology helps bring textbooks to life and makes learning more engaging. For example, students can use AR to view virtual images of the human body and study anatomy by examining different parts in detail.
Virtual Reality (VR) technology creates fully simulated environments that allow users to experience and realistically interact with computer-generated worlds. In Education 5.0, VR can be used to simulate real-world scenarios and deliver hands-on learning experiences without students being physically present. This technology provides students with a more immersive and interactive learning experience and helps make teaching more engaging and effective. For example, students can use VR to experience historical events, participate in virtual field trips, and practice skills in simulated environments.
Mixed Reality (MR) technology combines the real and virtual worlds, allowing virtual objects to interact with the physical environment. With Education 5.0, MR can be used to deliver blended learning experiences, allowing students to enhance physical textbooks with digital information and animations. This technology can help make learning more interactive and engaging, and can also provide students with more information and resources than can be accessed in traditional textbooks. For example, students can use MR to explore virtual models of historic sites and landmarks and manipulate virtual objects in real-time.
Extended Reality (XR) technology is an umbrella term that encompasses AR, VR, MR, and other related technologies used to create immersive experiences. Education 5.0 can use XR to deliver seamless, integrated learning experiences that bridge the
real and virtual worlds. This technology will help create a new age of education that is personalized, adaptive, and focused on student success.
Finally, Metaverse is a virtual shared space created by the fusion of the physical and virtual worlds, where users can interact with each other and interact with digital objects in a seemingly real environment. In Education 5.0, teachers can use the Metaverse to provide students with a virtual learning environment where they can participate in virtual classes, interact with classmates and teachers, and participate in virtual excursions. Not only can this technology help make education more accessible, allowing students to participate in classes anytime and anywhere, but it can also provide students with a more immersive and interactive learning experience.
In conclusion, the use of AR, VR, MR, XR, and Metaverse technologies in Education 5.0 has the potential to revolutionize the way students learn, making education more accessible, engaging, and effective. These technologies provide students with personalized and adaptive learning experiences, making teaching more interactive, immersive, and effective.
### _Blockchain_
Blockchain technology has been identified as a key enabler of Education 5.0 due to its potential to transform the way educational data is managed and used [136]. The decentralized and secure nature of blockchain allows for a secure and tamper-proof way of storing and sharing educational data, such as student records, credentials, and educational achievements. This helps to improve data privacy and security, and reduces the risk of data breaches and other security threats [136].
In the area of verification and credentializing, blockchain can provide a secure and reliable way of tracking and verifying educational achievements through the use of digital credentials that are immutable and easily verifiable [137]. This not only helps to ensure the authenticity of educational credentials but also provides a streamlined way for individuals to demonstrate their qualifications and skills to potential employers or further educational institutions.
Furthermore, blockchain technology has the potential to increase access to education by enabling the creation of decentralized educational platforms [138]. These platforms can provide accessible, affordable, and equitable education for all, regardless of their location or financial background. This can help to address the issue of unequal access to education and provide opportunities for individuals to learn and grow.
Another important aspect of blockchain technology is its ability to increase transparency and accountability in the education sector [139]. This can help to build trust between educators, students, and other stakeholders by providing a clear understanding of the educational landscape. This increased transparency and trust can also help to improve the overall quality of education by enabling stakeholders to make informed decisions based on accurate and up-to-date information.
### _Big Data Analytics_
Big data analytics refers to the use of advanced data processing and analysis techniques to extract valuable insights from large and complex data sets. Big data is transforming the analysis of educational process information and decision-making in areas including academic success, faculty effectiveness, organizational expansion, and technical efficiency [140].
At the moment, AIED uses learning analytics and data mining to provide social, cognitive, or emotional views to comprehend the cognitive changes in collaborative learning [75]. By analyzing data from various sources, such as student test scores, attendance records, and demographic information, educators can gain insights into what factors contribute to student success and identify areas where they need to focus their efforts to improve student outcomes [141]. Big data analysis can help teachers understand each student's strengths, weaknesses, and learning preferences, allowing them to provide customized educational experiences and improve student engagement, thereby improving personalized learning [142]. Recently, big data analytics has been used to enhance research and gain insights into learning patterns and educational outcomes, which can inform the development of new teaching methods and educational technologies. Big data analytics can also streamline administrative processes, such as student enrollment, scheduling, and data management [143]. Porter's value chain model is described by Ghazwan H. [144] as a useful tool for pinpointing opportunities for big data analysis to enhance the higher education value chain, and the same work explains how big data analytics enhance the production of value in higher education. Using a partial data analysis model, Wanli X. et al. [145] presented research using social cognitive theory to pinpoint potential environmental, individual, and behavioral elements that influence students' usage of data. In order to extract educational behavior from large-scale open-ended student answers, and to verify the convergent validity of the results by comparing them with theory-driven approaches, Bilge G. et al. [146] provide a latent Dirichlet allocation topic modeling study along with a visualization tool.
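To illustrate the topic-modeling approach cited above, the sketch below applies latent Dirichlet allocation to a handful of open-ended student answers using scikit-learn. The toy answers, the number of topics, and all parameter settings are illustrative assumptions and are not taken from the cited study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

answers = [
    "group work helped me understand the assignment better",
    "the online quizzes gave instant feedback on my mistakes",
    "working in groups made the project less stressful",
    "feedback from the quiz system showed where I went wrong",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(answers)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)        # per-answer topic proportions

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[-5:][::-1]   # five most probable words per topic
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```

On realistic data, the recovered topics (e.g. collaboration versus feedback) can then be compared against theory-driven categories, as in the convergent-validity check described above.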
### _Gamification_
Game-based learning is designed to make learning more interactive and engaging by incorporating elements of games such as points, leaderboards, and rewards. This can help increase student motivation and engagement, as well as improve learning outcomes. Additionally, game-based learning can promote problem-solving and critical thinking skills, which are important 21st-century skills. Furthermore, game-based learning can be personalized to the needs of individual students; this allows students to work at their own pace and on their own level, which can help improve learning outcomes for all students, regardless of their ability level. Moreover, game-based learning can be used to create immersive and interactive learning experiences that can help students visualize and understand difficult concepts, which can be especially effective in STEM subjects.
There are several research papers that have studied the effectiveness of game-based learning in education.
For example, a study published in the Journal of Computer Assisted Learning found that game-based learning can improve student motivation, engagement, and learning outcomes in science education. The study showed that students who used game-based learning in a science classroom had a better understanding of the subject matter and were more motivated to learn.
Another study published in the Journal of Educational Technology Development and Exchange found that game-based learning can help improve problem-solving and critical-thinking skills in students. The study showed that students who used game-based learning had better problem-solving and critical thinking skills compared to students who did not use game-based learning.
Research published in the International Journal of Emerging Technologies in Learning found that game-based learning can create immersive and interactive learning experiences that can help students visualize and understand difficult concepts. The study showed that students who used game-based learning in a mathematics classroom had a better understanding of the subject matter and were more motivated to learn.
### _Insight and Lesson Learned_
Gamified or game-based learning is a crucial requirement for Education 5.0 because it can help increase student motivation, engagement, and learning outcomes; promote problem-solving and critical thinking skills; and create immersive and interactive learning experiences that can help students visualize and understand difficult concepts. Existing research, reviewed above, supports each of these claims and reinforces the idea that game-based learning is a crucial requirement for Education 5.0.
Moreover, the recent shift towards generative AI, such as ChatGPT and similar chatbots, can provide instant, personalized support to students. Using AI and natural language processing, chatbots can quickly answer questions, provide feedback on assignments, and even suggest resources to help students better understand the material. This can help students who are struggling to keep up with the pace of the course or who have special needs outside of class. Additionally, chatbots can help create more engaging and interactive learning experiences. For example, a chatbot can act as a virtual tutor or mentor, guiding students through interactive learning activities and providing feedback on their progress. This makes learning more fun and engaging for students and allows for more individualized attention.
Finally, the metaverse could also significantly impact the way current education systems operate. For instance, injecting VR/AR into the classroom has improved the learning experience and made it more interesting and playful in contrast to traditional teaching methods.
## V Current state-of-the-art and Future roadmap
The existing benchmarks for AI in Education 5.0 are described from three perspectives. The first perspective is centered on students, the second on instructors, and the third on institutions.
Fig. 7: Illustration of Big data analytics in education system.
### _Student-focused AI in education 5.0_
A small digression is required before analyzing the different benchmarks and modern tools of student-focused AI in education, i.e., AI-assisted tools expressly developed to aid students. Not all AI-assisted technologies utilized by students were originally intended for students. Instead, these technologies have been "repurposed" for educational purposes. Typically, these technologies are not considered purely to assist students, but they must be accounted for in any exhaustive overview of student-focused AI in education. Google's suite of collaborative tools, which includes Google Docs [147], is probably the most powerful AI-assisted technology that has been repurposed for education. In addition, social networking services such as WhatsApp [148], ZOOM [149], and content-sharing platforms such as YouTube [150] are increasingly being utilized to promote student learning in a variety of ways (a trend that escalated during recent pandemic lockdowns across the globe). The following benchmarks for student-focused AI in education are described in detail.
#### Iv-A1 Smart teaching systems
Smart teaching systems (STS) that are increasingly advanced are the most widespread and undoubtedly best-funded AI deployments in education. Typically, they deliver computer-based, step-by-step instruction across topics in clearly delineated, organized courses. An STS provides a variety of individualized instruction, tasks, and assessments to each learner. While the learner participates in a certain activity, the system collects many data points, such as what is viewed, what is written, which objectives have been successfully completed, and any misunderstandings that have been exhibited. These data are analyzed by trained AI models to identify the next piece of information, activity, and quiz to be supplied, thus creating a personalized track through the content to be taught. The process is then repeated. An STS also contains a teacher dashboard that helps to assess student outcomes accurately. As an example, iTalk2Learn [151] is a project developed in Europe in which machine learning is used to compile personalized lesson plans.
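One widely used building block for the step-by-step adaptation described above is Bayesian Knowledge Tracing, which updates an estimate of skill mastery after every observed response. The sketch below shows the standard update equations with illustrative parameter values chosen only for demonstration; it is not a description of iTalk2Learn or any other specific STS product.

```python
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """One Bayesian Knowledge Tracing step: Bayesian posterior on mastery
    given the observed response, followed by the learning transition."""
    if correct:
        post = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        post = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return post + (1 - post) * learn

p = 0.3  # prior probability that the skill is already mastered
for observed_correct in [True, True, False, True, True]:
    p = bkt_update(p, observed_correct)
    print(f"estimated mastery: {p:.2f}")
```

The running mastery estimate is exactly the kind of quantity an STS can use to decide which information, activity, or quiz to supply next, and to populate the teacher dashboard.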
#### Iv-A2 AI based Applications
There is a rapidly expanding collection of commercially accessible AI-assisted educational applications among the top online services. There are, for instance, increasingly impressive AI-assisted mathematics applications, such as Photomath [132], which some predict could take mathematics education to new horizons. These apprehensions resemble those that accompanied the introduction of calculators in schools roughly five decades ago: if the tool can do it (automatically calculate basic arithmetic, automatically translate between languages, or instantly determine solutions), there may be no need for children to learn how to do it, thereby impacting learning. Prahani [152] creates artificial intelligence-based solutions for K-12 and higher education organizations. Additionally, it is used in corporate training situations. One of Prahani's most important AI tools is its virtual learning assistant, which uses conversational technology to aid students in formulating open-format replies and enhancing their critical thinking abilities. In addition, the virtual assistant delivers tailored one-on-one teaching and real-time feedback for each student.
#### Iv-A3 VR/AR/Mr based learning
Virtual Reality (VR) and Augmented Reality (AR) simulation models and virtual game-based learning are regularly integrated with artificial intelligence, machine learning, computer vision, and natural language processing, and are increasingly employed in educational contexts. With the advancement of AI-assisted display technologies, virtual reality (VR), augmented reality (AR), and mixed reality (MR) have been extensively used in thoracic surgery to convert 2D images into 3D models, which aids surgical education, planning, and simulation [153]. Google has created over a hundred VR and AR Expeditions appropriate for academic contexts. Likewise, digital game-based learning (DGBL) is progressively using artificial intelligence (AI) technology to personalize gaming to each learner [154].
#### Iv-A4 AI-based Education 5.0 for People with Disabilities
Education 5.0 is concentrated on the application of AI techniques for the identification of learning disorders such as ADHD (e.g., [155]), dyslexia [156], and dysgraphia [157]. In addition, substantial research has been conducted on using robots in Education 5.0, particularly to assist children on the autistic spectrum [158]. Several predominant AI technologies, such as text-to-speech applications and automated image tagging, have been reconfigured for children with learning disabilities, along with a limited number of specialized AI-assisted apps, such as those that automatically interpret for children with hearing impairments.
#### Iv-A5 Advance Generative Pre-trained Transformers for learners
The introduction of ChatGPT [159] can be a milestone towards Education 5.0. Globally, essays continue to be a vital part of educational evaluation; however, plagiarism has been a widespread practice for centuries. With online essay mills selling customized writings on any subject, the Internet has facilitated this process, and ChatGPT-like tools facilitate it further. Recent AI advancements known as 'large language models', such as GPT-3 [160] from OpenAI outlined above, are set to have an even larger effect (GPT-3, 2020). A number of commercial companies now provide students with automated essay writing (AEW) systems that, in response to a stimulus such as an essay question, can write single paragraphs or full compositions. Despite the fact that the writing created by AEW is sometimes shallow and illogical, it is often impossible to discern whether the content was generated by a chatbot or a human. It is uncertain if AEW tools promote or hinder student learning. Nevertheless, because of their rising complexity and what might be described as a rivalry between AEW systems and AEW analyzers, they are likely to have an effect on how we evaluate learners [161].
#### Iv-A6 Automated Formative Assessment Model for Learning
Automated formative assessment in Education 5.0 employs natural language processing, semantic processing, and other AI-assisted approaches to deliver meaningful feedback on learner outputs. Few research or commercial automated formative assessment programs exist despite their promise to promote student learning; this is likely due to the challenges associated with automatically giving accurate and useful feedback [162]. Principally, no AI algorithm is currently capable of the level of comprehension or assessment accuracy that an instructor can provide, relying instead on superficial characteristics of the writing. Research conducted at Stanford University examined an automated formative assessment system with an autograder that offered feedback on coding assignments submitted by 12,000 computer science students. Approximately 98 percent of programmers agreed with the provided comments, somewhat higher than their acceptance of instruction from human teachers [163].
### _Teachers focused Education 5.0_
Many student-centered educational technologies, particularly STSs, provide portals or dashboards for instructors, often based on open learner models, that give a dynamic depiction of what individual learners and groups of learners have accomplished, or their mistakes [164]. One innovative method involves the use of augmented reality (AR) spectacles worn by the instructor to overlay dashboard-like statistics over the heads of their learners as the students participate in an STS [165]. Although impressive, this is an example of employing an AI technology to solve a problem produced by an AI technology (in this case, to address the fact that when students are using an STS, their instructor cannot readily see what they are doing and hence cannot give proper assistance). Regardless, STSs and other dashboard-enabled AI in Education 5.0 are primarily designed with the student in mind. In fact, if we disregard overlaps for the sake of this study, there are few instances of truly teacher-focused AI in Education 5.0. Here, we consider six contentious prospects: plagiarism identification, intelligent curation of learning resources, classroom supervision, automated summative evaluation, AI teaching assistants, and classroom coordination.
#### V-B1 Detection of plagiarism
Educators make extensive use of commercially accessible plagiarism detection technologies, for which machine learning techniques have been extensively adopted in recent years. Teachers and institutions widely use Turnitin [166], an AI-driven plagiarism detector. Education 5.0 places great emphasis on detection, as widely available GPTs are trained on internet data and are prone to producing heavily plagiarised content.
#### V-B2 Pedagogical supervision
AI-assisted pedagogical supervision technologies that have been researched, developed, and made commercially accessible are becoming more prevalent. For instance, AI-assisted video apps have been designed to monitor where a student is looking, allowing the system to determine whether or not the student is concentrating on the instructor or the task at hand [167]. Students are being requested to wear portable EEG (electroencephalography) headsets to monitor their cognitive function, which is potentially even more invasive. Across many academic institutions, AI-assisted technologies are also used to track a student's campus activities (often through a mobile app), what they obtain from the online learning system, and what they purchase at campus book shops and cafeterias [105].
#### V-B3 AI based smart grading
There has long been anticipation that AI would help instructors save time and increase productivity by automating the tedious and expensive task of grading student projects, tests, and other tasks [168]. AI-based auto-graders have been introduced for the assessment of students' written tasks [169]. Some state-of-the-art autograders additionally attempt to identify the nature of the issue and propose to the learner how to remedy it, while others, depending on the discipline, evaluate student responses with approximately 90 percent reliability [170].
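A very reduced sketch of one common short-answer grading idea, scoring a response by its textual similarity to a reference answer, is shown below. The reference answer, the similarity thresholds, and the banding rule are purely illustrative assumptions and are far simpler than the autograders cited above, which typically use trained models rather than raw lexical similarity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reference = "Photosynthesis converts light energy into chemical energy stored as glucose."
answers = [
    "Plants turn sunlight into chemical energy stored in glucose.",
    "It is how plants breathe in oxygen at night.",
]

vec = TfidfVectorizer().fit([reference] + answers)
scores = cosine_similarity(vec.transform(answers), vec.transform([reference])).ravel()

for answer, score in zip(answers, scores):
    # Map the similarity score to a coarse grading band (thresholds are arbitrary).
    band = ("full credit" if score > 0.6
            else "partial credit" if score > 0.3
            else "needs instructor review")
    print(f"{score:.2f}  {band}: {answer}")
```

Routing low-confidence cases to an instructor, as in the last band, is one way such tools compensate for the limited comprehension noted above.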
#### V-B4 Technology enhanced classroom orchestration
Orchestration is an approach to Technology Enhanced Learning that focuses on the issues of integrating technology into the classroom, with a special emphasis on strengthening instructors' responsibilities. While still in its infancy, research into the ways in which AI may aid classroom orchestration is expanding [171]. To assist the instructor in orchestrating such a complicated curriculum design, one study developed a tablet application that enabled the teacher to observe the real-time status of the class, regulate the flow of activities, and know when and where they were required within the flow of class activities. The tablet used a collection of specifically created real-time software agents to handle student interactions in real time, allowing for the dynamic coordination of student groups, material distribution, and instructor alerts [172].
### _Institution focused education 5.0_
Institutional priorities in Education 5.0 include technologies that facilitate the distribution of financial assistance, curriculum and work scheduling, and the identification of dropouts and at-risk students, as well as the quality of education and learning [173]. These technologies have a distinct administrative purpose and have many similarities with business-oriented artificial intelligence. In light of this, we will only discuss two essential and contentious institution-focused education topics: admissions (one of the high-risk use cases outlined in the proposed EU AI Act) and web security for educational institutions.
#### V-C1 Admissions
Not without controversy, many institutions of higher education, mostly in the United States, employ commercially accessible AI-assisted admissions software to support their admissions procedures. The objective is to minimize expenses and improve the fairness of the admissions process by removing invisible human biases (such as groupthink and racial and gender prejudices) that might influence decision-making [174].
#### V-C2 Web security for institutions
Early on in the COVID-19 epidemic, a substantial amount of instruction and assessment went online, resulting in the explosive growth of various exam-monitoring and web-security organizations [175]. Web security strives to ensure academic integrity by employing AI-assisted cameras and microphones to automatically monitor students taking an online test, scanning their faces and recording their keystrokes and mouse movements [176]. These tools have been accused of interference, incompetence, prejudice, preventing students from taking tests, and aggravating mental health issues. In reality, web
security is one of the most egregious instances of employing AI to automate ineffective pedagogical techniques rather than to build creative alternatives.
## VI Challenges
Education 5.0 is a concept that refers to the next generation of education, which is characterized by a more personalized, learner-centric approach that utilizes technology to enhance the learning experience. It is based on the idea that education should be flexible, adaptable, and responsive to the needs of individual learners [177, 178]. Some of the major challenges of Education 5.0 are explained below.
One of the major challenges for Education 5.0 is implementation cost. To implement a personalized, technology-enabled education system, schools and colleges must invest in new technologies such as learning management systems, digital content, and hardware [1, 179]. This can be a huge financial burden for schools and colleges with limited budgets. The cost of maintaining and upgrading these technologies over time can also be high. This can make it difficult for schools and universities to implement Education 5.0 fully. This is especially true in developing countries with limited resources.
Another major challenge for Education 5.0 is the lack of teacher training [180]. To effectively implement a personalized, technology-enabled education system, teachers must be trained to use new technologies and teaching methods. This can be a major challenge, especially in developing countries with limited resources. Additionally, many teachers are resistant to change and may not be comfortable using new technology in the classroom [130]. Without proper training, a teacher may not be able to effectively use the core technologies and teaching methods of Education 5.0.
The digital divide is also a big challenge for Education 5.0 [181, 182]. Not all students have access to the same technologies and resources. This can create a digital divide where some students have access to the latest technology and resources while others do not. This could further widen the gap between students from different socioeconomic backgrounds. This can make it difficult for educators to provide personalized, technology-enabled education for all students, regardless of background. Limited resources for content creation are also a challenge in implementing Education 5.0 [183]. Developing interactive and personalized learning materials can be a time-consuming and expensive process and can be a barrier for schools and colleges with limited resources. Additionally, the lack of standardization in the technologies and platforms used by educational institutions can make resource sharing and exchange difficult.
Privacy and security are also major challenges for Education 5.0 [184]. Privacy and security concerns are becoming more and more important as personal and sensitive information is shared and stored online. This can be a major challenge for schools and colleges that need to ensure the safety and security of student data. The threat of cyberattacks and data breaches is becoming more prevalent, and schools and colleges must take the necessary steps to protect student data [185].
Another challenge is the limited research on the effectiveness and impact of Education 5.0 [186]. As a relatively new concept, research on its efficacy and impact is still limited. This makes it difficult for educators and policymakers to make informed decisions about its implementation. Moreover, without proper research, it is difficult to determine the best way to implement Education 5.0 and measure its success.
One of the hottest technologies nowadays is the Metaverse [187]. Implementing the Metaverse in education requires a significant investment in technical infrastructure, including high-speed internet access, powerful servers, and hardware devices such as VR/AR headsets. Developing educational content that is suitable for the Metaverse can be a complex and time-consuming task. Teachers and educators need to be trained on how to create and deliver content in a virtual environment. Not all students have access to the necessary technology and devices to participate in the Metaverse. This can create a digital divide, where some students have an advantage over others. The Metaverse also raises concerns about security and privacy, as personal data and sensitive information may be at risk of being hacked or stolen.
The Metaverse is a complex and rapidly evolving technology, and there is a lack of standardization and interoperability between different platforms and devices [188, 131]. The Metaverse can be difficult to scale, as the number of users and devices increases [186, 131]. This can lead to technical challenges such as latency, bandwidth constraints, and server overload [188, 131]. Providing an intuitive and seamless user experience is crucial to the success of the Metaverse. The Metaverse should be easily navigable and engaging, with minimal lag or glitches.
Finally, balancing online and offline learning is also a challenge for Education 5.0 [189]. It can be difficult to find the right balance between online and offline learning, as too much online learning can lead to isolation and lack of social interaction, while too much offline learning can limit the use of technology and personalization.
### _Summary and Insights_
In conclusion, Education 5.0 is a promising concept that can enhance the learning experience for students, but it also comes with a set of challenges. These challenges include implementation cost, lack of teacher training, the digital divide, limited resources for content creation, privacy and security concerns, and limited research on the effectiveness and impact of Education 5.0. To overcome these challenges, schools and colleges need to invest in new technologies, provide proper training for teachers, and ensure that all students have access to the necessary technology and resources. Additionally, proper research needs to be conducted to determine the best ways to implement Education 5.0 and measure its success. Furthermore, steps must be taken
to ensure the privacy and security of student data, and to address any ethical concerns that may arise. Overall, Education 5.0 has the potential to revolutionize education, but it requires careful planning, implementation, and ongoing evaluation to ensure its success. Table III summarizes the different challenges that can potentially make barriers while implementing Education 5.0 systems. We identify the possible solution based on the current state-of-the-art and envision open questions and research direction to address that particular challenge.
## VII Conclusion
In this survey, we have presented the concept of Education 5.0, a futuristic term for the next revolution in education. We traced the evolution of education through the gradual integration of ICT and AI technologies, which paved the way for Education 5.0. We proposed a conceptual two-way architecture that places the goals and requirements on one side and the technologies needed to complement them on the other. The current state of the art that led to the idea of Education 5.0 was elucidated, and some of the crucial challenges were identified to complete the study. We hope that this survey will provide a foundation for the realization of Education 5.0 as the next big wave in education.
|
2307.04793 | Stellar triples with chemically homogeneously evolving inner binaries | Observations suggest that massive stellar triples are common. However, their
evolution is not yet fully understood. We investigate the evolution of
hierarchical triples in which the stars of the inner binary experience
chemically homogeneous evolution (CHE), particularly to understand the role of
the tertiary star in the formation of gravitational-wave (GW) sources. We use
the triple-star rapid population synthesis code TRES to determine the evolution
of these systems at two representative metallicities: $Z = 0.005$ and $Z =
0.0005$. About half of all triples harbouring a CHE inner binary (CHE triples)
experience tertiary mass transfer (TMT) episodes, an event which is rare for
classically evolving stars. In the majority of TMT episodes, the inner binary
consists of two main-sequence stars (58-60 per cent) or two black holes (BHs,
24-31 per cent). Additionally, we explore the role of von Zeipel-Lidov-Kozai
(ZLK) oscillations for CHE triples. ZLK oscillations can result in eccentric
stellar mergers or lead to the formation of eccentric compact binaries in
systems with initial outer pericenters smaller than $\sim$ 1200 $R_{\odot}$.
Approximately 24-30 per cent of CHE triples form GW sources, and in 31 per cent
of these, the tertiary star plays a significant role and leads to
configurations that are not predicted for isolated binaries. We conclude that
the evolution of CHE binaries can be affected by a close tertiary companion,
resulting in astronomical transients such as BH-BH binaries that merge via GW
emission orders of magnitude faster than their isolated binary counterparts and
tertiary-driven massive stellar mergers. | Andris Dorozsmai, Silvia Toonen, Alejandro Vigna-Gómez, Selma E. de Mink, Floris Kummer | 2023-07-10T18:00:04Z | http://arxiv.org/abs/2307.04793v2 | # Stellar triples with chemically homogeneously evolving inner binaries
###### Abstract
Observations suggest that massive stellar triples are common. However, their evolution is not yet fully understood. We investigate the evolution of hierarchical triples in which the stars of the inner binary experience chemically homogeneous evolution (CHE), particularly to understand the role of the tertiary star in the formation of gravitational-wave (GW) sources. We use the triple-star rapid population synthesis code TRES to determine the evolution of these systems at two representative metallicities: \(Z=0.005\) and \(Z=0.0005\). About half of all triples harbouring a CHE inner binary (CHE triples) experience tertiary mass transfer (TMT) episodes, an event which is rare for classically evolving stars. In the majority of TMT episodes, the inner binary consists of two main-sequence stars (58-60 per cent) or two black holes (BHs, 24-31 per cent). Additionally, we explore the role of von Zeipel-Lidov-Kozai (ZLK) oscillations for CHE triples. ZLK oscillations can result in eccentric stellar mergers or lead to the formation of eccentric compact binaries in systems with initial outer pericenters smaller than \(\sim 1200\)\(R_{\odot}\). Approximately 24-30 per cent of CHE triples form GW sources, and in 31 per cent of these, the tertiary star plays a significant role and leads to configurations that are not predicted for isolated binaries. We conclude that the evolution of CHE binaries can be affected by a close tertiary companion, resulting in astronomical transients such as BH-BH binaries that merge via GW emission orders of magnitude faster than their isolated binary counterparts and tertiary-driven massive stellar mergers.
keywords: gravitational waves, stars: evolution, stars: massive, stars:black holes, binaries:close
## 1 Introduction
An accurate and detailed understanding of the evolution of massive stars is essential for various important open questions in astrophysics, such as nucleosynthesis of heavy elements, the origin of supernova events, gamma-ray bursts, and GW sources (e.g. Langer, 2012). Observational evidence shows that the fraction of stars in hierarchical triples or in higher-order multiple-stellar systems increases with the mass of the primary star (Evans, 2011; Sana et al., 2014). In particular, Moe & Di Stefano (2017) showed that the majority of O-type stars reside either in triple or quadruple stellar systems. This implies that in order to understand the evolution of massive stars, and to correctly interpret the various astrophysical phenomena related to them, we need to consider stellar interactions in hierarchical triples.
The evolution of hierarchical triples involves a complex interplay between three-body dynamics, stellar evolution, and stellar interactions (e.g. Toonen et al., 2016). Three-body interactions can result in e.g. ZLK oscillations (von Zeipel, 1910; Lidov, 1962; Kozai, 1962; Naoz, 2016), a secular effect where the eccentricity of the inner binary can be significantly enhanced as a result of dynamics. ZLK oscillations coupled with various dissipative processes (e.g., tides, GWs) can shrink the orbit (e.g. Mazeh & Shaham, 1979; Fabrycky & Tremaine, 2007; Thompson, 2011) and prompt the merger of the inner binary (e.g. Perets & Fabrycky, 2009; Vigna-Gomez et al., 2022). These types of mergers can result in astronomical transient events such as Type Ia supernovae (e.g. Katz & Dong, 2012; Hamers et al., 2013; Toonen et al., 2018; Swaruba Rajamuthukumar et al., 2022) or double compact object mergers (e.g. Antognini et al., 2014; Antonini et al., 2017; Rodriguez & Antonini, 2018; Hamers & Thompson, 2019; Fragione & Loeb, 2019; Stegmann et al., 2022). Furthermore, stellar evolution can affect the orbital dynamics of the triple. For example, radial expansion and mass loss can prompt ZLK oscillations or dynamical instabilities (Perets & Kratter, 2012; Shappee & Thompson, 2013; Michaely & Perets, 2014; Toonen et al., 2022; Hamers et al., 2022).
Population synthesis studies of stellar triples show that the inner binaries in hierarchical triples have increased stellar interactions compared to isolated binaries (e.g. Toonen et al., 2020; Stegmann et al., 2022; Hamers et al., 2022). Similarly, tertiary-driven dynamics could play an essential role in double compact object mergers. While GW sources detected by the LIGO/Virgo collaboration (LVC, e.g. Abbott et al., 2019; Abbott et al., 2019, 2021; The LIGO Scientific Collaboration et al., 2021) have been studied in the context of stellar triples, this has been done so far only in a limited parameter space. For example, for systems in which the inner binary is wide enough such that interaction between the two stars in the form of mass exchange can be neglected (e.g. Silsbee & Tremaine, 2017; Antonini et al., 2017; Rodriguez & Antonini, 2018; Fragione & Loeb, 2019; Vigna-Gomez et al., 2021; Martinez et al., 2022), or in which the stars of the inner binary merge during the main sequence (Stegmann et al., 2022). There are still major uncertainties and a need to explore and to
understand the population of merging binary BHs from hierarchical triples.
In this paper, we focus on the evolution of hierarchical triples in which the stars of the inner binaries are chemically homogeneously evolving. CHE stars have been discussed in the context of rapidly-rotating stars (Maeder, 1987; Yoon & Langer, 2005; Yoon et al., 2006), which can experience enhanced mixing during the MS stage. This mixing allows hydrogen-rich matter in the radiative envelope to be deposited into the convective core, where it is fused to helium. At the same time, helium is mixed throughout the star. This prevents the build-up of a chemical gradient inside the star and the classical core-envelope structure. As a result, the stars remain very compact over their lifetime. CHE has been proposed to occur in very close binaries where the tidal deformation of both stars is strong and they are forced to rotate rapidly (de Mink et al., 2009; Song et al., 2016). More recently, CHE binaries received renewed interest as they have been proposed as a new pathway to form BH binaries that can merge within the age of the universe (de Mink & Mandel, 2016; Mandel & de Mink, 2016; Marchant et al., 2016; du Buisson et al., 2020; Riley et al., 2021). Recently, Vigna-Gomez et al. (2021) studied triples with CHE inner binaries in the context of sequential merging BH-BHs with masses that fall in the pair-instability mass gap. Specifically, they considered sequential mergers of hierarchical co-planar triples, a simplified approach which neglected three-body dynamics. In this paper, we remove the constraints of co-planarity and explore, for the first time, the evolution of massive stellar triples with CHE inner binaries in the entire parameter space. As isolated CHE binaries are known to be promising GW progenitors, we will mostly focus on the role of the tertiary star in the evolution of the inner binary in the context of GW astronomy.
This paper is structured as follows. In section 2, we introduce \(\tt{TRES}\), the triple evolutionary code we use in this study, and the adaptations we have made to model CHE and contact binaries. In section 3, we discuss the results of our population synthesis in \(\tt{TRES}\) and identify the most important evolutionary channels. In section 4, we show that the initial parameters of the tertiary star are sufficient to predict the evolutionary channel of each system. In section 5, we use analytical and numerical methods to explore our synthetic population of stellar triples in the context of GW sources. Finally, we discuss the main difference between the evolution of triples with and without CHE stars in their inner binary.
## 2 Methodology
We use \(\tt{TRES}\) to simulate the evolution of our hierarchical triples (see Toonen et al., 2016, for a detailed description of the code). \(\tt{TRES}\) couples secular dynamics of stellar triples with stellar evolution, and takes into account additional physical processes such as stellar interactions and dissipative processes.
The evolution of each star is determined by using the fitting formulae of Hurley et al. (2000) to the stellar tracks of Pols et al. (1998), as implemented in the rapid binary synthesis code SeBa (Portegies Zwart & Verbunt, 1996; Toonen et al., 2012), while interactions between the stars are determined by \(\tt{TRES}\). \(\tt{TRES}\) treats three-body dynamics in the following way. For secular evolution, we include secular three-body dynamics (subscript '3b') including quadrupole (Harrington, 1968) and octupole terms (Ford et al., 2004 with corrections of Naoz et al., 2013). Regarding the additional physical processes, we take into account: i) general relativistic effects (GR) and GW emission (subscript 'GR' Peters, 1964; Blaes et al., 2002), ii) tidal friction (subscript 'TF' Hurley et al., 2002), iii) the effects of stellar winds under the assumptions of fast, adiabatic wind at the mass loss rate provided by SeBa (subscript 'wind'), iv) precession due to ZLK, GR, tides (subscript 'tides' Smeyers & Willems, 2001) and intrinsic stellar rotation (subscript 'rotate' Fabrycky & Tremaine, 2007), and v) the change in the stellar rotation due to stellar evolution based on spin angular momentum conservation (subscript 'I'). This gives rise to a set of first-order ordinary differential equations that are solved numerically. These equations are:
\[\left\{\begin{array}{lcl}\dot{a}_{\rm in}&=&\dot{a}_{\rm in,GR}+\dot{a}_{\rm in,TF}+\dot{a}_{\rm in,wind}\\ \dot{a}_{\rm out}&=&\dot{a}_{\rm out,GR}+\dot{a}_{\rm out,TF}+\dot{a}_{\rm out,wind}\\ \dot{e}_{\rm in}&=&\dot{e}_{\rm in,3b}+\dot{e}_{\rm in,GR}+\dot{e}_{\rm in,TF}\\ \dot{e}_{\rm out}&=&\dot{e}_{\rm out,3b}+\dot{e}_{\rm out,GR}+\dot{e}_{\rm out,TF}\\ \dot{g}_{\rm in}&=&\dot{g}_{\rm in,3b}+\dot{g}_{\rm in,GR}+\dot{g}_{\rm in,tides}+\dot{g}_{\rm in,rotate}\\ \dot{g}_{\rm out}&=&\dot{g}_{\rm out,3b}+\dot{g}_{\rm out,GR}+\dot{g}_{\rm out,tides}+\dot{g}_{\rm out,rotate}\\ \dot{h}_{\rm in}&=&\dot{h}_{\rm in,3b}\\ \dot{\theta}&=&\dfrac{-\dot{J}_{b,\rm in}(J_{b,\rm in}+J_{b,\rm out}\theta)-\dot{J}_{b,\rm out}(J_{b,\rm out}+J_{b,\rm in}\theta)}{J_{b,\rm in}J_{b,\rm out}}\\ \dot{\Omega}_{1}&=&\dot{\Omega}_{1,\rm TF}+\dot{\Omega}_{1,\rm I}+\dot{\Omega}_{1,\rm wind}\\ \dot{\Omega}_{2}&=&\dot{\Omega}_{2,\rm TF}+\dot{\Omega}_{2,\rm I}+\dot{\Omega}_{2,\rm wind}\\ \dot{\Omega}_{3}&=&\dot{\Omega}_{3,\rm TF}+\dot{\Omega}_{3,\rm I}+\dot{\Omega}_{3,\rm wind}\end{array}\right. \tag{1}\]
where \(a\), \(e\), \(g\), \(h\) and \(J_{b}\) represent the semimajor axis, eccentricity, argument of pericenter, line of ascending nodes, and the orbital angular momentum for the inner (subscript 'in') and outer (subscript 'out') orbit. The dots represent time derivatives. Lastly, \(\theta\equiv\cos(i)\), where \(i\) is the mutual inclination between the inner and outer orbit, and \(\Omega_{1},\Omega_{2},\Omega_{3}\) are the spin frequencies of the primary, secondary and tertiary star, respectively. By definition, the primary and secondary stars are the stars in the inner binary, with the primary star initially more massive than the secondary star, and the tertiary star orbits the inner binary.
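To illustrate the structure of equation 1, in which every orbital element evolves as a sum of independent physical contributions, the following minimal Python sketch assembles such a system and integrates it with a standard ODE solver. It is not the TRES implementation: the individual rate terms are arbitrary placeholders, and only a reduced subset of the variables is evolved.

```python
from scipy.integrate import solve_ivp

# State vector: [a_in, a_out, e_in, e_out] -- a reduced subset of equation 1,
# for illustration only; TRES evolves the full set including g, h, theta and Omega.
def rates(t, y):
    a_in, a_out, e_in, e_out = y
    # Each element evolves as a sum of independent physical contributions.
    da_in = adot_gr(a_in, e_in) + adot_tf(a_in, e_in) + adot_wind(a_in)
    da_out = adot_gr(a_out, e_out) + adot_tf(a_out, e_out) + adot_wind(a_out)
    de_in = edot_3b(e_in) + edot_gr(e_in) + edot_tf(e_in)
    de_out = edot_3b(e_out) + edot_gr(e_out) + edot_tf(e_out)
    return [da_in, da_out, de_in, de_out]

# Placeholder prescriptions with arbitrary functional forms, so the sketch runs
# end to end; in TRES each term is a physical prescription (GR, tidal friction,
# winds, secular three-body dynamics).
def adot_gr(a, e):  return -1e-12 * a / (1.0 - e**2)
def adot_tf(a, e):  return -1e-13 * a * e**2
def adot_wind(a):   return 1e-12 * a
def edot_3b(e):     return 1e-11 * e * (1.0 - e)
def edot_gr(e):     return -1e-13 * e
def edot_tf(e):     return -1e-13 * e

y0 = [22.4, 421.0, 0.0, 0.3]            # a_in, a_out (arbitrary units), e_in, e_out
sol = solve_ivp(rates, (0.0, 1e6), y0, rtol=1e-8)
print(sol.y[:, -1])                     # state at the final time
```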
We highlight three aspects of the orbital evolution of hierarchical triples that are particularly relevant for the systems we study in this paper. Firstly, if the apsidal precession of the inner binary due to short range forces, such as tides (\(\dot{g}_{\rm in,tides}\)) and GR effects (\(\dot{g}_{\rm in,GR}\)), occurs on a much shorter timescale than the precession due to three-body dynamics (\(\dot{g}_{\rm in,3b}\)), ZLK oscillations will be quenched (see e.g. Holman et al., 1997; Eggleton & Kiseleva-Eggleton, 2001; Blaes et al., 2002; Fabrycky & Tremaine, 2007; Thompson, 2011; Dong et al., 2014; Liu et al., 2015; Petrovich, 2015; Anderson et al., 2017). The timescale of ZLK oscillations can be approximated as (e.g. Innanen et al., 1997; Holman et al., 1997; Kinoshita & Nakai, 1999):
\[t_{\rm ZLK}=\left(\frac{M_{1}+M_{2}}{GM_{\rm out}^{2}}\right)^{1/2}\left(\frac{a _{\rm out}}{a_{\rm in}^{1/2}}\right)^{3}(1-e_{\rm out}^{2})^{3/2}. \tag{2}\]
The timescale related to the apsidal precession due to tides is (e.g. Smeyers & Willems, 2001; Liu et al., 2015):
\[t_{\rm tides}=\left(\frac{M_{1}}{15k_{\rm am}\mu_{\rm in}^{1/2}M_{2}}\right)\left(\frac{a_{\rm in}^{13/2}}{R_{1}^{5}}\right)\left(\frac{(1-e_{\rm in}^{2})^{5}}{1+\frac{3}{2}e_{\rm in}^{2}+\frac{1}{8}e_{\rm in}^{4}}\right), \tag{3}\]
where \(k_{\rm am}\) is the apsidal motion constant, which we assume to be \(0.0144\) for MS and helium stars, \(\mu_{\rm in}=G(M_{1}+M_{2})\) is the standard gravitational parameter of the inner binary, and \(R_{1}\) is the radius of the primary star. The timescale related to precession due to general relativistic effects is (e.g. Misner et al., 1973; Blaes et al., 2002; Miller & Hamilton, 2002):
\[t_{\rm GR}=\frac{c^{2}}{3\mu_{\rm in}^{3/2}}a_{\rm in}^{5/2}(1-e_{\rm in}^{2}). \tag{4}\]
If \(t_{\rm ZLK}\gg\min(t_{\rm GR},t_{\rm tides})\), then three-body dynamics are suppressed. If the timescales are comparable, then the maximum eccentricity induced by the ZLK oscillations is diminished. In principle, rotation-induced oblateness in the inner binary also induces apsidal precession (\(\dot{g}_{\rm in,rotate}\), see e.g. Fabrycky & Tremaine, 2007). However, as long as the rotational period of the inner stars is not shorter than the orbital period (which is true for all systems considered here), \(\dot{g}_{\rm in,tides}\gg\dot{g}_{\rm in,rotate}\) and therefore precession due to stellar rotation does not play a role in suppressing three-body dynamics (Liu et al., 2015).
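As an illustration of this timescale comparison, the sketch below evaluates equations 2-4 for an example configuration and checks whether ZLK oscillations are expected to be quenched. The constants are written out in SI units and the input values are arbitrary examples, not results from our simulations.

```python
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m s^-1
MSUN, RSUN = 1.989e30, 6.957e8

def t_zlk(m1, m2, m_out, a_in, a_out, e_out):
    """Equation 2: characteristic ZLK timescale."""
    return math.sqrt((m1 + m2) / (G * m_out**2)) \
           * (a_out / math.sqrt(a_in))**3 * (1.0 - e_out**2)**1.5

def t_tides(m1, m2, a_in, e_in, r1, k_am=0.0144):
    """Equation 3: apsidal precession timescale due to tides."""
    mu = G * (m1 + m2)
    f_e = 1.0 + 1.5 * e_in**2 + 0.125 * e_in**4
    return (m1 / (15.0 * k_am * math.sqrt(mu) * m2)) \
           * (a_in**6.5 / r1**5) * (1.0 - e_in**2)**5 / f_e

def t_gr(m1, m2, a_in, e_in):
    """Equation 4: apsidal precession timescale due to GR."""
    mu = G * (m1 + m2)
    return c**2 / (3.0 * mu**1.5) * a_in**2.5 * (1.0 - e_in**2)

# Example: two 70 Msun helium stars (compact radii) with a 32 Msun tertiary.
m1 = m2 = 70 * MSUN
m3 = 32 * MSUN
a_in, a_out = 25 * RSUN, 400 * RSUN
tz = t_zlk(m1, m2, m3, a_in, a_out, e_out=0.0)
tt = t_tides(m1, m2, a_in, e_in=0.0, r1=1.0 * RSUN)
tg = t_gr(m1, m2, a_in, e_in=0.0)
print(f"t_ZLK = {tz:.3e} s, t_tides = {tt:.3e} s, t_GR = {tg:.3e} s")
# The paper's condition is t_ZLK >> min(t_GR, t_tides); a simple ">" is used here.
print("ZLK quenched" if tz > min(tg, tt) else "ZLK active")
```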
Secondly, octupole terms in the three-body dynamics are typically negligible for CHE triples, as the mass ratio of the inner binary is always very close to one. Finally, we estimate the time it takes for the inner binary to merge due to GWs following Peters (1964), if the tertiary is dynamically decoupled from the inner binary. If ZLK oscillations are still relevant during the inspiral phase, we follow the approximation of Miller & Hamilton (2002):
\[t_{\rm GW}\approx t_{\rm GW,Peters}(a_{\rm in},e_{\rm in,max})(1-e_{\rm in,max})^{-1/2}, \tag{5}\]
where \(t_{\rm GW}\) is the time required for the merger, \(t_{\rm GW,Peters}\) is the time to merger based on the relation of Peters (1964), \(e_{\rm in,max}\) is the maximum eccentricity reached during ZLK oscillations and \(a_{\rm in}\) is the initial inner semimajor axis. The approximation in equation 5 is based on Wen (2003) and it neglects the effects of precession due to GR. When the latter is taken into account, Thompson (2011) finds that equation 5 underestimates the actual merger timescale typically by a factor of 2-3.
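The sketch below shows how equation 5 can be evaluated in practice. As a simplifying assumption, \(t_{\rm GW,Peters}(a,e)\) is approximated here by the circular-orbit coalescence time of Peters (1964) rescaled by \((1-e^{2})^{7/2}\); this is a common shortcut rather than the full Peters integral, and the example numbers are illustrative only.

```python
import math

G = 6.674e-11
c = 2.998e8
MSUN, RSUN = 1.989e30, 6.957e8
YR = 3.156e7

def t_peters_circular(a, m1, m2):
    """Peters (1964) coalescence time of a circular binary (SI units)."""
    return 5.0 * c**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))

def t_peters(a, e, m1, m2):
    """Approximate eccentric coalescence time: T_c(a) * (1 - e^2)^{7/2}."""
    return t_peters_circular(a, m1, m2) * (1.0 - e**2)**3.5

def t_gw_zlk(a_in, e_max, m1, m2):
    """Equation 5: merger time when ZLK oscillations drive the eccentricity."""
    return t_peters(a_in, e_max, m1, m2) * (1.0 - e_max)**-0.5

m1 = m2 = 30 * MSUN
a_in = 50 * RSUN
for e_max in (0.0, 0.7, 0.94):
    print(f"e_max = {e_max:.2f}: t_GW ~ {t_gw_zlk(a_in, e_max, m1, m2) / YR:.3e} yr")
```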
### Modelling of chemically homogeneous evolution
We follow Riley et al. (2021) in order to incorporate CHE stars in TRES. That means that we assume a star evolves chemically homogeneously if the angular frequency of the spin of the star is above a certain critical value, i.e. \(\omega_{\rm star}>\omega_{\rm CHE,crit}\). Riley et al. (2021) provides a fit to this critical value based on MESA (Paxton et al., 2011) models at different masses and metallicities. In order to determine whether a star evolves chemically homogeneously, we check whether our simulated star is spinning above \(\omega_{\rm CHE,crit}\) at every timestep. If a star meets this criterion, we do not evolve its radius during that timestep. We assume that, by the end of core hydrogen burning, the star forms a helium star with a mass \(M_{\rm He,ZAMS}=M_{\rm TAMS}\), where \(M_{\rm He,ZAMS}\) is the initial mass of the helium star and \(M_{\rm TAMS}\) is the terminal age main sequence mass of the star. With these assumptions, CHE stars experience an instantaneous drop in radii at the end of their MS phase (compare main sequence stellar evolution with helium star evolution in Hurley et al., 2000). This is a simplification of the results of detailed simulations of CHE stars, which suggest a gradual contraction of the radius during the MS (e.g. Maeder, 1987). If a CHE star loses angular momentum (e.g. due to stellar winds), its rotational frequency decreases. If the frequency reduces to below the critical value, we assume the evolution of the star transitions back to the classical non-CHE case.
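A minimal sketch of the per-timestep CHE check described above. The function `omega_che_crit` is a hypothetical placeholder: the actual fit of Riley et al. (2021) depends on mass and metallicity and is not reproduced here.

```python
import math

def omega_che_crit(mass_msun, metallicity):
    """Hypothetical stand-in for the Riley et al. (2021) fit to the critical spin
    frequency for CHE; the metallicity dependence is ignored in this placeholder."""
    return 1e-5 * (40.0 / mass_msun) ** 0.2     # rad/s, illustrative value only

def is_che(omega_star, mass_msun, metallicity):
    """Check applied at every timestep: CHE requires omega_star > omega_CHE,crit.
    If True, the stellar radius is frozen this timestep; if False, the star
    transitions back to classical (non-CHE) evolution."""
    return omega_star > omega_che_crit(mass_msun, metallicity)

# Example: a tidally locked 70 Msun star in a 1.5-day orbit (spin synchronised
# with the orbital frequency, as assumed for the inner binaries at ZAMS).
omega_orb = 2.0 * math.pi / (1.5 * 86400.0)     # rad/s
print(is_che(omega_orb, 70.0, 0.005))
```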
For simplicity, we only consider systems in which the stars of the inner binary are CHE from the zero-age main sequence (ZAMS). Stars that do not evolve chemically homogeneously from ZAMS could, in theory, become CHE if they attained a sufficiently high spin frequency before a significant chemical gradient is built up in their interior. This can be achieved, for example, if a star is spun up by accretion during a mass transfer event (e.g., Cantiello et al., 2007; Ghodla et al., 2022). We neglect such systems in this study.
### Contact binaries
We follow the implementation of Riley et al. (2021) for modelling contact binaries (which is based on the models of Marchant et al., 2016). We assume that contact binaries, i.e. binaries in which both stars fill their Roche-lobes, can maintain co-rotation and consequently survive the contact phase without merging as long as neither of the stars fills the outer Lagrangian points (L2 and L3). For contact binaries, Marchant et al. (2016) finds that mass is transferred between the two stars back and forth until their masses equalise. We follow Marchant et al. (2016) and approximate the L2 point as
\[\frac{R_{\rm L2}-R_{\rm RL,2}}{R_{\rm RL,2}}=0.299\tan^{-1}(1.84\,q^{0.397}), \tag{6}\]
where \(R_{\rm RL,2}\) is the Roche-lobe radius of the secondary star, which we approximate following Eggleton (1983).
If the stars in the inner binary are in contact but without filling their L2 points, we assume that the masses of the binary equalise via a fully conservative mass transfer phase. We follow Riley et al. (2021) and assume this mass equalisation occurs instantaneously and readjust the orbit of the inner binary as (see, e.g. Soberman et al., 1997):
\[\frac{a_{\rm fin}}{a_{\rm init}}=\left(\frac{M_{\rm 1,init}M_{\rm 2,init}}{M_{\rm 1, fin}M_{\rm 2,fin}}\right)^{2}, \tag{7}\]
where \(a_{\rm init}\), \(a_{\rm fin}\) are the initial and the final orbital separation and \(M_{\rm 1,init}\), \(M_{\rm 2,init}\) are the initial masses of the primary and the secondary, respectively. The final masses are \(M_{\rm 1,fin}=M_{\rm 2,fin}=1/2\cdot(M_{\rm 1,init}+M_{\rm 2,init})\) by definition. The assumption of mass equalisation for contact binaries results in the prediction of the CHE channel leading to mostly equal-mass binary BH mergers (e.g. Marchant et al., 2016).
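The two steps described above, the L2-overflow check of equation 6 (with the Eggleton 1983 Roche-lobe radius) and the instantaneous mass equalisation of equation 7, can be sketched as follows. This is an illustrative outline, assuming the mass ratio in equation 6 is \(q=M_{2}/M_{1}\); it is not the TRES implementation, and the example numbers are arbitrary.

```python
import math

def roche_lobe_radius(a, m_self, m_other):
    """Eggleton (1983) Roche-lobe radius of the star with mass m_self."""
    q = m_self / m_other
    return a * 0.49 * q**(2.0 / 3.0) / (0.6 * q**(2.0 / 3.0) + math.log(1.0 + q**(1.0 / 3.0)))

def l2_radius(a, m2, m1):
    """Equation 6: L2 radius of the secondary (Marchant et al. 2016 fit),
    assuming q = m2/m1."""
    q = m2 / m1
    r_rl2 = roche_lobe_radius(a, m2, m1)
    return r_rl2 * (1.0 + 0.299 * math.atan(1.84 * q**0.397))

def equalise_masses(a_init, m1, m2):
    """Equation 7: fully conservative, instantaneous mass equalisation."""
    m_fin = 0.5 * (m1 + m2)
    a_fin = a_init * (m1 * m2 / (m_fin * m_fin))**2
    return a_fin, m_fin, m_fin

# Example contact system in solar units; r2 is an assumed secondary radius
# that overfills the Roche lobe but stays below L2 for these parameters.
a, m1, m2, r2 = 22.4, 70.0, 55.0, 9.5
if r2 > l2_radius(a, m2, m1):
    print("L2 overflow -> merger")
else:
    a_new, m1_new, m2_new = equalise_masses(a, m1, m2)
    print(f"survives contact: a = {a_new:.1f} Rsun, masses = {m1_new:.1f} Msun each")
```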
### Stellar winds
The mass loss rates of stellar winds and their effects on the evolution of the star are determined by SeBa (Hurley et al., 2000; Toonen et al., 2012), while the effects on the orbit of the triple are determined by TRES (equation 1). In this study, we use the same implementation of stellar winds for massive stars as in Dorozsmai & Toonen (2022) with one difference: the mass loss rates of helium stars and giants are calculated according to the empirical formula of Hamann et al. (1995) instead of Sander & Vink (2020).
For reference, we summarise the mass loss rate prescriptions used in this study. For MS stars, we follow Vink et al. (2001) if \(T_{\rm eff}\leq 50\) kK and Nieuwenhuijzen & de Jager (1990) if \(T_{\rm eff}>50\) kK. For evolved stars crossing the Hertzsprung gap and core helium burning (CHeB) stars, we follow Vink et al. (2001) if \(T_{\rm eff}\geq 8\) kK, or the maximum between Nieuwenhuijzen & de Jager (1990) and Reimers (1975) if \(T_{\rm eff}<8\) kK. For evolved stars beyond the Humphreys-Davidson limit, we assume \(\dot{M}_{\rm LBV}=1.5\cdot 10^{-4}\,M_{\odot}\,{\rm yr}^{-1}\) (Belczynski et al., 2010). For Asymptotic Giant Branch stars and double shell burning supergiants, we calculate the maximum between Nieuwenhuijzen & de Jager (1990), Reimers (1975) and Vassiliadis & Wood (1993). Finally, for helium stars we follow the empirical formula of Hamann et al. (1995), \(\dot{M}_{\rm WR}=0.5\cdot 10^{-13}\left(\frac{L}{L_{\odot}}\right)^{1.5}\left(\frac{Z}{Z_{\odot}}\right)^{0.86}\,M_{\odot}\,{\rm yr}^{-1}\), with a clumping factor of \(\eta=0.5\) from Hamann & Koesterke (1998) and a metallicity scaling of \(\dot{M}_{\rm WR}\sim Z^{0.86}\) (Vink & de Koter, 2005).
In order to compute the change in the orbit due to stellar winds, we assume stellar winds are spherically symmetric and fast compared to the orbital velocity; additionally, we neglect wind accretion by the companions. In that case the inner and the outer orbit of the triple
widen as
\[\left(\frac{a_{\rm final}}{a_{\rm init}}\right)_{\rm in}=\frac{M_{1,\rm init}+M_{2,\rm init}}{M_{1,\rm final}+M_{2,\rm final}}\,, \tag{8}\]
and
\[\left(\frac{a_{\rm final}}{a_{\rm init}}\right)_{\rm out}=\frac{M_{1,\rm init}+M_{2,\rm init}+M_{3,\rm init}}{M_{1,\rm final}+M_{2,\rm final}+M_{3,\rm final}}, \tag{9}\]
where subscripts 'init' and 'final' refer to properties before and after the stellar winds carried mass away from the stars in a given timestep. We assume that the eccentricity remains unchanged by stellar winds (Huang, 1956, 1963).
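Equations 8 and 9 amount to rescaling the semimajor axes by the ratio of total masses before and after a timestep of wind mass loss; a minimal sketch in solar units, with illustrative numbers:

```python
def widen_inner(a_in, m1_i, m2_i, m1_f, m2_f):
    """Equation 8: inner orbit widening due to fast, spherically symmetric winds."""
    return a_in * (m1_i + m2_i) / (m1_f + m2_f)

def widen_outer(a_out, m1_i, m2_i, m3_i, m1_f, m2_f, m3_f):
    """Equation 9: outer orbit widening; eccentricities are left unchanged."""
    return a_out * (m1_i + m2_i + m3_i) / (m1_f + m2_f + m3_f)

# Example: each inner star loses 5 Msun and the tertiary loses 1 Msun in a timestep.
print(widen_inner(25.0, 70.0, 70.0, 65.0, 65.0))               # ~26.9 Rsun
print(widen_outer(400.0, 70.0, 70.0, 32.0, 65.0, 65.0, 31.0))  # ~427 Rsun
```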
We neglect stellar wind accretion by the other stars in the triple system (see e.g. Bondi & Hoyle, 1944). Neglecting accretion is justified for line-driven winds due to their large terminal velocities (see e.g. Vink et al., 2001). The assumptions of a fast and spherically symmetric wind might not always be valid (e.g. Brookshaw & Tavani, 1993), and rapidly rotating stars might not have fully symmetric outflows (Georgy et al., 2011). Particularly, stellar winds in certain binary-configurations might even lead to orbital shrinking (Schroder et al., 2021).
### Remnant formation
The mass of the compact object remnant is computed based on the delayed supernova model from Fryer et al. (2012). This prescription gives the mass of the stellar remnant as a function of CO core mass, where the latter is determined in SeBa based on the fits of Hurley et al. (2000). The natal kick velocity for BHs is calculated as
\[v_{BH}=(1-f_{\rm b})\left(\frac{M_{NS}}{M_{BH}}\right)v_{\rm kick}, \tag{10}\]
where \(f_{\rm b}\) is the fallback fraction (Fryer et al., 2012), \(M_{\rm NS}\) is the canonical neutron star mass (\(M_{\rm NS}=1.4M_{\odot}\)) and \(v_{\rm kick}\) is a random kick velocity drawn from the distribution inferred by Verbunt et al. (2017) from proper motion measurements of pulsars. We determine the change in the inner and outer orbit due to the core collapse of any of the stars in the triple system based on the formalism developed in Pijloo et al. (2012).
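A sketch of the natal kick prescription of equation 10. The kick distribution of Verbunt et al. (2017) is not reproduced here; a single Maxwellian with a placeholder dispersion is used instead, purely for illustration.

```python
import numpy as np

M_NS = 1.4   # canonical neutron star mass [Msun]

def bh_natal_kick(m_bh, f_b, v_kick):
    """Equation 10: fallback- and mass-scaled natal kick of a black hole [km/s]."""
    return (1.0 - f_b) * (M_NS / m_bh) * v_kick

rng = np.random.default_rng(42)
# Placeholder: a single Maxwellian speed distribution (three Gaussian components
# with dispersion sigma, in km/s). The paper uses the Verbunt et al. (2017)
# distribution, whose parameters are not reproduced here.
sigma = 265.0
v_kick = np.linalg.norm(rng.normal(0.0, sigma, size=3))

print(bh_natal_kick(m_bh=20.0, f_b=0.8, v_kick=v_kick))
```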
Models of Fryer et al. (2012) predict that the most massive stars collapse directly (typically \(M_{\rm ZAMS}\gtrsim 40\,M_{\odot}\)), without any ejecta, and the only mass loss during the remnant formation is due to neutrino losses, which is assumed to be 10 per cent of the pre-core-collapse mass of the star. Additionally, we assume that the neutrino emission is spherically symmetric and does not impart a natal kick onto the BH. In this case, the orbit is only changed due to the instantaneous mass loss (e.g. via Blaauw kick, see Blaauw, 1961). We note that, if the pre-core-collapse orbit is circular, a Blaauw kick due to neutrino losses does not lead to a significant change in the inner orbital elements. However, this is no longer the case for eccentric pre-core-collapse orbits. In particular, if the core collapse occurs near the pericenter, the orbit can become significantly wider (e.g. Hills, 1983).
By the onset of core-oxygen burning, the core temperatures of the most massive stars can reach above \(T_{\rm core}\sim 3\times 10^{9}\,K\). Under these conditions, the emitted gamma-ray photons in the core are energetic enough to form electron-positron pairs. This leads to pair instability (see e.g. Fowler & Hoyle, 1964, Rakavy & Shaviv, 1967, Barkat et al., 1967, Fraley, 1968). Depending on the mass of the star, this instability can result in a pulsational pair instability supernova, in which the star experiences a series of pulsations leading to severe mass loss (PPISN, see e.g. Yoshida et al., 2016; Marchant et al., 2016; Woosley, 2017; Renzo et al., 2020), or a pair instability supernova, in which the star is completely disrupted and no remnant is formed (PISN, see e.g. Yoshida et al., 2016; Marchant et al., 2016; Woosley, 2017; Renzo et al., 2020). For the treatment of pair-instability in massive stars, we follow Stevenson et al. (2019). If the mass of the helium star pre-core-collapse is \(M_{\rm HE,pre-SN}\geq 35\,M_{\odot}\), the star is assumed to undergo PPISN, and its remnant mass is determined by the fitting formula of Stevenson et al. (2019), based on the detailed stellar simulations of Marchant et al. (2019). If \(60\leq M_{\rm HE,pre-SN}\leq 130\,M_{\odot}\), we assume the star undergoes PISN, and leaves no remnant behind. In principle, if \(M_{\rm HE,pre-SN}\geq 130\,M_{\odot}\), photodisintegration prevents the pair instability supernova from occurring and the star collapses directly into a BH (Bond et al., 1982, Woosley & Weaver, 1982, Heger & Woosley, 2002, du Buisson et al., 2020); however, this does not occur for any of our simulated systems.
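The pair-instability treatment described above reduces to a branching on the pre-core-collapse helium-star mass. The sketch below encodes that branching (with PPISN implied for the 35-60 \(M_{\odot}\) range); `ppisn_remnant_mass` is a placeholder for the Stevenson et al. (2019) fit, which is not reproduced here.

```python
def ppisn_remnant_mass(m_he_presn):
    """Placeholder for the Stevenson et al. (2019) PPISN remnant-mass fit."""
    return min(m_he_presn, 45.0)   # illustrative cap only, not the actual fit

def remnant_after_pair_instability(m_he_presn, m_remnant_delayed):
    """Branch on the pre-core-collapse helium-star mass [Msun].

    m_remnant_delayed is the remnant mass from the delayed Fryer et al. (2012)
    prescription, used when pair instability does not intervene."""
    if m_he_presn >= 130.0:
        return m_remnant_delayed                 # photodisintegration: direct collapse
    if m_he_presn >= 60.0:
        return 0.0                               # PISN: star completely disrupted
    if m_he_presn >= 35.0:
        return ppisn_remnant_mass(m_he_presn)    # PPISN: severe pulsational mass loss
    return m_remnant_delayed                     # below the pair-instability regime

for m_he in (30.0, 50.0, 90.0, 140.0):
    print(m_he, remnant_after_pair_instability(m_he, m_remnant_delayed=0.9 * m_he))
```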
### Tertiary mass transfer (TMT) episodes
If the tertiary star fills its Roche-lobe, it will transfer mass to the inner binary. There have been some efforts to study and model this process (de Vries et al., 2014; Leigh et al., 2020; Comerford & Izzard, 2020; Glanz & Perets, 2021; Soker & Bear, 2021; Moreno Mendez et al., 2022), but this complex scenario remains to be fully understood.
In order to calculate the Roche-lobe of the tertiary star, we assume the inner binary can be approximated as a point mass and estimate the Roche radius with the fitting formula of Eggleton (1983). This assumption is valid in the regime where the orbital separation of the outer star is much larger than that of the inner binary (e.g. \(a_{\rm out}\gg a_{\rm in}\)). TRES determines the stability of TMT based on extrapolating typical methods from binary star evolution, i.e. by using critical mass ratios (see e.g. Toonen et al., 2016). The mass ratio is defined as \(q=M_{\rm donor}/M_{\rm accretor}\), i.e. the ratio of the mass of the donor and the mass of the accretor at the onset of the mass transfer episode. The mass transfer phase is assumed to be dynamically unstable if the mass ratio of the system is above the critical mass ratio, i.e. \(q>q_{\rm crit}\). We obtain \(q_{\rm crit}\) for each stellar evolutionary stage from Hurley et al. (2002) and Claeys et al. (2014). We quote these values for the two most common donor types in our simulations (a complete description of our assumptions about \(q_{\rm crit}\) can be found in Toonen et al., 2016). These are \(q_{\rm crit}=3\) and \(q_{\rm crit}=(1.37+2[M_{\rm donor,core}/M_{\rm donor}]^{3})/2.13\) for Hertzsprung gap stars (i.e. hydrogen shell burning stars which have not regained thermal equilibrium yet) and core helium burning (CHeB) stars, respectively. The term in the square brackets is the core mass to total mass ratio of the donor. If this is \(\sim 0.45\) - \(0.65\), which is fairly typical for massive CHeB stars (Dorozsmai & Toonen, 2022), then \(q_{\rm crit}\approx 0.7\)-\(0.75\). This reflects the assumption made by Hurley et al. (2000) that CHeB stars tend to have deep convective envelopes (cf. Klencki et al., 2020), and are therefore more likely to experience unstable mass transfer episodes (e.g. Hjellming & Webbink, 1987).
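A compact sketch of the critical-mass-ratio stability check for a TMT episode, using only the two \(q_{\rm crit}\) expressions quoted above (other donor types are omitted):

```python
def q_crit(donor_type, core_mass_fraction=None):
    """Critical mass ratio q_crit = M_donor/M_accretor for the donor types quoted above."""
    if donor_type == "hertzsprung_gap":
        return 3.0
    if donor_type == "core_helium_burning":
        return (1.37 + 2.0 * core_mass_fraction**3) / 2.13
    raise ValueError("donor type not covered in this sketch")

def tmt_is_unstable(m_donor, m_inner_binary, donor_type, core_mass_fraction=None):
    """TMT is dynamically unstable if q = M_donor/(M_1+M_2) exceeds q_crit."""
    q = m_donor / m_inner_binary
    return q > q_crit(donor_type, core_mass_fraction)

# Example: a 40 Msun CHeB tertiary transferring mass onto a 60 Msun BH-BH inner binary.
print(tmt_is_unstable(40.0, 60.0, "core_helium_burning", core_mass_fraction=0.5))
```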
Stable TMT could be accompanied by the formation of a circumbinary disc or it could occur in a ballistic accretion fashion. These two types could lead to significantly different evolution of the inner orbit (de Vries et al., 2014). We assume that TMT occurs via ballistic accretion if \(a_{\rm in}(1+e_{\rm in})\geq R_{\rm cd}\) at the onset of the TMT phase, where \(R_{\rm cd}\) is the circumbinary disc radius, obtained by adapting the fitting formulas for mass transferring binaries of Lubow & Shu (1975) and Ulrich & Burger (1976) to triples:
\[R_{\rm cd}=0.0425\,a_{\rm out}(1-e_{\rm out})\left[\frac{1}{q_{\rm out}} \left(1+\frac{1}{q_{\rm out}}\right)\right]^{1/4}. \tag{11}\]
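The ballistic-accretion criterion can be written compactly; the sketch below evaluates equation 11 and compares it with the inner apocentre \(a_{\rm in}(1+e_{\rm in})\). The input values are illustrative only.

```python
def circumbinary_disc_radius(a_out, e_out, q_out):
    """Equation 11: radius at which the transferred material settles (same units as a_out)."""
    return 0.0425 * a_out * (1.0 - e_out) * ((1.0 / q_out) * (1.0 + 1.0 / q_out)) ** 0.25

def tmt_is_ballistic(a_in, e_in, a_out, e_out, q_out):
    """Ballistic accretion if the inner apocentre exceeds the circumbinary disc radius."""
    return a_in * (1.0 + e_in) >= circumbinary_disc_radius(a_out, e_out, q_out)

# Example: eccentric BH-BH inner binary with an evolved tertiary filling its Roche lobe.
print(tmt_is_ballistic(a_in=171.0, e_in=0.94, a_out=900.0, e_out=0.0, q_out=0.3))
```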
#### 2.5.1 TMT: Evolution of the inner orbit
If the tertiary star fills its Roche-lobe, TRES stops the simulation of the system. However, when discussing potential GW progenitors (Section 5), we determine the orbital evolution due to TMT by applying simplified assumptions, if the mass transfer episode is dynamically stable. In this subsection we describe our assumptions about the evolution of the inner orbit during a stable phase of TMT, while in subsection 2.5.2 we discuss the evolution of the outer orbit.
We distinguish three particular TMT configurations, based on the evolutionary stage of the inner binary and on whether or not the transferred mass forms a circumbinary disc around the inner binary:
1. an inner binary with compact objects and with ballistic accretion,
2. an inner binary with compact objects and with a circumbinary disc,
3. a non-compact inner binary.
(i) _An inner binary with compact objects and with ballistic accretion._ Hydrodynamical simulations of de Vries et al. (2014) showed that in the case of a TMT episode with ballistic accretion, the transferred mass eventually engulfs the inner binary and exerts friction on it. This leads to a scenario that could be considered similar to the common-envelope evolution of binaries (e.g. Paczynski, 1976; Ivanova et al., 2013), since in both cases drag forces exerted by a gaseous medium supplied from the donor star lead to the orbital shrinking of the binary. Inspired by this similarity, de Vries et al. (2014) applied a modified version of the \(\alpha\)-formalism (originally developed for common-envelope evolution, see e.g. Tutukov & Yungelson, 1979; de Kool et al., 1987; Dewi & Tauris, 2000) to model the inner binary evolution of triples experiencing TMT (see also Hamers et al., 2021). For configuration (i), we take the same approach.
Below we explain in detail how the post-mass-transfer inner orbit is determined based on this formalism. \(\Delta M_{\rm transf}\) is the mass that is transferred from the tertiary in a timestep \(\Delta t\). When \(\Delta M_{\rm transf}\) ends up encompassing the inner binary, it has a binding energy of \(E_{\rm bind}\). As the inner orbit shrinks due to the friction during the TMT episode, the orbital energy of the inner binary changes by \(\Delta E_{\rm orb}\). We assume that a fraction (\(\alpha_{\rm TMT}\)) of \(\Delta E_{\rm orb}\) is used to unbind \(\Delta M_{\rm transf}\). We can write an equation expressing the energy balance as:
\[\alpha_{\rm TMT}\Delta E_{\rm orb}=E_{\rm bind}, \tag{12}\]
with
\[\Delta E_{\rm orb}=\frac{GM_{1}M_{2}}{2a_{\rm in,fin}}-\frac{G\left(M_{1}+ \Delta M_{\rm transf}/2\right)\left(M_{2}+\Delta M_{\rm transf}/2\right)}{2a_{\rm in,init}}, \tag{13}\]
and
\[E_{\rm bind}=\frac{-G(M_{1}+M_{2})\Delta M_{\rm transf}}{\lambda_{\rm TMT}a_{\rm init }}, \tag{14}\]
where \(\lambda_{\rm TMT}\) is a parameter related to the structure of \(\Delta M_{\rm transf}\), parameterising its binding energy, \(a_{\rm in,init}\) is the initial orbital separation before \(\Delta M_{\rm transf}\) is transferred to the inner binary and \(a_{\rm in,fin}\) is the final orbital separation after \(\Delta M_{\rm transf}\) is expelled from the inner binary. We assume that the total mass transferred to the inner binary throughout the entire TMT episode equals to the mass of the hydrogen envelope of the tertiary \(M_{\rm out,env}\) (but see Laplace et al., 2020). Then assuming a constant \(\alpha_{\rm TMT}\) and \(\lambda_{\rm TMT}\), the orbit changes due to the entire TMT episode as:
\[\frac{a_{\rm in,fin}}{a_{\rm in,init}}=\frac{M_{1}\,M_{2}}{\frac{2\left(M_{1} +M_{2}\right)M_{\rm out,env}}{\alpha_{\rm TMT}\lambda_{\rm TMT}}+\left(M_{1}+ \frac{M_{\rm out,env}}{2}\right)\left(M_{2}+\frac{M_{\rm out,env}}{2}\right)}. \tag{15}\]
As both \(\alpha_{\rm TMT}\) and \(\lambda_{\rm TMT}\) are unknown, we combine them and try three different values: \(\alpha_{\rm TMT}\lambda_{\rm TMT}=0.05,\,0.5,\,5\). Here \(\alpha_{\rm TMT}\lambda_{\rm TMT}=5\) is the fiducial value used in Hamers et al. (2021), which is in good agreement with the hydrodynamical simulations of de Vries et al. (2014), in which the inner stars are on the MS during the TMT episode. We note that we neglect the possibility of a TMT episode with ballistic accretion transitioning to a TMT episode with a circumbinary disc. (A numerical sketch of equation 15 is given at the end of this subsection.)
Additionally, for configuration type (i), we assume that the inner binaries circularise as a result of the mass transfer phase (as \(a_{\rm in,new}=a_{\rm in}(1-e_{\rm in})\)). We note that this assumption might not be correct for highly eccentric inner binaries. For example, Glanz and Perets (2021) showed that binaries at the onset of common-envelope events with \(e\gtrsim 0.95\) might retain eccentricities as high as \(e\sim 0.2\).
(ii) _An inner binary with compact objects with circumbinary disc._ If a circumbinary disc is formed during a mass transfer phase towards an inner BH-BH binary, we assume that the orbit of the inner binary remains unchanged. The actual physics underlying such a process are very complex (see Lai and Munoz, 2022, for a review on circumbinary accretion from gaseous medium). The circumbinary disc may exert a torque on the inner binary and extract angular momentum from it, while the accreted matter can transfer angular momentum onto the inner binary. Furthermore, the circumbinary disc and the inner binary could be tidally distorted by the tertiary star. It is commonly assumed that circumbinary accretion of a BH-BH binary from a gaseous medium leads to the shrinking of its orbit due to the torques exerted by the circumbinary disc and due to dynamical friction of the gas (e.g. Bartos et al., 2017; Stone et al., 2017; Antoni et al., 2019; Tiede et al., 2020; Duffell et al., 2020; McKernan et al., 2020; Rozner and Perets, 2022). However, a consensus regarding this physical process is still missing, with some hydrodynamical simulations suggesting that accretion from a circumbinary disc could even lead to orbital widening instead of orbital decay (e.g. Munoz et al., 2019; Moody et al., 2019).
(iii) _A non-compact inner binary._ If the mass transfer occurs with an MS-MS accretor, we assume that this results in the merger of the inner binary. We make this assumption because these binaries have very short periods, a sizeable fraction of them are in contact, and they would most likely expand due to TMT and overfill their L2 point, which would lead to a merger (see section 5). As we discuss in section 5, we do not consider GW sources from those triple systems in which the TMT occurs towards a binary with evolved (i.e. non-MS), non-compact stars.
We do not model unstable phases of TMT (as we will show later, they are very rare among the systems we discuss in this paper). We note, however, that during this type of mass transfer episode, the outer orbital separation is predicted to rapidly decrease due to the common-envelope-like evolution in the triple system; this could result in a regime where the secular approximation for the triple is no longer valid (e.g. Glanz & Perets, 2021; Comerford & Izzard, 2020; Soker & Bear, 2021).
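As referenced above, here is a minimal numerical sketch of equation 15 for configuration (i), combined with the circularisation assumption \(a_{\rm in,new}=a_{\rm in}(1-e_{\rm in})\). The ordering adopted in the sketch (circularise first, then apply the energy balance) and the example values are assumptions made for illustration only.

```python
def inner_orbit_after_tmt(a_in_init, e_in, m1, m2, m_out_env, alpha_lambda=5.0):
    """Equation 15 plus circularisation: post-TMT inner semimajor axis for a
    ballistic-accretion TMT episode onto a compact inner binary (solar units).
    alpha_lambda is the combined free parameter alpha_TMT * lambda_TMT."""
    a_circ = a_in_init * (1.0 - e_in)            # circularisation assumption
    numerator = m1 * m2
    denominator = (2.0 * (m1 + m2) * m_out_env / alpha_lambda
                   + (m1 + 0.5 * m_out_env) * (m2 + 0.5 * m_out_env))
    return a_circ * numerator / denominator

# Example: a 30+30 Msun BH-BH binary engulfed by the 20 Msun envelope of the tertiary.
for al in (0.05, 0.5, 5.0):
    print(al, inner_orbit_after_tmt(171.0, 0.94, 30.0, 30.0, 20.0, alpha_lambda=al))
```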
#### 2.5.2 TMT: Evolution of the outer orbit
When determining the evolution of the outer orbit due to a stable phase of TMT, we apply the same method for all accretor types, irrespective of whether a circumbinary disc is formed. We calculate the evolution of the outer orbit during the TMT phase, based on the following relation:
\[\frac{\dot{a}_{\rm out}}{a_{\rm out}}=-2\frac{\dot{M}_{3}}{M_{3}}\left[1-\beta\frac{M_{3}}{M_{1}+M_{2}}-(1-\beta)\left(\gamma+\frac{1}{2}\right)\frac{M_{3}}{M_{\rm tot}}\right], \tag{16}\]
where \(\beta\) is the fraction of mass accreted by the inner binary, \(\gamma\) is the specific angular momentum lost from the system as a fraction of the specific angular momentum of the triple and \(\dot{M}_{3}\) is the mass transfer rate from the tertiary star. Equation 16 can be derived from angular momentum arguments. It is an adaptation of the relation describing the orbital evolution of a circular, mass transferring binary comprised of point particles (see e.g. Soberman et al., 1997), applied to a triple experiencing a TMT episode. This adaptation is valid, if the tertiary star is sufficiently far away from the inner binary, such that the inner binary can be treated as a point particle with a mass of \(M_{1}+M_{2}\). We assume that eventually all the transferred mass is isotropically expelled from the triple (\(\beta=0\)), from near the inner binary. This expelled matter thus carries away a specific angular momentum that is equal to that of the inner binary (\(\gamma=M_{3}/(M_{1}+M_{2})\), see also Hamers et al., 2021, for a similar approach). In this case equation 16 can be expressed as
\[\frac{a_{\rm out,fin}}{a_{\rm out,ini}}=\frac{M_{\rm tot,init}}{M_{\rm tot, fin}}\left(\frac{M_{3,\rm init}}{M_{3,\rm fin}}\right)^{2}\exp\left(2\frac{M_{3,\rm fin}-M_{3,\rm init }}{M_{1}+M_{2}}\right). \tag{17}\]
In the case of BH-BH inner accretors, these assumptions might be valid, as the accretion rate of BHs might be capped by the Eddington limit, and most of the mass could indeed be expelled from the system, for example in the form of a jet (e.g. King et al., 2000; van den Heuvel et al., 2017). On the other hand, MS stars are likely to accrete more efficiently, and therefore \(\beta=0\) might no longer be a good approximation.
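Equation 17 in code form, under the assumption \(\beta=0\) discussed above (fully non-conservative TMT with isotropic re-emission of the transferred mass from near the inner binary); the example values are illustrative.

```python
import math

def outer_orbit_after_tmt(a_out_init, m1, m2, m3_init, m3_fin):
    """Equation 17: outer semimajor axis after a fully non-conservative TMT episode."""
    m_in = m1 + m2
    m_tot_init = m_in + m3_init
    m_tot_fin = m_in + m3_fin
    return (a_out_init * (m_tot_init / m_tot_fin)
            * (m3_init / m3_fin) ** 2
            * math.exp(2.0 * (m3_fin - m3_init) / m_in))

# Example: the tertiary sheds a 20 Msun envelope while transferring mass to a
# 60 Msun inner binary; all transferred mass is re-emitted from the system.
print(outer_orbit_after_tmt(a_out_init=900.0, m1=30.0, m2=30.0, m3_init=32.0, m3_fin=12.0))
```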
### Initial conditions
We sample \(10^{5}\) triples at two representative (moderate and low) metallicities: Z = 0.005 and Z = 0.0005. We simulate each hierarchical triple from ZAMS. After drawing the parameters for a given triple system, we further check whether it is dynamically stable (based on the criterion of Mardling & Aarseth, 2001) and whether the stars in the inner binary are CHE at ZAMS. If either of the two criteria is not met, we do not evolve the triple system and only take it into account for the normalisation of event rate calculations. We terminate the simulation of a triple system when a Hubble time (assumed to be 13.5 Gyrs) has passed, when the tertiary star fills its Roche lobe, when a merger or a dynamical instability occurs, or when any of the stars becomes unbound from the triple. We also stop the simulation if any of the stars in the inner binary transitions back from CHE to classical evolution. That is, we only consider triples in which the stars of the inner binary evolve chemically homogeneously throughout their entire MS lifetimes. We refer to this population as the _CHE triple population_.
In this study, we motivate the choice of the initial distributions of the parameters of the inner binaries based on recent surveys of massive binaries (e.g. Sana et al., 2012; Kobulnicky et al., 2014). In such surveys, a possible tertiary companion is not always unequivocally identified and therefore it is not clear whether the inferred distributions also hold for triples or only for isolated binaries.
We assume the ZAMS mass of the primary star (\(M_{1,\rm ZAMS}\)) follows the power-law mass distribution of Kroupa (2001), i.e. \(N\propto M_{\rm ZAMS}^{-2.3}\) for \(M_{\rm ZAMS}\geq 0.5\,M_{\odot}\) and \(N\propto M_{\rm ZAMS}^{-1.3}\) for \(M_{\rm ZAMS}<0.5\,M_{\odot}\). We sample \(M_{1,\rm ZAMS}\) from a mass range of 20-100 \(M_{\odot}\). The lower limit approximately coincides with the lowest initial mass at which CHE is still possible in a tidally locked binary (e.g. Riley et al., 2021), while the upper limit is roughly the maximum mass at which the stellar tracks used in TRES are still reasonably accurate. We assume a flat inner mass-ratio (i.e. \(q_{\rm in,ZAMS}=M_{2,\rm ZAMS}/M_{1,\rm ZAMS}\)) distribution, which is in broad agreement with Sana et al. (2012). We restrict the range of \(q_{\rm in,ZAMS}\) to 0.7-1 given that inner binaries in which both of the stars are chemically homogeneously evolving and have \(q_{\rm in}\leq 0.7\) would merge early during the MS (where we found the lower limit of 0.7 from our simulations). We sample the inner semimajor axis from a log-uniform distribution (Opik, 1924; and in broad agreement with Sana et al., 2012) in the range of 16 to 40 \(R_{\odot}\). We assume that the inner binaries are tidally locked at ZAMS. This has three implications: i) the inner binaries have circular orbits, ii) their rotational angular frequency is synchronised with the orbital angular frequency, and iii) the spins of the stars are aligned with the orbital angular momentum vector.
We draw the properties of the outer binary from the same distributions that we assume for the inner binaries, with the exception of outer eccentricities. Observations of hierarchical multiple systems of galactic solar-type stars support the assumption that the distributions of the initial parameters of the inner and the outer binaries are the same (Tokovinin, 2014; Tokovinin et al., 2006). We sample the outer semimajor axis from a log-uniform distribution in the range of 100 to \(10^{5}\)\(R_{\odot}\). We assume that the distribution of the outer mass ratio (i.e. \(q_{\rm out,ZAMS}=M_{\rm out,ZAMS}/(M_{1,\rm ZAMS}+M_{2,\rm ZAMS})\)) is flat over the range 0.1 to 1; furthermore, the mass of the tertiary is restricted to a range of 5-100 \(M_{\odot}\). We assume non-spinning tertiary stars. The eccentricities of the outer orbit are drawn from a thermal distribution (e.g. Heggie, 1975). The mutual inclination between the inner and outer orbit is assumed to be uniform in \(\cos(\rm i_{ZAMS})\), where \(\rm i_{ZAMS}\) is the initial inclination. The initial argument of the pericenter is assumed to be uniformly distributed between \(-\pi\) and \(\pi\).
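The Monte Carlo sampling described above can be summarised as in the sketch below (not the actual set-up script used with TRES). Inverse-transform sampling is used for the Kroupa slope and the thermal eccentricity distribution; the tertiary-mass restriction is implemented here by simple clipping, which is an assumption of the sketch rather than the procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_power_law(n, alpha, lo, hi):
    """Inverse-transform sampling of p(m) ~ m**alpha on [lo, hi] (alpha != -1)."""
    u = rng.uniform(size=n)
    k = alpha + 1.0
    return (lo**k + u * (hi**k - lo**k)) ** (1.0 / k)

n = 100_000
m1 = sample_power_law(n, -2.3, 20.0, 100.0)                   # primary ZAMS mass [Msun]
q_in = rng.uniform(0.7, 1.0, n)                               # flat inner mass ratio
m2 = q_in * m1
a_in = 10 ** rng.uniform(np.log10(16.0), np.log10(40.0), n)   # log-uniform [Rsun]
a_out = 10 ** rng.uniform(np.log10(100.0), np.log10(1e5), n)  # log-uniform [Rsun]
q_out = rng.uniform(0.1, 1.0, n)
m3 = np.clip(q_out * (m1 + m2), 5.0, 100.0)                   # tertiary mass restricted to 5-100 Msun
e_in = np.zeros(n)                                            # tidally locked -> circular inner orbits
e_out = np.sqrt(rng.uniform(size=n))                          # thermal distribution p(e) = 2e
cos_i = rng.uniform(-1.0, 1.0, n)                             # uniform in cos(i)
g_out = rng.uniform(-np.pi, np.pi, n)                         # argument of pericentre

print(m1.mean(), np.median(a_out), e_out.mean())
```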
In Section 5, we compare our CHE triple population to a CHE isolated binary population. To this end, we also perform population synthesis of isolated binaries with CHE stars. We sample \(10^{5}\) isolated binaries at Z = 0.005 and Z = 0.0005 and evolve them with TRES. We sample from the same initial distributions that we assumed for the inner binaries of our triple population. Similarly to the triple population, we discard systems that are not CHE at ZAMS and stop the simulation, if a Hubble time has passed, or if any of the stars in the binary transitions from CHE to classical evolution. We only analyse binaries, in which the stars remain CHE throughout their entire MS lifetime (hereafter _CHE binaries_).
Throughout the paper, we estimate birth rate and merger rate densities of different evolutionary channels (discussed in detail in appendix, section B). In order to determine each of these quantities, one must know how common single and multiple stellar systems are. We assume two different stellar populations, with different binary and triple fractions. In the first, we assume that about 73 per cent of massive stars are found in triples (with multiplicity fractions of \(f_{\rm single}=0.06\), \(f_{\rm binary}=0.21\), \(f_{\rm triple}=0.73\), see e.g. Moe & Di Stefano, 2017)1, whereas in the second test population, we assume there are no triples and about 70 per cent of massive stars are in binaries (\(f_{\rm single}=0.3\), \(f_{\rm binary}=0.7\), \(f_{\rm triple}=0\), see e.g. Sana et al., 2012)2.
Footnote 1: Moe & Di Stefano (2017) find that 73 per cent of O stars are either in triples or quadruples. Therefore \(f_{\rm triple}=0.73\) should be considered as a rough upper limit. We also note that Tokovinin et al. (2006) find that there is a strong correlation between the inner period and the triple multiplicity; among solar-type stellar systems, 96 per cent of the spectroscopic binaries with periods less than 3 days have a tertiary companion. Therefore CHE triples, which also have inner binaries with periods of a few days, could have exceptionally high triple fractions too.
## 3 Results of population synthesis simulations
In Table 1, we provide an overview of our sampled systems based on the evolutionary type of the inner binary. Out of our sampled population of triples, only about 10 per cent of the triples have an inner binary where both stars evolve chemically homogeneously from ZAMS (CHE at ZAMS triples, see Table 1), and we follow the further evolution only for these triples. About 75 per cent of CHE at ZAMS triples qualify as CHE triples and we focus on these systems for the majority of the paper. For the remaining 25 per cent, we distinguish three scenarios:
* **The inner stars transition to classical evolution**. As the orbit of the inner binary widens due to stellar winds, the rotational frequencies of the inner stars decrease, because the stellar tides enforce synchronization between the stellar spins and the (new, longer) orbital period. If the inner orbit widens sufficiently, the angular rotational frequencies of the inner stars drop below \(\omega_{\rm CHE,crit}\) and therefore these stars transition to classical evolution. This occurs only in our moderate metallicity model (15.5 per cent of all CHE at ZAMS triples at Z = 0.005 and 0 per cent at Z = 0.0005).
* **The inner binary does not survive the contact phase during the MS phase of the inner stars**. We assume a merger takes place when both stars overflow their outer Lagrangian point during the contact phase. This occurs during mass equalization in the contact phase or due to GW emission, both of which lead to shrinkage of the inner orbit. As orbital widening due to stellar winds prevents mergers, and winds are weaker at lower metallicity, such mergers occur more frequently at low metallicities (about 9 per cent of all CHE at ZAMS triples at Z = 0.005 and 17.5 per cent at Z = 0.0005).
* **Computational issue**. Finally, we note that the simulation of about 2 (6.7) per cent of CHE at ZAMS triples fails at Z = 0.005 (Z = 0.0005). This can occur because either no solution is found for the secular orbital evolution of the system, or the computation time exceeds the allowed CPU time (which is 5000 seconds per system).
### Main evolutionary outcomes
In Table 2, we show the most common evolutionary outcomes for CHE triples. We distinguish five different evolutionary channels:
* **No post-MS mass transfer phase:** During the MS, the inner binary may be in contact, but the system does not experience any other form of mass transfer. The inner binary eventually forms a BH-BH binary in all these triples.
* **Stellar merger of the inner binary due to ZLK:** Stellar merger occurs in the inner binary due to ZLK oscillations.
* **Tertiary mass transfer (TMT):** The tertiary star fills its Roche lobe.
* **Unbound systems:** This evolutionary outcome takes place if any of the stars becomes unbound from the system. This occurs when a stellar remnant is formed in the system, with three major subtypes: (i) natal kick imparted onto the remnant object during the SN explosion, (ii) instantaneous mass loss during pulsational PISN, or (iii) complete disruption of the star due to PISN.
* **Dynamical instability:** These systems eventually become dynamically unstable, where the secular approximation is no longer valid.
We discuss these channels in detail in sections 3.3 - 3.5.
### Examples for the evolution of a few selected systems
In the following, we present the evolution of a few selected systems from some of the channels introduced in section 3.1. In all of these example systems, the initial parameters of the inner binary are the same: \(M_{1,\rm ZAMS}=M_{2,\rm ZAMS}=70\,M_{\odot}\), \(a_{\rm in,ZAMS}=22.4\,R_{\odot}\). These have been specifically chosen such that this system would form a GW source via the binary CHE channel within the Hubble time, if it was an isolated binary (i.e. in about 8.9 Gyrs). The inner binary is tidally locked and therefore \(e_{\rm in,ZAMS}=0\). The stars of the inner binary are in contact from ZAMS and equalise in mass soon after ZAMS. The initial mutual inclination is \(\rm i_{\rm ZAMS}=90^{\circ}\) in all systems discussed below, which allows for ZLK oscillations to develop, unless they are suppressed by short range forces (see e.g. Liu et al., 2015).
In order to understand the evolutionary paths of CHE triples introduced below, we first show which configurations of CHE triples lead to efficient ZLK oscillations (see Fig. 1). We evolve the previously introduced CHE inner binary as an isolated system, and take four snapshots during different evolutionary stages (ZAMS, end of MS, at the onset of core collapse, and at the formation of an inner BH-BH binary). For each snapshot, we show a range of possible tertiary companions to this inner binary with different tertiary masses (\(M_{\rm out}\)) and outer semimajor axes (\(a_{\rm out}\)) and identify those regions where three-body dynamics are relevant.
As shown in the leftmost panel, precession due to tides completely suppresses three-body dynamics when the inner stars are still on the MS for almost the entire parameter space of CHE triples. The limited number of triples for which this is not true typically become dynamically unstable later in the evolution (e.g. compare panel 1 with panel 4). By the time of hydrogen depletion in the inner stars, the stellar radii of CHE stars shrink typically by a factor of 3-5 with respect to their ZAMS value. Therefore, at this stage tides become less efficient (since \(t_{\rm tides}\sim R^{-5}\), see equation 3) and precession due to GR becomes the major limitation to three-body dynamics. For the systems shown in Fig. 1, ZLK oscillations occur only if \(a_{\rm out}\lesssim 500\,R_{\odot}\). During the CHeB phase of the inner stars, the typical timescale of precession due to GR further increases, as a result of the strong Wolf-Rayet winds that significantly widen the inner orbit. As long as the inner orbit widens faster than the outer orbit (which is always true if the tertiary star is the initially least massive star in the system), the timescale related to ZLK oscillations will not significantly increase. Therefore during this stage, the parameter space where three-body dynamics are relevant increases. This is also shown in the rightmost panel of Fig. 1; by the time the inner binary forms BHs, triples with \(a_{\rm out}\lesssim 2000\,R_{\odot}\) will develop ZLK oscillations.
#### 3.2.1 Example for stellar merger of the inner binary due to ZLK oscillations
First, we discuss the evolution of a CHE triple, in which the inner binary merges as a double helium star due to strong ZLK oscillations (shown in Fig. 2). This triple has a tertiary with an initial mass of \(M_{\rm out,ZAMS}=32.1\,M_{\odot}\) and a circular outer orbit with \(a_{\rm out,ZAMS}=200\,R_{\odot}\). As indicated by Fig. 1, when the stars of the close inner binary are still on the MS, precession associated with strong tides suppresses the effects of the three-body dynamics (see also e.g. Vigna-Gomez et al., 2022). At 3.9 Myrs, the stars of the inner binary evolve off the MS. By this time, these stars had lost a small amount of mass due to stellar winds and the inner orbit had widened by only 2 per cent as a result. Similarly, the outer orbit also widens only by a negligible amount. Consequently, the timescale of the ZLK oscillations does not change significantly. On the other
hand, the tidal effects become much weaker, as the radii of the stars had decreased by a factor of 5 with respect to their ZAMS value. As a result, the ZLK oscillations are no longer suppressed (see also second panel of Fig 1). At this stage, there are two competing mechanisms that drive the evolution of the pericenter: ZLK oscillations and the strong Wolf-Rayet-like winds, which decrease and increase the pericenter, respectively. For this triple, the ZLK timescale is extremely short (few years) and a large inner eccentricity of \(e_{\rm in}\approx 0.65\) is reached shortly after the onset of CHeB, during which the orbital widening due to stellar winds is negligible. At this stage, the pericenter becomes sufficiently small such that the helium stars fill their Roche-lobes at the point of closest approach. We assume this results in the merger of the inner binary.
#### 3.2.2 Example for TMT towards an eccentric BH-BH binary
The next triple we discuss experiences a TMT episode towards an eccentric BH-BH inner binary (shown in Fig. 3). This system has the same parameters as the previously discussed triple, but with a slightly larger initial outer semimajor axis: \(a_{\rm out,ZAMS}=421~{}R_{\odot}\). When the inner stars evolve off the MS, ZLK oscillations are quenched by precession due to GR (compare the second panels of Fig. 1 and Fig. 3). Three-body dynamics become effective later, as the orbit of the inner binary widens significantly and faster than the outer orbit due to strong WR winds (compare the third panels of Fig. 1 and Fig. 3, although by this stage the parameters of the inner binary differ slightly). As a result, \(t_{\rm GR}\) increases by a factor of 5, while \(t_{\rm ZLK}\) barely changes. As ZLK cycles only become effective once the inner orbit has sufficiently widened, the inner binary does not come into contact despite reaching similarly high inner eccentricities as in the previous system.
As the stars of the inner binary have the same mass, they co-evolve, and they become stellar remnants at the same time. This occurs around 4.2 Myr, when the inner eccentricity is \(e_{\rm in,max}=0.75\). Since core-collapse occurs in an eccentric orbit, a large range of post-supernova orbits is possible (\(a_{\rm in}=42\)-\(186~{}R_{\odot}\)) depending on where exactly the stars are in their orbit. In the particular example shown in Fig. 3, the core collapse occurs while both stars are near the pericenter (which is less likely as they spend more time near the apocenter). This leads to an inner semimajor axis of \(a_{\rm in}=171~{}R_{\odot}\) after BH-BH formation. As the outer orbit is circular at the onset of the core-collapse, it only widens by a moderate amount. As the inner period to outer period ratio has increased by a factor of 7, the timescale of the ZLK oscillations further decreases, making the three-body dynamics even more relevant for the further evolution of the system. The evolution of this triple therefore demonstrates that if the ZLK oscillations are strong enough to induce eccentricities before the formation of an inner BH-BH binary, the importance of three-body dynamics can be significantly increased during the last stages of the evolution of the triple, depending on (i) where the inner stars are in their orbit when the formation of the compact objects occurs and (ii) the eccentricity of the outer orbit.
After the formation of the inner BH-BH binary, the tertiary star evolves off MS, and at 6.1 Myr fills its Roche-lobe and transfers mass to the highly eccentric (\(e_{\rm in}=0.94\)) BH-BH binary at a highly inclined orbit (\(i=71.5^{\circ}\)). At this stage, we stop the simulation (but
| | % of simulation (Z = 0.005) | % of CHE at ZAMS (Z = 0.005) | % of simulation (Z = 0.0005) | % of CHE at ZAMS (Z = 0.0005) | Combined birth rate [Gpc\({}^{-3}\) yr\({}^{-1}\)] |
|---|---|---|---|---|---|
| CHE at ZAMS | 10.3 | 100 | 9.9 | 100 | 13.7 |
| - _CHE triple_ | 7.6 | 73.5 | 7.5 | 75.7 | 9.6 |
| - _Transition to classical evolution_ | 1.6 | 15.5 | 0 | 0 | 1.9 |
| - _Merges during contact phase_ | 0.9 | 9 | 1.7 | 17.5 | 1.4 |
| - _Simulation error_ | 0.2 | 2 | 0.6 | 6.7 | 0.2 |

Table 1: An overview of our sampled triples based on the evolutionary type of the inner binary. For the definitions of the different categories, see text in section 3.1.
| Channel | % at Z = 0.005 | % at Z = 0.0005 | Birth rate [Gpc\({}^{-3}\) yr\({}^{-1}\)] |
|---|---|---|---|
| **No post-MS MT** | **27.2** | **10.8** | **2.3** |
| - Inner binary evolution decoupled from tertiary | 97.4 | 98.6 | 2.2 |
| - Driven by three-body dynamics | 2.6 | 1.4 | 0.1 |
| **Stellar merger of the inner binary due to ZLK** | **3.3** | **2.5** | **0.4** |
| - double helium star accretor | 74.7 | 58.5 | 0.3 |
| - helium star-MS accretor | 7.9 | 35 | 0.04 |
| - helium star-BH accretor | 17.4 | 6.5 | 0.06 |
| **Tertiary mass transfer** | **55.4** | **52.1** | **5.2** |
| - MS-MS accretor | 58 | 60.8 | 3.1 |
| - BH-BH accretor | 30.9 | 24.3 | 1.5 |
| - Other types of accretors | 11.1 | 14.9 | 0.6 |
| **Unbound systems** | **10.7** | **34** | **1.4** |
| - Core collapse SN | 100 | 16 | 0.9 |
| - (P)PISN | 0 | 84 | 0.5 |
| **Dynamical instability** | **3.5** | **0.6** | **0.3** |

Table 2: A summary of the different channels (in bold font) and their sub-channels (in normal font) identified for CHE triples. The rows with bold fonts in the second and third column express the number of systems in each channel as a percentage of all CHE triple systems. The rows with normal fonts in the second and third column express the number of systems in each sub-channel as a percentage of all systems in their respective main channel.
see later section 5, where we predict the further evolution of some of these systems). We note, however, that even if the TMT episode does not affect the inner binary, it still merges due to GWs about a factor of 8 faster than its isolated binary counterpart, solely due to the high eccentricities induced by the ZLK oscillations.
#### 3.2.3 Example for TMT towards a circular BH-BH binary
Next, we show the evolution of a CHE triple, which also experiences a TMT episode towards a BH-BH binary, but in which three-body dynamics remain suppressed throughout the entire evolution. The initial outer semimajor axis is \(a_{\rm out,ZAMS}=1069\,R_{\odot}\). For this system the timescales of the ZLK oscillations remain too long with respect to the timescale associated with precession due to GR effects throughout the entire post-MS phase. At the onset of the core-collapse, at which the parameter space for ZLK oscillations is typically the largest for CHE triples with inner binaries composed of non-compact objects, the outer semimajor axis is \(a_{\rm out}=1720\,R_{\odot}\) and the tertiary mass is \(M_{\rm out}=31.9\,M_{\odot}\). The third panel in Fig. 1 implies that three-body dynamics are just quenched by the relativistic precession at this stage. Therefore, the inner orbit remains circular when the BHs are formed, and the inner orbit only widens moderately due to BH formation. The inner and the outer orbit after the formation of a BH-BH binary are \(a_{\rm in}=46.6\,R_{\odot}\) and \(a_{\rm out}=1860\,R_{\odot}\) and therefore the ZLK oscillations remain quenched. At 6 Myr, the tertiary reaches a radius of 547 \(R_{\odot}\) and fills its Roche-lobe while crossing the Hertzsprung gap. The last two examples suggest (and we will show in section 4 that this is generally true for the vast majority of CHE triples) that three-body dynamics are only relevant for the evolution of CHE triples if the tertiary star is on a sufficiently short orbit, such that it will eventually fill its Roche-lobe and initiate a TMT episode. Conversely, if the tertiary star remains detached throughout the evolution of the triple, the inner binary evolves effectively as an isolated binary for the vast majority of CHE triples.
### No post-MS mass transfer
In these triples, the tertiary star remains bound and detached, while the stars of the inner binary form a BH-BH binary. The inner stars are in contact in the majority of the cases (e.g. around 90 per cent at \(Z=0.005\)). By definition, there are no other mass transfer phases during the evolution of these systems. About 27 per cent of CHE triples evolve this way in our moderate metallicity model (see Table 2). This decreases to 11 per cent at Z = 0.0005. The main reason for this difference is the larger number of PISNe occurring at lower metallicities, which prevents the formation of BHs.
After the formation of the BH-BH binary, the system may merge due to GW emission within a Hubble time. This occurs for all systems of this type at \(Z=0.0005\). However, at \(Z=0.005\), the stellar winds are strong enough that 32 per cent of the inner binaries of these triples end up with orbits that are too wide to merge within a Hubble time due to GW emission. We note that these are not necessarily all of the GW sources from our simulations, as triples in other channels discussed here can also potentially form merging binary BHs (see discussion in section 5).
For the majority of these triples (\(>97\) per cent), the inner binary evolves essentially unaffected by the tertiary star (see also section 4). Therefore, the properties of the inner binaries of this channel are nearly indistinguishable from those of isolated CHE binaries. The initial outer pericenters of the triples of this channel are large enough such that the outer star remains detached (i.e. \(a_{\rm p,out,ZAMS}\gtrsim 2000\)-\(3000\,R_{\odot}\) at \(Z=0.005\), see also section 4). At such large tertiary separations, the three-body dynamics remain suppressed during the entire evolution of the triple.
The properties of the subgroup in which three-body dynamics drive the evolution of the inner binary are very different. Firstly, they have very short initial outer pericenters (i.e. \(a_{\rm p,out,ZAMS}\approx 100\)-\(700\,R_{\odot}\)), and secondly, the tertiary has a relatively low mass (typically \(M_{\rm out,ZAMS}\) = 10-30 \(M_{\odot}\)). In these systems, the ZLK oscillations drive the eccentricity of the inner BH-BH binary up to large values (e.g. \(e_{\rm in}\gtrsim 0.7\)-0.9). Above a given eccentricity, the GW emission becomes so efficient that the inner binary decouples from the tertiary and merges due to GWs (see e.g. Silsbee & Tremaine, 2017; Rodriguez & Antonini, 2018). These systems typically have a relatively low-mass tertiary star compared to the stars in the inner binary, such that the inner binary merges as a BH-BH binary due to GW emission before the tertiary star would evolve off the MS and fill its Roche-lobe. Overall, the parameter space for this subgroup is very small, and therefore we predict a negligible GW merger rate (see later discussion in section 5).
### Stellar merger of the inner binary due to ZLK
In this scenario, the inner binary merges due to three-body dynamics, before it would form a BH-BH binary. At \(Z=0.005\), about 3.3 per cent of the CHE triple population evolves this way. In our low metallicity model, this fraction decreases slightly, to 2.2 per cent. This is because at lower metallicities, the inner period to outer period ratio increases less due to the weaker stellar winds, and therefore ZLK oscillations remain less efficient (see equation 2).
Mergers in this channel occur in inner binaries in which one or both of the stars have already evolved off the MS; otherwise, the strong tidal effects typically quench the ZLK oscillations (see section 3.2). As shown in Table 2, most of the mergers occur between two helium stars (59-75 per cent). The rest occur in helium star-MS star or helium star-BH binaries. The majority of the double helium star mergers (\(>\)90 per cent) originate from triples in which the stars in the inner binary were in contact during the MS and co-evolved. This also implies that the majority of them have equal masses at the time of the merger. The masses of these helium inner stars typically range from 29 to 94 \(M_{\odot}\) at \(Z=0.005\).
The outer orbital period of the triples from this channel has to be sufficiently short, such that the ZLK oscillations are strong enough to prompt the inner binary to merge. The outer pericenter at the moment of the merger typically ranges from 100 to 200 \(R_{\odot}\) and does not exceed 700 \(R_{\odot}\). The eccentricities of the inner binary at the moment of the merger typically have values of \(e_{\rm in}\approx 0.5\)-0.9. For all of these triples, the tertiary is a MS star at the time of the merger and is less massive than the stars of the inner binary; otherwise, it would evolve faster than the stars in the inner binary and would fill its Roche-lobe while the inner stars are still on the MS. If the outer orbit does not change significantly after the merger, the tertiary star is expected to transfer mass to the merger product once it has evolved off the MS.
### Systems with tertiary mass transfer (TMT)
Among CHE triples, this is the most common evolutionary pathway. In these systems, the outer star eventually initiates a mass transfer phase while the inner binary is detached or in contact. Approximately 55 (52) per cent at \(Z=0.005\) (\(Z=0.0005\)) of all CHE triples experience this type of evolution (see Table 2). This means that a TMT episode would eventually occur in about 40 per cent of all stellar
systems containing a binary with CHE stars (with \(f_{\rm binary}=0.21\), \(f_{\rm triple}=0.73\)). While systems containing binaries with CHE stars are rare (see e.g. typical birth rates in Table 1), they form GW sources very efficiently (e.g. Mandel & de Mink, 2016; de Mink & Mandel, 2016; Marchant et al., 2016). Therefore, our predictions suggest that the evolution of a non-negligible fraction of potential GW progenitors could experience a TMT episode. This is an interesting result, as TMT is thought to be very uncommon for classically evolving hierarchical triples, which would have implied that they play a limited role in important astrophysical phenomena (see e.g. de Vries et al., 2014; Toonen et al., 2020; Hamers et al., 2022; Kummer et al., 2023). In particular Toonen et al. (2020) found that about 1 per cent of
Figure 1: We illustrate the parameter space where ZLK oscillations develop for typical CHE triples at different evolutionary stages at Z = 0.005. Each panel represents triples with a specific inner binary (with masses \(M_{1},M_{2}\) and inner separation \(a_{\rm in}\) as indicated in the top left of each panel) but with varying tertiary masses, \(M_{\rm out}\) (x-axis), and outer semimajor axes, \(a_{\rm out}\) (y-axis, logscale). The parameters of the inner binary in each panel are the same as those of an isolated CHE binary with \(M_{1,\rm ZAMS}=M_{2,\rm ZAMS}=70\,M_{\odot}\), \(a_{\rm ZAMS}=22.4\,R_{\odot}\) at different evolutionary stages (see text in section 3.2). First panel: two CHE main sequence stars at zero-age main sequence, second panel: helium stars at the onset of core-helium burning (HeMS), third panel: two helium stars at the end of core-helium burning (HeRGB), fourth panel: at the formation of a BH-BH inner binary. We assume circular inner and outer orbits and a mutual inclination of i = 90\({}^{\circ}\). The light blue and the dark blue lines show the regions where the ZLK timescale equals the timescale of apsidal precession due to tides and GR effects, respectively (see equations 2, 4, and 3). Consequently, triples above either of these two lines do not exhibit ZLK oscillations, as they are quenched by these short-range forces. The green line represents the boundary of dynamical instability. The shaded grey region represents triples where ZLK oscillations are effective.
Figure 2: A schematic drawing showing the evolution of a triple system in which the stars in the inner binary experience a stellar merger due to ZLK oscillations. The first line below each drawing shows the evolutionary stage of each star. The first is for the primary, the second is for the secondary, and the third is for the tertiary star. CHE MS is a chemically homogeneously evolving MS star, HeMS is a core-helium burning helium star, and HeRGB is a helium star that has finished core-helium burning. The parameters of the triple at ZAMS are: \(M_{1,\rm ZAMS}=M_{2,\rm ZAMS}=70\,M_{\odot}\), \(a_{\rm in,ZAMS}=22.4\,R_{\odot}\), \(i_{\rm ZAMS}=90^{\circ}\), \(M_{\rm out,ZAMS}=32.1\,M_{\odot}\), \(a_{\rm out,ZAMS}=200\,R_{\odot}\), \(e_{\rm out}=0\). The outer eccentricity remains \(e_{\rm out}\leq 0.01\) throughout the evolution.
triples with primaries in the intermediate mass range belong to this evolutionary channel. Similarly, de Vries et al. (2014) predict that only about 1 per cent of the observed 725 triples in the catalogue of Tokovinin (2010) would eventually initiate TMT.
In the following sections (3.5.1-3.5.6), we discuss the properties of the triples of this channel at the onset of TMT. While predicting the outcome of a TMT episode is currently extremely challenging, highlighting several important aspects of these systems (e.g. dynamical stability of TMT, timescales of TMT episodes, the amount of transferred mass, the type of accretors, etc.) helps to better understand the nature of these systems and the role they potentially play in the evolution of GW progenitors.
#### 3.5.1 Donors of TMT episodes
Here, we discuss the stellar evolutionary stage of the donor star at the onset of the mass transfer episode, as it is highly relevant for determining whether the mass transfer episode occurs in a dynamically stable or unstable way (which has a dramatic effect on the outcome of the TMT; compare e.g. de Vries et al., 2014; Glanz & Perets, 2021).
In particular, convective envelopes can be developed by core-helium-burning or asymptotic giant branch stars. Mass transfer episodes initiated by such cool-giant donors with deep convective envelopes are more likely to occur in a dynamically unstable way than mass transfer phases initiated by giant donors with mostly radiative envelopes (e.g., Hjellming & Webbink, 1987; Soberman et al., 1997; Klencki et al., 2020).
At \(Z=0.005\), around 80 per cent of the donors of TMT systems are stars crossing the Hertzsprung gap. At this metallicity, the largest expansion in the radius of the star occurs during this evolutionary phase, which makes binary interaction during this stage the most probable. The second most common donor type is a CHeB star, with 11.3 per cent, while the rest are either stars on the first giant branch (when the tertiary has \(M_{\rm out,ZAMS}\lesssim 8\,M_{\odot}\)) or stars on the asymptotic giant branch.
At lower metallicities, CHeB donors are more prevalent. At \(Z=0.0005\), only 58 per cent of the tertiary donors are HG stars while 40 per cent are CHeB stars; this is because the onset of CHeB occurs at a higher effective temperature with respect to systems at Z = 0.005. Consequently, at lower metallicities, the onset of CHeB is followed by a larger increase in radius with respect to their higher metallicity counterparts. This in turn implies that stars are more likely to fill their Roche-lobes at this evolutionary stage.
#### 3.5.2 Stability of TMT episodes
The vast majority of mass transfer episodes in this channel occur in a dynamically stable way (99.9 per cent at Z = 0.005 and 98.8 per cent at Z = 0.0005). This is due to the relatively low mass ratios at the onset of the mass transfer phase (i.e. typically \(q_{\rm out}<q_{\rm crit}\), see right panel of Fig. 4 for our moderate metallicity model, and Fig. A1 for our low metallicity model). Typical mass ratios for systems with HG donors are \(q_{\rm out}\) = 0.4-0.8, while for CHeB donors they are \(q_{\rm out}\) = 0.3-0.5. The values for CHeB donors are smaller because the strong LBV winds that CHeB stars experience decrease the mass ratio over time. Unstable mass transfer phases occur exclusively with CHeB donors in our simulations.
These low mass ratios also imply that the expansion due to stellar evolution drives the TMT episodes (e.g. Soberman et al., 1997). Consequently, we expect TMT episodes with HG donors to last \(\sim 10^{4}\) yrs, while TMT episodes with CHeB donors could last much longer, up to \(10^{5}\) years.
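The \(\sim 10^{4}\) yr figure for HG donors is roughly the thermal (Kelvin-Helmholtz) timescale on which such stars expand. A back-of-the-envelope sketch, with illustrative (assumed) donor parameters rather than values taken from the simulations:

```python
# Rough Kelvin-Helmholtz (thermal) timescale of a massive Hertzsprung-gap donor,
# which sets the duration of a mass transfer episode driven by the donor's
# thermal-timescale expansion. The stellar parameters below are assumptions.
G = 6.674e-11
Msun, Rsun, Lsun, yr = 1.989e30, 6.957e8, 3.828e26, 3.156e7

M, R, L = 30.0 * Msun, 50.0 * Rsun, 2.0e5 * Lsun   # assumed HG donor
t_kh = G * M**2 / (R * L)
print(f"t_KH ~ {t_kh/yr:.0f} yr")   # of order 10^3-10^4 yr
```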
#### 3.5.3 Accretors of TMT episodes
In this subsection, we discuss the type of accretors of TMT episodes. The evolutionary stage of the inner binary has a crucial role in the outcome of TMT episodes. If the inner binary comprises CHE MS stars, a TMT episode probably leads to the merger of the inner binary, as CHE binaries have very short periods and the majority of them are in contact at the onset of the TMT (see also Braudo et al., 2022). On the other hand, if the inner binary consists of BHs, a TMT episode is unlikely to lead to a merger by itself; however, it could in principle be a source of (observable) X-ray emission (e.g. Kaaret et al., 2017).
As shown in Table 2, the two most common types of accretors are MS-MS and BH-BH binaries. Only 11-15 per cent of CHE triples experience TMT with other types of accretors, such as an inner binary
consisting of two helium stars or a helium star with a MS or BH companion.
We highlight the relatively large fraction of BH-BH accretors (24-31 per cent of CHE triples experiencing TMT). For classically evolving triples, mass transfer towards a BH-BH binary is highly unlikely. Firstly, in systems in which a TMT episode were to occur towards a BH-BH inner binary, the stars of the inner binary need to be more massive than the tertiary, such that they form BHs before the outer star fills its Roche-lobe. Secondly, the outer star has to be sufficiently close, otherwise it would remain detached throughout its evolution. This, in turn, puts a limit on the largest possible inner orbit, if the system is to remain dynamically stable. The maximum inner orbit for such systems is so small that classically evolving inner stars (which eventually expand) would initiate mass transfer and would most likely merge, which would reduce the triple to a binary and a tertiary mass transfer would never occur (see e.g. Fig. 14 in Toonen et al., 2020). On the other hand, if the triple has CHE inner stars, the stars will not expand and not merge with one another, instead the system will evolve to contain a BH-BH binary by the time the tertiary fills its Roche-lobe.
#### 3.5.4 Mass transferred towards the inner binary
We discuss the amount of mass that is transferred during the TMT episode. This is an important aspect, as the relative transferred mass determines the angular momentum reservoir available to change the orbit of the inner binary.
Assuming that the entire envelope of the donor star is transferred towards the inner binary, the amount of transferred mass ranges between 1-40 \(M_{\odot}\) for BH-BH accretors and between 10-50 \(M_{\odot}\) for MS-MS accretors (see the left panel of Fig. 4 for Z = 0.005 and Fig. A1 in Appendix A for Z = 0.0005). Systems with MS-MS accretors typically receive a larger amount of mass than BH-BH accretors, because the tertiary star is typically more massive in the former case. This is because for the tertiary to fill its Roche lobe while the inner stars are still on the MS, the initial tertiary star needs to evolve faster and hence be more massive than the MS stars. The relative transferred mass expressed as a fraction of the total mass of the inner binary (i.e. \(M_{\rm transferred}/M_{\rm tot,inner}\)) has the same maximum value (\(\sim 0.5\)) for both BH-BH and MS-MS accretors (see grey histogram in the left panel of Fig. 4).
Figure 4: We show the properties of the donor star at the onset of the tertiary mass transfer episode. The left panels show the amount of mass transferred from the tertiary to the inner binary for systems undergoing TMT at Z = 0.005, and the right panels show the outer mass ratio, \(q_{\rm out}=M_{\rm out}/(M_{1}+M_{2})\), at the onset of the tertiary mass transfer phase. Different colours correspond to different evolutionary stages of the donor, as shown by the legend. We exclude cases where the donor is a MS star. Furthermore, we distinguish BH-BH inner binaries (upper panels) and MS-MS inner binaries (lower panels), which are the main types of accreting systems. The unfilled histogram in the panels on the left shows the transferred mass as a fraction of the total mass of the inner binary for all donor types. All histograms shown have been normalised with respect to the population of CHE evolving triples.
#### 3.5.5 Formation of circumbinary disc
We discuss how common it is for TMT systems to develop a circumbinary disc at the onset of the mass transfer episode. As explained in section 2.5, whether a TMT episode is accompanied by a formation of a circumbinary disc can have important consequences for the evolution of the inner orbit.
We find that about 63 per cent of all TMT systems develop circumbinary discs in our moderate metallicity model, while in the rest TMT proceeds in a ballistic fashion. Systems in which a circumbinary disc is formed during the TMT phase typically have larger outer pericenters at the onset of the mass transfer (\(a_{\rm p,out}\approx 300\)-\(6000\ R_{\odot}\)) than in those where TMT proceeds in a ballistic manner (\(a_{\rm p,out}\approx 100\)-\(600\ R_{\odot}\)).
TMT with a circumbinary disc is more prevalent at lower metallicities. About 74 per cent of all TMT systems develop circumbinary discs at Z = 0.0005. This occurs because the ratio of the inner and the outer orbital separation increases less by the onset of the mass transfer phase due to the weaker stellar winds (see equation 11).
TMT episodes with inner BH-BH binaries are somewhat more likely to occur in a ballistic fashion than with MS-MS inner binaries. About 45 (23) per cent of TMT systems with BH-BH inner binaries do not develop circumbinary discs at Z = 0.005 (Z = 0.0005), while 32 (27) per cent of TMT episodes with MS-MS inner binaries occur in a ballistic fashion. This is mainly because the inner apocenter to outer pericenter ratios at the onset of TMT are typically higher for inner BH-BHs than for inner MS-MS binaries (see equation 11). This difference is due to Wolf-Rayet winds, supernova kicks and possible ZLK oscillations that BH-BH inner binaries experienced prior to the TMT episode.
#### 3.5.6 Three-body dynamics prior to TMT
Three-body dynamics can increase the eccentricities of the inner binary. This can, for example, significantly decrease the coalescence time due to GWs (e.g. Miller & Hamilton, 2002; Blaes et al., 2002; Wen, 2003; Thompson, 2011).
Three-body dynamics are almost always suppressed during the MS phase of the inner binaries due to the strong tides (see also section 3.2). Consequently, the inner orbits of TMT systems with MS-MS inner binaries are always circular at the onset of the mass transfer episode. On the other hand, this is no longer the case when the inner stars are in their post-MS phase. In Fig. 5, we show the cumulative distribution of the inner binary eccentricities at the onset of the mass transfer phase of TMT systems with BH-BH accretors at Z = 0.005. We see that systems without circumbinary discs tend to have eccentric inner orbits at the onset of mass transfer. The high eccentricities are caused by ZLK cycles during the post-MS evolution of the inner binary. About 40 per cent of such triples have \(e_{\rm in}\gtrsim 0.4\) at this stage. This is in contrast with the systems with circumbinary discs; about 90 per cent of those systems have eccentricities \(e_{\rm in}\lesssim 0.1\). The difference is due to the smaller inner period to outer period ratios that systems without circumbinary discs have (see equation 2). In our low metallicity model, high eccentricities at the onset of TMT are much less common (see Fig. A2 in Appendix A). For these systems, the inner period to outer period ratio does not increase significantly because of the weak stellar winds.
### Unbound systems
In this channel, one of the stars in the triple becomes unbound as a result of core-collapse. We distinguish systems based on whether this occurs via PISN or via classical core collapse (e.g. Fryer et al., 2012). As shown in Table 2, PISN does not occur in our moderate metallicity model, whereas at Z = 0.0005, it becomes quite prevalent; about 84 per cent of the unbound systems occur due to PISN.
If the triple becomes unbound as a result of a classical core-collapse, we further distinguish whether it is due to the core-collapse occurring in the inner binary (97 per cent of all classical core-collapse systems at Z = 0.005 and 99 per cent at Z = 0.0005) or of the tertiary star (3 per cent at Z = 0.005 and 1 per cent at Z = 0.0005). As the inner binary consists of CHE stars, they have large initial masses (i.e. \(M_{\rm ZAMS}\gtrsim 30\ M_{\odot}\)) and furthermore they develop more massive CO cores than their classically evolving counterparts. Therefore, they receive weak (if any) natal kicks when they form BHs according to our implemented natal kick prescription. Yet weak natal kicks, or even completely symmetrical instantaneous mass losses due to neutrino emission (which we assume to be 10 per cent of the pre-collapse mass, according to Fryer et al., 2012), can unbind the tertiary star if the outer orbit is highly eccentric. We find that in systems in which one of the stars becomes unbound due to the core-collapse in the inner binary, the outer eccentricities are indeed large: about 70 per cent of them have \(e_{\rm out}\geq 0.8\). In the vast majority of the cases (about 99 per cent of such unbound systems), only the tertiary is ejected, while the inner binary remains bound.
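The sensitivity to the outer eccentricity follows from the standard condition for unbinding an orbit by instantaneous, symmetric mass loss (the Blaauw effect): with a fractional mass loss \(\Delta M/M\) occurring at instantaneous separation \(r\) on an orbit with semimajor axis \(a\), the orbit becomes unbound if \(\Delta M/M \geq r/(2a)\). A minimal sketch, treating the inner binary as a point mass and neglecting its recoil and any natal kick; the numbers are illustrative assumptions:

```python
# Unbinding of the outer orbit by instantaneous, symmetric mass loss.
# Condition (no kick, point-mass inner binary): dM/M >= r / (2a).
# Illustrative numbers only, not values from the simulations.

def unbinds(dm_over_m, r_over_a):
    """True if the orbit becomes unbound for mass loss at separation r."""
    return dm_over_m >= 0.5 * r_over_a

e_out = 0.8
for phase, r_over_a in [("pericentre", 1.0 - e_out), ("apocentre", 1.0 + e_out)]:
    print(phase, unbinds(0.10, r_over_a))
# With a ~10 per cent instantaneous mass loss, only a collapse occurring near
# the pericentre of a highly eccentric (e_out >~ 0.8) outer orbit can unbind
# the tertiary, qualitatively explaining the preference for large e_out.
```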
If the triple becomes unbound due to the core-collapse of the tertiary star while the outer eccentricity is low, this almost always occurs as a result of a strong natal kick. Consequently, most of such unbound systems have initial tertiary masses of \(M_{\rm out,ZAMS}\approx 8\)-\(25\,M_{\odot}\) (see also discussion in section 4), as these systems are expected to receive the largest kicks according to the supernova prescription of Fryer et al. (2012).
### Systems which become dynamically unstable
These triples typically have very short initial outer pericenters (\(a_{\rm p,out,ZAMS}\approx 70\)-\(400\,R_{\odot}\)) and are therefore very close to the stability limit at ZAMS. Such systems can transition to non-secular or non-hierarchical evolution if \(a_{\rm in}/a_{\rm out}\), \(e_{\rm out}\) or \(q_{\rm out}\) significantly
Figure 5: The cumulative distribution of the inner binary eccentricities of systems experiencing TMT with BH-BH accretors at the onset of the mass transfer phase at Z = 0.005. The green curve shows the systems with accretion discs formed during the mass transfer phase. The blue curve shows the systems where ballistic accretion occurs.
increases during evolution (see Mardling & Aarseth 2001). Among CHE triples, there are primarily two processes that can trigger this change: stellar winds and core collapse.
If the relative wind mass loss rate (e.g. \(\dot{M}/M\)) in the inner binary is higher than that of the tertiary star, \(a_{\rm in}/a_{\rm out}\) and \(q_{\rm out}\) will increase, which can prompt the triple to experience a dynamical instability (see also Kiseleva et al. 1994; Iben & Tutukov 1999; Perets & Kratter 2012; Toonen et al. 2022). 30 per cent of the systems of this channel destabilise due to stellar winds, and the destabilisation occurs when the stars of the inner binary are in their post-MS phase. At this stage, the inner stars experience strong Wolf-Rayet winds, while the tertiary star is still on the MS with significantly lower mass loss rates.
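For slow (Jeans-mode) wind mass loss, each orbit widens as \(a \propto 1/M\), where \(M\) is the total mass enclosed by that orbit, so \(a_{\rm in}/a_{\rm out}\) grows when the inner binary loses a larger fraction of its mass than the triple as a whole. A minimal sketch with illustrative (assumed) masses:

```python
# How strong Wolf-Rayet winds from the inner binary increase a_in/a_out.
# For slow (Jeans-mode) wind mass loss, a_final/a_initial = M_initial/M_final
# for each orbit, where M is the total mass enclosed by that orbit.
# All masses below (in Msun) are illustrative assumptions, not simulation values.

m_in_i, m_in_f = 100.0, 60.0      # inner binary total mass before/after WR winds
m_out = 20.0                      # tertiary (MS star, negligible mass loss)

widen_in = m_in_i / m_in_f                          # inner orbit widening factor
widen_out = (m_in_i + m_out) / (m_in_f + m_out)     # outer orbit widening factor
print(f"a_in/a_out grows by a factor of {widen_in / widen_out:.2f}")
# The inner orbit widens more than the outer one, pushing the triple
# towards the dynamical stability limit.
```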
In the remaining 70 per cent, the instability sets in due to the collapse of the core of one of the stars. We find that this only occurs when BH formation takes place in the inner binary. As noted in section 2.4, CHE stars typically form BHs via direct collapse, such that \(q_{\rm out}\) only increases slightly. Furthermore, the direct collapse is expected to be accompanied by a weak Blaauw kick due to neutrino losses, such that \(a_{\rm in}/a_{\rm out}\) and \(e_{\rm out}\) only increase significantly if the inner or the outer pre-core-collapse orbits are eccentric, respectively. The pre-core-collapse inner orbit is eccentric in 72 per cent of the systems of this channel, and the eccentricity is caused by ZLK oscillations. In the remaining 28 per cent, three-body dynamics are not efficient in driving up the eccentricity because the mutual inclination is outside of the critical Kozai range (see e.g. Naoz 2016). Therefore, the core collapse occurs when the inner orbit is circular. These systems still become unstable during BH formation, because 1) either \(a_{\rm in}/a_{\rm out}\) has already increased strongly due to stellar wind mass losses before the BH formation, or 2) the outer orbit is eccentric and the core collapse occurs while the tertiary star is near the outer pericenter (leading to a significant increase in \(e_{\rm out}\)).
The occurrence rate of this channel is strongly dependent on metallicity (3.5 per cent of all CHE triples at Z = 0.005 and 0.7 per cent at Z = 0.0005, see Table 2). This dependence is due to the reduced strength of stellar winds and ZLK oscillations (which are responsible for any eccentricity in CHE inner binaries) at lower metallicities.
## 4 The origin of each evolutionary channel
In this section, we discuss the initial parameters of the triples from each evolutionary channel introduced in section 3.1. We find that initial parameters can be used as a proxy to determine the final evolutionary outcome of CHE triples. In particular, the evolutionary outcome can be parameterised by the initial mass and orbital separation of the tertiary star. The parameters of the inner binary play a less important role in this regard, as the parameter space for CHE inner binaries is already quite reduced. We illustrate this in the left panel of Fig. 6 by showing an ensemble of CHE triples at \(Z=0.005\), in which the parameters of the inner binary are the same, but the mass and the orbital separation of the tertiary star are varied (therefore this grid represents only a small subset of the entire CHE population discussed in section 3.1). The inner binary consists of two 70 \(M_{\odot}\) stars on a circular initial orbit with \(a_{\rm in,ZAMS}=22.4\,R_{\odot}\) (similarly to the example systems discussed in section 3.2). The initial tertiary mass ranges from 5 to 100 \(M_{\odot}\), while \(a_{\rm out,ZAMS}\) ranges from 200 to 10\({}^{4}\)\(R_{\odot}\).
### Initial parameters of systems of different evolutionary channels
The majority of the triples shown in the left panel of Fig. 6 experience TMT episodes. Their initial outer orbital separations are relatively short and range roughly from 100 to 3300 \(R_{\odot}\). The evolutionary phase of the inner stars at the onset of the TMT episode depends on the initial mass of the tertiary star. For the systems shown in the left panel of Fig. 6, the inner binary at the onset of TMT comprises BHs if \(M_{\rm out,ZAMS}\leq 59\,M_{\odot}\), helium stars if \(59\,M_{\odot}\leq M_{\rm out,ZAMS}\leq 70\,M_{\odot}\), and MS stars if \(M_{\rm out,ZAMS}\geq 70\,M_{\odot}\). The majority (53 per cent) of the TMT systems in the left panel of Fig. 6 have BH-BH inner binaries. For the entire population of CHE triples presented in section 3.1, the same percentage is smaller (i.e. 31 per cent) at the same metallicity (see Table 2). As shown in Fig. 7, this quantity (i.e. the ratio of the number of TMT systems with BH-BH inner binaries to the number of all TMT systems) scales proportionally to the initial mass of the secondary star in the inner binary. This means that TMT episodes occur more frequently with BH-BH accretors among CHE triples with more massive inner stars. This is due to our assumptions about the initial distribution of the triples (section 2.6). If the TMT occurs towards a BH-BH inner binary, the tertiary has to be initially the least massive star in the triple. With increasing \(M_{\rm 2,ZAMS}\), the fraction of triples for which \(M_{\rm 2,ZAMS}>M_{\rm out,ZAMS}\) increases because of our assumptions of a maximum initial stellar mass of \(M_{\rm ZAMS,max}=100\,M_{\odot}\) and a flat outer mass ratio distribution.
In 15 per cent of the triples shown in the left panel of Fig. 6, the inner binary merges before BH formation or before a TMT episode occurs. All such mergers in the grid occur between two helium stars, and are due to ZLK oscillations that arise when the stars of the inner binary evolve off the MS. The initial outer orbital separations in this channel are very short, i.e. 200 to 241 \(R_{\odot}\), while the tertiary masses range between \(32\leq M_{\rm out,ZAMS}/M_{\odot}\leq 68\). For lower tertiary masses (\(M_{\rm out,ZAMS}<32\,M_{\odot}\)), the ZLK oscillations are not strong enough to boost the inner eccentricity and cause a mass transfer episode in the inner binary. For larger tertiary masses (\(M_{\rm out,ZAMS}>70\,M_{\odot}\)), the tertiary typically fills its Roche-lobe before the stars of the inner binary evolve off the MS. However, during the main sequence phase of the inner stars, the effects of ZLK cycles are quenched and consequently no mergers are prompted by three-body dynamics before the tertiary initiates a TMT episode.
Triples of the no post-MS MT channel in the left panel of Fig. 6 have initial outer orbits of \(a_{\rm out}\gtrsim 2000\)-3000 \(R_{\odot}\). Their initial tertiary mass is also typically outside of the range of \(\sim\)8-25 \(M_{\odot}\), such that the system does not dissociate due to SN kicks. As we show in the next subsection, three-body dynamics are not important for the evolution of these systems.
In the left panel of Fig. 8, we show the distribution of the initial outer pericenter (\(a_{\rm p,out,ZAMS}\)) of the entire CHE triple population for each evolutionary channel at Z = 0.005. As can be seen, the ranges of initial outer pericenters are in agreement with those shown in Fig. 6 for all channels except for the unbound systems (since for the unbound systems, the outer eccentricity plays a crucial role, as explained in section 3.6, and in the grid we assume circular outer orbits). This again confirms that the parameters of the tertiary star play the most important role in determining the evolutionary path of a CHE triple. As shown in the left panel of Fig. 8 (and its low metallicity counterpart, Fig. A4), the range of \(a_{\rm p,out,ZAMS}\) of systems with TMT episodes increases with decreasing metallicity. At lower metallicity, the stellar winds are weaker and consequently, the outer orbit widens less. Therefore, the maximum \(a_{\rm p,out,ZAMS}\) at which the tertiary stars can still fill their Roche-lobes also increases with decreasing metallicity.
### Initial parameters of triples with three-body dynamics
In the right panel of Fig. 6, we show the maximum eccentricities that the inner binaries reach during their evolution (\(e_{\rm in,max}\)). About 29 per cent of the triples shown in the right panel of Fig. 6 reach \(e_{\rm in,max}\geq 0.4\) due to ZLK cycles. In all of these triples, the tertiary star eventually fills its Roche-lobe (although in some cases, the inner binary merges first).
For the systems shown in Fig. 6, ZLK cycles are efficient when \(a_{\rm out,ZAMS}\lesssim 1200\,R_{\odot}\) and \(M_{\rm out,ZAMS}\lesssim 70\,M_{\odot}\). When the outer orbit is \(a_{\rm out,ZAMS}\gtrsim 1200\,R_{\odot}\), the ZLK cycles are quenched by various short-range forces (e.g. precession caused by tides or general relativistic effects). If \(a_{\rm out,ZAMS}\lesssim 1200\,R_{\odot}\) but \(M_{\rm out,ZAMS}\gtrsim 70\,M_{\odot}\), the tertiary star fills its Roche-lobe while the stars in the inner binary are still on the MS. The inner binaries of these triples do not develop high eccentricities, as ZLK cycles are quenched during the MS due to strong tides (see section 3.2), and TMT episodes with MS-MS accretors are expected to result in the merger of the inner binary (see section 5).
The right panel of Fig. 6 also shows that \(e_{\rm in,max}\) does not decrease smoothly with decreasing outer orbital separations, instead it drops rather abruptly across \(a_{\rm out,ZAMS}\approx 1200\,R_{\odot}\). Triples with \(a_{\rm out,ZAMS}\approx 1200\,R_{\odot}\) reach very large inner eccentricities (\(e_{\rm in,max}\approx 0.9\)), while at slightly larger orbital separations (i.e. \(a_{\rm out,ZAMS}\approx 1500\,R_{\odot}\)) the ZLK cycles are completely quenched.
The above-mentioned effects also hold qualitatively for the entire CHE triple population presented in section 3.1 (see right panel of Fig. 8). At Z = 0.005, the ZLK oscillations are only efficient if \(a_{\rm p,out,ZAMS}\lesssim 1200\,R_{\odot}\). This implies that three-body dynamics are only relevant for those triples in which the tertiary star would eventually fill its Roche-lobe (compare the right and left panels of Fig. 8). Consequently, if the tertiary in a CHE triple remains detached throughout its evolution, the evolution of the inner binary will almost always be dynamically decoupled from the tertiary star. If \(a_{\rm p,out,ZAMS}\lesssim 1200\,R_{\odot}\), a wide range of inner eccentricities is possible (up to \(e_{\rm in,max}\approx 0.9\)) at any such \(a_{\rm p,out,ZAMS}\). In this case, the value of \(e_{\rm in,max}\) is primarily determined by the mutual inclination of the triple (see also e.g. Anderson et al. 2017).
In our low metallicity model (\(Z=0.0005\)), the maximum initial outer pericenter at which three-body dynamics are still relevant is lower compared to our moderate metallicity model (right panel of Fig. A4 in Appendix A). At such low metallicities, stellar winds do not widen the orbit of the inner binary significantly and thus the timescales of the ZLK cycles do not decrease as much as at Z = 0.005.
## 5 Gravitational waves sources
We now discuss the possible formation channels of GW sources that originate from CHE triples and their properties. In section 5.1, we predict the merger rate densities and compare them to that of GW sources from isolated CHE binaries. For this, we assume two test populations with different stellar multiplicity fractions. One population is composed of only single and binary stellar systems (i.e. with stellar multiplicity fractions at ZAMS of \(f_{\rm single}=0.3\), \(f_{\rm binary}=0.7\), \(f_{\rm triple}=0\)); in the other, triples dominate (\(f_{\rm single}=0.06\), \(f_{\rm binary}=0.21\), \(f_{\rm triple}=0.73\)).
In sections 5.2 - 5.5 we discuss the properties of each GW formation channel from CHE triples and binaries. These predictions are based on the synthetic populations discussed previously, and in cases where the simulations are stopped before the formation of a BH-BH binary, we predict the further evolution of CHE triples beyond the stopping conditions (Section 2.6) by applying simple assumptions (as detailed below).
The four main identified formation channels of GW sources within our CHE triple population are (see also Fig. 9):
* **Effectively isolated inner binary:** For such triples, three-body dynamics is suppressed by various short-range forces and the tertiary star remains detached throughout the entire evolution. The inner binary therefore evolves effectively as an isolated binary and the properties of these GW sources are indistinguishable from those of the CHE binary channel. There are two ways these systems can form: i) with the tertiary star bound to the triple (systems from the no post-MS MT channel, see section 3.3) and ii) systems in which the tertiary star becomes unbound from the triple (from the unbound channel discussed in section 3.6). For the latter, we assume that the orbit of the inner binary is not affected by the tertiary unbinding from the triple system.
* **TMT with a BH-BH accretor:** This channel comprises systems in which the tertiary star fills its Roche-lobe when the inner binary is a BH-BH binary. The inner binary components do not coalesce during the TMT phase, but will merge afterwards due to GW emission. In these systems, the tertiary star can affect the evolution of the inner binary in two major ways: via a TMT episode and via three-body dynamics (see section 3.5). In section 2.5, we introduced our assumptions regarding the evolution of the inner orbit during a TMT episode.
* **TMT with a MS-MS accretor:** In this scenario, there are two sequential mergers taking place in the system (see also e.g. Stegmann et al. 2022). First, the inner binary merges when the stars are still on the MS as a result of mass transfer from the tertiary to the inner binary. This reduces the triple to a binary. We assume that the merger product of the inner binary evolves further in a classical way (as opposed to CHE). Consequently, the merger product expands and eventually fills its Roche-lobe and transfers mass to the initial tertiary star. The orbit shrinks due to this second phase of mass transfer and as a result, a merging double compact object is formed. The second phase of mass transfer is essential. Systems in which no mass transfer takes place after the inner binary merger might form detached BH-BH binaries but are too wide to merge due to GWs within the Hubble time. We note that double MS mergers among CHE triples typically occur due to TMT episodes as three-body dynamics are suppressed during the MS phase.
* **Dynamical mergers:** In the triples of this channel, ZLK oscillations are very efficient and drive up the inner eccentricities to \(e_{\rm in}\approx 0.6\)-0.9 after the stars of the inner binary have become BHs. Such systems merge due to GW emission within a few Myrs. The tertiary remains detached until the inner binary merges and therefore these triples belong to the no post-MS MT channel. As discussed in section 3.3, these systems are rare.
We ignore the possibility of a GW source forming in a CHE triple through a stellar merger that does not occur between two MS stars. Such mergers can occur due to TMT or three-body dynamics with (i) a helium star-MS binary or (ii) a double helium star binary. We justify the omission of the first type, as such mergers are relatively rare. This type of merger occurs in 0.2-2 per cent of all CHE triples, depending on metallicity. For the second type, the merger product is a helium star; it is not expected to expand significantly and is unlikely to ever fill its Roche-lobe. Without a phase of mass transfer that leads to orbital shrinkage, the binary remains too wide to merge within a Hubble time. However, if the merger remnant can accrete matter during the
TMT phase it could regain a hydrogen-rich envelope, and expand later in its evolution. For simplicity, we neglect this scenario.
### Rates of GW mergers
In the population without triples, the predicted merger rate density is \(R_{\rm merger}=44.2\,{\rm Gpc^{-3}yr^{-1}}\) (see Table 3). This is about a factor of two higher than predicted by Riley et al. (2021), which constitutes rough agreement given the simplicity of our rate calculation (see discussion in Appendix B). The total merger rate density of the population containing triples is \(R_{\rm merger}=23\,{\rm Gpc^{-3}yr^{-1}}\). This is about a factor of two lower than that of the population without triples. There are two reasons for this difference. Firstly, stellar mergers frequently occur in CHE triples, preventing the formation of compact BH-BH binaries. While all CHE binaries form BH-BH binaries, only about 60 (45) per cent of CHE triples form (inner) BH-BH binaries at Z = 0.005 (Z = 0.0005).
Secondly, the number of systems formed in the population with triples is always lower per unit stellar mass formed than in the population without triples, as triple systems, on average, have larger total masses than binaries and/or single stars.
In the population with triples, about half of the GW mergers from CHE systems originate from triples. The role of the tertiary is negligible for 69 per cent of GW progenitors from CHE triples. In the remaining 31 per cent, the evolution of the inner binary is affected by the tertiary star via TMT and/or three-body dynamics.
Figure 6: Left panel: we show the evolutionary outcome of a collection of triples with fixed inner binary parameters and different parameters for the tertiary at a metallicity \(Z=0.005\). The parameters of the inner binary are the same for each system shown in the grid, i.e. \(M_{1,\rm ZAMS}=M_{2,\rm ZAMS}=70\,M_{\odot}\), \(a_{\rm in,ZAMS}=22.4\,R_{\odot}\), \(e_{\rm in}=0\). The initial tertiary mass \(M_{\rm out}\) ranges from 5 to \(100\,M_{\odot}\) on a linear scale, while \(a_{\rm out,ZAMS}\) ranges from 200 to \(10^{4}\,R_{\odot}\) on a logarithmic scale. We assume zero outer eccentricity and an initial inclination of \(90^{\circ}\). Right panel: we show the maximum eccentricity of the inner binary reached during the evolution. The parameters of the inner binary are the same as in the left panel.
\begin{table}
\begin{tabular}{l c c c c c} \hline & of all CHE systems & of all CHE systems & Formation efficiency & Formation efficiency & Merger rate density \\ & at Z = 0.005 [\%] & at Z = 0.0005 [\%] & at Z = 0.005 & at Z = 0.0005 & [\({\rm Gpc^{-3}yr^{-1}}\)] \\ \hline \multicolumn{6}{c}{**Population with triples**} \\
**CHE triple channels:** & **29.1/31.9** & **23.7/23.7** & **6.9 \(\cdot\) 10\({}^{\circ}\)/7.2 \(\cdot\) 10\({}^{\circ}\)** & **9.2** & **12.7/11.8** \\ \multicolumn{6}{l}{Effectively isolated inner binary} \\ - TMT with BBH \& BA (Sc. 1/Sc. 2) & 3.8/6.6 & 2.3/2.3 & 9 \(\cdot\) 10\({}^{-7}\)/1.2 \(\cdot\) 10\({}^{-7}\) & 5.8 \(\cdot\) 10\({}^{-3}\)/5.8 \(\cdot\) 10\({}^{-8}\) & 1.4/0.5 \\ - TMT with BBH \& CBD & 5.9 & 7.8 & 1.4 \(\cdot\) 10\({}^{-7}\) & 1.9 \(\cdot\) 10\({}^{-7}\) & 2.4 \\ - TMT with MS-MS channel & 0.3 & 5 & 6 \(\cdot\) 10\({}^{-9}\) & 1.2 \(\cdot\) 10\({}^{-7}\) & 0.2 \\ - Dynamical mergers & 0.2 & 0.1 & 4 \(\cdot\) 10\({}^{-9}\) & 2.0 \(\cdot\) 10\({}^{-9}\) & 0.05 \\
**CHE binaries** & **8.7** & **12.9** & **7.2 \(\cdot\) 10\({}^{-7}\)** & **5.2 \(\cdot\) 10\({}^{-7}\)** & **11** \\ \hline \multicolumn{6}{c}{**Population without triples**} \\
**CHE binaries** & **65** & **88.6** & **1.2 \(\cdot\) 10\({}^{-6}\)** & **1.2 \(\cdot\) 10\({}^{-6}\)** & **44.2** \\ \hline \end{tabular}
\end{table}
Table 3: Summary of the statistics of GW sources from the population with triples (with \(f_{\rm single}=0.06\), \(f_{\rm binary}=0.21\), \(f_{\rm triple}=0.73\)), and from the population without triples (with \(f_{\rm single}=0.3\), \(f_{\rm binary}=0.7\), \(f_{\rm triple}=0\)). The 'of all CHE systems' columns give the number of systems expressed as a fraction of all systems containing a binary of two tidally locked CHE stars. The formation efficiency gives the number of systems expressed as a fraction of all stellar systems (see equation B1). The merger rate density is the merger rate density in the local universe (see equation B15).
### Isolated binaries
At \(Z=0.005\), about 68 per cent of the CHE binary population forms a BH-BH binary that merges within a Hubble time, while at \(Z=0.0005\), all CHE binaries merge due to GWs within the age of the universe. In our moderate metallicity model, the delay times of the BH-BH binaries from this population range from 3 to 50 Gyr (and therefore the delay times of the GW sources range from 3 to 13.5 Gyr). In our low metallicity model, the delay times are considerably shorter, ranging roughly from 100 to 600 Myr. At Z = 0.005, only those binaries merge which were in contact during their MS phase. At Z = 0.0005, about 97 per cent of all GW progenitors were in contact during their MS phase. Since we assume such binaries equalise in mass, we predict that the vast majority of GW sources from this population consists of equal-mass black hole binaries (in broad agreement with Marchant et al. 2016). The masses of the merging binary black holes from this channel range from 20 to 42 \(M_{\odot}\) at Z = 0.005 and 33 to 54 \(M_{\odot}\) at Z = 0.0005.
### Effectively isolated inner binaries
This is the dominant channel among CHE triples, with a predicted merger rate density of 8.8 Gpc\({}^{-3}\)yr\({}^{-1}\). At Z = 0.005 (Z = 0.0005), about 19 (12) per cent of all CHE systems (i.e. CHE binaries and CHE triples, see section 2.6) are expected to form GW sources via this channel. In 53 per cent of the GW progenitors of this channel, the tertiary star becomes unbound by the time both stars in the inner binary form BHs. This percentage drops to 38 per cent at Z = 0.0005.
The demographics of this channel are nearly indistinguishable from those of the isolated binary population. The merger efficiency of this channel, which we define as the number of GW sources as a fraction of BH-BH (inner) binaries formed via a given channel, is 68 per cent. Unsurprisingly, this is the same as the merger efficiency of the isolated CHE binary channel. Similarly to the CHE binary case, the majority of the inner binaries of these triples were also in contact during their MS phase, and therefore this channel also produces overwhelmingly equal-mass mergers.
### TMT with a BH-BH accretor
This is the dominant formation channel in which the evolution of the inner binary is affected by the tertiary star. The predicted merger rate density is \(R_{\rm merger}\) = 3.8 Gpc\({}^{-3}\)yr\({}^{-1}\), which accounts for about 16 per cent of all GW mergers from CHE systems. About 10 per cent of all CHE systems form merging binary BHs via this channel.
With our simplistic models of TMT (see subsection 2.5), we predict that the outer orbit widens as a result of the TMT episode for all triples considered in this study. In the lower panel of Fig. 11, we show how the outer pericenter changes after the mass transfer phase for triples experiencing TMT with a BH-BH inner binary accretor for our moderate metallicity model (and in the lower panel of Fig. A5 for our low metallicity model). The orbital separations typically widen by a factor of 1.5-2.
Even if the inner orbit remains unchanged by the TMT episode, the outer orbit widens so much that three-body dynamics typically become negligible after the TMT episode for the majority of these triples. For example, at Z = 0.005, in those TMT systems in which ZLK oscillations are effective prior to the mass transfer event, 70 per cent of the inner binaries become decoupled from the tertiary star after the TMT episode. If the evolution of the inner BH-BH binary is decoupled from the tertiary, its orbital evolution is solely determined by the emission of GWs (and therefore the coalescence time can be determined according to Peters 1964); otherwise, we use equation 5.
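For reference, the circular-orbit coalescence time of Peters (1964) is \(T_{\rm merge} = 5 c^{5} a^{4} / \big(256\, G^{3} m_{1} m_{2} (m_{1}+m_{2})\big)\). A minimal sketch with illustrative BH masses and separation (assumptions, not simulation output):

```python
# Circular-orbit GW coalescence time (Peters 1964), applicable once the inner
# BH-BH binary is dynamically decoupled from the tertiary. The masses and
# separation below are illustrative assumptions.
G, c = 6.674e-11, 2.998e8
Msun, Rsun, yr = 1.989e30, 6.957e8, 3.156e7

def t_coalescence(m1, m2, a):
    """Peters (1964) merger time for a circular binary, in seconds."""
    return 5.0 * c**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))

m1 = m2 = 35.0 * Msun
a = 46.6 * Rsun
print(f"T_merge ~ {t_coalescence(m1, m2, a) / yr / 1e9:.1f} Gyr")  # of order 10 Gyr
```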
As noted in section 2.5, we make different assumptions about the evolution of the inner orbit based on whether a circumbinary disc is formed during TMT. We therefore discuss the properties of GW sources from these two subtypes separately.
#### 5.4.1 Accretion through a circumbinary disc
The predicted merger rate density of this channel is 2.4 Gpc\({}^{-3}\)yr\({}^{-1}\). The merger efficiency is just 6 per cent higher than that of isolated binaries. The slight increase is due to the small number of eccentric inner binaries at the onset of the mass transfer (\(\sim\)10 per cent of systems undergoing TMT with BH-BH accretors and circumbinary discs have \(e_{\rm in}>0.4\), see Fig. 5). The small difference is not surprising, as we have assumed here that the orbit of the inner binary does not change due to circumbinary disc accretion. However, if circumbinary disc accretion leads to a significant increase (decrease) in the inner period, the compact object merger fraction decreases (increases) significantly as well. Clearly, better models are required to understand circumbinary accretion of a BH binary from a mass transferring tertiary star.
#### 5.4.2 Ballistic accretion
The properties of these GW sources depend on how the inner binary evolves due to TMT. If we simplistically assume that the inner orbit does not change (i.e. _Scenario 1_, see section 2.5), then the merger rate density of this channel in the local universe is \(R_{\rm merger}\) = 1.4 Gpc\({}^{-3}\)yr\({}^{-1}\). In this case, about 3.8 (2.3) per cent of all stellar systems containing a CHE binary form GW sources via this channel at Z = 0.005 (Z = 0.0005). The merger efficiency of this channel is 75 per cent at Z = 0.005, which is slightly higher than that of the CHE binary population (68 per cent). As discussed in section
Figure 7: The distribution of the initial secondary masses of the entire CHE triple population that undergoes tertiary mass transfer at Z = 0.005. We only show those systems which have either BH-BH or MS-MS accretors. The grey unfilled histogram, with the corresponding secondary y-axis on the right-hand side, shows the number of systems with BH-BH accretors as a fraction of all systems undergoing tertiary mass transfer phases.
3.5, a considerable fraction of these sources have high eccentricities, namely, 48 per cent with \(e_{\rm in}\gtrsim 0.4\) at Z = 0.005 and 10 per cent at Z = 0.0005. This results in shorter delay times and more mergers with respect to the isolated CHE binary channel (top left panel of Fig. 12).
If the orbital evolution can be described by equation 15 (i.e. _Scenario 2_, see section 2.5), then the inner pericenters of the BH-BH binaries decrease by 1-3 orders of magnitude due to the TMT episode, depending on the efficiency parameter \(\sigma_{\rm TMT}\). In this case, all inner binaries become dynamically decoupled from the tertiary star after the TMT episode. As shown in the upper panel of Fig. 10, the peak of the orbital separation distribution shifts from \(32\,R_{\odot}\) to 25, 5 and \(1\,R_{\odot}\) with \(\sigma_{\rm TMT}=5\), 0.5 and 0.05, respectively. With such short periods, nearly all (i.e. typically \(\gtrsim 99\) per cent) of the inner binaries eventually merge. However, none of the inner binaries merge during the mass transfer itself; instead, they merge due to GW emission afterwards. In Fig. 12, we show that the typical delay times in _Scenario 2_ are also orders of magnitude shorter than those of isolated CHE binaries. With \(\sigma_{\rm TMT}=0.05\), the delay times of these GW sources are dominated by the stellar evolution. Such timescales could make TMT episodes relevant in young clusters in which star formation is still active. Even when assuming a weaker friction exerted by the transferred mass (i.e. \(\sigma_{\rm TMT}\lambda_{\rm TMT}=5\)), resulting in the smallest orbital shrinkage in our models, most of the BHs merge within a few hundred Myr at Z = 0.005.
Despite the higher merger efficiency, the predicted merger rate density for _Scenario 2_ is considerably lower (i.e. \(R_{\rm merger}=0.5\,\rm{Gpc^{-3}yr^{-1}}\)) than in _Scenario 1_. This is due to the extremely short delay times, implying the progenitor stars must have formed recently, when the cosmic star formation rate is low (e.g. Madau & Fragos, 2017). As the cosmic star formation rate is expected to increase strongly from \(z=0\) to \(z=2\), we expect the merger rate density of this channel to be significantly higher at \(z\approx 2\) than at \(z=0\). This would make these sources more relevant for third-generation GW detectors.
We mention two interesting aspects of this channel. Firstly, depending on the efficiency parameter of the TMT episode, these systems could be in the LISA frequency band (Amaro-Seoane et al., 2022) during the mass transfer phase. In the lower panel of Fig. 10, we show the frequency at which the BH-BH binaries emit GWs after the mass transfer episode. With \(\sigma_{\rm TMT}=0.5\), about half, and with \(\sigma_{\rm TMT}=0.05\), all of our systems enter the mHz regime during the mass transfer phase. The evolution through the LISA frequency range would be primarily driven by gas dynamics instead of GW emission (see also Renzo et al., 2021). Such sources would be detectable by LISA if the corresponding luminosity distances are not larger than \(\sim 10\,\rm{kpc}\) and \(\sim 10^{4}\,\rm{kpc}\) in the case of \(\sigma_{\rm TMT}=0.5\) and \(\sigma_{\rm TMT}=0.05\), respectively (see e.g. Fig. 1 in Amaro-Seoane et al., 2022).
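For a circular binary, the GW frequency is twice the orbital frequency, \(f_{\rm GW} = (1/\pi)\sqrt{G M_{\rm tot}/a^{3}}\). A minimal sketch with illustrative (assumed) masses and separations, showing why post-TMT separations of a few \(R_{\odot}\) correspond to the LISA mHz band:

```python
# GW frequency of a circular binary: f_GW = 2 * f_orb = (1/pi) * sqrt(G*M_tot/a^3).
# Masses and separations below are illustrative assumptions, not simulation values.
import math

G, Msun, Rsun = 6.674e-11, 1.989e30, 6.957e8

def f_gw(m_tot, a):
    """GW frequency (Hz) of a circular binary of total mass m_tot and separation a."""
    return math.sqrt(G * m_tot / a**3) / math.pi

m_tot = 70.0 * Msun
for a_rsun in (25.0, 5.0, 1.0):
    print(f"a = {a_rsun:>4.0f} Rsun: f_GW ~ {f_gw(m_tot, a_rsun * Rsun) * 1e3:.2f} mHz")
# Separations of ~1-5 Rsun give f_GW ~ 0.1-2 mHz, i.e. within the LISA band.
```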
Secondly, a TMT episode could be accompanied by a detectable electromagnetic signal, as the transferred mass is expected to heat up when it reaches the inner BH binary. If the delay time between this signal and the GW merger is within the lifetime of typical observing missions, then the GW merger could be associated with this electromagnetic counterpart (see also e.g. de Mink & King, 2017). We find that the time between the end of the TMT episode and the GW merger in the case of \(\sigma_{\rm TMT}\lambda_{\rm TMT}=0.05\) is shorter than a year for 6 per cent of these sources at Z = 0.0005. This implies that in this case an electromagnetic counterpart could be detected shortly before the GW merger. This is in contrast with the possible electromagnetic signatures associated with BH mergers in AGN discs, where the electromagnetic counterpart would occur after the GW merger (see e.g. McKernan et al., 2019).
### TMT with a MS-MS accretor
This channel has a low merger rate density of \(R_{\rm merger}=0.2\) Gpc\({}^{-3}\)yr\({}^{-1}\). Even though 25 per cent of all systems containing a CHE binary experience a double MS merger in the inner binary at Z = 0.005, only 1.1 per cent of them form merging binary BHs. This low merger efficiency is due to two reasons. Firstly, if the mass transfer episode between the merger product and the tertiary star proceeds in a dynamically unstable way, the process mostly ends in a stellar merger and no double compact binary is formed. Secondly, if the same mass transfer instead proceeds in a stable way, the binary BH typically has too wide an orbit to merge within a Hubble time. We note, however, that these predictions are sensitively dependent on uncertain stellar physics (such as the efficiency of the CEE phase, the mass-loss radius ex
Figure 8: Left panel: The distribution of the initial outer pericenter of a few selected evolutionary types of the entire CHE triple population at Z = 0.005. The histograms have been normalised to the full population of CHE evolving triples. The histograms shown are stacked. Each colour represents a different evolutionary type. For clarity, we do not show all the types introduced in section 3.1; see the text for discussion. Right panel: The distribution of the initial outer pericenter of the entire CHE triple population at Z = 0.005. We distinguish systems based on the maximum eccentricity of the inner binary reached during their evolution. The histograms are normalised to one and stacked.
Figure 9: The possible formation channels of merging binary BHs from our CHE triple population.
ponent, and the binding energy of stars with \(M_{\rm ZAMS}\gtrsim 100\,M_{\odot}\)). We also note that the merger efficiency is significantly higher in our low metallicity model: 12.3 per cent of triples with a double MS merger form merging binary BHs. As the merger efficiency seems to increase with decreasing metallicity, and we only calculate the merger rate density based on two metallicities, it is likely that we underestimate the merger rate density for this channel (see a more detailed explanation in Appendix B).
In the case of a TMT episode with a MS-MS accretor, we always assume that the inner binary merges due to the mass transfer phase. We justify this assumption by the fact that CHE MS-MS binaries tend to be on very close orbits (\(\sim\)20-30 \(R_{\odot}\)) compared to their stellar radii (\(\sim\)5-10 \(R_{\odot}\)). A significant fraction of them are already in contact. Furthermore, as these stars may swell up as a result of accretion, this type of mass transfer event is likely to end in a merger (e.g. Braudo et al., 2022; Leigh et al., 2020).
The merger product is a rejuvenated MS star with a mass of \(M_{1+2}=M_{1}+M_{2}\). This means that we neglect any accretion during TMT and we assume a fully conservative merger without mass outflows. At Z = 0.005, the mass of the inner binary merger remnant \(M_{1+2}\) ranges from 65 to 188 \(M_{\odot}\). The distribution has a peak around \(\sim 100\,M_{\odot}\). At Z = 0.0005, the mass of the merger product ranges from 70 \(M_{\odot}\) to 190 \(M_{\odot}\).
The orbital separations after the TMT episode are shown in the upper panel of Fig. 11 (and Fig. A5 for our low metallicity model). We can see that the outer orbit widens typically by a factor of 1.7-2.5 and the orbital separations range from 150 to 6800 \(R_{\odot}\). While the ranges are similar at both metallicities, at Z = 0.0005, the typical orbital separations are significantly shorter.
Most of the systems experience a second phase of mass transfer after the TMT episode (62 per cent at Z = 0.005 and 96 per cent at Z = 0.0005), and typically the donor star is on the Hertzsprung gap during this second phase of mass transfer (about 99 per cent at Z = 0.005, and about 86 per cent at Z = 0.0005). More evolved donor stars are not expected to occur frequently, as the onset of CHeB occurs at a cooler effective temperature with increasing mass and is followed by a less significant subsequent radial expansion (Hurley et al., 2000). In particular, for \(M_{\rm ZAMS}\gtrsim 100\,M_{\odot}\), stars are predicted to expand negligibly after CHeB, even at low metallicities.
Regarding the stability of the mass transfer between the merger remnant and the initial tertiary, we find that it occurs in a dynamically unstable manner in 66 (30) per cent of cases at Z = 0.005 (Z = 0.0005). We assume that CE phases with a donor star on the Hertzsprung gap result in a merger, following Dominik et al. (2012) (but see also Klencki et al., 2020; Marchant et al., 2021). At both metallicities, binary BHs are only produced when the second phase of mass transfer proceeds in a stable manner. Furthermore, in order to form a GW source, the orbit needs to be compact enough (\(a_{\rm out}\lesssim 1000\,R_{\odot}\)) at the onset of the second mass transfer event. This only occurs in about 5 per cent (30 per cent) of systems with stable mass transfer at Z = 0.005 (Z = 0.0005).
This is the only GW formation channel of CHE triples that yields significantly different mass and mass-ratio distributions than the CHE binary channel. The masses of the merging binary BHs range from 16 to 27 \(M_{\odot}\) at Z = 0.005 and 17 to 54 \(M_{\odot}\) at Z = 0.0005. The mass ratios range from 0.7 to 0.8 at Z = 0.005 and 0.5 to 1.0 at Z = 0.0005. All other channels produce merging binary BHs with masses that range from 20 to 42 \(M_{\odot}\) at Z = 0.005 and 33 to 54 \(M_{\odot}\) at Z = 0.0005. The vast majority (\(\gtrsim 90\) per cent) of these systems have equal masses, as the inner binaries had been in contact during their MS phase.
Figure 11: The pericenter of the outer orbit before and at the onset of TMT for MS-MS inner (upper panel) and BH-BH inner binary accretors (lower panel), as calculated with equation 17 at Z = 0.005. The grey line shows the ratio of the outer pericenter after and before the TMT episode (i.e. \(a_{\rm p,out,afterTMT}/a_{\rm p,out,beforeTMT}\), with the corresponding values shown on the upper x-axis). We note that we have normalised these distributions to \(10^{4}\) (such that the numbers on the y-axis are of the order of unity).
Figure 10: Orbital parameters of GW sources that evolve through the TMT channel with BH-BH accretors and ballistic accretion at Z = 0.005. In the upper panel we show the pericenter of the inner BH-BH binary (\(a_{\rm p,in}\)). The black line represents \(a_{\rm p,in}\) at the onset of TMT. The coloured lines show the orbital characteristics at the end of the mass transfer for _scenario_ 2, where the change in the orbit has been estimated using an energy formalism with three different values of \(\sigma_{\rm TMT}\) (equations 13 and 14). Lower panel: we show the frequency of the GW radiated by the inner BH-BH binary before and after the tertiary mass transfer phase.
### Dynamical mergers
The merger rate density of this channel is very low, \(R_{\rm merger}=0.05\) Gpc\({}^{-3}\)yr\({}^{-1}\). The delay times of these systems are very short and range from 4 to 20 Myr. Similarly to the GW progenitors that have experienced TMT episodes with ballistic accretion, the short delay times imply that the merger rate density could be about an order of magnitude larger at \(z\approx 2\). About 25 per cent of these systems have eccentricities \(e_{\rm in}\gtrsim 10^{-4}\) when the characteristic GW frequency reaches 10 Hz, making their eccentricities detectable by third-generation detectors (Lower et al., 2018).
For all systems, the tertiary star is still on the MS when the inner binary merges due to GWs, with outer pericenters of \(a_{\rm p,out}\approx 120\)-\(790\,R_{\odot}\). It is therefore expected that the initial tertiary star will eventually fill its Roche-lobe, once it evolves off the MS.
## 6 Conclusion
We studied the evolution of hierarchical triples with CHE stars in the inner binary with a rapid population synthesis approach. We performed simulations with the triple population synthesis code TRES at two representative metallicities: Z = 0.005 and Z = 0.0005. We showed that the evolution of CHE stars can be altered by the presence of a tertiary star in several ways. This can potentially lead to the formation of a number of diverse and unique astrophysical phenomena, e.g. TMT phases with BH-BH accretors, highly eccentric mergers of helium stars, and mergers of binary BHs with very short (a few Myr) delay times.
To summarise our main findings:
1. **Tertiary mass transfer (TMT) episodes are common among CHE triples:** Unlike in classically evolving hierarchical triples, we predict that the TMT phase is very common among CHE triples. The tertiary star fills its Roche-lobe in about 50 per cent of all triples with CHE inner binaries. The same fraction for classically evolving systems is predicted to be a few per cent at best (see e.g. de Vries et al., 2014; Toonen et al., 2020; Hamers et al., 2022; Kummer et al., 2023). We find that the mass transfer episodes initiated by the tertiary star typically occur in a dynamically stable way.
2. **BH-BH inner binaries that accrete from the tertiary star are also common:** About 31 (24) per cent of the tertiary-driven mass-transfer episodes occur with BH-BH accretors at Z = 0.005 (Z = 0.0005). Previous population synthesis studies suggest that such a scenario is probably not possible for triples with classically evolving stars (see e.g. Toonen et al., 2020; Hamers et al., 2022). Therefore, mass transfer towards a BH-BH inner binary represents a unique scenario for triples (or higher-order multiples) with CHE stars in the inner binary.
Figure 12: The time delay distribution of the GW sources of CHE triple population shown by a stacked histogram. As a comparison, we also show the time delay distribution of the GW sources from CHE isolated binaries at the same metallicity (gold). The different colours of the stacked histograms refer to different evolutionary paths of the triples (shown by the legend in the top of the panel)
An exciting prospect would be a possible EM counterpart from such an event (e.g. de Mink & King, 2017).
* **Importance of three-body dynamics:** ZLK oscillations can be effective for CHE triples if the stars in the inner binary have evolved off the MS (otherwise precession due to strong tides quenches the ZLK cycles) and if the initial outer pericenter is \(a_{\rm p,outer,ZAMS}\lesssim 2000\,R_{\odot}\) (otherwise ZLK cycles are quenched by various short range forces throughout the entire evolution of the inner binary). ZLK oscillations are only present in those CHE triples in which the outer pericenter is short enough that the tertiary star would eventually fill its Roche-lobe. The inner eccentricities of these systems can reach values up to \(e_{\rm in,max}\sim 0.9\) (left panel of Fig. 8). The effects of three-body dynamics are negligible for those CHE triples in which the triple remains detached. In this case, the inner binary evolves effectively as an isolated binary.
* **Three-body dynamics can drive the inner binary to a stellar merger:** In about 3 per cent of CHE triples, the inner binary merges before BH-BH formation. The most common type is a merger of a double helium star binary, that comes into contact in a highly eccentric orbit (Table 2).
* **CHE triples form GW sources efficiently:** About 30 (24) per cent of the CHE triple population forms BH binaries that merge due to GWs within Hubble time at Z = 0.005 (Z = 0.0005). We predict a merger rate density of GW sources from CHE triples of \(R_{\rm merger}\approx 12\,\rm Gpc^{-3}yr^{-1}\) (Table 3). We also predict that about half of the GW sources from CHE systems originate from triples. In 69 per cent of all GW sources from CHE triples, the inner binary evolves effectively as an isolated binary and therefore its properties are indistinguishable from those of CHE binaries. In the remaining 31 per cent, the evolution of the GW progenitor is affected by three-body dynamics and/or TMT episodes.
* **Tertiary mass transfer and three-body dynamics could lead to the formation of BH-BH binaries that merge within a few Myr:** The vast majority of those GW progenitors of CHE triples in which the evolution of the inner binary is not decoupled from the tertiary object experience a TMT episode with a BH-BH inner binary. In this case, we model the evolution of the inner binary during the TMT phase with energy arguments (following de Vries et al., 2014, see also subsection 2.5) and with different assumptions on how efficiently the transferred mass shrinks the orbit of the inner binary. We find typical delay times for these GW sources of a few hundred Myr and a few Myr in our model variations with the least and the most orbital shrinkage, respectively.
## Acknowledgements
SdM acknowledges Fabio Antonini, Adrian Hamers and Lieke van Son for insightful discussions. AD acknowledges a travel grant from the HPC3 Europa programme for providing computational resources at the Snelius supercomputer in the Netherlands and acknowledges support from API for allowing an extended visit. Computational work was performed on the Snelius supercomputer in the Netherlands and on the University of Birmingham's BlueBEAR HPC service. ST acknowledges support from the Netherlands Research Council NWO (VENI 639.041.645 and VIDI 203.061 grants). SdM acknowledges funding by the Netherlands Organization for Scientific Research (NWO) as part of the Vidi research program BinWaves with project number 639.042.728.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2302.01035 | Deep Learning Based Predictive Beamforming Design | This paper investigates deep learning techniques to predict transmit
beamforming based on only historical channel data without current channel
information in the multiuser multiple-input-single-output downlink. This will
significantly reduce the channel estimation overhead and improve the spectrum
efficiency especially in high-mobility vehicular communications. Specifically,
we propose a joint learning framework that incorporates channel prediction and
power optimization, and produces prediction for transmit beamforming directly.
In addition, we propose to use the attention mechanism in the Long Short-Term
Memory Recurrent Neural Networks to improve the accuracy of channel prediction.
Simulation results using both a simple autoregressive process model and the
more realistic 3GPP spatial channel model verify that our proposed predictive
beamforming scheme can significantly improve the effective spectrum efficiency
compared to traditional channel estimation and the method that separately
predicts channel and then optimizes beamforming. | Juping Zhang, Gan Zheng, Yangyishi Zhang, Ioannis Krikidis, Kai-Kit Wong | 2023-02-02T11:57:55Z | http://arxiv.org/abs/2302.01035v1 | # Deep Learning Based Predictive Beamforming Design
###### Abstract
This paper investigates deep learning techniques to predict transmit beamforming based on only historical channel data without current channel information in the multiuser multiple-input-single-output downlink. This will significantly reduce the channel estimation overhead and improve the spectrum efficiency especially in high-mobility vehicular communications. Specifically, we propose a joint learning framework that incorporates channel prediction and power optimization, and produces prediction for transmit beamforming directly. In addition, we propose to use the attention mechanism in the Long Short-Term Memory Recurrent Neural Networks to improve the accuracy of channel prediction. Simulation results using both a simple autoregressive process model and the more realistic 3GPP spatial channel model verify that our proposed predictive beamforming scheme can significantly improve the effective spectrum efficiency compared to traditional channel estimation and the method that separately predicts channel and then optimizes beamforming.
channel prediction, beamforming, deep learning.
## I Introduction
Timely and accurate channel state information (CSI) is essential to exploit the full potential of multiuser multi-antenna systems by designing the optimal transmission strategies such as beamforming, but it is challenging to obtain in practice. Traditionally downlink CSI is obtained at a base station (BS) via either feedback from users, or channel estimation via uplink pilots by using the channel reciprocity. Both methods introduce significant overhead, extra error and latency, and as a result the CSI at the BS becomes outdated for beamforming design especially in high-mobility scenarios, e.g., unmanned aerial vehicle and vehicle-to-everything communications.
A more efficient channel acquisition method is to predict channels based on historical CSI data by exploiting the temporal correlation. There has been a large body of research on channel prediction. Early works assume that an accurate channel model, such as the autoregressive (AR) process, or the long-term channel statistics are available, and typically Kalman filtering [1] is employed to estimate the AR coefficients. However, practical channels may not be characterized by analytical models and they could be non-stationary, which degrades the performance of model-based prediction methods.
Recently deep learning based channel prediction has received much attention for its ability to learn CSI from data without prior knowledge about channel models. Starting from a single-antenna system, an efficient long-short term memory (LSTM) network, a type of recurrent neural network (RNN), is proposed in [2] for CSI prediction and adaptation in non-stationary changing channels. Going further, a data-driven receiver architecture that reduces the pilot overhead is designed in [3], following RNN-based channel prediction. The work in [4] begins by designing and conducting a measurement campaign to collect IQ samples of the IEEE 802.11p transmission and extract CSI in various real-world vehicular environments. The LSTM method is then employed to predict future CSI and received signal levels which is verified by trace-based evaluation. Deep learning based channel prediction is extended in [5] to massive multiple-input-multiple-output (MIMO) systems, which improves the channel prediction quality for both low and high mobility scenarios. A comparative study on a vector Kalman filter (VKF) predictor and a deep learning based predictor for massive MIMO systems using the spatial channel model (SCM) is carried out in [6], and it is shown that both can achieve substantial gain over the outdated channel in terms of the channel prediction accuracy and data rate. Following channel prediction, the beamforming optimization has also been studied. A deep learning approach is adopted in [7] to predict the angles between the unmanned aerial vehicle (UAV) and the user equipment in the presence of jittering due to the inherent random wind gusts, such that the UAV and the UE can prepare the transmit and receive beams in advance. A versatile unsupervised deep learning based predictive beamforming design is proposed in [8] in vehicular networks, which implicitly learns the features of historical channels and directly predicts the beamforming matrix to be adopted for the next time slot. The proposed method not only guarantees the required sensing performance, but also achieves a satisfactory sum-rate. Predictive beamforming is studied for dual-functional radar-communication (DFRC) systems in vehicular networks [9], in which a novel message passing algorithm based on factor graph is proposed to estimate the motion parameters of vehicles, and the beamformers are then
designed based on the predicted angles for establishing the communication links.
Different from the aforementioned works that focus on channel prediction only or have assistance from radar, we propose a deep learning framework that takes historical CSI data as input and directly predicts the beamforming solution for future channels to maximize the sum rate performance of a multiuser multi-antenna system. The framework incorporates an LSTM-based channel prediction module and a power optimization module which helps reconstruct the beamforming vectors using hybrid supervised-unsupervised learning. Furthermore, we propose to use the attention mechanism in the LSTM network such that the impact of historical channels in different coherent intervals will be correctly reflected in the channel prediction and this thus improves the performance of beamforming prediction.
The remainder of this paper is organized as follows. Section II introduces the system model and the problem formulation. Section III presents the proposed deep learning framework for predictive beamforming. Simulation results are given to validate the proposed method in Section IV and we conclude our work in Section V.
_Notation:_ All boldface letters indicate vectors (lower case) or matrices (upper case). The superscripts \((\cdot)^{\dagger}\) and \((\cdot)^{-1}\) denote the conjugate transpose and the matrix inverse, respectively. \(\text{vec}(\cdot)\) denotes the vector operation of a matrix. The identity matrix is denoted by \(\mathbf{I}\). \(\|\mathbf{z}\|\) denotes the \(L_{2}\) norms of a complex vector \(\mathbf{z}\).
## II System Model and Problem Formulation
### _System Model_
We consider a multi-input single-output (MISO) downlink system in which a BS with \(N_{t}\)-antennas serves \(K\) single-antenna users that employ single-user detection. Suppose \(s_{k}\) is the transmit signal to the user \(k\) with unit power and the BS transmits with a total power \(P_{T}\). The transmitted data symbol \(s_{k}\) is mapped onto the antenna array elements by multiplying the beamforming vector \(\mathbf{w}_{k}\in\mathbb{C}^{N_{t}\times 1}\). The received signal at the user \(k\) can be expressed as
\[y_{k}=\mathbf{h}_{k}^{\dagger}\mathbf{w}_{k}s_{k}+\sum_{j\neq k}\mathbf{h}_{k }^{\dagger}\mathbf{w}_{j}s_{j}+n_{k}, \tag{1}\]
where \(\mathbf{h}_{k}\in\mathbb{C}^{N_{t}\times 1}\) is the channel between the BS and the user \(k\), the second term represents the interference and \(n_{k}\) denotes the additive white Gaussian noise (AWGN) component with zero mean and variance \(\sigma_{k}^{2}\). Therefore, the signal-to-interference-plus-noise ratio (SINR) that measures quality of the data detection at the \(k\)-th user is given by
\[\Gamma_{k}=\frac{|\mathbf{h}_{k}^{\dagger}\mathbf{w}_{k}|^{2}}{\sum_{j\neq k} |\mathbf{h}_{k}^{\dagger}\mathbf{w}_{j}|^{2}+\sigma_{k}^{2}}. \tag{2}\]
We choose the sum rate as the system performance metric to maximize and the resulting problem is expressed as
\[\max_{\{\mathbf{w}_{k}\}}R\triangleq\sum_{k=1}^{K}\log(1+\Gamma_{k}),\text{s.t }\sum_{k=1}^{K}\|\mathbf{w}_{k}\|^{2}\leq P_{T}. \tag{3}\]
The sum rate optimization problem in (3) is nonconvex and the standard approach to find its suboptimal solution is to use the weighted minimum mean squared error (WMMSE) algorithm [10] assuming the CSI \(\{\mathbf{h}_{k}\}\) is available which normally relies on pilot-based channel estimation and introduces substantial overhead.
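For concreteness, the objective in (3) can be evaluated directly from (2) for any candidate beamformers; the NumPy sketch below does this with randomly drawn channels and simple maximum-ratio placeholder beamformers, and is only an illustration of the metric rather than of the WMMSE solution.

```python
import numpy as np

def sum_rate(H, W, sigma2):
    """Sum rate of (3); row k of H is the channel h_k, row k of W is the beamformer w_k."""
    K = H.shape[0]
    rate = 0.0
    for k in range(K):
        gains = np.abs(H[k].conj() @ W.T) ** 2          # |h_k^dagger w_j|^2 for all users j
        sinr = gains[k] / (gains.sum() - gains[k] + sigma2[k])
        rate += np.log2(1.0 + sinr)                     # the log base only rescales the objective
    return rate

rng = np.random.default_rng(0)
Nt, K, PT = 4, 3, 100.0                                 # 4 antennas, 3 users, 20 dB total power
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
W = H.copy()                                            # maximum-ratio placeholder beamformers
W *= np.sqrt(PT / np.sum(np.abs(W) ** 2))               # enforce the total power constraint
print(sum_rate(H, W, sigma2=np.ones(K)))
```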
### _Problem Formulation_
In this paper, we adopt a hybrid channel-estimation-prediction scheme to solve the problem (3), in which we have CSI estimation of a certain number of channels and then use it to predict CSI of some future channels. Specifically, we assume the time horizon is divided into frames of \(N+P\) coherent intervals, and within each frame, the CSI of the first \(N\) coherent intervals is available, and the CSI of the rest \(P\) coherent intervals will be predicted without estimation.
The known CSI estimation is written as
\[\hat{\mathbf{h}}_{k}(n)=\mathbf{h}_{k}(n)+\mathbf{e}_{k}(n),\forall n=1, \cdots,N, \tag{4}\]
where \(\mathbf{h}_{k}(n)\) is the true CSI of user \(k\) at the \(n\)-th coherent interval and \(\mathbf{e}_{k}(n)\) is the channel estimation error that follows the complex Gaussian distribution with zero mean and variance matrix of \(\sigma_{e,k}^{2}\mathbf{I}\). We assume the least-square channel estimation is used, so \(\sigma_{e,k}^{2}\) depends on pilot transmission power.
The CSI prediction is expressed as
\[\tilde{\mathbf{h}}_{k}(N+m)=\text{M}(\{\hat{\mathbf{h}}_{k}(n)\},\forall n=1, \cdots,N),\forall m=1,\cdots,P, \tag{5}\]
with \(\text{M}(\cdot)\) being the mapping from the known channel estimation to the channel prediction.
With the hybrid scheme, our aim is to solve the problem (3) by predicting the beamforming solutions directly for those \(P\) unknown channels given the \(N\) known channels.
## III Predictive Beamforming Solution
### _The General framework_
Our proposed deep learning based framework to predict the beamforming solution is illustrated in Fig. 1 below.
The proposed framework includes three key modules. The first module is a neural network (NN) for channel prediction, which realizes the mapping \(\text{M}(\cdot)\) from the input known channel estimations \(\hat{\mathbf{h}}_{k}(n),n=1,\cdots,N\) to the predicted channels \(\tilde{\mathbf{h}}_{k}(N+m),m=1,\cdots,P\). Supervised learning is adopted and the associated loss function is
\[L_{H}=\frac{1}{2LK}\sum_{l=1}^{L}\sum_{k=1}^{K}\sum_{m=1}^{P}\Big{(}\|\mathbf{ h}_{k}(N+m)^{(l)}-\tilde{\mathbf{h}}_{k}(N+m)^{(l)}\|^{2}\Big{)}, \tag{6}\]
Fig. 1: The proposed framework for predictive beamforming.
where \(L\) is the number of batches, and the superscript \({}^{(l)}\) denotes the index of the batch. Details of the channel prediction module will be introduced in the next subsection.
The second module is an NN to predict power vectors in order to facilitate the prediction of the beamforming solution. With the predicted channel, the beamforming solution could be inferred directly using supervised learning, but the high-dimensional beamforming vectors would make the training challenging and reduce the accuracy of inference. Instead, we exploit the following parameterized structure of the beamforming solution for the sum rate maximization problem:
\[\tilde{\mathbf{w}}_{k}=\sqrt{p_{k}}\frac{\left(\mathbf{I}+\sum_{j=1}^{K}\frac{q_{j}}{\sigma_{j}^{2}}\tilde{\mathbf{h}}_{j}\tilde{\mathbf{h}}_{j}^{\dagger}\right)^{-1}\tilde{\mathbf{h}}_{k}}{\left\|\left(\mathbf{I}+\sum_{j=1}^{K}\frac{q_{j}}{\sigma_{j}^{2}}\tilde{\mathbf{h}}_{j}\tilde{\mathbf{h}}_{j}^{\dagger}\right)^{-1}\tilde{\mathbf{h}}_{k}\right\|}, \tag{7}\]
where \(\{\mathbf{q}\}\) and \(\{\mathbf{p}\}\) are the uplink and downlink power vectors, respectively, with the same total sum power of \(P_{T}\). This structure is adapted from the result for perfect CSI [11]. It can be seen that once the channel prediction is obtained, we can employ an NN to predict the power vectors \(\{\mathbf{q}\}\) and \(\{\mathbf{p}\}\) using supervised learning, and then reconstruct the beamforming solution using (7). The labelled data can be obtained using the WMMSE algorithm [10]. For simplicity, we use fully connected layers for the power module with details provided in Section IV. Suppose the associated loss is given by
\[L_{P}=\frac{1}{2LK}\sum_{l=1}^{L}\sum_{m=1}^{P}\Big{(}\|\mathbf{q}^{(N+m,l)}-\tilde{\mathbf{q}}^{(N+m,l)}\|^{2}+\|\mathbf{p}^{(N+m,l)}-\tilde{\mathbf{p}}^{(N+m,l)}\|^{2}\Big{)}, \tag{8}\]
where \(\tilde{\mathbf{q}}\) and \(\tilde{\mathbf{p}}\) are the output of the power NN.
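To make the reconstruction step explicit, the following sketch rebuilds the beamforming vectors from predicted channels and power vectors according to (7); the function name and the placeholder inputs are ours, and in the actual framework \(\tilde{\mathbf{q}}\) and \(\tilde{\mathbf{p}}\) would come from the power NN.

```python
import numpy as np

def reconstruct_beamformers(H_pred, q, p, sigma2):
    """Map predicted channels and power vectors to beamformers via the structure in (7)."""
    K, Nt = H_pred.shape
    A = np.eye(Nt, dtype=complex)
    for j in range(K):
        hj = H_pred[j][:, None]                           # column vector h_j
        A += (q[j] / sigma2[j]) * (hj @ hj.conj().T)      # I + sum_j q_j/sigma_j^2 h_j h_j^dagger
    A_inv = np.linalg.inv(A)
    W = np.empty((K, Nt), dtype=complex)
    for k in range(K):
        d = A_inv @ H_pred[k]
        W[k] = np.sqrt(p[k]) * d / np.linalg.norm(d)
    return W
```

Here the entries of q and p are non-negative and each vector sums to the total power \(P_{T}\).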
The next module leverages the predicted channel and power to construct the predictive beamforming solution using (7). To improve the end performance of maximizing the sum rate, we also incorporate the sum rate expression \(R(\{\tilde{h}_{k},\tilde{\mathbf{w}}_{k}\})\) in (3) into the loss function for training the overall NN. The overall loss function is thus given as a weighted sum below:
\[L=L_{H}+L_{P}-bR(\{\tilde{h}_{k},\tilde{\mathbf{w}}_{k}\}), \tag{10}\]
where \(b\) is a positive coefficient. The novel design of the loss function (10) reflects the fact that our proposed solution is a joint learning framework that incorporates channel prediction and power optimization, while it produces prediction for transmit beamforming directly. The rest of this section is devoted to the development of the channel prediction module.
### _Attention-based LSTM Channel Prediction_
Channel prediction aims to predict future CSI given historical CSI data by exploiting the temporal correlation between them. The RNN is a well-suited machine learning technology to predict time series data [12]. However, standard RNNs have the issues of vanishing and exploding gradients during back-propagation, which makes predicting long time series data sequences challenging. LSTM is one of the most successful variants of RNNs for predicting correlated time series data and can solve the issues of vanishing and exploding gradients [12], so it is adopted in this paper to predict the temporally correlated channel.
An LSTM is composed of a memory cell which can store data for long periods. The flow of information into and out of the cell is managed by three gates. Specifically, the forget gate determines what information from the previous state cell will be memorized and what information will be removed that is no longer useful; the input gate determines which information should enter the cell state; and the output gate determines and controls the outputs.
Most existing works in channel prediction use a simple LSTM and only its last hidden state is fed into a fully connected layer to produce the predicted channels.
In this paper we propose an improved solution by introducing the attention mechanism to allow the algorithm to put focus on different historical channels. Specifically, the attention scheme will assign higher weights on more recent channels to improve the future channel prediction. This is intuitive since not all historical CSI data have the same impact on a future channel, and it is mostly influenced by more recent channels. The attention mechanism has gained remarkable success in sequence-to-sequence tasks like language translation and handwritten word recognition [13], but has not been used for channel prediction.
The proposed channel prediction module using LSTM with the attention mechanism is depicted in Fig. 2. It has \(N\) time steps, each corresponding to the known channel estimation in one coherent interval. In order to best make use of the historical data, we create an attention layer, which is located between the LSTM and the fully connected layer. This layer assigns a weight \(c_{n}\) to the hidden state output of each time step \(n\) with \(\sum_{n=1}^{N}c_{n}=1\), and then combines the weighted sum of the original hidden states as the LSTM output state. This allows the importance of channel estimation in different coherent intervals to be correctly characterized. The new output state is then fed into fully connected layers to produce the predicted channels. Instead of treating the weights \(\{c_{n}\}\) as hyperparameters, one distinct advantage of the proposed scheme is to incorporate them in the overall neural network training, so there is no need to adjust them manually.
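As an illustration, a minimal Keras sketch of such an attention layer between the LSTM and the fully connected output layer is given below; the layer sizes, the real-valued CSI encoding and the single-interval output are simplifying assumptions and do not reproduce the exact architecture used in our experiments.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model, Input

def build_attention_lstm(N, feat_in, feat_out, lstm_units):
    """CSI of N known intervals in, predicted CSI out, with attention over LSTM hidden states."""
    x = Input(shape=(N, feat_in))
    h = layers.LSTM(lstm_units, return_sequences=True)(x)   # hidden state of every time step
    scores = layers.Dense(1)(h)                              # unnormalised attention scores
    c = layers.Softmax(axis=1)(scores)                       # weights c_n with sum_n c_n = 1
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([c, h])
    y = layers.Dense(feat_out)(context)                      # fully connected output layer
    return Model(x, y)

# illustrative sizes: N = 20 known intervals, 2*Nt*K = 24 real-valued CSI features per interval
model = build_attention_lstm(N=20, feat_in=24, feat_out=24, lstm_units=48)
model.compile(optimizer="adam", loss="mse")
```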
### _Effective Sum Rate_
In this subsection we define and analyze the effective sum rate \(R_{E}\) as a performance metric that incorporates the sum
rates of estimated channels and predicted channels, which can be written as
\[R_{E}=\frac{(1-\alpha)N}{N+P}R_{e}+\frac{P}{N+P}R_{p}, \tag{11}\]
where \(\alpha\) is the portion of the channel estimation overhead in a coherent interval and therefore satisfies \(\alpha<1\), \(R_{e}\) is the average sum rate for the estimated channels, which can be obtained using an existing neural network method such as that in [14] as if the CSI were perfect, while \(R_{p}\) is the average sum rate for the predicted channels, which is obtained using the proposed predictive beamforming method. Note that compared to the traditional channel estimation based optimization method, our proposed method reduces the channel estimation overhead by \(\alpha P\). Intuitively, the signalling overhead caused by channel estimation reduces the effective sum rate, so choosing a large \(N\) may be beneficial. However, the quality of channel prediction also relies on the number of known channel estimations. Therefore, the effective sum rate does not in general vary monotonically with \(N\) and \(P\), and a balance between low overhead and high-quality prediction needs to be achieved in practice.
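The trade-off expressed by (11) is easy to explore numerically; in the toy sweep below the values of \(R_{e}\) and \(R_{p}(P)\) are arbitrary placeholders, chosen only to show that \(R_{E}\) can peak at a finite \(P\).

```python
def effective_sum_rate(R_e, R_p, N, P, alpha=0.1):
    """Effective sum rate of (11)."""
    return (1.0 - alpha) * N / (N + P) * R_e + P / (N + P) * R_p

# toy numbers: R_p degrades linearly with P, frame length N + P = 40
for P in (5, 10, 20, 30):
    print(P, round(effective_sum_rate(10.0, 10.0 - 0.05 * P, N=40 - P, P=P), 3))
```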
## IV Numerical Results
### _Simulation setup_
In this section, we provide numerical results to validate and evaluate the performance of the proposed deep learning-based predictive beamforming solution. Unless otherwise specified, we consider a MISO downlink consisting of \(N_{t}=4\) transmit antennas and \(K=3\) users. The transmit signal-to-noise ratio (SNR) is \(P_{T}\) and the variance of the channel estimation error is \(\sigma_{e,k}^{2}=1/P_{T},\forall k\). The channel estimation overhead is set to be \(\alpha=0.1\). Unless otherwise specified, we assume \(N=20\) known CSI data are available and \(P_{T}\) is 20 dB. In our simulation, we generate 30,000 training labels and 1,000 testing samples using the WMMSE algorithm in [10]. For the LSTM layer, we use \(2N_{t}KN\) units, while three fully connected layers, each with \(30N_{t}K\) neurons and ReLU activation, are used for the power NN. \(b\) is chosen to be 0.001 in (10). We use Python and Keras in Tensorflow to train the proposed deep learning model. All simulation results are generated by using a computer with an Intel i7-7700 CPU and an NVIDIA Titan Xp GPU. The normalized MSE (NMSE) defined below is used as the performance metric for channel prediction:
\[\text{NMSE}=\mathbb{E}\left[\frac{\|\mathbf{H}-\mathbf{\tilde{H}}\|^{2}}{\| \mathbf{H}\|^{2}}\right]. \tag{12}\]
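A direct implementation of this metric, with Monte-Carlo samples stacked along the first axis, might read:

```python
import numpy as np

def nmse(H_true, H_pred):
    """Normalised MSE of (12); the expectation is taken over samples stacked on axis 0."""
    ax = tuple(range(1, H_true.ndim))
    err = np.sum(np.abs(H_true - H_pred) ** 2, axis=ax)
    ref = np.sum(np.abs(H_true) ** 2, axis=ax)
    return np.mean(err / ref)
```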
The following benchmark schemes are considered for comparison:
* Predict beamforming without attention. It is the same as the proposed method except that the channel prediction module does not employ the attention mechanism.
* Separate optimization, i.e., to use the proposed LSTM-based method with attention to predict the channel, and the zero-forcing (ZF) beamforming is used to optimize the sum rate of both the predicted channels and the known channels.
* Channel estimation followed by ZF beamforming. This is the traditional estimation scheme with pilot overhead in which no channel prediction is used. It may not always be feasible due to the latency.
* Kalman filtering. This scheme uses Kalman filtering for channel prediction and ZF for optimizing the sum rate.
### _Channel models_
We consider two different scenarios for modelling the channel dynamics.
* The first scenario is the first-order AR process. Suppose we collect all users' channels as \(\mathbf{H}\). In this scenario, the temporal evolution of the channel is given by \[\text{vec}(\mathbf{H}_{n})=\beta\text{vec}(\mathbf{H}_{n-1})+\mathbf{u}_{n},\] (13) where \(\beta=\mathcal{J}_{0}(2\pi f_{D}T_{s})\), \(\mathcal{J}_{0}(\cdot)\) is the zeroth-order Bessel function of the first kind, \(f_{D}\) is the maximum Doppler frequency shift and \(T_{s}\) is the sampling duration. \(\mathbf{u}_{n}\) is the zero-mean Gaussian excitation noise with covariance matrix \(\sigma_{u}^{2}\mathbf{I}\) with \(\sigma_{u}^{2}=1-\beta^{2}\). The composite term \(f_{D}T_{s}\) denotes the normalized Doppler rate. In the simulation, we choose \(f_{D}T_{s}=0.005\), so \(\beta=0.9998\), which corresponds to a slow user velocity of 2.7 km/h at a frequency of 2 GHz and sampling duration of 1 ms. For the Kalman filtering, an AR model with the prediction order of one in (13) is used together with the measurement data in (4) to estimate the channels \(\{\mathbf{h}_{n}\},1\leq n\leq N\). To predict the channels without measurement data, the simple state evolution \(\mathbf{h}_{N+m}=\beta\mathbf{h}_{N+m-1},1\leq m\leq P\) is used. Note that we have assumed that the model in (13) with the parameter \(\beta\) is known when designing the Kalman filtering, while for the proposed prediction method, neither the model nor the parameter is available and it learns directly from the data (a short simulation sketch of this AR channel model is given after this list).
* The second scenario is the 3GPP urban micro SCM which represents a more realistic but challenging channel environment [15]. We employ the default SCM parameters in [15] except that NumBsElements is \(N_{t}\), NumMsElements is 1, and NumPaths is set to 1.
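The sketch referred to above generates channels according to the AR(1) model in (13) and noisy estimates according to (4); the helper names are ours and the snippet is meant only to illustrate the data generation, not the full simulation pipeline.

```python
import numpy as np
from scipy.special import j0

def ar1_channels(Nt, K, n_total, fD_Ts=0.005, rng=None):
    """Temporal channel evolution of (13); returns an array of shape (n_total, K, Nt)."""
    if rng is None:
        rng = np.random.default_rng()
    beta = j0(2.0 * np.pi * fD_Ts)                        # J_0(2*pi*f_D*T_s) ~ 0.9998
    sigma_u = np.sqrt(1.0 - beta**2)
    cgauss = lambda size: (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2.0)
    H = np.empty((n_total, K, Nt), dtype=complex)
    H[0] = cgauss((K, Nt))
    for n in range(1, n_total):
        H[n] = beta * H[n - 1] + sigma_u * cgauss((K, Nt))
    return H

def noisy_estimates(H, sigma_e2, rng=None):
    """Channel estimates of (4): true channels plus complex Gaussian estimation error."""
    if rng is None:
        rng = np.random.default_rng()
    E = (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape)) * np.sqrt(sigma_e2 / 2.0)
    return H + E
```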
Fig. 2: (a) The traditional channel prediction using LSTM without attention; (b) The proposed channel prediction using LSTM with an attention layer.
### _Results_
We first consider the first scenario of AR model. The NMSE results of the channel prediction are depicted in Fig. 3(a) against the number of predicted channels \(P\). In general, the more channels to predict, the higher the NMSE is. It is observed that our proposed channel prediction with the attention mechanism is superior to the counterpart without attention, and both achieve much lower NMSE than the traditional channel estimation scheme. Note that this is because our proposed prediction exploits the temporal correlation of the channel, while the traditional channel estimation does not and it simply uses pilots to estimate the current channel. As the number of predicted channel becomes larger, the performance of prediction will unavoidably degrade and become worse than the traditional channel estimation. In the sequel for our proposed solution we assume attention is always used. Kalman filtering achieves the lowest NMSE among all considered schemes by making use of the model information and known parameters, which are not available to the proposed prediction method. The accuracy of the first \(N\) channel estimation results will affect the subsequent channel prediction performance, and therefore in Fig. 3(b) we show the results of NMSE of predicted channels versus the training SNR for the estimated channels when \(P=20\). As can be seen, the NMSE of the traditional channel estimation keeps decreasing as the SNR increases; while for the prediction methods, the NMSE saturates when the SNR is above a certain threshold. This is because the performance of channel prediction is limited by the number of predicted channels no matter how accurate the channel estimation is.
Next, we plot the sum rate and the effective sum rate normalized by the sum rate achieved by the WMMSE solution against the number of predicted channels in Fig. 4(a) and Fig. 4(b), respectively. It can be seen from Fig. 4(a) that our proposed solution achieves the highest sum rate while the sum rate with the traditional channel estimation is the lowest. The separate solution that first predicts the channel and then uses ZF beamforming achieves a slightly lower sum rate than that of Kalman filtering. The same trend can be observed from Fig. 4(b) for the effective sum rate. Because the channel estimation overhead is taken into account, the achievable effective sum rates are lower than those in Fig. 4(a). For our proposed solution, the sum rate performance shows a ceiling effect as the number of predicted channels increases. As per the analysis in Section III.C, this is because more future channels will reduce the prediction accuracy, and consequently the effective sum rate.
Next, we examine the effect of the user velocity on the normalized sum rate and the results are provided in Fig. 5. As expected, the sum rate decreases for all schemes as user mobility increases. Kalman filtering performs well at low velocity, but it cannot track the change of channel dynamics well at high user mobility. Our proposed method achieves much higher effective sum rate than Kalman filter and the separate approach. The channel estimation method clearly outperforms others at high user mobility in theory, but it may not be practical to obtain the channel estimation in time.
We then consider the second scenario of the urban micro SCM model. The normalized sum rate and effective sum rate results are shown in Fig. 6. Kalman filtering is not included in the comparison because in the simulation we found its performance is not satisfactory and this may be because the channel dynamic is too complex for Kalman filtering to predict. We can see from Fig. 6(a) that as the number of predicted channels increases, the sum rate of the separate optimization degrades quickly and is even much worse than the traditional channel estimation based optimization. Our proposed solution
Fig. 4: The normalized sum rate results against the number of predicted channels.
Fig. 5: The normalized sum rate results against the user velocity.
Fig. 3: The NMSE of the predicted channels versus the number of predicted channels; and (b) SNR, \(P=20\).
still achieves the highest sum rate although as the number of predicted channels increases, the performance gap with the traditional solution becomes smaller. This highlights the importance of end to end learning of the predictive beamforming. Fig. 6(b) depicts the effective sum rate results. As expected, our proposed solution achieves superior performance, while the performance of the separate solution is worse than the estimation-based optimization when the number of predicted channels is high. The sum rates of both our proposed solution and the separate solution demonstrate the trend of first increasing and then decreasing. This again validates our analysis in Section III.C that a balance between the reduced overhead and high-quality channel prediction is necessary. For instance, for our proposed solution and the separate solution, the optimal numbers of predicted channels are 10 and 5, respectively.
Finally, we assess the impact of the number of predicted channels \(P\) given a total frame length of \(N+P=40\) in Fig. 7. There is no quantitative criterion on how to choose the optimal \(N\) and \(P\). Intuitively, when \(N\) is larger, we have more channel information available to predict future channels more accurately, but this also causes higher overhead which will reduce the effective sum rate. A good tradeoff can be obtained by empirical study for a specific scenario. As can be seen from Fig. 7, for the AR-model, the channel is relatively easy to predict, i.e., with a small number of known channels \(N\), the proposed algorithm can predict a large number (\(P=30\)) of future channels. While for the SCM model, it is more challenging to track the channel evolution, so only a small number (\(P=10\)) of future channels are predicted in order to achieve a high effective sum rate.
## V Conclusions
In this paper, we have studied predictive beamforming using a deep learning approach in the multiuser MISO downlink. A general framework that predicts the beamforming solution to maximize the sum rate from historical channel measurement data was proposed. An LSTM with an attention layer was devised to improve the performance of channel prediction. Simulation results have shown that the proposed deep learning based solution achieves a significantly higher effective sum rate than the traditional channel estimation based optimization and the separate prediction-then-optimization scheme.
|
2308.14305 | 3D stellar motion in the axisymmetric Galactic potential and the e-z
resonances | The full phase space information on the kinematics of a huge number of stars
provided by the Gaia third Data Release raises the demand for a better
understanding of the 3D stellar dynamics. In this paper, we investigate the
possible regimes of motion of stars in the axisymmetric approximation of a
Galactic potential model. The model consists of three components: the
axisymmetric disk, the central spheroidal bulge and the spherical halo of dark
matter. The axisymmetric disk is divided into stellar and gaseous disk
subcomponents, each one modeled by three Miyamoto-Nagai profiles. The physical
and structural parameters of the Galaxy components are adjusted by
observational kinematic constraints. The phase space of the
two-degrees-of-freedom model is studied by means of the Poincar\'e and
dynamical mapping, the dynamical spectrum method and the direct numerical
integrations of the Hamiltonian equations of motion. For the chosen physical
parameters, the nearly-circular and low-altitude stellar behaviour is composed
of two weakly coupled simple oscillations, radial and vertical motions. The
amplitudes of the vertical oscillations of these orbits are gradually
increasing with the growing Galactocentric distances, in concordance with the
exponential mass decay assumed. However, for increasing planar eccentricities
and the altitudes over the equatorial disk, new regimes of stellar motion
emerge as a result of the beating between the radial and vertical oscillation
frequencies, which we refer to as e-z resonances. The corresponding resonant
motion produces characteristic sudden increase or decrease of the amplitude of
the vertical oscillation, bifurcations in the dynamical spectra and the chains
of islands of stable motion in the phase space. The results obtained can be
useful in the understanding and interpretation of the features observed in the
stellar 3D distribution around the Sun. | Tatiana A. Michtchenko, Douglas A. Barros | 2023-08-28T04:41:19Z | http://arxiv.org/abs/2308.14305v1 | # 3D stellar motion in the axisymmetric Galactic potential and the e-z resonances
###### Abstract
Context:The full phase-space information on the kinematics of a huge number of stars provided by the Gaia Data Release 3 raises the demand for a better understanding of the 3D stellar dynamics.
Aims:In this paper, we investigate the possible regimes of motion of stars in the axisymmetric approximation of the Galactic potential, applying a 3D observation-based model developed elsewhere. The model consists of three components: the axisymmetric disk, the central spheroidal bulge and the spherical halo of dark matter. The axisymmetric disk model is divided into thin and thick stellar disks and H i and H\({}_{2}\) gaseous disks subcomponents, by combining three Miyamoto-Nagai disk profiles of any model order (1, 2, or 3) for each disk subcomponent, to reproduce a radially exponential mass distribution. The physical and structural parameters of the Galaxy components are adjusted by observational kinematic constraints.
Methods:The phase space of the two-degrees-of-freedom model is studied by means of the Poincare and dynamical mapping, the dynamical spectrum method and the direct numerical integrations of the Hamiltonian equations of motion.
Results:For the chosen physical parameters, the nearly-circular (close to the rotation curve) and low-altitude stellar behaviour is composed of two weakly coupled simple oscillations, radial and vertical motions. The amplitudes of the vertical oscillations of these orbits are gradually increasing with the growing Galactocentric distances, in concordance with the exponential mass decay assumed. However, for increasing planar eccentricities \(e\) and the altitudes over the equatorial disk \(z\), new regimes of stellar motion emerge as a result of the beating between the radial and vertical oscillation frequencies, which we refer to as _e-z resonances_. The corresponding resonant motion produces characteristic sudden increase/decrease of the amplitude of the vertical oscillation, bifurcations in the dynamical spectra and the chains of islands of stable motion in the phase-space.
Conclusions:The results obtained can be useful in the understanding and interpretation of the features observed in the stellar 3D distribution around the Sun.
## 1 Introduction
In the past decades, there has been a growing set of evidence that the disk stars of the Milky Way galaxy exhibit density and velocity structures in several phase-space planes of the Galactic disk, in both the radial and vertical directions. The long-known stellar warp in the outer disk, a vertically asymmetric distribution of stars close to the Galactic plane, is seen as a bending of the plane upwards in the first and second Galactic quadrants (longitudes \(0^{\circ}\leq l\leq 180^{\circ}\)) and downwards in the third and fourth quadrants (longitudes \(180^{\circ}\leq l\leq 360^{\circ}\)) (Momany et al., 2006). Recently, a number of stellar density substructures, appearing as overdensities or dips, have been found in the LAMOST data, at several radial and vertical positions in the Galactic disk (Wang et al., 2018).
Large-scale vertical motions as a Galactic North-South asymmetry have been revealed by spectroscopic and astrometric surveys such as SEGUE (Widrow et al., 2012), RAVE (Williams et al., 2013), LAMOST (Carlin et al., 2013), as well as the second _Gaia_ data release (Gaia Collaboration et al., 2018). Such bulk vertical motions of stars present a wave-like behaviour, which has been referred to as bending and breathing motions, with stars coherently moving towards or away from the Galactic mid-plane, on both sides of it (Kawata et al., 2018; Ghosh et al., 2022; Khachaturyants et al., 2022). Several dynamical processes have been evoked to explain these non-zero vertical motions: the ones from external origins such as the passing of the Sagittarius dwarf galaxy through the Milky Way disk (Gomez et al., 2013), or the excitation of the disk due to interactions with dark matter subhaloes (Feldmann and Spolyar, 2015); and the ones from non-axisymmetric internal perturbations such as the breathing motion induced by the Galactic bar (Monari et al., 2015), or by the action of the spiral density waves (Faure et al., 2014; Ghosh et al., 2022; Khachaturyants et al., 2022, among others).
The density structures present in the _R-V\({}_{\varphi}\)_ plane (Galactocentric radius versus azimuthal velocity) of disk stars, known as diagonal ridges, clearly visible in the distribution of the mean Galactic radial velocity \(\langle V_{R}\rangle\), can also be seen in the vertical direction from maps of the mean absolute distance from the disk mid-plane (\(\|z\|\)) and the mean vertical velocity \(\langle V_{z}\rangle\) of the stellar distribution (Khanna et al., 2019; Wang et al., 2020). This attribute may be indicative of a coupling between planar and vertical motions of the stars in the disk. Several attempts to unravel the origin of these structures have been presented in the literature, with some of them relying on simulations of external mechanisms like
the Sagittarius dwarf galaxy perturbation (Antoja et al., 2018; Laporte et al., 2019; Khanna et al., 2019), and others from simulations that take into account internal dynamics such as the bar or the spiral resonances (Hunt et al., 2018; Michtchenko et al., 2018; Fragkoudi et al., 2019; Barros et al., 2020).
Another striking feature associated with the vertical motion of the stars in the Galactic disk is the phase-space spiral (or the so-called snail shell) present in the \(z-V_{z}\) plane. Many authors interpret it as evidence of an ongoing phase mixing in the vertical direction of the disk due to an influence of an outer perturbation (Antoja et al., 2018; Binney and Schonrich, 2018; Bland-Hawthorn et al., 2019; Laporte et al., 2019), or caused by the instability of a buckling bar (Khoperskov et al., 2019), or even due to the earlier discussed vertical bending waves (Darling and Widrow, 2019) or vertical breathing motion (Hunt et al., 2022, for two-armed phase spirals) in the disk. Alternatively, Michtchenko et al. (2019) showed that the phase spiral can be well-comprehended by the dynamical effects of the stellar moving groups in the solar neighbourhood.
We propose to analyse the above-cited Galactic structures and investigate their causes by studying the stellar dynamics in the following steps:
* Modeling a 3D Galactic gravitational potential. In this work, we use the Barros et al. (2016) axisymmetric Galactic potential model, adjusted to recent observational data. Firstly, the chosen potential is suitable to the study of the vertical stellar motion and, secondly, it is simple in analytical forms that gives a good representation of the radially exponential mass distribution of the Galaxy. Moreover, any non-axisymmetric mass effects (e.g., due to the central bar and/or spiral arms) can be easily added to the model posteriorly;
* Studying the Galactic model in the whole phase space, in order to obtain all the possible regimes of stellar motion that could provide reasonable explanations to the observed Galactic phase-space structures. This is, in part, the main objective of the present paper;
* Performing numerical simulations through numerical integrations of stellar orbits, to verify whether our achievements are satisfactory. This is a topic for a future work.
In the present paper, we focus on the study of the stellar orbits in the Galactic disk assuming a zeroth-order approximation of a time-independent and axisymmetric Galactic gravitational potential. The whole phase space around the Sun is investigated through techniques widely used in the Celestial Mechanics. Resonances between the radial and vertical independent modes of the stellar motion are characterized in terms of resonant zones, formed by islands of stability that capture and trap stars inside them, enhancing the stellar density, and separatrices that deplete the objects close to these regions. Our goal, for a subsequent work, is to investigate possible associations between the observed Galactic phase-space structures and the resonant zones originated from the commensurabilities between the radial and vertical frequencies of the stellar motion.
This paper is organized as follows. In Section 2, we describe the 3D axisymmetric Galactic gravitational potential model used for the construction of the Hamiltonian function and the calculus of the stellar orbits. In Section 3, we study the 3D stellar dynamics on the representative plane \(R-V_{\varphi}\) in the radial and vertical directions. In Section 4, regimes of stellar motion are analysed in terms of the beating between the planar and vertical frequencies of oscillation, and, in Section 5, we extend the analysis for a wide range of initial vertical velocities \(V_{z}\). Concluding remarks are drawn in the closing Section 6.
## 2 3D Galactic model
Here, we briefly describe the 3D model for the axisymmetric Galactic potential and refer the reader to Barros et al. (2016) for more details. The model considers the contributions of three Galactic components into the potential; they are the axisymmetric disk, the central spheroidal bulge and the spherical halo of dark matter.
The axisymmetric disk is composed of the stellar (thin and thick disks) and gaseous (H i and H\({}_{2}\) disks) components. Each of these components is a superposition of three Miyamoto-Nagai disks (MN, Miyamoto and Nagai, 1975), which reproduces a radially exponential mass distribution (Smith et al., 2015). We adopt the 1st-, 2nd- and 3rd-order expressions for the potential of the MN-disks at the position \(R\) and \(z\) (cylindrical coordinates), respectively:
\[\Phi^{1}_{\rm MN}(R,z)=\frac{-GM}{\sqrt{R^{2}+(a+\zeta)^{2}}}\,, \tag{1}\]
\[\Phi^{2}_{\rm MN}(R,z)=\Phi^{1}_{\rm MN}(R,z)-\frac{GM\,\alpha(a+\zeta)}{ \left[R^{2}+(a+\zeta)^{2}\right]^{3/2}}\,, \tag{2}\]
\[\Phi^{3}_{\rm MN}(R,z)=\Phi^{2}_{\rm MN}(R,z)+\frac{GM}{3}\times\frac{a^{2} \left[R^{2}-2(a+\zeta)^{2}\right]}{\left[R^{2}+(a+\zeta)^{2}\right]^{5/2}}\,, \tag{3}\]
where \(\zeta=\sqrt{z^{2}+b^{2}}\), with \(b\) being the vertical scale length, while \(M\) and \(a\) are the mass and the radial scale length of each disk component, respectively.
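For reference, the three potentials of Eqs. (1)-(3) are straightforward to evaluate numerically. The Python sketch below is illustrative only; it reads the undefined coefficient \(\alpha\) in Eq. (2) as the radial scale length \(a\) (the only related symbol defined in the text), and adopts \(G\) in units of kpc \((\mathrm{km\,s^{-1}})^{2}\,M_{\odot}^{-1}\).

```python
import numpy as np

G_GAL = 4.30091e-6   # kpc (km/s)^2 / Msun

def phi_mn(R, z, M, a, b, order=3):
    """Miyamoto-Nagai potential of order 1, 2 or 3 (Eqs. 1-3); R, z, a, b in kpc, M in Msun."""
    zeta = np.sqrt(z**2 + b**2)
    s2 = R**2 + (a + zeta)**2
    phi = -G_GAL * M / np.sqrt(s2)                                                # Eq. (1)
    if order >= 2:
        phi -= G_GAL * M * a * (a + zeta) / s2**1.5                               # Eq. (2), alpha read as a
    if order == 3:
        phi += G_GAL * M / 3.0 * a**2 * (R**2 - 2.0 * (a + zeta)**2) / s2**2.5    # Eq. (3)
    return phi

# thin-disk potential at the solar radius, as the sum of three third-order components (Table 1)
thin = [(2.282e10, 3.859), (2.342e10, 9.052), (-1.846e10, 3.107)]
phi_thin_sun = sum(phi_mn(8.122, 0.0, M, a, b=0.243) for M, a in thin)
```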
Figure 1: Top: Rotation curve of the Galaxy. The observed rotation curve is represented by the points with error bars, which indicate masers data from high-mass star-forming regions (Reid et al., 2019; Rastorguev et al., 2017), and H i and CO tangent-point data (Burton and Gordon, 1978; Clemens, 1985; Fich et al., 1989). The red curve shows the analytical rotation curve expressed by Eq. (10). The contributions of the modelled disk (continuous curve), bulge (short-dashed line) and dark halo (long-dashed line) to the axisymmetric Galactic potential are also shown. Bottom: Vertical force \(K_{z}\) at \(z=1.1\) kpc as a function of the Galactic radius. The black dots with error bars are from observation measurements by Bovy and Rix (2013), while the grey solid curve shows the \(K_{z}\) force radial profile at \(z=1.1\) kpc resulting from our Galactic model.
The potential of the thin disk is then written as a combination of three third-order MN-disks
\[\Phi_{\rm thin}(R,z)=\sum_{i=1}^{3}\Phi_{\rm MNi}^{3}(R,z)\,, \tag{4}\]
while the potential of the thick disk is a combination of three first-order MN-disks
\[\Phi_{\rm thick}(R,z)=\sum_{i=1}^{3}\Phi_{\rm MNi}^{1}(R,z)\,. \tag{5}\]
The contributions of the gaseous disks components H i and H\({}_{2}\) into the total axisymmetric potential are, respectively,
\[\Phi_{\rm HI}(R,z) = \sum_{i=1}^{3}\Phi_{\rm MNi}^{2}(R,z)\,, \tag{6}\] \[\Phi_{\rm H_{2}}(R,z) = \sum_{i=1}^{3}\Phi_{\rm MNi}^{3}(R,z)\,. \tag{7}\]
The potential of the Galactic bulge is derived from a Hernquist density distribution profile, in the form (Hernquist 1990):
\[\Phi_{\rm b}(R,z)=\frac{-G\,M_{\rm b}}{\sqrt{R^{2}+z^{2}}+a_{\rm b}}\,, \tag{8}\]
where two free parameters, \(M_{\rm b}\) and \(a_{\rm b}\), are the total mass and the scale radius of the bulge, respectively.
Finally, we consider a spherical dark halo, whose potential is modelled with a logarithmic potential in the form (e.g. Binney & Tremaine 2008):
\[\Phi_{\rm h}(R,z)=\frac{v_{\rm h}^{2}}{2}\,\ln\left(R^{2}+z^{2}+r_{\rm h}^{2}\right)\,, \tag{9}\]
where \(r_{\rm h}\) and \(v_{\rm h}\) are the core radius and the circular velocity at large \(R\) (i.e., \(R\gg r_{\rm h}\)), respectively.
The analytical rotation curve is then calculated using the expression.
\[V_{rot}(R)=\sqrt{R\times\frac{\partial\Phi_{0}}{\partial R}|_{z=0}}\,, \tag{10}\]
where the axisymmetric potential \(\Phi_{0}\) is just the sum of the potentials of the Galactic components described above, that is:
\[\Phi_{0}(R,z)=\Phi_{\rm thin}+\Phi_{\rm thick}+\Phi_{\rm HI}+\Phi_{\rm H_{2}}+ \Phi_{\rm b}+\Phi_{\rm h}\,. \tag{11}\]
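In practice, Eq. (10) can be evaluated with a simple numerical radial derivative of any implementation of \(\Phi_{0}\); the sketch below reuses the phi_mn helper from the previous snippet, with a single, purely illustrative disk component standing in for the full potential of Eq. (11).

```python
import numpy as np

def rotation_curve(phi0, R, dR=1e-4):
    """Circular speed of Eq. (10), V_rot = sqrt(R dPhi_0/dR) at z = 0, via central differences."""
    dphi_dR = (phi0(R + dR, 0.0) - phi0(R - dR, 0.0)) / (2.0 * dR)
    return np.sqrt(R * dphi_dR)

# illustrative single first-order MN disk in place of the full Phi_0 of Eq. (11)
R_grid = np.linspace(1.0, 20.0, 40)
Vc = rotation_curve(lambda R, z: phi_mn(R, z, M=5.0e10, a=5.0, b=0.3, order=1), R_grid)
```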
The physical and structural parameters of the components of the Galactic axisymmetric potential are obtained by applying a fitting procedure, which adjusts the analytical rotation curve Eq. (10) to the observed one. For the observation-based rotation curve, we use data of H i-line tangential directions from Burton & Gordon (1978) and Fich et al. (1989), CO-line tangential directions from Clemens (1985), and maser sources data associated with high-mass star-forming regions from Reid et al. (2019) and Rastorguev et al. (2017). From these data, the Galactic radii and rotation velocities were calculated taking the distance of the Sun from the Galactic center as \(R_{0}=8.122\) kpc (GRAVITY Collaboration et al. 2018) and the Local Standard of Rest (LSR) velocity at the Sun \(V_{0}=233.4\) km s\({}^{-1}\) (Drimmel & Poggio 2018). As additional constraints to the fitting procedure, we used the value of the local angular velocity \(\Omega_{0}\), the local disk surface density \(\Sigma_{0}\), and the local surface density within \(|z|\leq 1.1\) kpc. For details of the data used and the fitting procedure, see Barros et al. (2016, 2021).
The parameters obtained are shown in Table 1 and Table 2. The contributions of each component of the axisymmetric potential to the Galactic rotation curve are plotted in Fig. 1 (top panel), together with the calculated rotation curve (red line) and the data points. Figure 1 (bottom panel) also shows the radial profile of the \(K_{z}\) vertical force resulting from the Galactic model calculated at \(z=1.1\) kpc. We note a good agreement between the modeled force and the measured \(K_{z}\)-force, based on measurements by Bovy & Rix (2013) (black dots with error bars).
The stellar orbits are calculated through the numerical integrations of the equations of motions defined by the Hamiltonian function given as
\[H(R,p_{r},z,V_{z})=\frac{1}{2}\left[p_{r}^{2}+\frac{L_{z}^{2}}{R^{2}}+V_{z}^{ 2}\right]+\Phi_{0}(R,z)\,, \tag{12}\]
where \(p_{r}\) and \(V_{z}\) are the linear radial momentum and vertical velocity, respectively, while \(L_{z}\) is the angular momentum, which is a constant in the axisymmetric approximation (all momenta are given per unit mass).
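The orbits discussed below follow from the canonical equations of motion of Hamiltonian (12): \(\dot{R}=p_{r}\), \(\dot{p}_{r}=L_{z}^{2}/R^{3}-\partial\Phi_{0}/\partial R\), \(\dot{z}=V_{z}\) and \(\dot{V}_{z}=-\partial\Phi_{0}/\partial z\). A minimal integration sketch (not the integrator actually used in this work) is given below; the gradients of \(\Phi_{0}\) are taken numerically and the tolerances are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def integrate_orbit(phi0, R0, pr0, z0, Vz0, Lz, t_end, n_out=2000, eps=1e-5):
    """Integrate the equations of motion of Hamiltonian (12) for a fixed angular momentum Lz.
    Units: kpc and km/s, so one time unit is kpc/(km/s) ~ 0.98 Gyr."""
    def grad_phi(R, z):
        dR = (phi0(R + eps, z) - phi0(R - eps, z)) / (2.0 * eps)
        dz = (phi0(R, z + eps) - phi0(R, z - eps)) / (2.0 * eps)
        return dR, dz
    def rhs(t, y):
        R, pr, z, Vz = y
        dPhi_dR, dPhi_dz = grad_phi(R, z)
        return [pr, Lz**2 / R**3 - dPhi_dR, Vz, -dPhi_dz]
    sol = solve_ivp(rhs, (0.0, t_end), [R0, pr0, z0, Vz0],
                    t_eval=np.linspace(0.0, t_end, n_out),
                    method="DOP853", rtol=1e-10, atol=1e-10)
    return sol.t, sol.y
```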
## 3 3D stellar dynamics on the representative plane
The dynamical model defined by the Hamiltonian in Eq. (12) has two degrees of freedom. The resulting stellar motion can be formally represented by two coupled oscillations: one, in the radial (equatorial) direction, described by the pair of variables \(R-p_{r}\), and the other, in the vertical direction, described by the pair \(z-V_{z}\). The phase space of the system is four-dimensional, which makes it difficult to visualize the stellar orbits. Yet, in this paper, we introduce a representative plane, which allows us to present the main features of the 3D stellar dynamics. This plane
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Disk component & \(M_{1}\) & \(a_{1}\) & \(M_{2}\) & \(a_{2}\) & \(M_{3}\) & \(a_{3}\) & \(b\) \\ & (\(10^{10}M_{\odot}\)) & (kpc) & (\(10^{10}M_{\odot}\)) & (kpc) & (\(10^{10}M_{\odot}\)) & (kpc) & (kpc) \\ \hline Thin disk & 2.282 & 3.859 & 2.342 & 9.052 & -1.846 & 3.107 & 0.243 \\ Thick disk & 0.061 & 0.993 & 4.080 & 6.555 & -3.521 & 7.651 & 0.776 \\ H i & 2.217 & 9.021 & 2.350 & 9.143 & -3.303 & 7.758 & 0.168 \\ H\({}_{2}\) & 1.005 & 6.062 & 0.177 & 3.141 & -0.907 & 4.485 & 0.128 \\ \hline \end{tabular}
\end{table}
Table 1: Physical and structural parameters of the disk components of the axisymmetric Galactic model
\begin{table}
\begin{tabular}{l c c} \hline \hline Bulge & \(M_{b}\) & \(a_{b}\) \\ & (\(10^{10}M_{\odot}\)) & (kpc) \\ \hline & 2.54 & 0.425 \\ \hline \hline Dark halo & \(r_{h}\) & \(v_{h}\) \\ & (kpc) & (km s\({}^{-1}\)) \\ \hline & 5.56 & 169.77 \\ \hline \end{tabular}
\end{table}
Table 2: Physical and structural parameters of the spheroidal components of the Galactic model
is the \(R\)-\(V_{\varphi}\) plane, where \(V_{\varphi}\) is the tangential velocity defined as \(V_{\varphi}=L_{z}/R\); we show this plane in Fig. 2.
To study the stellar motions on the \(R\)-\(V_{\varphi}\) plane, we fix, in this paper, the initial values of the planar linear momentum at \(p_{r}=0\) and the vertical height at \(z=0\). It is worth noting that this choice preserves the generality of the presentation, since all closed stellar motions under the potential given by Eq. (11) pass through these conditions. The initial value of the vertical velocity is chosen as \(V_{z}=20\,\mathrm{km\,s^{-1}}\), except when the dependence of the motion on the initial vertical configuration is analysed (Sect. 5). The chosen velocity value is close to the standard deviation of the \(V_{z}\) distribution of the stars from the _Gaia_ DR3 (Gaia Collaboration et al. 2016, 2023), with \(|z|<1\,\mathrm{pc}\) (we obtained \(\sigma_{V_{z}}=18\,\mathrm{km\,s^{-1}}\)).
First, we plot on the \(R\)-\(V_{\varphi}\) plane in Fig. 2 the main characteristics of the Hamiltonian dynamics: the orbital energy and the angular momentum, which are both constants of motion in our model. The energy levels and \(L_{z}\)-levels are shown by dashed and continuous lines, respectively. The rotation curve obtained using Eq. (10) is shown by the red curve.
By definition, the rotation curve is the locus of the circular orbits, which are the orbits of minimal energy for a given \(L_{z}\)-value. The geometrical interpretation of this condition is that, given the location of a circular orbit on the rotation curve, the corresponding \(L_{z}\)-level is tangent to the minimal energy level on the \(R\)-\(V_{\varphi}\) plane. This fact can be observed in Fig. 2, considering that the energy increases continuously with radial distance \(R\), as shown on the top panel of Fig. 2. The same \(L_{z}\)-level intersects the levels of higher energy always at two points, which are the turning points of the radial oscillation, where \(p_{r}=0\).
We plot in Fig. 2 the direct projections of three 3D stellar orbits: one is the Sun's orbit (cyan dots), calculated with initial values \(R=8.122\,\mathrm{kpc}\), \(p_{r}=-12.9\,\mathrm{km\,s^{-1}}\), \(V_{\varphi}=245.6\,\mathrm{km\,s^{-1}}\), \(z=0.015\,\mathrm{kpc}\) and \(V_{z}=7.78\,\mathrm{km\,s^{-1}}\) (Drimmel & Poggio 2018). The orbits of the other two fictitious stars (black dots) start at points \(\mathbf{1}\) and \(\mathbf{2}\), respectively, with \(p_{r}=0\), \(z=0\) and \(V_{z}=20\,\mathrm{km\,s^{-1}}\). We can verify that all orbits are closed, that is, each one evolves between two turning points, which belong to the same energy level, along the corresponding \(L_{z}\)-level (a constant of motion), around the circular orbit that lies at the intersection of the corresponding energy and \(L_{z}\)-levels.
The detailed analysis of the main features of the stellar motions on the representative \(R\)-\(V_{\varphi}\) plane can be done in two steps: first focusing on the radial oscillations in the Galactic equatorial plane, and then on the vertical oscillations around it. By understanding the behaviour of each component of the stellar motion in the axisymmetric potential, we can consider new effects produced by their interaction. We do this in the following sections.
### Radial oscillations
The \(R\)-\(V_{\varphi}\) plane shown in Fig. 2 is commonly used for understanding the radial (or planar) motion of stars. Indeed, due to the conservation of the angular momentum during the motion, the projections of the 3D orbits are aligned along the corresponding \(L_{z}\)-levels (continuous lines), as observed in the case of our three examples. Since the orbital energy is also conserved in our problem, the orbit projections must match the corresponding energy level at the condition \(p_{r}=0\), which was chosen in the construction of the plane. This condition defines the two turning points of a closed orbit, which delimit the path of the radial oscillation on the \(R\)-\(V_{\varphi}\) plane and, consequently, the maximal (\(R_{\rm max}\)
Figure 3: Poincaré map of the stellar 3D orbits calculated with the initial conditions chosen along the Sun’s \(L_{z}\)–level. The initial vertical conditions were chosen at \(z=0\) and \(V_{z}=20\,\mathrm{km\,s^{-1}}\). The projections are calculated at instants when the star crosses the equatorial plane with \(V_{z}>0\). The orbits are concentric, with the circular orbit at the center (black dot). This orbit lies at the intersection of the rotation curve and the Sun’s \(L_{z}\)–level on the \(R\)–\(V_{\varphi}\) plane, at \(R_{c}=8.522\,\mathrm{kpc}\) and \(V_{\varphi}=234\,\mathrm{km\,s^{-1}}\). Red and cyan dots show the projections of two periodic orbits on the Poincaré map corresponding to the 2/1 and 4/1 resonances, respectively (see later Fig. 7).
Figure 2: Topology of the Hamiltonian (Eq. 12) on the representative \(R\)–\(V_{\varphi}\) plane. Continuous curves are the levels of the angular momentum \(L_{z}=R\times V_{\varphi}\), while dashed curves are the energy levels, calculated with the fixed initial values \(p_{R}=0\), \(z=0\) and \(V_{z}=20\,\mathrm{km\,s^{-1}}\). Rotation curve is shown by red dots. The projections of the three 3D orbits on the plane are: the Sun’s orbit (cyan dots), one orbit starting at the configuration \(\mathbf{1}\) and other at the configuration \(\mathbf{2}\) (black dots). Note that the motion occurs always along a fixed \(L_{z}\)-value, due to conservation of the angular momentum \(L_{z}\). The conservation of the energy during the motion defines two turning points of each orbit, both lying at the intersections of the corresponding \(L_{z}\)– and energy levels. Top panel: the total energy (see Eq. 12), in arbitrary units, calculated along the rotation curve, as a function of \(R\).
and minimal (\(R_{\rm min}\)) values of the radial orbital variation (and the tangential velocity \(V_{\varphi}\), for a given \(L_{z}\)-value).
Any \(L_{z}\)-level crosses the rotation curve (red curve) at one point, at \(R_{c}\), on the \(R\)-\(V_{\varphi}\) plane in Fig. 2, which corresponds to the circular orbit of minimal energy for the corresponding \(L_{z}\). At this condition, the two turning points merge at one point, characterizing the circular orbit with zero-amplitude \(R\)-oscillation, that is, the periodic orbit of the two-degree-of-freedom problem. All other orbits calculated along the same \(L_{z}\)-level are quasi-periodic orbits oscillating with non-zero \(R\)-amplitudes, in such a way that the larger the deviation from the rotation curve, the larger the amplitude of the radial variation.
The amplitude of the radial oscillation is related to the planar eccentricity of the orbit as
\[e=\frac{R_{\rm max}-R_{\rm min}}{2R_{c}}\,,\]
which is uniquely defined by the values of the orbital energy and angular momentum of the stellar orbit. In the conservative problem considered in this paper, it is a constant of motion. The gain/loss of orbital energy due to some external physical process, without changing the angular momentum of the system, will increase/decrease the amplitude of the \(R\)-oscillation and, consequently, the planar eccentricity. This effect, known as 'heating' in stellar dynamics, is analogous to tidal gravitational interactions in planet-satellite systems.
The equatorial motions of the stars are shown in Fig. 3, where we plot the orbits in the subspace \(R\)-\(p_{R}\). The orbits were calculated through numerical integration of the equations of motion defined by the Hamiltonian (Eq. 12), with initial conditions chosen along the level \(L_{z}=1995\,{\rm kpc\,km\,s^{-1}}\), which corresponds to the Sun's orbit (see Fig. 2). To obtain the projection of the 3D motion on the Galactic plane, we gather the orbital coordinates at the instants when the orbits cross the equatorial plane (the condition \(z=0\)), in the direction of the positive \(z\)-values (the condition \(V_{z}>0\)). This approach is known as the Poincaré surface of section method, which allows us to plot the radial motion separately from the vertical one.
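A minimal sketch of this construction (our own illustration, assuming a densely sampled orbit from the integration sketched in Sect. 2) is:

```python
import numpy as np

# Sketch of the Poincare surface-of-section construction used for Fig. 3:
# given a densely sampled orbit (t, R, p_r, z, V_z), record (R, p_r) at every
# upward crossing of the equatorial plane (z = 0 with V_z > 0), using linear
# interpolation between the two samples that bracket the crossing.

def poincare_z0(t, R, pr, z, Vz):
    points = []
    for k in range(len(t) - 1):
        if z[k] < 0.0 <= z[k + 1] and Vz[k] > 0.0:
            f = -z[k] / (z[k + 1] - z[k])
            points.append((R[k] + f * (R[k + 1] - R[k]),
                           pr[k] + f * (pr[k + 1] - pr[k])))
    return np.array(points)
```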
Figure 3 shows that all orbits in the \(R\)-\(p_{R}\) subspace oscillate around a circular periodic orbit (black dot), which lies at the intersection of the corresponding \(L_{z}\)-level and the rotation curve, at \(R_{c}=8.522\,{\rm kpc}\) and \(V_{\varphi}=234\,{\rm km\,s^{-1}}\) on the \(R\)-\(V_{\varphi}\) plane in Fig. 2. This periodic orbit appears as a fixed point on the \(R\)-\(p_{R}\) plane, indicating that the amplitude of the radial oscillation is zero (or \(e=0\)). At the same time, the vertical motion started with the initial \(V_{z}=20\,{\rm km\,s^{-1}}\) has non-zero amplitude; it is a simple oscillation around the equatorial plane (\(z=0\)).
In the vicinity of the fixed point (black dot), between \(7.0\,{\rm kpc}\) and \(10.5\,{\rm kpc}\), the concentric low-eccentricity orbits, with \(e<0.2\), evolve in good agreement with the epicyclic approximation (e.g. Binney & Tremaine 2008). The orbit of the Sun, with an eccentricity of \(0.061\), is located in this region of the phase space. The values of the radial and vertical frequencies for the Sun's \(L_{z}\) level, derived under the epicyclic approximation, are equal to \(0.00542\) and \(0.01387\) (in units of \(1/{\rm Myr}\)), respectively, which correspond to radial and vertical periods of \(184.5\,{\rm Myr}\) and \(72.07\,{\rm Myr}\). The same periods derived from the power spectrum of the Sun's orbit, calculated numerically with our model, are \(161.3\,{\rm Myr}\) and \(75.2\,{\rm Myr}\), respectively.
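For reference, the epicyclic and vertical frequencies quoted above follow from the standard relations \(\Omega^{2}=(1/R)\,\partial\Phi/\partial R\), \(\kappa^{2}=R\,d\Omega^{2}/dR+4\Omega^{2}\) and \(\nu^{2}=\partial^{2}\Phi/\partial z^{2}|_{z=0}\) (Binney & Tremaine 2008); a simple finite-difference sketch (again our own, with `phi_total` a stand-in for the full potential) is:

```python
import numpy as np

# Epicyclic-approximation frequencies:
#   Omega^2 = (1/R) dPhi/dR,  kappa^2 = R dOmega^2/dR + 4 Omega^2,
#   nu^2 = d^2 Phi / dz^2 at z = 0,
# evaluated by finite differences of an axisymmetric potential phi_total(R, z)
# given in (km/s)^2 with R, z in kpc.  The factor 1/977.8 converts km/s/kpc
# into 1/Myr, so the returned values are ordinary frequencies in 1/Myr.

def epicyclic_frequencies(phi_total, R, h=1e-4):
    to_inv_myr = 1.0 / 977.8
    omega2 = lambda r: (phi_total(r + h, 0.0) - phi_total(r - h, 0.0)) / (2.0 * h) / r
    kappa2 = R * (omega2(R + h) - omega2(R - h)) / (2.0 * h) + 4.0 * omega2(R)
    nu2 = (phi_total(R, h) - 2.0 * phi_total(R, 0.0) + phi_total(R, -h)) / h**2
    f_r = np.sqrt(kappa2) / (2.0 * np.pi) * to_inv_myr
    f_z = np.sqrt(nu2) / (2.0 * np.pi) * to_inv_myr
    return f_r, f_z
```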
The orbits with increasing eccentricity, however, deviate from the epicyclic approximation and, when the eccentricity approaches \(0.5\), a notable feature appears in Fig. 3. Indeed, at radial distances beyond \(12\,{\rm kpc}\), we clearly observe a lack of circulating orbits, and instead some islands appear on the Poincaré map. As shown below, the orbits in this region are structurally different from the low-eccentricity orbits and are related to a new mode of motion. The planar axisymmetric model is not able to explain this feature; thus, we proceed to investigate the vertical motion and its interaction with the planar component.
Figure 4: Representative \(R\)–\(V_{\varphi}\) plane shown in Fig. 2, with the same rotation curve (red dots) and the Sun’s orbit (cyan dots) along the \(L_{z}\)–level (dashed line). The equidistant levels are used to represent the amplitude of the vertical oscillation of the stellar orbits (see top panel to associate the absolute \(z_{\rm max}\)-values), with the initial conditions \(z=0\) and \(V_{z}=20\,{\rm km\,s^{-1}}\). Top panel: the amplitude of the vertical oscillation, \(z_{\rm max}\), calculated along the rotation curve, as a function of \(R\).
Figure 5: Same as in Fig. 3, except the projections were calculated at conditions \(p_{R}=0\) and \(\dot{p}_{R}>0\) on the \(z\)–\(V_{z}\) plane. Red, cyan and green dots show the projections of three periodic orbits on the Poincaré map corresponding to the \(2/1\), \(4/1\) and \(6/1\) resonances, respectively. The large island of the orbits oscillating around the center at \(z=0\,{\rm kpc}\) and \(V_{z}=103.0\,{\rm km\,s^{-1}}\) (black dot) is related to the strong \(1/1\) resonance (see later Fig. 7).
### The vertical oscillations
The same representative \(R\)-\(V_{\varphi}\) plane shown in Fig. 2 can be used to study the stellar motion in the vertical direction. For this, we plot in Fig. 4, by continuous lines, the levels of the maximal altitude over the equatorial plane that a star reaches during its vertical oscillation (\(z_{\rm max}\)). To obtain \(z_{\rm max}\), we integrate numerically, over several billion years, the equations of motion defined by the Hamiltonian (Eq. 12), over a 200\(\times\)200 grid of initial conditions covering the \(R\)-\(V_{\varphi}\) plane. All orbits start with \(p_{R}=0\) on the Galactic equatorial plane (at \(z=0\)) and with the same initial vertical velocity \(V_{z}=20\,{\rm km\,s}^{-1}\). The panel on the top of Fig. 4 shows \(z_{\rm max}\) obtained in this way for the circular orbits located along the rotation curve (red curve). We also plot the projection of the Sun's orbit (cyan dots) along the \(L_{z}\)-level (dashed line) on the \(R\)-\(V_{\varphi}\) plane.
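Schematically, the map is built as in the sketch below (our own illustration; `integrate_orbit` is a hypothetical wrapper around the integration shown in Sect. 2 that returns the sampled orbit arrays):

```python
import numpy as np

# Sketch of the construction of the dynamical map of Fig. 4: for each node of a
# grid of initial (R, V_phi), integrate the orbit started with p_R = 0, z = 0
# and V_z = 20 km/s, and record the maximal |z| reached.

def zmax_map(integrate_orbit, R_grid, Vphi_grid, Vz0=20.0, t_max=10.0):
    zmax = np.zeros((len(R_grid), len(Vphi_grid)))
    for i, R0 in enumerate(R_grid):
        for j, Vphi0 in enumerate(Vphi_grid):
            Lz = R0 * Vphi0
            t, R, pr, z, Vz = integrate_orbit([R0, 0.0, 0.0, Vz0], Lz, t_max)
            zmax[i, j] = np.max(np.abs(z))
    return zmax

# Example grid (200 x 200 nodes, as in the text):
# R_grid = np.linspace(4.0, 30.0, 200); Vphi_grid = np.linspace(50.0, 400.0, 200)
```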
To understand the \(z_{\rm max}\)-evolution on the Galactic \(R\)-\(V_{\varphi}\) plane, we first consider the nearly circular orbits, distributed along the rotation curve. For these low-eccentricity orbits, the amplitude of the vertical oscillation increases smoothly with growing Galactocentric distance, as shown on the top panel of Fig. 4. This result is consistent with the adopted radially decaying exponential mass distribution and is in agreement with the observed increase of the scale height of the disk stars with Galactic radius, which is the well-known flaring of the Galactic disk (Lopez-Corredoira et al., 2002; Amores et al., 2017; Robin et al., 2022).
However, the continuous evolution of the \(z_{\rm max}\)-levels is interrupted in the regions of increasing planar eccentricity, as we move away from the rotation curve on the \(R\)-\(V_{\varphi}\) plane in Fig. 4, toward both higher and lower tangential velocities. To understand this behaviour, it is worth noting that, in domains of regular oscillations, small changes in the initial conditions lead to small changes of the orbital elements, in particular of the amplitude \(z_{\rm max}\). Consequently, the levels suffer only slight displacements on the map when the initial conditions are gradually changed. However, in the vicinity of the resonances, small changes in the initial configurations produce large changes of the orbital elements, forming singular structures, such as resonant islands and stochastic layers. On the dynamical map in Fig. 4, these structures appear in the form of stalactites of different widths.
On the Poincaré map in Fig. 5, the same structures appear as chains of islands contrasting with the orbits that regularly circulate around the origin. To construct this map, we pick up the orbital coordinates \(z\) and \(V_{z}\) at the instants when the star is at the turning point of its orbit (\(p_{r}=0\)) corresponding to the minimal radial distance (\(\dot{p}_{r}>0\)). The initial configurations of the orbits were chosen along the Sun's \(L_{z}\)-level, with \(z\) and \(V_{z}\) fixed at 0 and \(20\,{\rm km\,s}^{-1}\), respectively, as in Fig. 3. However, in this case, the initial \(R\)-values were extended to very large radial distances, up to \(27\,{\rm kpc}\).
Comparing this map to the map of the equatorial motion shown in Fig. 3, we verify that the behaviour in the vertical direction is more complicated. We can observe projections of qualitatively different types: at smaller stellar altitudes there are circulating orbits of regular motion. With growing height, chains of islands of different widths appear, separated by stochastic layers. A large island dominates the region of high vertical velocities in the positive half-plane of the map and is surrounded by a sea of chaotic motion. To identify the cause of this behaviour, we apply a spectral analysis method and describe the results in the next section.
## 4 The e-z resonances
To understand the features of the resonant motion, we start by plotting the amplitude of the vertical oscillation (\(z_{\rm max}\)) as a function of the initial radial distance \(R\) on the top panel in Fig. 6. In contrast to the one shown on the top panel of Fig. 4, the amplitude \(z_{\rm max}\) was now calculated with initial conditions chosen along the \(L_{z}\)-level corresponding to the Sun's orbit. In this case, all orbits are eccentric, with the exception of the one located at \(R_{c}=8.522\,{\rm kpc}\), at the intersection of the \(L_{z}\)-level with the rotation curve in Fig. 4.
For the orbits starting along the same \(L_{z}\)-level at the same initial configurations in the vertical direction (\(z=0\) and \(V_{z}=20\,{\rm km\,s}^{-1}\)), the minimal value of \(z_{\rm max}\) is associated with the circular orbit, at \(R_{c}=8.522\,{\rm kpc}\). The vertical amplitude generally increases as the orbital eccentricity grows with both increasing and decreasing distances from the circular orbit. However, this evolution of \(z_{\rm max}\) is not monotonic, but shows sudden increases/decreases at some radial distances. The nature of this behaviour can be analyzed using the dynamical power spectrum method, which allows us to detect changes in the regime of motion through the analysis of the evolution of the main frequencies of the dynamical system (a detailed description of the method and its applications to different systems can be found in, e.g., Michtchenko et al., 2002, 2017).
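In its simplest form, the proper frequencies can be read off the Fourier power spectra of \(R(t)\) and \(z(t)\); the sketch below (our own, simplified version of the idea) illustrates this:

```python
import numpy as np

# Simplified sketch of the dynamical power-spectrum idea: the dominant peaks of
# the Fourier spectra of R(t) and z(t) give the proper frequencies f_r and f_z;
# scanning them along a family of initial conditions reveals the regions where
# f_r / f_z ~ m/n (the e-z resonances) through beats and bifurcations.

def proper_frequency(signal, dt):
    sig = np.asarray(signal) - np.mean(signal)
    power = np.abs(np.fft.rfft(sig))**2
    freqs = np.fft.rfftfreq(len(sig), d=dt)
    return freqs[1:][np.argmax(power[1:])]      # skip the zero-frequency bin

# f_r = proper_frequency(R_of_t, dt); f_z = proper_frequency(z_of_t, dt)
```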
Figure 6 bottom shows two proper frequencies, \(f_{r}\) and \(f_{z}\), which are the frequencies of the radial and vertical oscillations, respectively, as functions of the radial distance \(R\). The evolution of the \(f_{r}\)-frequency (red dots) and its harmonics is monotonic over the whole \(R\)-range; it reaches its maximal value at \(R_{c}=8.522\,{\rm kpc}\), which is expected since the circular orbit is an equilibrium solution of the effective potential \(\Phi_{\rm eff}=\frac{L_{z}^{2}}{2R^{2}}+\Phi_{0}(R,z)\), with constant \(L_{z}\). On the other hand, the behaviour of the \(f_{z}\)-frequency (black dots) in the dynamical spectrum is highly non-harmonic and we can observe the appearance of bifurcations, stable islands and an erratic scattering of the dots, which is characteristic of chaotic motion.
Figure 6: Top: Amplitude of the vertical oscillation of stars as a function of the initial Galactocentric distance. The initial values of \(V_{\varphi}\) are chosen along the Sun’s \(L_{z}\)–level, while \(z=0\) and \(V_{z}=20\,{\rm km\,s}^{-1}\). Bottom: Dynamical spectrum of the stellar orbits calculated along the Sun’s \(L_{z}\)–level. The nominal positions of some resonances are shown by vertical dashed lines.
Analyzing the evolution of the proper frequencies in Fig. 6 bottom, we verify that bifurcations occur where the two independent frequencies beat against each other. This condition is known as a resonance between distinct modes of motion, and it leads to changes in the regime of motion. One such beat occurs around \(R=14\,\)kpc, where \(f_{r}\simeq 2\,f_{z}\), giving origin to the island of the 2/1 resonant motion. The periodic orbit of the 2/1 resonance is shown in the subspaces \(R\)-\(p_{R}\) (top) and \(z\)-\(V_{z}\) (bottom) in Fig. 7 (column **a**); the projections of this orbit in the Poincaré maps are shown by red dots in Figs. 3 and 5.
Beating frequencies are also observed at \(R=17.8\,\)kpc in the dynamical spectrum in Fig. 6 bottom; this event is associated with the 4/1 resonance. The radial and vertical oscillation modes of the corresponding periodic orbit are shown in Fig. 7 (column **b**); the projection of this orbit in the Poincaré maps is shown by cyan dots in Figs. 3 and 5. In Fig. 5, we also show, by green dots, the projection of the 6/1 resonant orbit (see Fig. 7 **c**); the location of this resonance in the dynamical spectrum is at \(R=19.2\,\)kpc.
We denote these resonances as the \(e\)-\(z\) resonances, where \(e\) and \(z\) denote planar eccentricities and vertical heights of stellar orbits, respectively. The most important of the \(e\)-\(z\) resonances is the 1/1 resonance, whose domain is large and bounded by layers of chaotic motion (separatrices). For a given value of \(L_{z}\), it appears at very large Galactic distances; in the dynamical spectrum in Fig. 6, its domain extends from 20 kpc to 35 kpc. The motion inside the 1/1 resonance is stable, with the amplitude of the vertical oscillation reaching its minimal value at the center (locus) of the resonance, at \(\sim\)27 kpc; this locus appears as a fixed point (black dot) in the Poincaré map in Fig. 5. The locus is the periodic orbit of the 1/1 resonance shown in Fig. 7 (column **d**). The neighborhood of the 1/1 resonance is generally a domain of highly unstable motion. This is due to the overlap of high-order resonances of the type \(f_{r}/f_{z}\simeq m/n\) (\(m\) and \(n\) integers).
Finally, it is worth emphasizing that, comparing the evolution of the two frequencies in Fig. 6 bottom, we note the qualitative difference in the behaviour of the two independent modes: while the radial motion seems to be unaffected by the passages through the resonances, their impact on the vertical motion is significant. This could be explained by the peculiar characteristics of the massive disk potential.
## 5 The dependence on the initial vertical velocity \(V_{z}\)
In this section, we investigate the resonant behaviour as a function of the initial vertical velocity, \(V_{z}\). For this, we introduce the representative plane \(R\)-\(V_{z}\) of the initial conditions and fix the rest of the variables at \(p_{r}=0\), \(z=0\) and \(L_{z}=1995\,\)kpc\(\,\)km\(\,\)s\({}^{-1}\), the last value corresponding to the Sun's angular momentum.
Figure 8 presents the amplitude of the vertical oscillation in the form of the \(z_{\rm max}\)-levels on the representative \(R\)-\(V_{z}\) plane. The red curve shows the positions of the circular orbits of constant \(L_{z}\), which are slightly displaced toward larger Galactocentric distances as the initial vertical velocity increases. The panel beside the \(R\)-\(V_{z}\) plane allows us to quantify the \(z_{\rm max}\)-amplitude, showing its values calculated along the loci of the circular orbits (red curve). The smooth evolution of this quantity with increasing initial \(V_{z}\) can be observed.
The bifurcation structures, which are characteristic of the resonances, appear outside the loci of the circular orbits in Fig. 8 and are strengthened with increasing planar eccentricity of the orbits. The black curve on the side panel shows the evolution of the \(z_{\rm max}\)-amplitude calculated along the constant initial \(R=13\,\)kpc. We observe behaviour similar to that shown on the top panel of Fig. 6, indicating passages through several \(e\)-\(z\) resonances. The most prominent passage is through the 1/1 resonance, whose loci are shown by the blue dots in Fig. 8. The projection of the Sun's trajectory on the \(R\)-\(V_{z}\) plane is shown by the cyan dots. Its proximity to the rotation curve prevents its capture into one of the resonances.
Figure 7: Resonant periodic orbits on the \(R\)–\(p_{R}\) plane (top row) and \(z\)–\(V_{z}\) plane (bottom row). The initial conditions were chosen along the Sun’s \(L_{z}\)–level, while \(z=0\) and \(V_{z}=20\,\)km\(\,\)s\({}^{-1}\). Column (a): the 2/1 resonant orbit with \(e=0.45\) (red dots on the Poincaré maps in Figs. 3 and 5). Column (b): the 4/1 resonant orbit with \(e=0.72\) (cyan dots in Figs. 3 and 5). Column (c): the 6/1 resonant orbit with \(e=1.01\) (green dots in Fig. 5). Column (d): the 1/1 resonant orbit with \(e=0.802\) (black dot in Fig. 5).
## 6 Conclusions
In this paper, we report the existence of the resonant motion of the kind \(e\)-\(z\) (\(e\) and \(z\) being the planar eccentricity and the vertical altitude of stellar orbits, respectively), produced by the axisymmetric Galactic 3D potential.
We investigate the spatial motion of the stars applying the model for the axisymmetric potential of the Galactic disk elaborated by Barros et al. (2016). The model accounts for the combined gravitational effects of the two stellar disks (thin and thick disks) and two gaseous disks (H i and H\({}_{2}\) disks). Moreover, to account for a radially exponential Galactic mass distribution, each of the disks is approximated by a composition of three Miyamoto-Nagai disk profiles (Smith et al., 2015). The physical and structural parameters of the modeled components were chosen to reproduce the observed rotation curve of the Galaxy.
The motion of stars in the axisymmetric Galactic potential is investigated in the whole phase space applying several techniques from Celestial Mechanics. One of these is the dynamical mapping of a representative plane, chosen here as the \(R\)-\(V_{\varphi}\) plane. Based on the conservation laws, we identify two independent modes of the stellar motion, a radial and a vertical one. The radial motion is an oscillation around the circular solution, that is, the circular orbit belonging to the rotation curve and characterized by the same angular momentum. The vertical motion is an oscillation around the equatorial Galactic plane.
For nearly circular orbits, with small planar eccentricities, the two oscillations are weakly coupled. It is worth noting that the circular orbits survive even when the vertical velocities are very high. However, when the planar eccentricities increase, the coupling between the two modes of motion also increases. Their interaction is best visualised in the dynamical power spectra, which clearly show the regions of the phase space where the two proper frequencies are commensurable. These beating frequencies indicate the resonant zones, which we denote as \(e\)-\(z\) resonances.
In Hamiltonian theories, resonances are fundamental properties of dynamical systems with two or more degrees of freedom. They occur in domains of the phase space where the frequencies of the independent modes of motion are commensurable. The locations and sizes of the resonant domains depend strongly on the physical parameters of the model adopted to describe the system under study. Thus, by associating some observable structures of the objects with the resonances, we can assess reliable ranges for the parameters of the applied model. One example of this approach is shown in Michtchenko et al. (2018), where we relate the moving groups in the solar neighbourhood to the Lindblad resonances produced by the spiral perturbations, and estimate the spiral strength and pattern speed.
The main features of the \(e\)-\(z\) resonant motion are sudden increases/decreases in the amplitude of the vertical oscillation, the capture and trapping of stars inside the stable resonant zones, which enhances the density there, and the depletion of the regions close to the separatrices of the resonances. This behaviour is characteristic of the vertical oscillations, while the planar radial motion is only slightly affected by the \(e\)-\(z\) resonances; this is probably due to peculiar properties of the disk potential.
It is worth emphasizing that, despite the fact that the resonances occur at very large Galactic distances, the high eccentricities of the resonant orbits allow their detection in the solar neighbourhood, as illustrated by Fig. 7. Indeed, analyzing the behaviour of the \(Gaia\)-eDR3 sample from the solar vicinity (\(R_{\odot}\pm 0.2\) kpc and \(|p_{R}|\geq 100\) km s\({}^{-1}\)), we have found that 10% of one thousand randomly chosen stars show resonant oscillations, mainly inside the \(2/1\) resonance. Of course, whether these stars really evolve inside the \(e\)-\(z\) resonances depends strongly on the parameters adopted in our model. Thus, we should be able to observe the described manifestations of the \(e\)-\(z\) resonances by analysing the distributions of the observable proper elements of the stars. The comparison with theoretical predictions could then allow us to improve the poorly known values of the parameters that describe the observable Galactic mass distribution.
###### Acknowledgements.
We acknowledge the anonymous referee for the detailed review and for the many helpful suggestions which allowed us to improve the manuscript. This work was supported by the Brazilian CNPq, FAPESP, and CAPES. This work has made use of the facilities of the Laboratory of Astroinformatics (IAG/USP, NAT/Unicsul), funded by FAPESP (grant 2009/54006-4) and INCT-A. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
|
2309.02896 | Time-dependent properties of run-and-tumble particles. II.: Current
fluctuations | We investigate steady-state current fluctuations in two models of
run-and-tumble particles (RTPs) on a ring of $L$ sites, for \textit{arbitrary}
tumbling rate $\gamma=\tau_p^{-1}$ and density $\rho$; model I consists of
standard hardcore RTPs, while model II is an analytically tractable variant of
model I, called long-ranged lattice gas (LLG). We show that, in the limit of
$L$ large, the fluctuation of cumulative current $Q_i(T, L)$ across $i$th bond
in a time interval $T \gg 1/D$ grows first {\it subdiffusively} and then {\it
diffusively} (linearly) with $T$, where $D$ is the bulk diffusion coefficient.
Remarkably, regardless of the model details, the scaled bond-current
fluctuations $D \langle Q_i^2(T, L) \rangle/2 \chi L \equiv {\cal W}(y)$ as a
function of scaled variable $y=DT/L^2$ collapse onto a {\it universal} scaling
curve ${\cal W}(y)$, where $\chi(\rho,\gamma)$ is the collective particle {\it
mobility}. In the limit of small density and tumbling rate $\rho, \gamma
\rightarrow 0$ with $\psi=\rho/\gamma$ fixed, there exists a scaling law: The
scaled mobility $\gamma^{a} \chi(\rho, \gamma)/\chi^{(0)} \equiv {\cal H}
(\psi)$ as a function of $\psi$ collapse onto a scaling curve ${\cal H}(\psi)$,
where $a=1$ and $2$ in models I and II, respectively, and $\chi^{(0)}$ is the
mobility in the limiting case of symmetric simple exclusion process (SSEP). For
model II (LLG), we calculate exactly, within a truncation scheme, both the
scaling functions, ${\cal W}(y)$ and ${\cal H}(\psi)$. We also calculate
spatial correlation functions for the current, and compare our theory with
simulation results of model I; for both models, the correlation functions decay
exponentially, with correlation length $\xi \sim \tau_p^{1/2}$ diverging with
persistence time $\tau_p \gg 1$. Overall our theory is in excellent agreement
with simulations and complements the findings of Ref. {\it arXiv:2209.11995}. | Tanmoy Chakraborty, Punyabrata Pradhan | 2023-09-06T10:36:13Z | http://arxiv.org/abs/2309.02896v1 | # Time-dependent properties of run-and-tumble particles. II.: Current fluctuations
###### Abstract
We investigate steady-state current fluctuations in two models of run-and-tumble particles (RTPs) on a ring of \(L\) sites, for _arbitrary_ tumbling rate \(\gamma=\tau_{p}^{-1}\) and density \(\rho\); model I consists of standard hardcore RTPs, while model II is an analytically tractable variant of model I, called long-ranged lattice gas (LLG). We show that, in the limit of \(L\) large, the fluctuation of cumulative current \(Q_{i}(T,L)\) across \(i\)th bond in a time interval \(T\gg 1/D\) grows first _subdiffusively_ and then _diffusively_ (linearly) with \(T\), where \(D\) is the bulk diffusion coefficient. Remarkably, regardless of the model details, the scaled bond-current fluctuations \(D(Q_{i}^{2}(T,L))/2\chi L\equiv\mathcal{W}(y)\) as a function of scaled variable \(y=DT/L^{2}\) collapse onto a _universal_ scaling curve \(\mathcal{W}(y)\), where \(\chi(\rho,\gamma)\) is the collective particle _mobility_. In the limit of small density and tumbling rate \(\rho,\gamma\to 0\) with \(\psi=\rho/\gamma\) fixed, there exists a scaling law: The scaled mobility \(\gamma^{*}\chi(\rho,\gamma)/\chi^{(0)}\equiv\mathcal{H}(\psi)\) as a function of \(\psi\) collapse onto a scaling curve \(\mathcal{H}(\psi)\), where \(a=1\) and \(2\) in models I and II, respectively, and \(\chi^{(0)}\) is the mobility in the limiting case of symmetric simple exclusion process (SSEP). For model II (LLG), we calculate exactly, within a truncation scheme, both the scaling functions, \(\mathcal{W}(y)\) and \(\mathcal{H}(\psi)\). We also calculate spatial correlation functions for the current, and compare our theory with simulation results of model I; for both models, the correlation functions decay exponentially, with correlation length \(\xi\sim\tau_{p}^{1/2}\) diverging with persistence time \(\tau_{p}\gg 1\). Overall our theory is in excellent agreement with simulations and complements the findings of Ref. _arXiv:2209.11995_.
## I Introduction
Characterizing the time-dependent properties of interacting self-propelled particles (SPPs), also known as active matter, has attracted a lot of attention in the past [1; 2]. SPPs convert chemical energy into directed or persistent motion ("run"); they move with a velocity \(v\) and randomly change directions, or tumble, with a rate \(\gamma\), thus breaking detailed balance at microscopic scales and driving the system out of equilibrium. Because of the delicate interplay between persistence and interactions, they display remarkable collective behavior, such as flocking [3], clustering [4], "giant" number fluctuation [5], and anomalous transport [6; 7]. Indeed, over the last couple of decades, significant effort has been made to better understand the emergent properties of active matter through studies of paradigmatic models, such as the celebrated Vicsek models [3], active Brownian particles (ABPs) [8] and run-and-tumble particles (RTPs) [9]. However, despite numerous simulation and analytical studies in the past [1; 2], theoretical characterization of the dynamic properties of these many-body systems poses a major challenge and is still far from complete.
Understanding dynamic phenomena, such as density relaxation and current fluctuation, in out-of-equilibrium many-body systems is a fundamental problem in statistical physics. However, unlike the linear-response theory for equilibrium systems, a general theoretical formulation of the issue in a nonequilibrium setting is still lacking. Nevertheless, a deeper understanding of the problem for driven diffusive systems is gradually emerging through studies of macroscopic transport coefficients such as the collective- or _bulk-diffusion coefficient_ and the _mobility_[10; 11]. While the density relaxation is characterized through the bulk-diffusion coefficient, which is related to the relaxation rate of long-wavelength perturbations [12], the current fluctuation can be characterized through the collective particle mobility [10]. In principle, the bulk-diffusion coefficient should be distinguished from the self-diffusion coefficient of a tagged particle [13; 14; 15]; however the distinction, which can be quite striking especially in one dimension, and also in other models [16; 17], is somewhat less emphasized in the context of active matter systems [18; 19]. One might expect the two transport coefficients to be connected [17; 20], but it is not clear how. It is worth noting here that the _cumulative_ or the space-time integrated current across the entire system is nothing but the _cumulative_ displacement of all (tagged) particles, in a given direction. One of the primary objectives of our work is the theoretical characterization of the space-time integrated current fluctuations. Another goal is to understand the spatial correlations of (appropriately coarse-grained) current or, equivalently, "velocity", which has recently received significant attention in the contexts of coarsening in the Vicsek model [21], ordering dynamics in the ABPs [22; 23; 24; 25], and other active matter systems [26; 27; 28; 29].
In the past decades, several analytical studies of current fluctuations for passive lattice gases have been conducted by using microscopic [20; 30; 31; 32; 33] as well as hydrodynamic frameworks [10; 34; 35]. However, their extension to interacting SPPs is still a work in progress. In fact, unlike studies of tagged particle displacement fluctuations [36; 37; 38; 39], there are few studies of current fluctuations in conventional models
of SPPs, with the exception of an exact analysis for noninteracting RTPs [40] and an approximate analysis for ABPs [41]. Recently, exact studies of fluctuations in "weakly interacting" RTPs [42], which are governed by mean-field hydrodynamics, were carried out in Refs. [43; 44]. While weakly interacting RTPs undergo diffusion (symmetric hopping) with a _finite_ rate, the run and tumble dynamics on the other hand occur with vanishingly small _system-size-dependent_ rates, thus distinguishing the model from the _conventional or the standard_ ones [45; 46]. Indeed, in the latter models, the hopping dynamics is _independent_ of system-size; as a result, spatial correlations are _finite_, and long-ranged for small tumbling rates. In such a case, the mean-field description of Refs. [42; 43] would not be applicable and a detailed microscopic study of the conventional, "strongly interacting", RTPs is desirable. It is worth noting that the order of limits, tumbling rate \(\gamma\to 0\) and system size \(L\to\infty\), is important and, depending on the order, there can be various distinct circumstances as described below.
(1) As one takes the limit \(\gamma\to 0\) first and then the limit \(L\to\infty\) (by keeping drift velocity \(v\) fixed), the system eventually goes into a state of _dynamical arrest_[47], where dynamical activities cease and the system becomes frozen in time.
(2) The tumbling rate \(\gamma(L)\sim{\cal O}(L^{-\delta_{1}})\) and drift velocity \(v(L)\sim{\cal O}(L^{-\delta_{2}})\) are taken to be system-size-dependent and one takes the limit \(L\to\infty\), by keeping diffusion rate fixed; a specific case with \(\delta_{1}=2\) and \(\delta_{2}=1\) was considered in Refs. [42] and [44].
(3) On the other hand, in this work, we consider RTPs in the limit of \(L\) large, with \(\gamma\) fixed (diffusion rate is strictly zero) [15]. We are particularly interested in the \(\gamma\to 0\) limit, but, in that case, the \(L\to\infty\) limit is taken first; in other words, the persistence length \(l_{p}=v/\gamma\) is finite, and large, but much smaller than the system size \(L\), \(1\ll l_{p}\ll L\). We keep the drift velocity \(v\) fixed.
In this paper, we use a microscopic approach to investigate spatio-temporal correlations of current in two models of strongly interacting run-and-tumble particles (RTPs): Model I - standard hardcore RTPs and model II - a hardcore long-ranged lattice gas, which is an analytically tractable idealized variant of model I. We study the models on a ring of \(L\) sites for arbitrary tumbling rate \(\gamma=\tau_{p}^{-1}\) and density \(\rho\). Interestingly, despite having nontrivial many-body correlations, model II is amenable to "first-principles" analytical calculations, whereas model I is studied through Monte-Carlo simulations. We demonstrate that large-scale fluctuations in both models can be characterized in terms of the two density- and tumbling-rate-dependent transport coefficients - the bulk-diffusion coefficient \(D(\rho,\gamma)\) and the collective particle mobility \(\chi(\rho,\gamma)\). Indeed, the current fluctuations and the mobility in model II are calculated exactly, within a previously introduced truncation scheme of Ref. [17], and is expressed in terms of distribution of gaps between two consecutive particles. For convenience, we provide below a brief summary of our main findings.
* _Spatial correlations of current.-_ We calculate steady-state spatial correlation function \(C_{r}^{JJ}=\lim_{t\to\infty}\langle J_{0}(t)J_{r}(t)\rangle\) evaluated at two spatial points separated by distance \(r\), with \(J_{i}(t)\) being instantaneous current across bond \((i,i+1)\) at time \(t\). The correlation functions decay exponentially, \(C_{r}^{JJ}\sim\exp(-r/\xi)\); the correlation length \(\xi(\rho,\gamma)\) is analytically calculated for LLG, with \(\xi\sim\sqrt{\tau_{p}}\) diverging with persistence time \(\tau_{p}\), thus providing a theoretical explanation of the findings in recent simulations and experiments [22; 28].
* We calculate fluctuations of space-time integrated current \(Q_{tot}=\sum_{i=1}^{L}Q_{i}(T)\) or, equivalently, the cumulative displacement of all particles, during time interval \(T\); here \(Q_{i}(T)=\int_{t}^{t+T}dtJ_{i}(t)\) is the cumulative bond current across the \(i\)th bond \((i,i+1)\) during time \(T\). We then study the collective _mobility_\(\chi(\rho,\gamma)\equiv\lim_{L\to\infty}(1/2LT)\langle Q_{tot}^{2}\rangle\) (a simple numerical estimator of \(\chi\) is sketched after this list), which, in the limit of \(\rho,\gamma\to 0\), obeys a scaling law: The scaled mobility \(\gamma^{a}\chi(\rho,\gamma)/\chi^{(0)}\) as a function of scaled variable \(\psi=\rho/\gamma\) is expressed through a scaling function \({\cal H}(\psi)\); here \(\chi^{(0)}=\rho(1-\rho)\) is the mobility in the limiting case of symmetric simple exclusion process (SSEP) [10]. The scaling function for LLG is calculated analytically and, in the limit of strong persistence \(\psi\gg 1\), it is shown to have an asymptotic behavior \({\cal H}(\psi)\sim\psi^{-3/2}\).
* _Time-integrated bond-current fluctuations.-_ Depending on the density- and tumbling-rate-dependent collective- or bulk-diffusion coefficient \(D(\rho,\gamma)\) and system size \(L\), we find three distinct time regimes for the fluctuations of time-integrated bond current \(Q_{i}(T)\). (i) _Initial-time regime_\(T\ll 1/D\)_: The bond-current fluctuation \(\langle Q_{i}^{2}\rangle(T,L)\) depends on the details of dynamical rules. It exhibits linear (diffusive) growth for model II (LLG), whereas, for model I (standard RTPs), it crosses over from superdiffusive (even ballistic at low densities) to a diffusive growth as tumbling rate increases. (ii) _Intermediate-time regime_\(1/D\ll T\ll L^{2}/D\)_: The current fluctuation displays subdiffusive growth \(\langle Q_{i}^{2}\rangle\sim\sqrt{T}\). (iii) _Long-time regime_\(L^{2}/D\ll T\)_: The bond-current fluctuation \(\langle Q_{i}^{2}\rangle\sim T\) grows diffusively (linear growth). So the qualitative behavior in regimes (ii) and (iii) is _universal_, being independent of dynamical rules of the models; though the prefactors in the growth laws are model-dependent.
* _Universal scaling of bond-current fluctuations.-_ Remarkably, in the limit of \(L,T\to\infty\) with scaled time \(y=DT/L^{2}\) fixed and regardless of the dynamical
rules of the models, the above mentioned behavior can be succinctly expressed through a _universal_ scaling law for the scaled bond-current fluctuation \(D(\rho,\gamma)\langle Q_{i}^{2}\rangle(T,L)/2\chi L\equiv\mathcal{W}(y)\) as a function of \(y=DT/L^{2}\) [see Eq. (81)]. For model II (LLG), the scaling function \(\mathcal{W}(y)\) is calculated exactly within the truncation scheme.
* We also calculate, numerically for model I (standard RTPs) and analytically for model II (LLG), the two-point dynamic correlation function \(\mathcal{C}_{0}^{JJ}(t)=\langle J_{i}(0)J_{i}(t)\rangle\) for instantaneous current \(J_{i}(t)\). For model II, by using our microscopic dynamical calculations, we derive the dynamic correlation function \(\mathcal{C}_{0}^{JJ}(t)\sim-t^{-3/2}\), which is shown to have a long-time power-law tail, with the correlations actually being _negative_.
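As announced above, a simple (hypothetical) numerical estimator of the mobility from simulation data can be sketched as follows; `Q_samples` is assumed to hold the bond currents \(Q_{i}(T)\) measured in independent steady-state runs of either model.

```python
import numpy as np

# Hypothetical estimator of the collective mobility:
# Q_samples is an array of shape (n_realizations, L) holding the time-integrated
# bond currents Q_i(T); then chi ~ <Q_tot^2> / (2 L T) with Q_tot = sum_i Q_i(T).

def estimate_mobility(Q_samples, L, T):
    Q_tot = Q_samples.sum(axis=1)
    return np.mean(Q_tot**2) / (2.0 * L * T)
```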
The paper is organized as follows: In Sec. II, we introduce two models of hardcore RTPs. In Sec. III.1, we describe the formal procedure for decomposition of current into "slow" (diffusive) and "fast" (noise-like) components. Then in Sec. III.2, we introduce a truncation scheme, which allows us to calculate the spatio-temporal correlations of time-integrated currents. In Secs. III.3, III.4, we investigate spatio-temporal correlations of instantaneous and fluctuating currents, respectively. Next in Sec. III.5, we characterize fluctuations of total current, leading to the characterization of the collective particle mobility \(\chi(\rho,\gamma)\); here we also find a scaling law in the limit of strong persistence and low density. In Sec. III.6, we characterize bond-current fluctuation and find another scaling law, presumably universal, in the large-scale diffusive limit. Finally, we summarize the paper in Sec. IV with some concluding remarks.
## II Model description
We consider two minimal models of interacting RTPs on a one-dimensional periodic lattice of \(L\) sites where the number of particles \(N\) is conserved with density \(\rho=N/L\). In both models, particles obey hardcore constraint, i.e., a site can be occupied by at most one particle and also the crossing between particles is not allowed. We denote the occupation variable \(\eta_{i}=1\) or \(0\), depending on whether the site is occupied or not, respectively.
### Model I: Standard hardcore RTPs
We consider standard hardcore RTPs (see the schematic diagram in the _top-panel_ of Fig. 1) introduced in Ref. [46]. In this model, in addition to the occupation variable, a spin \(s=\pm 1\) is assigned to each particle, with \(s=1\) and \(s=-1\) represent its rightward and leftward orientations, respectively. The continuous-time stochastic dynamics is specified below.
(A) _Run:_ With unit rate, a particle hops, along its spin direction, to its nearest neighbor provided that the destination site is vacant.
(B) _Tumble:_ With rate \(\gamma=\tau_{p}^{-1}\), a particle changes its spin orientation, \(s\rightarrow-s\).
Clearly particles retain their spin orientation over a time scale of the persistence time \(\tau_{p}\) and, during this time, they exhibit ballistic motion with a constant speed
Figure 1: _Schematic diagram of model I and II_: The top panel illustrates the typical dynamics of model I, which is composed of standard hardcore RTPs (red circles) on a one-dimensional lattice (green rectangles). RTPs move along the associated spin indicated by the arrows above them. In the bottom panel, we demonstrate the dynamics of hardcore particles (red circle) in model II (i.e., LLG) on one dimensional lattice (green rectangle). Particles hop symmetrically by length \(l\) drawn from distribution \(\phi(l)\sim e^{-l/l_{p}}\). The “crosses” in both panels indicate the impossibility of the time-reversed moves, as shown by the dotted arrows, and thus the violation of detailed balance at the microscopic level in both models I and II.
\(v\) [note that \(v=1\), set by rule (A)] along the vacant stretch available in the direction of its spin.
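A minimal kinetic Monte Carlo sketch of these rules (our own illustration, not the production code used for the simulations reported below) is:

```python
import numpy as np

# Minimal kinetic Monte Carlo sketch of model I: hardcore RTPs on a ring of L
# sites.  Each particle hops one site along its spin with rate 1 (if the target
# site is empty) and flips its spin with rate gamma; blocked hop attempts are
# rejected but still advance time, which keeps the rates correct.

rng = np.random.default_rng(1)

def simulate_rtp(L=1000, rho=0.3, gamma=0.1, t_max=1e3):
    N = int(rho * L)
    pos = rng.choice(L, size=N, replace=False)
    spin = rng.choice([-1, 1], size=N)
    occ = np.zeros(L, dtype=bool)
    occ[pos] = True
    t, rate_tot = 0.0, N * (1.0 + gamma)
    while t < t_max:
        t += rng.exponential(1.0 / rate_tot)
        k = rng.integers(N)
        if rng.random() < gamma / (1.0 + gamma):          # tumble
            spin[k] = -spin[k]
        else:                                             # run (hop attempt)
            target = (pos[k] + spin[k]) % L
            if not occ[target]:
                occ[pos[k]] = False
                occ[target] = True
                pos[k] = target
    return pos, spin

pos, spin = simulate_rtp(L=200, rho=0.3, gamma=0.1, t_max=50.0)
```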
### Model II: Hardcore long-ranged lattice gas (LLG)
Because of the additional spin variable, model I proves to be difficult to deal with analytically. To address the issue, we also explore a simpler idealized variant of hardcore RTPs, called long-ranged lattice gas (LLG) [48; 15], which is amenable to analytical studies. The long-range hopping mimics ballistic motion ("run") of individual RTPs, having a characteristic run-length - the _persistence length_\(l_{p}=v/\gamma\). Indeed, model II (LLG) is motivated by the fact that, on the time scale of \(\tau_{p}\), a single RTP would hop, on average, by the typical length \(l_{p}\).
The precise dynamical rules of model II (see the schematic diagram in the _bottom-panel_ of Fig. 1) are as follows. With unit rate, a particle attempts to hop symmetrically by length \(l\), drawn from a distribution \(\phi(l)\). The hop is successful provided that, along the hopping direction, there is a vacant stretch, called _gap_, of size \(g\) at least as large as \(l\) (i.e., \(g\geq l\)); otherwise (i.e., for \(g<l\)), due to the hardcore constraint, the particle traverses the entire stretch and sits adjacent to its nearest occupied site. The hop-length distribution \(\phi(l)\) can be arbitrary. However, in order to compare the two models I and II, we choose \(\phi(l)\) to be an exponential function,
\[\phi(l)=A\exp(-l/l_{p}), \tag{1}\]
where the normalization constant \(A=(1-e^{-1/l_{p}})\) as hop-lengths \(l=0,1,2,\dots\) are discrete.
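The LLG dynamics can be sketched in the same spirit; the snippet below (again only an illustration, with our own variable names) samples geometric hop lengths consistent with Eq. (1) and records the time-integrated bond currents \(Q_{i}\) used in the following sections.

```python
import numpy as np

# Sketch of model II (LLG) on a ring: with unit rate a particle picks a random
# direction and a hop length l from phi(l) = (1 - p) p^l with p = exp(-1/l_p)
# (the discrete exponential of Eq. (1)), and moves by min(l, g), where g is the
# gap to the next particle in that direction.  Every bond swept by the move
# gets a +1 (rightward) or -1 (leftward) contribution to Q_i.

rng = np.random.default_rng(2)

def simulate_llg(L=1000, rho=0.3, l_p=10.0, t_max=1e3):
    N = int(rho * L)
    occ = np.zeros(L, dtype=bool)
    pos = rng.choice(L, size=N, replace=False)
    occ[pos] = True
    Q = np.zeros(L)                    # Q[i]: integrated current across bond (i, i+1)
    p = np.exp(-1.0 / l_p)
    t = 0.0
    while t < t_max:
        t += rng.exponential(1.0 / N)
        k = rng.integers(N)
        d = 1 if rng.random() < 0.5 else -1
        l = rng.geometric(1.0 - p) - 1                 # samples l = 0, 1, 2, ...
        g = 0
        while g < l and not occ[(pos[k] + d * (g + 1)) % L]:
            g += 1                                     # available gap, capped at l
        if g > 0:
            occ[pos[k]] = False
            for step in range(g):
                bond = (pos[k] + step) % L if d == 1 else (pos[k] - step - 1) % L
                Q[bond] += d
            pos[k] = (pos[k] + d * g) % L
            occ[pos[k]] = True
    return Q

Q = simulate_llg(L=200, rho=0.3, l_p=5.0, t_max=20.0)
```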
## III Theory: Model II
In this section, we develop a microscopic theoretical framework to analytically calculate current fluctuations in model II (hardcore LLG). We also compare our analytic results with that obtained from direct Monte Carlo simulations of both models I and II.
### Decomposition of current: Slow and fast components
To begin with, we first define the cumulative (time-integrated) bond current \(Q_{i}(T)\), which is the total current across the bond \((i,i+1)\) in a time interval \(T\). On the other hand, the instantaneous current \(J_{i}(t)\) is defined as
\[J_{i}(t)\equiv\lim_{\Delta t\to 0}\frac{\Delta Q_{i}}{\Delta t}, \tag{2}\]
where \(\Delta Q_{i}=\int_{t}^{t+\Delta t}dtJ_{i}(t)\) is cumulative bond current in time interval \(\Delta t\). Note that, while we investigate fluctuation properties of both the quantities \(Q_{i}(t)\) and \(J_{i}(t)\), in simulations it is statistically more efficient to calculate the averages related to the time-integrated current \(Q_{i}(t)\) than the instantaneous one \(J_{i}(t)\).
However, before discussing second moment (fluctuations) of currents, in this section we first investigate their average behavior and set the notations, required in the subsequent part of the paper. To this end, let us define the following stochastic variables,
\[{\cal U}^{(l)}_{i+l} \equiv \overline{\eta}_{i+1}\overline{\eta}_{i+2}\dots\overline{\eta}_{i +l}, \tag{3}\] \[{\cal V}^{(l+2)}_{i+l+1} \equiv \eta_{i}\overline{\eta}_{i+1}\overline{\eta}_{i+2}\dots\overline {\eta}_{i+l}\eta_{i+l+1}, \tag{4}\]
where \(\overline{\eta}_{i}=(1-\eta_{i})\), \({\cal U}^{(l)}\) and \({\cal V}^{(l+2)}\) are indicator functions for a single site being vacant, \(l\) consecutive sites being vacant and a vacancy cluster to be of size \(l\), respectively. Note that, whenever a particle performs long-range hop of length \(l\), it contributes a unit current at each bond in the stretch between the departure and destination sites; that is, for a rightward (leftward) hop, current across _all_ bonds in that stretch increases (decreases) by unity. By considering this, the continuous-time evolution for time-integrated current \(Q_{i}(t)\) in an infinitesimal time interval \([t,t+dt]\) can be written as
\[Q_{i}(t+dt)=\left\{\begin{array}{ll}Q_{i}(t)+1,&\mbox{prob.}\quad{\cal P}^{ R}_{i}(t)dt,\\ Q_{i}(t)-1,&\mbox{prob.}\quad{\cal P}^{L}_{i}(t)dt,\\ Q_{i}(t),&\mbox{prob.}\quad 1-({\cal P}^{R}_{i}+{\cal P}^{L}_{i})dt,\end{array}\right. \tag{5}\]
where \({\cal P}^{R}_{i}dt\) and \({\cal P}^{L}_{i}dt\) are probabilities of the hopping events and the corresponding rates are given by [15]
\[{\cal P}^{R}_{i} \equiv \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{k=1}^{l}\left({ \cal U}^{(l)}_{i+k}-{\cal U}^{(l+1)}_{i+k}\right)+\sum_{g=1}^{l-1}\sum_{k=1}^{ g}{\cal V}^{(g+2)}_{i+k+1}\right]. \tag{6}\] \[{\cal P}^{L}_{i} \equiv \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{k=1}^{l}\left({ \cal U}^{(l)}_{i+k-1}-{\cal U}^{(l+1)}_{i+k}\right)+\sum_{g=1}^{l-1}\sum_{k=1}^ {g}{\cal V}^{(g+2)}_{i+k}\right]. \tag{7}\]
By using the above microscopic update rules and doing some straightforward algebraic manipulations, the average instantaneous current \(\langle J_{i}(t)\rangle\) can be written as [15]
\[\langle J_{i}(t)\rangle = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}\left( \langle{\cal V}^{(g+2)}_{i+g+1}\rangle-\langle{\cal V}^{(g+2)}_{i+1}\rangle\right)\right. \tag{8}\] \[\left.\hskip 142.26378pt+\left(\langle{\cal U}^{(l)}_{i+l}\rangle- \langle{\cal U}^{(l)}_{i}\rangle\right)\right].\]
Note that, in the above equation, the average current \(\langle J_{i}(t)\rangle\) is written as a (generalized) gradient of the observables \(\langle{\cal V}^{(g+2)}\rangle(\rho)\) and \(\langle{\cal U}^{(l)}\rangle(\rho)\), both of which depend on (local) density and, of course, tumbling rate. By making use of a Taylor expansion, we can write the average current explicitly in terms of the (discrete) gradient of the (local) density [15],
\[\langle J_{i}(t)\rangle\simeq-D(\rho,\gamma)[\langle\eta_{i+1}(t)\rangle- \langle\eta_{i}(t)\rangle] \tag{9}\]
where \(\rho_{i}(t)=\langle\eta_{i}(t)\rangle\) is the local density and the bulk-diffusion coefficient \(D(\rho,\gamma)\)[15]
\[D(\rho,\gamma)=-\frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\frac{\partial}{\partial\rho}\left[\sum_{g=1}^{l-1}g\langle\mathcal{V}^{(g+2)}\rangle(\rho)+l\langle\mathcal{U}^{(l)}\rangle(\rho)\right], \tag{10}\]
is a function of the global density \(\rho\) and tumbling rate \(\gamma\).
As the system is homogeneous in the steady-state, the gradients in Eq. (8) simply vanish, implying that the system has _zero_ steady-state current. However, on the level of fluctuations, the (stochastic) instantaneous current is in fact _nonzero_ even in the steady-state. Now, to characterize fluctuations appropriately [10; 11], we decompose the total instantaneous current into two components: A (hydrodynamic) diffusive current \(J_{i}^{(D)}\), which, though stochastic, relaxes very slowly, and a fluctuating or noise current \(J_{i}^{(fl)}\), which relaxes very fast. In other words, we write the instantaneous current as the sum of these slow and fast components,
\[J_{i}(t)=J_{i}^{(D)}(t)+J_{i}^{(fl)}(t), \tag{11}\]
where we identify, by using Eq. (8), the diffusive current
\[J_{i}^{(D)} \equiv \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\Bigg{[}\sum_{g=1}^{l-1} \left(\mathcal{V}_{i+g+1}^{(g+2)}-\mathcal{V}_{i+1}^{(g+2)}\right) \tag{12}\] \[+\left(\mathcal{U}_{i+l}^{(l)}-\mathcal{U}_{i}^{(l)}\right) \Bigg{]}.\]
Indeed, as we derive later [see Eqs. (56) and (63)], the time-dependent correlation function for the diffusive current \(J_{i}^{(D)}\) (and also for the total current \(J_{i}\)) has a power-law tail, whereas that for the fluctuating (noise) current \(J_{i}^{(fl)}\) is delta-correlated. Also, comparing Eqs. (8), (11) and (12), we find that the average fluctuating current is simply zero,
\[\langle J_{i}^{(fl)}(t)\rangle=0. \tag{13}\]
However, the space-time correlations of \(J_{i}^{(fl)}\) have a nontrivial spatial structure [see Eq. (63)] and, in the subsequent section, they are calculated analytically by using a truncation scheme, which we discuss next.
### Spatio-temporal correlations of current
We consider the time-integrated currents \(Q_{r}(t^{\prime})\) and \(Q_{0}(t)\), which are measured up to times \(t^{\prime}\) and \(t\) (\(t^{\prime}>t\)) across bonds \((r,r+1)\) and \((0,1)\), respectively, where the bonds are spatially separated by a distance \(r\). In this section, we investigate the space-time-dependent correlation function for bond current \(Q_{i}(t)\),
\[\mathcal{C}_{r}^{QQ}(t^{\prime},t) = \left\langle Q_{r}(t^{\prime})Q_{0}(t)\right\rangle_{c}, \tag{14}\] \[= \left\langle Q_{r}(t^{\prime})Q_{0}(t)\right\rangle-\left\langle Q _{r}(t^{\prime})\right\rangle\left\langle Q_{0}(t)\right\rangle.\]
As we choose \(t^{\prime}>t\), it is easy to see that, in an infinitesimal time-interval \([t^{\prime},t^{\prime}+dt^{\prime}]\), \(Q_{0}(t)\) remains constant and any change in \(\mathcal{C}_{r}^{QQ}(t^{\prime},t)\) occurs solely due to the change in \(Q_{r}(t^{\prime})\). Now, by using the infinitesimal update rules in Eq. (5), we write down the time-evolution equation for \(\mathcal{C}_{r}^{QQ}(t^{\prime},t)\) as given below (see Appendix A for details),
\[\frac{d}{dt^{\prime}}\mathcal{C}_{r}^{QQ}(t^{\prime},t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\Big{[}\Big{\{}\mathcal{C}_{r +l}^{\mathcal{U}^{(l)}Q}(t^{\prime},t)-\mathcal{C}_{r}^{\mathcal{U}^{(l)}Q}(t ^{\prime},t)\Big{\}} \tag{15}\] \[+\sum_{g=1}^{l-1}\Big{\{}\mathcal{C}_{r+g+1}^{\mathcal{V}^{(g+2) }Q}(t^{\prime},t)-\mathcal{C}_{r+1}^{\mathcal{V}^{(g+2)}Q}(t^{\prime},t)\Big{\}} \Big{]}.\]
In other words, we have the following identity,
\[\frac{d}{dt^{\prime}}\mathcal{C}_{r}^{QQ}(t^{\prime},t)=\left\langle J_{r}^{(D )}(t^{\prime})Q_{0}(t)\right\rangle_{c}, \tag{16}\]
where \(J_{r}^{(D)}(t^{\prime})\) is the diffusive current at the \(r\)th bond at time \(t^{\prime}\). Note that Eq. (15) for the two-point current correlation is exact and has been expressed in terms of the gradients of two nontrivial multi-point correlation functions,
\[\mathcal{C}_{r}^{\mathcal{U}^{(l)}Q}(t^{\prime},t)=\langle\mathcal{ U}_{r}^{(l)}(t^{\prime})Q_{0}(t)\rangle_{c}, \tag{17}\] \[\mathcal{C}_{r}^{\mathcal{V}^{(g+2)}Q}(t^{\prime},t)=\langle \mathcal{V}_{r}^{(g+2)}(t^{\prime})Q_{0}(t)\rangle_{c}. \tag{18}\]
However, a difficulty arises here because the two-point correlation in Eq. (15) actually involves various multi-point correlation functions, which must now be calculated in order to determine \(\mathcal{C}_{r}^{QQ}(t^{\prime},t)\). Not surprisingly, the hierarchy involving the time evolution of \(\mathcal{C}_{r}^{\mathcal{U}^{(l)}Q}(t^{\prime},t)\) and \(\mathcal{C}_{r}^{\mathcal{V}^{(g+2)}Q}(t^{\prime},t)\) does not close, making exact calculations extremely difficult.
To address the above-mentioned difficulty, in this paper we propose a truncation scheme that, though approximate, allows us to close the above hierarchy and write the time evolution of the two-point current correlations in terms of two-point correlations involving only current and density, which, interestingly, close onto themselves. Indeed, when the fluctuations of the local density around the steady state are small, on long time scales the variables \(\mathcal{V}^{(g+2)}\) and \(\mathcal{U}^{(l)}\) at a particular time, appearing in Eq. (12), are "slaved" to the local density and, as a result, the diffusive current can be approximately written in the form of a "microscopic" version of Fick's law [33], which is evident from Eq. (9),
\[J_{r}^{(D)}(t^{\prime})\simeq D(\rho,\gamma)[\eta_{r}(t^{\prime})-\eta_{r+1}(t^ {\prime})], \tag{19}\]
where we have simply used \(D[\rho_{r}(t),\gamma]\simeq D(\rho,\gamma)\); the symbol "\(\simeq\)" in Eq. (19) should rather be interpreted as an "equivalence", not an "equality", between the random variables there, unless one takes explicit averages. The precise implication of the above equivalence relation in Eq. (19), which has been used in the subsequent calculations, is the following. We can simply write the correlation function for diffusive current \(J_{r}^{(D)}(t^{\prime})\) and any other
stochastic variable \(B(t)\) in terms of correlations between local density and the variable \(B\),
\[\left\langle J_{r}^{(D)}(t^{\prime})B(t)\right\rangle_{c}\simeq-D(\rho,\gamma) \Delta_{r}\left\langle\eta_{r}(t^{\prime})B(t)\right\rangle_{c}, \tag{20}\]
where \(\Delta_{r}h_{r}=h_{r+1}-h_{r}\) is the forward difference operator. Following the above truncation scheme Eq. (20), the time evolution of the current correlations in Eq. (16) greatly simplifies to the gradient of the two-point density-current correlation function \(\mathcal{C}_{r}^{\eta Q}(t^{\prime},t)=\langle\eta_{r}(t^{\prime})Q_{0}(t)\rangle_{c}\), which, as shown below, immediately closes the hierarchy. We thus rewrite Eq. (16) as
\[\frac{d}{dt^{\prime}}\mathcal{C}_{r}^{QQ}(t^{\prime},t)\simeq-D(\rho,\gamma)\Delta_{r}\mathcal{C}_{r}^{\eta Q}(t^{\prime},t), \tag{21}\]
whereas the time evolution of the density-current correlation \(\mathcal{C}_{r}^{\eta Q}(t^{\prime},t)\) can be written as [for details, see Appendix B],
\[\frac{d}{dt^{\prime}}\mathcal{C}_{r}^{\eta Q}(t^{\prime},t)=D(\rho,\gamma)\Delta_{r}^{2}\mathcal{C}_{r}^{\eta Q}(t^{\prime},t). \tag{22}\]
Interestingly, the SSEP, despite having a product-measure steady state and a density-independent bulk-diffusion coefficient [33], shares a similar structure of the density and current correlations with the LLG, even though the latter has nonzero spatial correlations. In order to further simplify the time-evolution equations governing the two-point correlations, we represent the correlation functions in Fourier space by using the transformation,
\[\tilde{\mathcal{C}}_{q}^{AB}(t^{\prime},t)=\sum_{r=0}^{L-1}\mathcal{C}_{r}^{AB }(t^{\prime},t)e^{iqr}. \tag{23}\]
The inverse Fourier transform is given by
\[\mathcal{C}_{r}^{AB}(t^{\prime},t)=\frac{1}{L}\sum_{q}\tilde{ \mathcal{C}}_{q}^{AB}(t^{\prime},t)e^{-iqr}, \tag{24}\]
where
\[q=\frac{2\pi n}{L}, \tag{25}\]
and \(n=0,1,2,\ldots,(L-1)\). We rewrite Eqs. (21) and (22) in terms of the time evolution of the respective Fourier modes,
\[\frac{d}{dt^{\prime}}\tilde{\mathcal{C}}_{q}^{QQ}(t^{\prime},t)=D(\rho,\gamma)\left(1-e^{-iq}\right)\tilde{\mathcal{C}}_{q}^{\eta Q}(t^{\prime},t), \tag{26}\]
and
\[\frac{d}{dt^{\prime}}\tilde{\mathcal{C}}_{q}^{\eta Q}(t^{\prime},t)=-D(\rho, \gamma)\lambda_{q}\tilde{\mathcal{C}}_{q}^{\eta Q}(t^{\prime},t), \tag{27}\]
where \(\lambda_{q}\) is given by
\[\lambda_{q}=2\left(1-\cos q\right). \tag{28}\]
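As a quick illustration of the conventions in Eqs. (23)-(28), the short Python sketch below (with arbitrarily chosen, purely illustrative values of \(L\), \(D\) and the mode index) checks that the discrete Laplacian \(\Delta_{r}^{2}\) acting on a lattice plane wave has eigenvalue \(-\lambda_{q}\), which is precisely what turns the real-space equation (22) into the mode-wise relaxation (27).

```python
import numpy as np

# Hypothetical illustration parameters (not taken from the paper's simulations)
L = 64                       # number of lattice sites
D = 0.37                     # an assumed value of the bulk-diffusion coefficient
n = 5                        # mode index
q = 2 * np.pi * n / L        # allowed wave number, Eq. (25)

r = np.arange(L)
mode = np.exp(-1j * q * r)   # lattice plane wave e^{-iqr}, cf. Eq. (24)

# Discrete Laplacian: Delta_r^2 h_r = h_{r+1} + h_{r-1} - 2 h_r (periodic boundaries)
lap = np.roll(mode, -1) + np.roll(mode, 1) - 2 * mode

lam_q = 2 * (1 - np.cos(q))               # Eq. (28)
print(np.allclose(lap, -lam_q * mode))    # True: Delta_r^2 has eigenvalue -lambda_q

# Consequently, the Fourier mode of Eq. (22) relaxes as exp(-lambda_q D t), cf. Eq. (27)
t = 2.0
print(np.exp(-lam_q * D * t))
```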
By integrating Eqs. (26) and (27), we express the unequal-time correlation functions in the following forms,
\[\tilde{\mathcal{C}}_{q}^{QQ}(t^{\prime},t)=\tilde{\mathcal{C}}_{q}^{QQ}(t,t)+D(\rho,\gamma)\left(1-e^{-iq}\right)\int_{t}^{t^{\prime}}dt^{\prime\prime}\,\tilde{\mathcal{C}}_{q}^{\eta Q}(t^{\prime\prime},t), \tag{29}\]
\[\tilde{\mathcal{C}}_{q}^{\eta Q}(t^{\prime},t)=\tilde{\mathcal{C}}_{q}^{\eta Q}(t,t)\,e^{-\lambda_{q}D(\rho,\gamma)(t^{\prime}-t)}, \tag{30}\]
which involve the equal-time correlation functions \(\tilde{\mathcal{C}}_{q}^{QQ}(t,t)\) and \(\tilde{\mathcal{C}}_{q}^{\eta Q}(t,t)\), still
to be determined. To this end, we first derive the time-evolution equation for the correlation function \({\cal C}_{r}^{\eta\eta}(t,t)=\langle\eta_{r}(t)\eta_{0}(t)\rangle_{c}\), in the real space; for details, see Appendix D. Using Fourier transform of Eq. (23), we find the time-evolution equation for Fourier modes \(\tilde{\cal C}_{q}^{\eta\eta}(t,t)\),
\[\left(\frac{d}{dt}+2D(\rho,\gamma)\lambda_{q}\right)\tilde{\cal C}_{q}^{\eta \eta}(t,t)=\tilde{\cal S}_{q}^{\eta\eta}(t), \tag{36}\]
where the source term \(\tilde{\cal S}_{q}^{\eta\eta}=f_{q}\). One can now solve Eq. (36) to obtain the time-dependent solution of \(\tilde{\cal C}_{q}^{\eta\eta}(t,t)\). Since we want the dynamic density-density correlation function to be evaluated in the steady state, we simply drop its time dependence and set \(d\tilde{\cal C}_{q}^{\eta\eta}(t,t)/dt=0\); consequently, we have from Eq. (36),
\[2D(\rho,\gamma)\lambda_{q}\tilde{\cal C}_{q}^{\eta\eta}=\tilde{\cal S}_{q}^{ \eta\eta}=f_{q}. \tag{37}\]
The above Eq. (37) provides the solution for the static density-density correlation function and \(f_{q}\) is then obtained by replacing \(P(g,t)\) by its steady-state value \(P(g)\) in Eq. (33). Upon substituting the static \(\tilde{\cal C}_{q}^{\eta\eta}\) in Eq. (32), the source term \(\tilde{\cal S}_{q}^{\eta Q}\) also becomes time-independent and thus the solution is given by
\[\tilde{\cal S}_{q}^{\eta Q} = -\frac{f_{q}}{2(1-e^{-iq})}. \tag{38}\]
Using this particular form of \(\tilde{\cal S}_{q}^{\eta Q}\) in Eq. (35), we finally obtain the equal- as well as unequal-time density-current correlation function \(\tilde{\cal C}_{q}^{\eta Q}\) in the steady state,
\[\tilde{\cal C}_{q}^{\eta Q}(t,t)=\frac{-f_{q}}{2D(\rho,\gamma) \lambda_{q}(1-e^{-iq})}\left(1-e^{-\lambda_{q}D(\rho,\gamma)t}\right), \tag{39}\] \[\tilde{\cal C}_{q}^{\eta Q}(t^{\prime\prime},t)=\frac{-f_{q}e^{- \lambda_{q}D(\rho,\gamma)t^{\prime\prime}}}{2D(\rho,\gamma)\lambda_{q}(1-e^{- iq})}\left(e^{\lambda_{q}D(\rho,\gamma)t}-1\right), \tag{40}\]
where \(t^{\prime\prime}\geq t\). It should be noted that, by substituting Eq. (40) into Eq. (29), one can readily obtain the first term of the unequal-time current-current correlation function \(\tilde{\cal C}_{q}^{QQ}(t^{\prime\prime},t)\). In the next section, we focus on another equal-time correlation function \({\cal C}_{r}^{QQ}(t,t)\), which is required for calculating the two-point space-time correlation function \({\cal C}_{r}^{QQ}(t^{\prime},t)\).
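As a quick sanity check of the displayed forms, note that Eq. (40) must reduce to Eq. (39) at \(t^{\prime\prime}=t\) and must satisfy the relaxation equation (27) in \(t^{\prime\prime}\). The sketch below verifies both numerically for arbitrary (assumed) values of \(q\), \(D\), \(t\) and of the constant \(f_{q}\), which is treated here as a free placeholder.

```python
import numpy as np

# Arbitrary illustrative values (f_q, D, q and t are placeholders here)
f_q, D, q, t = 0.7, 0.4, 0.9, 1.3
lam = 2.0 * (1.0 - np.cos(q))                      # Eq. (28)
pref = -f_q / (2.0 * D * lam * (1.0 - np.exp(-1j * q)))

def C_unequal(t2):                                 # Eq. (40), valid for t2 >= t
    return pref * np.exp(-lam * D * t2) * (np.exp(lam * D * t) - 1.0)

C_equal = pref * (1.0 - np.exp(-lam * D * t))      # Eq. (39)

print(np.isclose(C_unequal(t), C_equal))           # Eq. (40) reduces to Eq. (39) at equal times

# finite-difference check of d/dt'' C = -lambda_q D C, i.e. the relaxation of Eq. (27)
t2, h = 2.0, 1e-6
deriv = (C_unequal(t2 + h) - C_unequal(t2 - h)) / (2 * h)
print(np.isclose(deriv, -lam * D * C_unequal(t2)))
```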
#### iii.2.2 Equal-time current-current correlation \({\cal C}_{r}^{QQ}(t,t)\)
To calculate the equal-time current-current correlation \({\cal C}_{r}^{QQ}(t,t)\), we first derive its time-evolution equation, which, upon applying the closure scheme of Eq. (19), leads to the following closed-form expression for \({\cal C}_{r}^{QQ}(t,t)\),
\[{\cal C}_{r}^{QQ}(t,t) = \frac{D}{L}\sum_{q}\left(1-e^{-iq}\right)(2-\lambda_{qr})\int_{0 }^{t}\tilde{\cal C}_{q}^{\eta Q}(t,t)dt+\Gamma_{r}t. \tag{41}\]
In the rhs of the above equation, \(\tilde{\cal C}_{q}^{\eta Q}(t,t)\) in the first term is readily obtained from Eq. (39), while, in the second term, \(\Gamma_{r}\) can be written in terms of gap distribution as given below,
\[\Gamma_{r}=\rho\!\!\sum_{l=|r|+1}^{\infty}\!\!\!\phi(l)\left[(l- \mid r\mid)\sum_{g=l}^{\infty}P(g)+\sum_{g=|r|}^{l-1}(g-\mid r\mid)P(g)\right]; \tag{42}\]
see Appendix E for calculation details. The quantity \(\Gamma_{r}\) physically corresponds to the strength of the "noise current" and is shown to be equal to the two-point space-time correlation of the fluctuating current, i.e., \(\langle J_{r}^{(fl)}(t)J_{0}^{(fl)}(0)\rangle=\Gamma_{r}\delta(t)\) [see Eq. (63)]. Later we also show that the steady-state fluctuation of the space-time integrated current \(Q_{tot}(L,T)=\int_{0}^{T}dt\sum_{i=0}^{L-1}J_{i}^{(fl)}(t)\) [see Eq. (70)] satisfies, in the thermodynamic limit, a fluctuation relation,
\[2\chi(\rho,\gamma)\equiv\lim_{L\to\infty}\frac{1}{LT}\langle Q_{tot}^{2}(L,T) \rangle=\sum_{r}\Gamma_{r}, \tag{43}\]
where the particle mobility \(\chi(\rho,\gamma)\) has the following expression,
\[\chi(\rho,\gamma)=\frac{\rho}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l- 1}g^{2}P(g)+l^{2}\sum_{g=l}^{\infty}P(g)\right]. \tag{44}\]
The mobility can be written explicitly as a function of density and tumbling rate, provided that the gap distribution \(P(g)\) is known (see the \(\rho,\gamma\to 0\) limit and the corresponding scaling regime, discussed later in Sec. III.5). Note that the sum rule, as in Eq. (43), states that the scaled space-time integrated current fluctuation is equal to the spatially integrated correlation function for fluctuating current and can be directly tested in simulations [see Fig. (4)].
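Besides the simulation test, the sum rule \(\sum_{r}\Gamma_{r}=2\chi\) can also be checked directly at the level of Eqs. (42) and (44). The Python sketch below does this for an assumed, purely illustrative gap distribution \(P(g)\) and hop-length distribution \(\phi(l)\); both inputs are placeholders, since their actual forms are fixed by the microscopic dynamics of the model, but the two expressions agree for these assumed inputs as well.

```python
import numpy as np

def Gamma_r(r, rho, P, phi, lmax):
    """Noise-current strength Gamma_r of Eq. (42), with hop lengths truncated at lmax."""
    r = abs(r)
    total = 0.0
    for l in range(r + 1, lmax + 1):
        tail = P[l:].sum()                                  # sum_{g >= l} P(g)
        body = np.dot(np.arange(l - r), P[r:l])             # sum_{g=r}^{l-1} (g - r) P(g)
        total += phi[l] * ((l - r) * tail + body)
    return rho * total

def mobility(rho, P, phi, lmax):
    """Particle mobility chi(rho, gamma) of Eq. (44), with hop lengths truncated at lmax."""
    g = np.arange(len(P))
    total = 0.0
    for l in range(1, lmax + 1):
        body = np.dot(g[1:l] ** 2, P[1:l])                  # sum_{g=1}^{l-1} g^2 P(g)
        tail = l ** 2 * P[l:].sum()                         # l^2 sum_{g >= l} P(g)
        total += phi[l] * (body + tail)
    return 0.5 * rho * total

# --- purely illustrative inputs (assumptions, not the measured distributions) ---
rho, g_star, lmax = 0.3, 5.0, 150
g = np.arange(4 * lmax + 1)
P = np.exp(-g / g_star); P[0] = 0.0; P /= P.sum()           # some normalized gap distribution
l = np.arange(lmax + 1)
phi = 0.3 * 0.7 ** l; phi /= phi.sum()                      # placeholder hop-length distribution

lhs = sum(Gamma_r(r, rho, P, phi, lmax) for r in range(-lmax, lmax + 1))
rhs = 2.0 * mobility(rho, P, phi, lmax)
print(lhs, rhs)    # the two numbers coincide, i.e. sum_r Gamma_r = 2*chi, Eq. (43)
```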
We now perform inverse Fourier transform of Eq. (29) and finally obtain the desired solution for the steady-state unequal-time two-point current-current correlation function \({\cal C}_{r}^{QQ}(t^{\prime},t)\) in real space,
\[{\cal C}_{r}^{QQ}(t^{\prime},t)=-\frac{1}{2LD}\sum_{q}\frac{f_{q}}{\lambda_{q} ^{2}}\left(e^{-\lambda_{q}Dt}-e^{-\lambda_{q}Dt^{\prime}}\right)\left(e^{- \lambda_{q}Dt}-1\right)e^{-iqr}-\frac{1}{2L}\sum_{q}\frac{f_{q}}{\lambda_{q}} \Bigg{\{}t-\frac{\left(1-e^{-\lambda_{q}Dt}\right)}{\lambda_{q}D}\Bigg{\}}(2- \lambda_{qr})+\Gamma_{r}t. \tag{45}\]
From now on, to keep the notation simple, we drop the arguments of \(D(\rho,\gamma)\) in Eq. (45) and elsewhere.
### Spatio-temporal correlation of the instantaneous current
In this section, we calculate the two-point unequal-time correlation function of the instantaneous current, i.e., \(\mathcal{C}_{r}^{JJ}(t^{\prime},t)\) in the steady state. We do this by differentiating the steady-state integrated current correlation function \(\mathcal{C}_{r}^{QQ}(t^{\prime},t)\) with respect to times \(t^{\prime}\) and \(t\). However, the expression for \(\mathcal{C}_{r}^{QQ}(t^{\prime},t)\) provided in Eq. (45) is valid only for \(t^{\prime}\geq t\). Therefore, in order to obtain \(\mathcal{C}_{r}^{JJ}(t^{\prime},t)\) for arbitrary values of \(t^{\prime}\) and \(t\), the appropriate formula can be written by using the Heaviside theta function \(\Theta(t)\),
\[\mathcal{C}_{r}^{JJ}(t,t^{\prime})\!=\!\frac{d}{dtdt^{\prime}} \big{[}\mathcal{C}_{r}^{QQ}(t^{\prime},t)\Theta(t^{\prime}-t)\!+\!\mathcal{C} _{r}^{QQ}(t,t^{\prime})\Theta(t-t^{\prime})\big{]}\,, \tag{46}\]
where \(\mathcal{C}_{r}^{QQ}(t,t^{\prime})\) is obtained directly from Eq. (45) by interchanging \(t^{\prime}\) and \(t\). Using Eq. (46), we straightforwardly compute \(\mathcal{C}_{r}^{JJ}(t^{\prime},t)\). After some algebraic manipulations, we eventually arrive at the following expression,
\[\mathcal{C}_{r}^{JJ}(t,t^{\prime}) = \Gamma_{r}\delta(t-t^{\prime})-\frac{D}{4L}\sum_{q}\left(2-\lambda _{qr}\right)f_{q}e^{-\lambda_{q}D|t-t^{\prime}|} \tag{47}\] \[\Big{\{}\Theta(t-t^{\prime})+\Theta(t^{\prime}-t)\Big{\}}\]
where \(f_{q}\) is obtained by substituting the steady-state gap distribution \(P(g)\) in Eq. (33). Clearly, \(\mathcal{C}_{r}^{JJ}(t,t^{\prime})\) can be divided into two distinct parts. The first component, which is delta-correlated in time, carries a space-dependent prefactor set by the fluctuating-current correlation strength \(\Gamma_{r}\). The second part comprises the correlations at different space and time points. In the subsequent analysis, we examine the contribution of each of these terms.
#### iv.3.1 Equal-time unequal-space correlation
To obtain the equal-time spatial correlations of the instantaneous current, we consider the case in Eq. (47) with \(t=t^{\prime}=0\), yielding the leading order contribution,
\[\mathcal{C}_{r}^{JJ}\simeq\left(\frac{\mathcal{C}_{0}^{JJ}}{ \Gamma_{0}}\right)\Gamma_{r}, \tag{48}\]
Note that the spatial dependence of the correlation function \(\mathcal{C}_{r}^{JJ}\) is solely governed by \(\Gamma_{r}\), which can be written in terms of the steady-state gap distribution function \(P(g)\) only [see Eq. (42)]. So, perhaps not surprisingly, the spatial correlations of the current are in fact governed by the gap statistics, and one would expect the correlation length to be determined by the typical gap size in the system. However, obtaining \(P(g)\) explicitly as a function of \(\rho\) and \(\gamma\) is not an easy task in general. We can still perform the following asymptotic analysis, which is quite general. For large gap sizes, we expect \(P(g)\) to decay exponentially (which can indeed be shown to be the case for \(\gamma\ll 1\) [49]),
\[P(g)\simeq N_{*}\exp(-g/g_{*}), \tag{49}\]
where \(g_{*}\) is the typical gap size. Now, using the conservation relation \(\langle g\rangle=\sum_{g=1}^{\infty}gP(g)=1/\rho-1\), one can immediately obtain the prefactor
\[N_{*}\simeq\left(\frac{1}{\rho}-1\right)\frac{(e^{1/g_{*}}-1)^{2 }}{e^{1/g_{*}}}. \tag{50}\]
Upon substituting the above expression of \(P(g)\) into Eq. (42), we obtain the following simplified expression
\[\Gamma_{r}\simeq(1-\rho)\frac{(e^{1/g_{*}}-1)}{(e^{(\gamma+1/g_{* })}-1)}e^{-r/\xi}=\Gamma_{0}e^{-r/\xi}, \tag{51}\]
which immediately leads to the spatial correlation function of the current,
\[\mathcal{C}_{r}^{JJ}=\mathcal{C}_{0}^{JJ}e^{-r/\xi}, \tag{52}\]
where the correlation length \(\xi\) is given by
\[\xi=\frac{1}{\gamma+g_{*}^{-1}}. \tag{53}\]
Eqs. (51) and (53) imply that the typical gap size \(g_{*}\) plays a crucial role in determining \(\Gamma_{r}\) and \(\xi\). Although one can calculate \(g_{*}\) numerically without much difficulty, determining its exact analytical form in an arbitrary parameter regime is a nontrivial task. However, in the limit of strong persistence, where \(l_{p}=\gamma^{-1}\to\infty\), there is an analytic expression for the typical gap size [49],
\[g_{*}\simeq\sqrt{\frac{1-\rho}{\gamma\rho}}, \tag{54}\]
which leads to the explicit solution of \(\Gamma_{r}\) and hence the correlation function \(C_{r}^{JJ}\). It is worth noting that, in this specific regime of strong persistence, the correlation length \(\xi\) is primarily dominated by \(g_{*}\) alone, which immediately implies \(\xi\sim 1/\sqrt{\gamma}=\sqrt{\tau_{p}}\). That is, correlation length \(\xi\sim\sqrt{\tau_{p}}\) diverges with persistence time \(\tau_{p}\), thus providing a straightforward theoretical explanation of the findings in recent simulations and experiments [28; 22].
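The chain of results in Eqs. (49)-(54) can be illustrated numerically in a few lines. The Python sketch below (with assumed values of \(\rho\) and \(\gamma\) deep in the strong-persistence regime) builds an exponential gap distribution with \(g_{*}\) taken from Eq. (54), evaluates \(\Gamma_{r}\) directly from Eq. (42), and compares its decay with the prediction \(\Gamma_{0}e^{-r/\xi}\) of Eqs. (51) and (53); the hop-length distribution \(\phi(l)\) is a placeholder assumption here, chosen to decay as \(e^{-\gamma l}\), since only this exponential \(l\)-dependence matters for the decay rate of \(\Gamma_{r}\).

```python
import numpy as np

# Assumed parameters in the strong-persistence, low-density regime (illustrative only)
rho, gamma = 0.1, 0.01
g_star = np.sqrt((1.0 - rho) / (gamma * rho))      # typical gap size, Eq. (54)
xi = 1.0 / (gamma + 1.0 / g_star)                  # predicted correlation length, Eq. (53)

nmax = 1500                                        # truncation of gap and hop-length sums
idx = np.arange(nmax + 1)
P = np.exp(-idx / g_star); P[0] = 0.0              # exponential gap distribution, Eq. (49)
P *= (1.0 / rho - 1.0) / np.dot(idx, P)            # fix the prefactor via <g> = 1/rho - 1, cf. Eq. (50)

# Placeholder hop-length distribution ~ exp(-gamma*l) for l >= 1 (an assumption)
phi = np.exp(-gamma * idx); phi[0] = 0.0; phi /= phi.sum()

def Gamma(r):
    """Direct evaluation of Eq. (42)."""
    out = 0.0
    for l in range(r + 1, nmax + 1):
        out += phi[l] * ((l - r) * P[l:].sum() + np.dot(np.arange(l - r), P[r:l]))
    return rho * out

G0 = Gamma(0)
for r in (0, 25, 50, 100):
    print(r, Gamma(r), G0 * np.exp(-r / xi))       # Eq. (51): Gamma_r ~ Gamma_0 e^{-r/xi}
```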
In order to verify the theoretical results in Eqs. (52), (53) and (54), in simulations we actually calculate, for better statistics, the correlation function \(C^{JJ}(r)=\lim_{t\to\infty}\langle\tilde{J}_{0}(t)\tilde{J}_{r}(t)\rangle\) of a coarse-grained current \(\tilde{J}_{i}(t)=(1/\Delta t)\int_{t}^{t+\Delta t}ds\,J_{i}(s)\), averaged over a reasonably small time window \((t,t+\Delta t)\) and evaluated at two spatial points separated by a distance \(r\). In Fig. (2), we plot the
scaled correlation function \(\mathcal{C}_{r}^{JJ}/\mathcal{C}_{0}^{JJ}\) for models II (LLG, closed points; left-panel) and I (standard RTPs, open points; middle-panel), obtained from Monte Carlo simulation, at different tumbling rates \(\gamma=0.05\) (upper triangle), \(0.02\) (lower triangle), \(0.01\) (diamond) and \(0.005\) (pentagon) while keeping the density fixed at \(\rho=0.5\). We also compare the simulation data with the strong-persistence analytical solution (dotted lines), obtained using Eqs. (52), (53) and (54). We indeed find a quite good agreement between simulations and analytic results in the limit of small \(\gamma\). Finally, in the right panel of Fig. (2), we plot the numerically obtained correlation length \(\xi\) as a function of \(\gamma\) for models II (LLG, closed points) and I (standard RTPs, open points) at a fixed density \(\rho=0.5\) and also compare the results with the strong-persistence analytical solution, obtained using Eqs. (53) and (54). In both cases - models I and II, we find in simulations that the correlation functions decay exponentially, \(C^{JJ}(r)\sim\exp(-r/\xi)\) and agree reasonably well with the analytical results. Notably, at small \(\gamma\), the correlation lengths for models I and II asymptotically converge to each other as implied by our theory Eqs. (52), (53) and (54).
#### iv.2.2 Equal-space unequal-time correlation function
To evaluate the dynamic two-point correlation function for the instantaneous current in the steady state, we set \(r=0\) and consider the case \(t^{\prime}=0\) and \(t>0\) in Eq. (47), leading to the following expression,
\[\mathcal{C}_{0}^{JJ}(t,0)=-\frac{D}{2L}\sum_{q}f_{q}(t)e^{-\lambda_{q}Dt}. \tag{55}\]
It is important to distinguish between the order of the space and time limits. In the case where the long-time limit \(t\to\infty\) is taken first and then the large system size limit \(L\to\infty\) (i.e., \(t\gg L^{2}/D\)), it can be shown from Eq. (55) that \(\mathcal{C}_{0}^{JJ}(t,0)=0\). In the other case, where we first take the limit \(L\to\infty\) and then the \(t\to\infty\) limit (i.e., \(L^{2}/D\gg t\gg 1/D\)), we observe that the dynamic correlation \(\mathcal{C}_{0}^{JJ}(t,0)\) becomes long-ranged, which we characterize next. In this time regime, the behavior of \(\mathcal{C}_{0}^{JJ}(t,0)\) in Eq. (55) is primarily dominated by the relaxation of the small-\(q\) Fourier modes. Therefore, in order to obtain the large-\(t\) behavior, we can employ a small-\(q\) analysis by performing the transformations \(\lambda_{q}\to q^{2}\) and \(f_{q}\to\chi(\rho,\gamma)q^{2}\). Moreover, for large \(L\gg 1\), by converting
Figure 3: _Verification of Eq._ (57) - We plot the negative scaled equal-space unequal-time instantaneous current correlation function i.e. \(-\mathcal{C}_{0}^{JJ}(t)/\chi D\), as a function of \(Dt\), for model II (LLG, closed symbols) and model I (standard RTPs, open symbols), at a fixed \(\gamma=0.1\) and different \(\rho=0.3\) (blue circle), \(0.5\) (red square) and \(0.7\) (magenta triangle). We also compare the scaled simulation data points with the theoretical prediction (black dotted line) as shown in Eq. (57).
Figure 2: _Verification of Eqs._ (52), (53) _and_ (54) - We plot the scaled two point spatial correlation of the instantaneous current \(\mathcal{C}_{r}^{JJ}/\mathcal{C}_{0}^{JJ}\) for model II (LLG, left-panel) and model I (standard RTPs, middle-panel), obtained from simulations (points), at a fixed \(\rho=0.5\) and various \(\gamma=0.05\) (upper triangle), \(0.02\) (lower triangle) and \(0.01\) (diamond). We also compare the simulation data in both the models with the strong persistence analytical solution (dotted line) given by Eqs. (52), (53) and (54). At the right panel, we plot the correlation length \(\xi\), as a function of \(\gamma\), at \(\rho=0.5\) for model II (LLG, closed points) and model I (RTPs, open points) and compare them with the strong persistence analytical solution (line) provided by Eqs. (53) and (54).
the summation into an integral, we obtain
\[{\cal C}_{0}^{JJ}(t)\simeq-\frac{\chi(\rho,\gamma)}{4\sqrt{\pi D(\rho,\gamma)}}t^{ -3/2}; \tag{56}\]
see Appendix F for calculation details. Notably, the correlation function \({\cal C}_{0}^{JJ}(t,0)\) is _negative_ for \(t>0\), with a delta-function contribution at \(t=0\), and exhibits a power-law decay. These two important characteristics are a direct consequence of the observed subdiffusive growth of \({\cal C}_{0}^{QQ}(t,t)\) in the time regime \(1\ll Dt\ll L^{2}\). In fact, in this regime, upon suitable rearrangement of Eq. (56), we immediately obtain a scaling relation,
\[\frac{1}{\chi D}{\cal C}_{0}^{JJ}(t)=-\frac{1}{4\sqrt{\pi}}(Dt)^{-3/2}. \tag{57}\]
Here \(\chi\equiv\chi(\rho,\gamma)\) and \(D\equiv D(\rho,\gamma)\) are the density- and tumbling-rate-dependent collective mobility, as in Eq. (44), and bulk-diffusion coefficient, respectively. Interestingly, in a slightly different setting, dynamic fluctuations of the "force" on a "passive" tracer immersed in a bath of hardcore RTPs have been studied in Ref. [50], where the associated correlation function was shown to have a similar power-law tail.
In order to verify the above scaling relationship in Eq. (57), we first need to calculate the parameter-dependent transport coefficients \(D(\rho,\gamma)\) and \(\chi(\rho,\gamma)\) for models I and II. We calculate \(D(\rho,\gamma)\) and \(\chi(\rho,\gamma)\) for model II (LLG) by numerically computing \(P(g)\) in the steady state and using it in Eqs. (10) and (44). However, unlike model II, we do not have analytical expressions for the transport coefficients in model I (standard RTPs), forcing us to rely on direct simulations. To calculate \(D(\rho,\gamma)\) for model I, we study the relaxation of a long-wavelength perturbation and follow the method provided in our previous work [15]. For the numerical determination of the mobility \(\chi(\rho,\gamma)\), we compute the scaled space-time integrated current fluctuation as given in Eq. (70). We now verify the scaling relation in Eq. (57) by plotting the ratio \(-{\cal C}_{0}^{JJ}(t)/\chi D\) (obtained from simulations) in Fig. (3) as a function of the scaling variable \(Dt\), for model II (closed points) and model I (open points), for various densities \(\rho=0.3\) (circle), \(0.5\) (square) and \(0.7\) (triangle) at a fixed tumbling rate \(\gamma=0.1\). We also compare the simulation results (points) with the analytic solution in Eq. (57). We find our theory to agree quite well with simulations at large times.
### Space-time correlations of fluctuating ("noise") current
In this section, we focus on determining the two-point spatio-temporal correlation function for fluctuating current \(J^{(fl)}(t)\) in the steady state. In other words, our aim is to derive the expression for \({\cal C}_{r}^{J^{(fl)}J^{(fl)}}(t,0)\) where \(t\geq 0\). Using the current decomposition as in Eq. (11), we can write the following relation,
\[{\cal C}_{r}^{J^{fl}J^{fl}}(t,0)={\cal C}_{r}^{JJ}(t,0)-{\cal C}_{r}^{J^{D}J} (t,0)-{\cal C}_{r}^{J^{fl}J^{D}}(t,0). \tag{58}\]
Notably, the fluctuating current \(J_{r}^{(fl)}(t)\) at time \(t\) is not correlated with the diffusive current \(J_{0}^{D}(0)\) at the earlier time \(t=0\), i.e.,
\[{\cal C}_{r}^{J^{fl}J^{D}}(t,0)=\langle J_{r}^{fl}(t)J_{0}^{D}(0)\rangle=0. \tag{59}\]
Then the third term in Eq. (58) immediately drops out. Moreover, in order to determine the second term \({\cal C}_{r}^{J^{D}J}(t,0)\), we use the following relation
\[{\cal C}_{r}^{J^{D}J}(t,0) = \left[\frac{d}{dt^{\prime}}{\cal C}_{r}^{J^{D}Q}(t,t^{\prime}) \right]_{t^{\prime}=0}, \tag{60}\] \[\simeq D\frac{d}{dt^{\prime}}\left[{\cal C}_{r}^{\eta Q}(t,t^{\prime} )-{\cal C}_{r+1}^{\eta Q}(t,t^{\prime})\right]_{t^{\prime}=0}, \tag{61}\]
where we have used the truncation approximation of Eq. (19) to go from Eq. (60) to Eq. (61). Following Eq. (23), we now expand the correlators \({\cal C}_{r}^{\eta Q}(t,t^{\prime})\) in the Fourier basis and then, using Eq. (40), obtain the desired solution,
\[{\cal C}_{r}^{J^{D}J}(t,0)=-\frac{D}{4L}\sum_{q}(2-\lambda_{qr})f_{q}(t)e^{- \lambda_{q}Dt}. \tag{62}\]
Importantly, the above solution coincides with the second term of the two-point unequal-time correlation \({\cal C}_{r}^{JJ}(t,0)\) displayed in Eq. (47) with \(t\geq t^{\prime}=0\). Finally, using Eqs. (47), (59) and (62) in Eq. (58), we readily obtain
\[{\cal C}_{r}^{J^{fl}J^{fl}}(t,0)=\langle J_{r}^{fl}(t)J_{0}^{fl}(0)\rangle= \delta(t)\Gamma_{r}. \tag{63}\]
### Fluctuation of the space-time integrated current
The space-time integrated current \(Q_{tot}(L,T)\) of the system is defined as
\[Q_{tot}(L,T)=\sum_{i=0}^{L-1}Q_{i}(T)=\int_{0}^{T}dt\sum_{i=0}^{L-1}J_{i}(t). \tag{64}\]
Note that \(Q_{tot}(L,T)\) measures the total current in the system up to the observation time \(T\). Alternatively, \(Q_{tot}(L,T)\) can be linked to the cumulative tagged-particle displacements in the following way:
\[Q_{tot}(L,T)=\sum_{i=1}^{N}X_{i}(T), \tag{65}\]
where \(X_{i}(T)\) is the displacement of the \(i\)th particle in time \(T\). In this section, we will characterize the fluctuation properties of \(Q_{tot}(L,T)\), which essentially calculates
the fluctuation of cumulative displacements of active particles in the system.
It is worth noting that, according to the decomposition shown in Eq. (11), we can decompose \(J_{i}(t)\) in Eq. (64) into diffusive \(J_{i}^{(D)}(t)\) and fluctuating \(J_{i}^{(fl)}(t)\) components. However, for the periodic system under consideration here, we use the identity
\[\sum_{i=0}^{L-1}J_{i}^{(D)}(t)=0. \tag{66}\]
As a result, the diffusive component does not contribute to \(Q_{tot}(L,T)\), leaving us with,
\[Q_{tot}(L,T)=\int_{0}^{T}dt\sum_{i=0}^{L-1}J_{i}^{fl}(t). \tag{67}\]
The above equation clearly reflects the fact that the total current of the system \(Q_{tot}(L,T)\) is solely governed by the fluctuating component \(J_{i}^{(fl)}(t)\), which immediately implies the average current
\[\langle Q_{tot}(L,T)\rangle=\int_{0}^{T}dt\sum_{i=0}^{L-1}\langle J_{i}^{fl}( t)\rangle=0, \tag{68}\]
since \(\langle J_{i}^{fl}(t)\rangle=0\). In a similar way, we write the expression for the fluctuation
\[\langle Q_{tot}^{2}(L,T)\rangle = \int_{0}^{T}dt^{\prime}\int_{0}^{T}dt\sum_{i=0}^{L-1}\sum_{r} \langle J_{i}^{fl}(t)J_{i+r}^{fl}(t^{\prime})\rangle. \tag{69}\]
Using Eq. (63) in the above equation, it is now straightforward to find that the total current fluctuation satisfies the following relation
\[\frac{1}{LT}\langle Q_{tot}^{2}(L,T)\rangle=\sum_{r}\Gamma_{r}=2\chi(\rho, \gamma), \tag{70}\]
where we have already expressed \(\chi(\rho,\gamma)\) in terms of \(P(g)\) in Eq. (44). To verify the above equation for model II (LLG), we first compute \(\langle Q_{tot}^{2}(L,T)\rangle\) from numerical simulations with \(L=1000\) and \(T=50\) in the parameter range \(0.01\leq\rho\leq 0.9\) and \(0.005\leq\gamma\leq 1\). Moreover, we also numerically compute \(P(g)\) and use it in Eq. (44) to obtain \(\chi(\rho,\gamma)\) for the same parameter values. In panels (a) and (b) of Fig. (4), we plot the numerically obtained scaled fluctuation \(\gamma\langle Q_{tot}^{2}(L,T)\rangle/2LT\) as a function of \(\rho\) and \(\gamma\), respectively. We also plot the previously calculated \(\gamma\chi(\rho,\gamma)\), represented by the dotted lines. The excellent agreement between \(\gamma\langle Q_{tot}^{2}(L,T)\rangle/2LT\) and \(\gamma\chi(\rho,\gamma)\) immediately verifies Eq. (70) for model II. In panels (c) and (d) of Fig. (4), we plot the variation of the numerically obtained \(\langle Q_{tot}^{2}(L,T)\rangle/2LT\), which is also a measure of \(\chi(\rho,\gamma)\) according to Eq. (70), with respect to \(\rho\) and \(\gamma\), respectively, for model I (standard RTPs). We notice that the scaled fluctuation is nonmonotonic in both \(\rho\) and \(\gamma\), and we find qualitative similarities between panels (a) and (c), as well as panels (b) and (d), thus establishing the same underlying mechanism of particle transport in models I (standard RTPs) and II (LLG).
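In practice, the left-hand side of Eq. (70) is what one measures in simulations (as also done for model I above). The following Python sketch shows a minimal Monte Carlo estimate of the mobility via Eq. (70) for a hardcore run-and-tumble lattice gas on a ring; it is written in the spirit of model I, but the specific hop and tumble rates, the fixed-step random-sequential updating, and all parameter values below are illustrative assumptions rather than the precise definition of model I used in this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not the production values of the paper)
L, rho, gamma = 100, 0.5, 0.1
T_relax, T, n_samples = 100.0, 20.0, 200
N = int(rho * L)

def run_once():
    """One realization; returns Q_tot(L, T), the total signed displacement, cf. Eq. (65)."""
    pos = rng.choice(L, size=N, replace=False)        # hardcore: distinct positions
    occ = np.zeros(L, dtype=bool); occ[pos] = True
    spin = rng.choice([-1, 1], size=N)                # internal run direction
    dt = 1.0 / (N * (1.0 + gamma))                    # crude fixed-step random-sequential scheme

    def evolve(duration, count):
        Q, t = 0, 0.0
        while t < duration:
            i = rng.integers(N)
            if rng.random() < 1.0 / (1.0 + gamma):    # hop attempt (assumed rate 1 per particle)
                tgt = (pos[i] + spin[i]) % L
                if not occ[tgt]:                      # hardcore exclusion
                    occ[pos[i]] = False; occ[tgt] = True
                    pos[i] = tgt
                    if count:
                        Q += spin[i]
            else:                                     # tumble: direction flip (assumed rate gamma)
                spin[i] = -spin[i]
            t += dt
        return Q

    evolve(T_relax, count=False)                      # relax towards the steady state
    return evolve(T, count=True)

Q2 = np.mean([run_once() ** 2 for _ in range(n_samples)])
print("rough estimate of chi(rho, gamma) from Eq. (70):", Q2 / (2 * L * T))
```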
#### Scaling regime for the particle mobility \(\chi(\rho,\gamma)\)
In this section, we study the particle mobility \(\chi(\rho,\gamma)\) in two limiting cases: _Case I_: \(\rho\to 0\), \(\gamma\to\infty\), and _Case II_: \(\rho\to 0\), \(\gamma\to 0\). While Case I is qualitatively similar to the SSEP limit, Case II remarkably captures the nontrivial interplay of interaction and persistence in terms of the scaling variable \(\psi=\rho/\gamma\). Previously, in Ref. [15], we investigated the scaling regime for the bulk-diffusion coefficient \(D(\rho,\gamma)\) for Case II; also, in this case, we calculated the associated scaling function analytically for model II. By using the truncation scheme as in Eq. (19), we have been able to calculate the same for the mobility \(\chi(\rho,\gamma)\).
By using Eqs. (49) and (44), we substitute the steady-state gap distribution \(P(g)\) in \(\chi(\rho,\gamma)\) for model II and obtain
\[\chi(\rho,\gamma)\simeq\frac{(1-\rho)}{2}\frac{(e^{1/g_{*}}-1)(e^{\gamma+1/g_ {*}}+1)}{(e^{\gamma+1/g_{*}}-1)^{2}}. \tag{71}\]
Note that, the above expression of \(\chi(\rho,\gamma)\) is valid for any arbitrary \(\rho\) and \(\gamma\). However, in the following discussion, we analyze the limiting cases mentioned in the beginning.
_Case I, \(\rho\to 0\) and \(\gamma\to\infty\).-_ In this limit of small persistence and low density, the steady-state distribution is a product measure: \(P(g)\sim(1-\rho)^{g}\simeq e^{-\rho g}\), yielding \(g_{*}=1/\rho\). Finally, using this \(g_{*}\) and imposing the condition \(\gamma\gg 1\gg\rho\) in Eq. (71), we obtain
\[\chi(\rho,\gamma)\simeq\frac{e^{-\gamma}}{2}\rho(1-\rho)=\frac{e^{-\gamma}}{2 }\chi^{(0)}, \tag{72}\]
which, up to a scale factor \(\exp(-\gamma)\) due to time scaling (explained below), is the particle mobility \(\chi^{(0)}=\rho(1-\rho)\) in SSEP. The exponential prefactor \(e^{-\gamma}\) in the above equation appears because of the following. In this case, \(\phi(l)\) carries maximum weight at \(l=0\), while all other hop-lengths, i.e., \(l>0\), occur with an exponentially smaller probability \(1-\phi(0)=e^{-\gamma}\). Furthermore, among these nonzero hop-lengths, \(l=1\) dominates, with contributions from larger \(l\) being vanishingly small. Therefore, in the limit \(\gamma\to\infty\) or, equivalently, \(l_{p}\to 0\) in model II (LLG), particles effectively perform SSEP-like hopping dynamics, albeit with a rate that is simply exponentially small, \(e^{-\gamma}\), thus explaining the prefactor in Eq. (72).
_Case II, \(\rho\to 0\) and \(\gamma\to 0:\)_ As we show below, similar to the bulk-diffusion coefficient \(D(\rho,\gamma)\) in hard-core RTPs studied in Ref. [15], the particle mobility
too exhibits interesting scaling properties. Indeed, in the strong-persistence and low-density limit, there are only two relevant length scales in the problem: the persistence length \(l_{p}=v/\gamma\) and the "mean free path" or average gap \(\langle g\rangle\simeq 1/\rho\). Consequently, their ratio \(\psi=l_{p}/\langle g\rangle\) is expected to provide a scaling variable that quantifies the interplay of persistence and interaction. In the regime of strong persistence, \(\psi\to\infty\) denotes the strongly interacting limit, whereas \(\psi\to 0\) corresponds to the noninteracting limit. Now, as argued previously in Ref. [15], the typical gap length \(g_{*}\) satisfies the scaling law \(g_{*}\simeq\mathcal{G}(\psi)/\rho\). In the limit \(\psi\to 0\), the model reduces to the well-known SSEP, for which \(\mathcal{G}(\psi)=1\), while, in the opposite limit \(\psi\to\infty\), we have the strongly interacting regime, for which previous calculations in Ref. [49] yield \(\mathcal{G}(\psi)=\sqrt{\psi}\). Combining these two limiting cases, we can simply write \(\mathcal{G}(\psi)\simeq\sqrt{1+\psi}\). Finally, plugging the assumed form \(g_{*}=\mathcal{G}(\psi)/\rho\) into Eq. (71), and after some algebraic manipulations, we obtain the following scaling law,
\[\chi_{LLG}(\rho,\gamma)\equiv\frac{\chi^{(0)}}{\gamma^{2}}\mathcal{H}_{LLG} \left(\psi=\frac{\rho}{\gamma}\right), \tag{73}\]
where \(\chi^{(0)}=\rho(1-\rho)\) is the particle mobility in the SSEP and the expression for the scaling function can be explicitly written as
\[\mathcal{H}_{LLG}(\psi) = \frac{\mathcal{G}(\psi)}{(\psi+\mathcal{G}(\psi))^{2}}. \tag{74}\]
Finally for model II (LLG), by replacing the above form
Figure 4: In panels (a) and (b), we plot the scaled space-time integrated current fluctuation \(\gamma\langle Q_{tot}^{2}(L,T)\rangle/2LT\) for the LLG, obtained from simulation (points), as a function of \(\rho\) [at different \(\gamma=0.001\) (blue square), \(0.005\) (red circle), \(0.01\) (black upper-triangle), \(0.05\) (magenta down-triangle), and \(0.1\) (green diamond)] and \(\gamma\) [at various \(\rho=0.01\) (blue square), \(0.05\) (red circle), \(0.1\) (black upper-triangle) and \(0.5\) (magenta down-triangle)], respectively. Corresponding dotted lines are \(\gamma\chi(\rho,\gamma)\) calculated by using the numerically obtained \(P(g)\) in Eq. (44). The excellent match between these two quantities verifies Eq. (70). In panels (c) and (d), we plot \((Q_{tot}^{2}(L,T))/2LT\) for model I (standard RTPs), obtained from numerical simulation (line-point), as a function of \(\rho\) and \(\gamma\), respectively for the aforementioned parameters.
of \({\cal G}(\psi)\simeq\sqrt{1+\psi}\), we immediately obtain
\[{\cal H}_{LLG}(\psi) = \frac{\sqrt{1+\psi}}{(\psi+\sqrt{1+\psi})^{2}}; \tag{75}\]
for calculation details, see Appendix I.
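Because Eqs. (71), (73) and (75) are fully explicit, the predicted data collapse can be checked directly. The short sketch below (with a few arbitrarily chosen small values of \(\rho\) and \(\gamma\)) evaluates \(\chi\) from Eq. (71) using \(g_{*}=\mathcal{G}(\psi)/\rho\) with \(\mathcal{G}(\psi)=\sqrt{1+\psi}\), and compares \(\gamma^{2}\chi/\chi^{(0)}\) with the scaling function \(\mathcal{H}_{LLG}(\psi)\) of Eq. (75).

```python
import numpy as np

def chi_llg(rho, gamma):
    """Mobility from Eq. (71), with g_* = sqrt(1+psi)/rho as assumed in the text."""
    psi = rho / gamma
    g_star = np.sqrt(1.0 + psi) / rho
    a = np.exp(1.0 / g_star) - 1.0
    b = np.exp(gamma + 1.0 / g_star)
    return 0.5 * (1.0 - rho) * a * (b + 1.0) / (b - 1.0) ** 2

def H_llg(psi):
    """Scaling function of Eq. (75)."""
    s = np.sqrt(1.0 + psi)
    return s / (psi + s) ** 2

for rho, gamma in [(0.01, 0.001), (0.05, 0.005), (0.02, 0.02), (0.1, 0.01)]:
    psi = rho / gamma
    lhs = gamma ** 2 * chi_llg(rho, gamma) / (rho * (1.0 - rho))   # gamma^2 chi / chi^(0)
    print(psi, lhs, H_llg(psi))   # lhs approaches H_LLG(psi) as rho, gamma -> 0
```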
In the top panel of Fig. 5, we plot the scaled mobility \(\gamma^{2}\chi_{LLG}(\rho,\gamma)/\chi^{(0)}\), obtained from simulations (points), as a function of the scaling variable \(\psi=\rho/\gamma\) in the parameter ranges \(0.01\leq\rho\leq 0.5\) and \(0.001\leq\gamma\leq 0.1\). We observe that the data points collapse onto each other and follow the analytically obtained scaling function \({\cal H}_{LLG}(\psi)\) (solid line) very well. This observation indeed verifies the existence of the scaling law in Eq. (73) and substantiates the scaling function in Eq. (75). We find that the same scaling law also holds for model I (standard RTPs). To demonstrate this, we plot \(\gamma\chi_{RTP}(\rho,\gamma)/\chi^{(0)}\) as a function of \(\psi\) in the bottom panel of Fig. 5, in the same parameter range as for the LLG.
### Time-Integrated Bond-Current fluctuation
In order to calculate the steady-state time-integrated bond-current fluctuation, we simply put \(r=0\) and \(t^{\prime}=t=T\) in Eq. (45) and obtain
\[{\cal C}_{0}^{QQ}(T,T) = \Gamma_{0}T-\frac{1}{L}\sum_{q}\frac{f_{q}}{\lambda_{q}}\Bigg\{T-\frac{\left(1-e^{-\lambda_{q}DT}\right)}{\lambda_{q}D}\Bigg\} \tag{76}\] \[= \frac{2\chi}{L}T+\frac{1}{DL}\sum_{q}\frac{f_{q}}{\lambda_{q}^{2}}\left(1-e^{-\lambda_{q}DT}\right); \tag{77}\]
see Appendix G for the derivation of Eq. (77). It is important to mention that \({\cal C}_{0}^{QQ}(T,T)\), which is expressed by Eqs. (76) and (77) in its most comprehensive form, exhibits quite interesting characteristics at different time regimes. In the subsequent discussion, we investigate these properties by analyzing the limiting cases.
#### v.6.1 Case I : Small-time regime \(DT\ll 1\)
In this case, we perform a linear expansion of the exponential function in the second term of Eq. (76) with respect to \(DT\), which yields \(e^{-\lambda_{q}DT}\approx 1-\lambda_{q}DT\). As a result, this term drops out and we are left with
\[{\cal C}_{0}^{QQ}(T,T)\simeq\Gamma_{0}T, \tag{78}\]
where \(\Gamma_{0}\) is simply obtained by putting \(r=0\) in Eq. (42).
#### v.6.2 Case II : Intermediate- and long-time regime \(DT\gg 1\)
In general, it is difficult to evaluate the summation in the second term of Eq. (77). However, in this case, note that the summand contributes only when \(q\to 0\) and vanishes otherwise. In this limit, the eigenvalues are quadratic, i.e., \(\lambda_{q}\to q^{2}\), \(\lambda_{gq}\to g^{2}q^{2}\) and \(\lambda_{lq}\to l^{2}q^{2}\), resulting in a simplified version of Eq. (77),
\[{\cal C}_{0}^{QQ}(T,T)=\frac{2\chi}{L}\left[T+\frac{1}{D}\sum_{q}\frac{1}{ \lambda_{q}^{2}}\left(1-e^{-\lambda_{q}DT}\right)\right]. \tag{79}\]
Considering both of the preceding cases, the limiting behavior of \({\cal C}_{0}^{QQ}(T,T)=\langle Q^{2}(T)\rangle\) is obtained to be
\[\langle Q^{2}(T)\rangle\simeq\left\{\begin{array}{ll}\Gamma_{0}T,&\mbox{ for }DT\ll 1,\\ \frac{2\chi(\rho,\gamma)}{\sqrt{\pi D(\rho,\gamma)}}\sqrt{T},&\mbox{ for }1\ll DT\ll L^{2},\\ \frac{2\chi(\rho,\gamma)}{L}T,&\mbox{ for }DT\gg L^{2},\end{array}\right. \tag{80}\]
Figure 5: _Verification of Eqs._ (73) _and_ (75)- We plot the ratio \(\gamma^{a}\chi(\rho,\gamma)/\chi^{(0)}\) for model II (LLG, top-panel, \(a=2\)) and model I (RTPs, bottom-panel, \(a=1\)), as a function of scaling variable \(\psi=\rho/\gamma\) in the parameter ranges \(0.01\leq\rho\leq 0.5\) and \(0.001\leq\gamma\leq 0.1\). For LLG, we compare the collapsed simulation data points with the analytic scaling function \({\cal H}_{LLG}(\psi)\) (solid black line) shown in Eq. (75). For both the models, the collapsed data points exhibit \(\psi^{-3/2}\) decay in the asymptotic limit, which is shown here by the red-dotted line.
where the first term simply corresponds to Eq. (78), while the other two are obtained using Eq. (79); for details, see Appendix H. Therefore, the time-integrated bond-current fluctuation \(\langle Q_{i}^{2}(T)\rangle\) exhibits an initial linear growth in time before transitioning to a subdiffusive scaling in the intermediate regime \(L^{2}/D\gg T\gg 1/D\); subsequently, at larger time scales \(T\gg L^{2}/D\), it reverts to diffusive (linear) scaling behavior, where the strength of the fluctuation is governed by \(\chi(\rho,\gamma)\). In order to verify these observations, we plot the numerically obtained bond-current fluctuation \(\langle Q_{i}^{2}(T)\rangle\), as a function of the observation time \(T\), in the top panel of Fig. (6) for the LLG at different parameter values and compare it with our analytical solution given in Eq. (77). We find that the numerical data points clearly exhibit the three regimes mentioned above and follow the analytical solution quite well. In the case of model I (standard RTPs), we plot the same quantity, obtained from simulations, in the bottom panel of Fig. (6). We find the data points to follow characteristics similar to those of model II (LLG) in the time regime much larger than the microscopic time scale, i.e., when \(DT\gg 1\). However, in the opposite limit of small time scales, RTPs exhibit superdiffusive behavior, i.e., \(\langle Q_{i}^{2}(T)\rangle\sim T^{\alpha}\) with a parameter-dependent \(\alpha>1\).
Interestingly, the above intermediate- and long-time regimes for the time-dependent bond-current fluctuations can actually be unified through a single scaling function, as follows. Moreover, quite remarkably, the scaling function seems to be universal, as it does not depend on the details of the dynamical rules of the models considered here. In the limit \(L\to\infty\) and \(DT\to\infty\) such that \(y=DT/L^{2}\) is finite, we find \(\mathcal{C}_{0}^{QQ}(T,T)=\langle Q^{2}(T)\rangle\), as expressed in Eq. (79), to satisfy the following scaling relation,
\[\frac{D}{2\chi L}\langle Q^{2}(T)\rangle=\mathcal{W}\left(\frac{DT}{L^{2}} \right). \tag{81}\]
For model II (LLG), the scaling function is calculated exactly within the truncation scheme and is given by the following series,
\[\mathcal{W}\left(y\right)=y+\lim_{L\to\infty}\frac{1}{L^{2}}\sum_{q}\frac{1}{ \lambda_{q}}\left(1-e^{-\lambda_{q}yL^{2}}\right). \tag{82}\]
One can approximate the discrete sum in the rhs of the above equation as a quadrature, which can be written in terms of known functions [51],
\[\mathcal{W}\left(y\right)\simeq y+\left(\frac{y}{\pi}\right)^{1/2}\mathrm{ erfc}\left(2\pi\sqrt{y}\right)+\frac{1-\exp(-4\pi^{2}y)}{4\pi^{2}}, \tag{83}\]
with \(\mathrm{erfc}(y)=1-\mathrm{erf}(y)\) and the error function \(\mathrm{erf}(y)=(2/\sqrt{\pi})\int_{0}^{y}\exp(-t^{2})dt\). Note that, as demonstrated in Fig. 7, the scaling function is the same for both models I and II. We also perform an asymptotic analysis to obtain the behavior of \(\mathcal{W}\left(y\right)\) in the two limiting cases when \(y\ll 1\) and \(y\gg 1\) and we find the following,
\[\mathcal{W}\left(y\right)\simeq\left\{\begin{array}{ll}\sqrt{y/\pi},&\quad \mathrm{for}\ y\ll 1,\\ y,&\quad\mathrm{for}\ y\gg 1.\end{array}\right. \tag{84}\]
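The scaling function in Eq. (82) is also simple to evaluate directly and compare against the limiting forms in Eq. (84). In the sketch below, the \(q\)-sum is assumed to run over the nonzero modes \(q=2\pi n/L\), \(n=1,\dots,L-1\), and a moderately large \(L\) is used as a stand-in for the \(L\to\infty\) limit.

```python
import numpy as np

L = 1024
n = np.arange(1, L)                                 # nonzero modes (assumed convention)
lam = 2.0 * (1.0 - np.cos(2.0 * np.pi * n / L))     # lambda_q, Eq. (28)

def W(y):
    """Scaling function of Eq. (82), evaluated at finite L."""
    return y + np.sum((1.0 - np.exp(-lam * y * L ** 2)) / lam) / L ** 2

for y in (1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0):
    # compare with the asymptotes of Eq. (84): sqrt(y/pi) for y << 1 and y for y >> 1
    print(y, W(y), np.sqrt(y / np.pi), y)
```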
To verify our theoretical results, as in Eqs. (79), (82), and (84), we plot in Fig. 7 the steady-state scaled bond-current fluctuations as a function of the rescaled hydrodynamic time \(y=DT/L^{2}\) for models II (LLG, left-panel) and I (standard RTPs, right-panel) for various \(\rho\) and \(\gamma\) as indicated; simulation results are represented by the points. For both models, we see an excellent collapse of the simulation data points. The collapsed data (points) follow
Figure 6: We plot the time-integrated bond-current fluctuation \(\langle Q_{i}^{2}(T)\rangle\), as a function of time \(T\), obtained from simulations (points) for model II (LLG, top-panel) and model I (standard RTPs, bottom-panel) at \(\rho=0.3\), \(0.7\) and \(\gamma=0.1\), \(0.01\). In case of model I, we also compare the simulation data points with the analytical solution shown in Eq. (77) (line). For both these models, \(\langle Q_{i}^{2}(T)\rangle\) exhibits subdiffusive growth at the intermediate time regime, followed by a diffusive (linear) growth, as shown by the dotted lines. However, the behavior at the very small time is model dependent; for model II (LLG), it is always diffusive (linear), but for model I (standard RTPs), it depends on the parameter values. It exhibits superdiffusive growth at smaller \(\gamma=0.01\) and diffusive growth at larger \(\gamma=0.1\).
the analytically derived scaling function \(\mathcal{W}(y)\) given in Eq. (82) (line) and exhibit sub-diffusive growth at small \(y\) before crossing over to diffusive growth at large \(y\), consistent with the asymptotic form in Eq. (84).
## IV Summary and concluding remarks
In this work, we characterize steady-state current fluctuations in two minimal models of hardcore RTPs by using a microscopic approach. Model I is the standard version of hardcore RTPs introduced in Ref. [46], whereas model II is a long-ranged lattice gas (LLG) - a variant of model I [15]; the latter model also has the hardcore constraint. Indeed, one great advantage of studying model II (LLG) is that, despite the lack of knowledge of the steady-state measure, which has nontrivial spatial correlations, the model is amenable to analytical studies and is the primary focus of our work. To this end, we introduce a truncation (closure) scheme in our microscopic dynamical framework to analytically calculate various dynamic quantities, which have been of interest in the past in the context of self-propelled particles. In particular, in this model we calculate exactly, within the truncation approximation, the fluctuations of the time-integrated current \(Q_{i}(T)\) across a bond \((i,i+1)\) in a time interval \(T\) as well as of the instantaneous current \(J_{i}(t)\equiv dQ_{i}(t)/dt\). We compare our theoretical results with those obtained from direct Monte Carlo simulations of models I and II and observe that the two models of RTPs share remarkably similar features.
The main results of this paper are as follows. The time-integrated bond-current fluctuation exhibits subdiffusive growth at moderately large times \(1/D\ll T\ll L^{2}/D\), where \(\langle Q_{i}(T)^{2}\rangle\sim T^{1/2}\), before crossing over to a diffusive growth regime at very long times \(T\gg L^{2}/D\), where \(D(\rho,\gamma)\) is the density- and tumbling-rate-dependent bulk-diffusion coefficient [15]. Notably, the power-law behaviors are qualitatively similar to those observed in symmetric exclusion processes [33]. Although the prefactors in the growth laws, as functions of density and tumbling rate, are model-dependent, they can be expressed in terms of the two macroscopic transport coefficients - the bulk-diffusion coefficient and the mobility. Remarkably, in the limit of large \(L\) and \(T\), with the dimensionless scaling variable \(DT/L^{2}\) finite, we show that the growth of the time-integrated bond-current fluctuation obeys a scaling law that is presumably _universal_, i.e., independent of the dynamical rules of the models and of the parameter values [see Eq. (81)].
Furthermore, using the truncation scheme introduced in our microscopic theory, we analytically calculate in model II (LLG) the fluctuations of the space-time integrated current across the entire system (equivalently, the cumulative displacement of all particles). In this way, we can characterize the current fluctuations in terms of the collective particle mobility \(\chi(\rho,\gamma)\equiv\lim_{L\rightarrow\infty}(1/2LT)\langle[\sum_{i=1}^{L}Q_{i}(T)]^{2}\rangle_{c}\) [see Eq. (70)], which is also equal to the scaled fluctuation of the space-time integrated fluctuating ("noise") current. Interestingly, in the limit of small density and strong persistence, i.e., \(\rho,\gamma\to 0\) with the scaling variable \(\psi=\rho/\gamma\sim l_{p}/\bar{g}\) fixed, we show that, similar to the bulk-diffusion coefficient \(D(\rho,\gamma)\) previously studied in Ref. [15], _there exists a scaling regime for the mobility \(\chi(\rho,\gamma)\) too._ Indeed, the scaling regime implies that, in this limit, the system is governed by essentially only _two length scales_ - the persistence length \(l_{p}\sim\gamma^{-1}\) and the mean gap \(\bar{g}\sim 1/\rho\) [15]. Thus our analysis highlights the role of persistence and interaction in the collective relaxation and fluctuations in the standard models of hardcore RTPs.
Moreover, our theoretical scheme readily allows us to
Figure 7: _Verification of Eqs. (81) and (82)-_ We plot the scaled bond-current fluctuation \(D\langle Q_{i}^{2}(T)\rangle/2\chi L\) for model II (LLG, left-panel) and model I (standard RTPs, middle-panel), obtained from simulations (points) at various \(\rho\) and \(\gamma\), as a function of the rescaled hydrodynamic time \(y=D(\rho,\gamma)T/L^{2}\). For LLG, we compare the scaled data points with the analytic scaling function \(\mathcal{W}\left(y\right)\) shown in Eq. (82) (black line). In the right-panel, we check the universality of \(\mathcal{W}\left(y\right)\) by plotting these numerically obtained scaled current fluctuation \(D\langle Q_{i}^{2}(T)\rangle/2\chi L\) for model II (LLG, closed points) and model I (standard RTPs, open points) together and compare them with the analytically obtained \(\mathcal{W}\left(y\right)\). In all three panels, the red dotted guiding lines reflect the early time subdiffusive (\(\sim\sqrt{y}\)), followed by the diffusive growth of \(\mathcal{W}\left(y\right)\sim y\) as derived in Eq. (84).
calculate the spatial and dynamic properties of the instantaneous current in model II (LLG). Using our microscopic calculations, we derive that, at long times, the two-point dynamic correlation function for the instantaneous bond current, as a function of the time lag \(t\), is indeed a power law of the form \(\sim t^{-3/2}\) and, moreover, is negative for \(t\neq 0\), with a delta-correlated part at the origin \(t=0\). Similar behavior is also observed for model I (standard hardcore RTPs). On the other hand, the spatial correlations of the current in both models are found to decay exponentially, \(\exp(-r/\xi)\), with the spatial separation \(r\). Interestingly, in the strong-persistence regime, we find that the correlation length \(\xi\) _diverges_ as the square root of the persistence time \(\tau_{p}\), i.e., \(\xi\sim\sqrt{\tau_{p}}\), a behavior which we derive analytically for model II (LLG). This result provides a microscopic theoretical explanation of the qualitative features of the velocity correlations observed in recent experiments and simulations [22; 28].
Characterizing current fluctuations in driven many-body systems having a nontrivial spatial structure is a difficult problem in general, and exact calculations of the dynamic current correlations have been done for only a few such systems so far [51]. Previously, an exact calculation of the temporal growth of the time-integrated bond current has been performed for the symmetric simple exclusion process (SSEP), which, despite having hardcore interactions, has a product measure and, as a result, does not have spatial correlations [33]. However, the above analysis for the SSEP cannot be easily extended to systems with finite spatial correlations, which is the case here. This is because, in such a case, the equations governing the two-point correlations involve higher-order correlations, resulting in an infinite hierarchy of many-body correlations. To get around this difficulty, in this paper we use a truncation scheme [17], which, though approximate, immediately leads to a closed set of equations for density and current correlations, thus making model II (LLG) solvable.
Notably, our theory is based on two important assertions [10; 11]: (i) The total instantaneous current can be decomposed into a diffusive and a fluctuating ("noise") component, with the latter having mean zero and short-ranged (in fact, delta-correlated) temporal correlations, and (ii) the diffusive current can be written as a product of the bulk-diffusion coefficient and the gradient of mass (occupation) variable. The latter implies that the local relaxation processes are effectively "slaved" to coarse-grained local density, implying a "local-equilibrium" property of the steady state and thus a diffusive relaxation on long-time scales. Indeed our theory leads to explicit analytical calculation of various quantities, such as current fluctuations, the mobility [see Eqs. (77) and (44)] and their scaling properties [see Eqs. (81) and (73)] - all of which agree remarkably well with simulations [see Figs. 7 and 5]. Furthermore, the comparison of models I and II shows that the two systems have qualitatively similar spatio-temporal behaviors. The striking resemblance suggests that the typical models of interacting RTPs can be described by the same theoretical framework, formulated in terms of the two macroscopic transport coefficients - the bulk-diffusion coefficient and the conductivity. Indeed, dynamic fluctuations in interacting SPPs have not yet been fully understood and a general theoretical formulation for dealing with such systems is still lacking. In this scenario, our study could provide useful insights into the large-scale structure of interacting SPPs in general.
## Appendix
### Appendix A: Time evolution of the two-point unequal-time current-current correlation \(\mathcal{C}_{r}^{QQ}(t^{\prime},t)\)

In the infinitesimal time interval \([t^{\prime},t^{\prime}+dt^{\prime}]\) (with \(t^{\prime}>t\)), the product \(Q_{r}(t^{\prime})Q_{0}(t)\) is updated as
\[Q_{r}(t^{\prime}+dt^{\prime})Q_{0}(t)=\left\{\begin{array}{ll}(Q_{r}(t^{ \prime})+1)Q_{0}(t),&\mbox{prob.}\quad\mathcal{P}_{r}^{R}dt^{\prime},\\ (Q_{r}(t^{\prime})-1)Q_{0}(t),&\mbox{prob.}\quad\mathcal{P}_{r}^{L}dt^{\prime}, \\ Q_{r}(t^{\prime})Q_{0}(t),&\mbox{prob.}\quad 1-(\mathcal{P}_{r}^{R}+\mathcal{P}_{r}^{L}) dt^{\prime}.\end{array}\right. \tag{85}\]
Using the update rules in Eq. (85) and substituting the expressions of \(\mathcal{P}_{r}^{R}\) and \(\mathcal{P}_{r}^{L}\), as shown in Eqs. (6) and (7), respectively, the corresponding time-evolution equation can be written as,
\[\frac{d}{dt^{\prime}}\left\langle Q_{r}(t^{\prime})Q_{0}(t)\right\rangle=\frac {1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\left\{\left\langle\hat{\mathcal{U}}_{r+ l}^{(l)}(t^{\prime})Q_{0}(t)\right\rangle-\left\langle\hat{\mathcal{U}}_{r}^{(l)}(t^{ \prime})Q_{0}(t)\right\rangle\right\}+\sum_{g=1}^{l-1}\left\{\left\langle\hat {\mathcal{V}}_{r+g+1}^{(g+2)}(t^{\prime})Q_{0}(t)\right\rangle-\left\langle \hat{\mathcal{V}}_{r+1}^{(g+2)}(t^{\prime})Q_{0}(t)\right\rangle\right\} \right], \tag{86}\]
Finally using the definition of \({\cal C}^{QQ}_{r}(t^{\prime},t)\) as provided in Eq. (14), we immediately obtain
\[\frac{d}{dt^{\prime}}{\cal C}^{QQ}_{r}(t^{\prime},t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\Big{[}\Big{\{}{\cal C}^{{\cal U }^{(l)}Q}_{r+l}(t^{\prime},t)-{\cal C}^{{\cal U}^{(l)}Q}_{r}(t^{\prime},t) \Big{\}}+\sum_{g=1}^{l-1}\Big{\{}{\cal C}^{{\cal V}^{(g+2)}Q}_{r+g+1}(t^{\prime },t)-{\cal C}^{{\cal V}^{(g+2)}Q}_{r+1}(t^{\prime},t)\Big{\}}\Big{]}, \tag{87}\] \[= \left\langle J^{(D)}_{r}(t^{\prime})Q_{0}(t)\right\rangle_{c}. \tag{88}\]
Here we have used the expression for \(J^{(D)}_{r}(t^{\prime})\) as given in Eq. (12). Note that the above two equations are expressed in the main text as Eqs. (15) and (16).
### Appendix B: Time evolution of the two-point unequal-time density-current correlation \(\mathcal{C}_{r}^{\eta Q}(t^{\prime},t)\)
In this section, we derive the time-evolution equation for the two point unequal-time density-current correlation \({\cal C}^{\eta Q}_{r}(t^{\prime},t)\) as shown in Eq. (22) in the main text. In order to do so, we first derive the time-evolution equation of the local density \(\rho_{r}(t)\), which is defined as the average occupancy at site \(r\) and time \(t\), i.e., \(\rho_{r}(t)=\langle\eta_{r}(t)\rangle\).
Recall, the average instantaneous bond-current \(\langle J_{r}(t)\rangle\) across the bond \((r,r+1)\) at time \(t\) (as given by Eq. (8) in the main text) is given by,
\[\langle J_{r}(t)\rangle=\frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1} ^{l-1}\left({\cal V}^{(g+2)}_{r+g+1}-{\cal V}^{(g+2)}_{r+1}\right)+\left({\cal U }^{(l)}_{r+l}-{\cal U}^{(l)}_{r}\right)\right]. \tag{89}\]
Since the total number of particles is a conserved quantity, the corresponding local density is a slow variable and its time evolution must be related to \(\langle J_{r}(t)\rangle\) via the following continuity equation:
\[\frac{d\rho_{r}(t)}{dt}=\langle J_{r-1}(t)\rangle-\langle J_{r}(t)\rangle\,. \tag{90}\]
Using Eq. (89) in Eq. (90), it is now straightforward to obtain the corresponding time-evolution equation in the form of the following balance equation:
\[\frac{d\rho_{r}(t)}{dt}=\left\langle{\cal P}^{+}_{r}(t)\right\rangle-\left\langle {\cal P}^{-}_{r}(t)\right\rangle, \tag{91}\]
where the gain and loss terms are respectively given by,
\[{\cal P}^{+}_{r}(t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}\left\{ {\cal V}^{(g+2)}_{r+g}+{\cal V}^{(g+2)}_{r+1}\right\}+\Big{\{}\left({\cal U} ^{(l)}_{r+l-1}-{\cal U}^{(l+1)}_{r+l}\right)+\left({\cal U}^{(l)}_{r}-{\cal U }^{(l+1)}_{r}\right)\Big{\}}\right], \tag{92}\] \[{\cal P}^{-}_{r}(t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}\left\{ {\cal V}^{(g+2)}_{r+g+1}+{\cal V}^{(g+2)}_{r}\right\}+\Big{\{}\left({\cal U} ^{(l)}_{r+l}-{\cal U}^{(l+1)}_{r+l}\right)+\left({\cal U}^{(l)}_{r-1}-{\cal U }^{(l+1)}_{r}\right)\Big{\}}\right]. \tag{93}\]
We now use the expression of the local diffusive current operator, as shown in Eq. (12) in the main text, to arrive at the following identity:
\[{\cal P}^{+}_{r}(t)-{\cal P}^{-}_{r}(t) = J^{(D)}_{r-1}(t)-J^{(D)}_{r}(t), \tag{94}\] \[\simeq D(\rho,\gamma)\left[\eta_{r+1}(t)+\eta_{r-1}(t)-2\eta_{r}(t) \right], \tag{95}\]
where Eq. (95) is a direct consequence of the proposed closure approximation, as shown in Eq. (19) in the main text.
Finally using Eqs. (95) and (91) together, we arrive at the desired time-evolution equation,
\[\frac{d\rho_{r}(t)}{dt}\simeq D(\rho,\gamma)\left[\rho_{r+1}(t)+\rho_{r-1}(t) -2\rho_{r}(t)\right]=D(\rho,\gamma)\Delta_{r}^{2}\rho_{r}(t), \tag{96}\]
where \(\Delta_{r}^{2}\) is the discrete Laplacian operator. We will now deduce the desired time evolution of \({\cal C}^{\eta Q}_{r}(t^{\prime},t)\) by writing the following update rules:
\[\eta_{r}(t^{\prime}+dt^{\prime})Q_{0}(t)=\left\{\begin{array}{ll}1\times Q_{0}(t),&\mbox{prob.}\quad{\cal P}^{+}_{r}(t^{\prime})dt^{\prime},\\ 0\times Q_{0}(t),&\mbox{prob.}\quad{\cal P}^{-}_{r}(t^{\prime})dt^{\prime},\\ \eta_{r}(t^{\prime})Q_{0}(t),&\mbox{prob.}\quad 1-\left[{\cal P}^{+}_{r}(t^{\prime})+{\cal P}^{-}_{r}(t^{\prime})\right]dt^{\prime},\end{array}\right. \tag{97}\]
Using the update equation, as shown above in Eq. (97), we write down the corresponding time-evolution equation as,
\[\frac{d}{dt^{\prime}}\left\langle\eta_{r}(t^{\prime})Q_{0}(t)\right\rangle = \left\langle\left(\mathcal{P}_{r}^{+}(t^{\prime})-\mathcal{P}_{r}^ {-}(t^{\prime})\right)Q_{0}(t)\right\rangle. \tag{98}\] \[\simeq D(\rho,\gamma)\Delta_{r}^{2}\left\langle\eta_{r}(t^{\prime})Q_{0 }(t)\right\rangle, \tag{99}\]
where in the last equation we have used the identity displayed in Eq. (95). Now, by using the definition \(\mathcal{C}_{r}^{\eta Q}(t^{\prime},t)=\left\langle\eta_{r}(t^{\prime})Q_{0}(t)\right\rangle-\left\langle\eta_{r}(t^{\prime})\right\rangle\left\langle Q_{0}(t)\right\rangle\), we directly obtain
\[\frac{d}{dt^{\prime}}\mathcal{C}_{r}^{\eta Q}(t^{\prime},t)\simeq D(\rho, \gamma)\Delta_{r}^{2}\mathcal{C}_{r}^{\eta Q}(t^{\prime},t). \tag{100}\]
Note that Eq. (100) is the desired time-evolution equation which we have used in Eq. (22) in the main text.
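For illustration, the following minimal Python sketch integrates the closed equation (100) with an explicit Euler step on a periodic ring; the value of \(D(\rho,\gamma)\), the time step and the initial correlation profile are arbitrary choices made only for this demonstration.

```python
import numpy as np

# Minimal sketch: explicit Euler integration of d/dt' C_r = D * (C_{r+1} + C_{r-1} - 2 C_r),
# i.e. Eq. (100), on a periodic ring. D, dt' and the initial profile are illustrative choices.
L = 200                       # number of lattice sites (assumption)
D = 0.4                       # stand-in for the bulk diffusivity D(rho, gamma) (assumption)
dtp = 0.05                    # time step in t'
r = np.arange(L)
C = np.exp(-np.minimum(r, L - r) ** 2 / 10.0)     # initial C_r at t' = t (assumption)

def laplacian(f):
    # discrete Laplacian: Delta_r^2 f_r = f_{r+1} + f_{r-1} - 2 f_r, periodic boundaries
    return np.roll(f, -1) + np.roll(f, 1) - 2.0 * f

for _ in range(2000):         # evolve forward in t'
    C = C + dtp * D * laplacian(C)

print("sum_r C_r (conserved by the diffusion equation):", C.sum())
```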
### Time evolution of the equal time density-current correlation \(\mathcal{C}_{r}^{\eta Q}(t,t)\)
Here we will derive the time-evolution equation for the equal time density-current correlation \(\mathcal{C}_{r}^{\eta Q}(t,t)\), which is presented in Eq. (31) of the main text. We write down all of the possible ways that the product \(\eta_{r}Q_{0}\) can change in the infinitesimal time interval \([t,t+dt]\), as given by
\[\eta_{r}(t+dt)Q_{0}(t+dt)=\left\{\begin{array}{ll}1\times(Q_{0}(t)+1),&\mbox{prob.}\quad\mathcal{P}_{r}^{3}(t)dt,\\ 1\times(Q_{0}(t)-1),&\mbox{prob.}\quad\mathcal{P}_{r}^{4}(t)dt,\\ 0\times(Q_{0}(t)+1),&\mbox{prob.}\quad\mathcal{P}_{r}^{5}(t)dt,\\ 0\times(Q_{0}(t)-1),&\mbox{prob.}\quad\mathcal{P}_{r}^{6}(t)dt,\\ \eta_{r}(t)(Q_{0}(t)+1),&\mbox{prob.}\quad[\mathcal{P}_{0}^{R}(t)-\mathcal{P}_{r}^{3}(t)-\mathcal{P}_{r}^{5}(t)]dt,\\ \eta_{r}(t)(Q_{0}(t)-1),&\mbox{prob.}\quad[\mathcal{P}_{0}^{L}(t)-\mathcal{P}_{r}^{4}(t)-\mathcal{P}_{r}^{6}(t)]dt,\\ 1\times Q_{0}(t),&\mbox{prob.}\quad[\mathcal{P}_{r}^{+}(t)-\mathcal{P}_{r}^{3}(t)-\mathcal{P}_{r}^{4}(t)]dt,\\ 0\times Q_{0}(t),&\mbox{prob.}\quad[\mathcal{P}_{r}^{-}(t)-\mathcal{P}_{r}^{5}(t)-\mathcal{P}_{r}^{6}(t)]dt,\\ \eta_{r}(t)Q_{0}(t),&\mbox{prob.}\quad 1-\hat{\Sigma}(t)dt,\end{array}\right. \tag{101}\]
where \(\hat{\Sigma}(t)dt\) is the sum of probability operators corresponding to the all possible ways in which the product \(\eta_{r}(t)Q_{0}(t)\) can change in the infinitesimal time interval \(dt\) and is given by
\[\hat{\Sigma}(t)=\mathcal{P}_{r}^{+}(t)+\mathcal{P}_{r}^{-}(t)+\mathcal{P}_{0}^{R}(t)+\mathcal{P}_{0}^{L}(t)-\mathcal{P}_{r}^{3}(t)-\mathcal{P}_{r}^{4}(t)-\mathcal{P}_{r}^{5}(t)-\mathcal{P}_{r}^{6}(t), \tag{102}\]
and the operators \(\mathcal{P}_{r}^{3}(t)\), \(\mathcal{P}_{r}^{4}(t)\), \(\mathcal{P}_{r}^{5}(t)\) and \(\mathcal{P}_{r}^{6}(t)\) are defined as,
\[\mathcal{P}_{r}^{3}(t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\left(\mathcal{U}_{r} ^{(l)}-\mathcal{U}_{r}^{(l+1)}\right)\sum_{k=1}^{l}\delta_{r,k}+\sum_{g=1}^{l- 1}\mathcal{V}_{r+1}^{(g+2)}\sum_{k=1}^{g}\delta_{r,k}\right], \tag{103}\] \[\mathcal{P}_{r}^{4}(t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\left(\mathcal{U}_{r+ l-1}^{(l+1)}-\mathcal{U}_{r+l}^{(l+1)}\right)\sum_{k=1}^{l}\delta_{r,-k+1}+\sum_{g=1}^{l- 1}\mathcal{V}_{r+g}^{(g+2)}\sum_{k=1}^{g}\delta_{r,-k+1}\right],\] (104) \[\mathcal{P}_{r}^{5}(t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\left(\mathcal{U}_{r+ l}^{(l)}-\mathcal{U}_{r+l}^{(l+1)}\right)\sum_{k=1}^{l}\delta_{r,-k+1}+\sum_{g=1}^{l- 1}\mathcal{V}_{r+g+1}^{(g+2)}\sum_{k=1}^{g}\delta_{r,-k+1}\right],\] (105) \[\mathcal{P}_{r}^{6}(t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\left(\mathcal{U}_{r- 1}^{(l)}-\mathcal{U}_{r}^{(l+1)}\right)\sum_{k=1}^{l}\delta_{r,k}+\sum_{g=1}^{l- 1}\mathcal{V}_{r}^{(g+2)}\sum_{k=1}^{g}\delta_{r,k}\right]. \tag{106}\]
Using the above update rules, shown in Eq. (101), the time-evolution of the quantity \(\left\langle\eta_{r}(t)Q_{0}(t)\right\rangle\) is given by,
\[\frac{d}{dt}\left\langle\eta_{r}(t)Q_{0}(t)\right\rangle = \left(\left\langle\mathcal{P}_{r}^{3}(t)\right\rangle-\left\langle\mathcal{P}_{r}^{4}(t)\right\rangle-\left\langle\mathcal{P}_{r}^{5}(t)\right\rangle+\left\langle\mathcal{P}_{r}^{6}(t)\right\rangle\right)+\left\langle\left(\mathcal{P}_{r}^{+}(t)-\mathcal{P}_{r}^{-}(t)\right)Q_{0}(t)\right\rangle+\left\langle\eta_{r}(t)\left(\mathcal{P}_{0}^{R}(t)-\mathcal{P}_{0}^{L}(t)\right)\right\rangle. \tag{107}\]
At the steady-state, we can disregard the spatial dependence in the average quantities \(\langle{\cal U}^{(l)}\rangle\) and \(\langle{\cal V}^{(g+2)}\rangle\), which essentially leads to the simplification of the first term in Eq. (107) in the following manner:
\[\left\langle\mathcal{P}_{r}^{3}(t)\right\rangle-\left\langle\mathcal{P}_{r}^{4}(t)\right\rangle-\left\langle\mathcal{P}_{r}^{5}(t)\right\rangle+\left\langle\mathcal{P}_{r}^{6}(t)\right\rangle\simeq\sum_{l=1}^{\infty}\phi(l)\left[\left(\mathcal{U}^{(l)}-\mathcal{U}^{(l+1)}\right)\sum_{k=1}^{l}\left(\delta_{r,k}-\delta_{r,-k+1}\right)+\sum_{g=1}^{l-1}\mathcal{V}^{(g+2)}\sum_{k=1}^{g}\left(\delta_{r,k}-\delta_{r,-k+1}\right)\right]. \tag{108}\]
Moreover, using the identity shown in Eq. (95), the second term in Eq. (107) can be transformed into
\[\left\langle\left[{\cal P}_{r}^{+}(t)-{\cal P}_{r}^{-}(t)\right]Q_{0}(t) \right\rangle\simeq D(\rho,\gamma)\Delta_{r}^{2}\left\langle\eta_{r}(t)Q_{0}( t)\right\rangle. \tag{109}\]
Furthermore, using the following relation: \({\cal P}_{0}^{R}(t)-{\cal P}_{0}^{L}(t)=J_{0}^{(D)}(t)\simeq D(\rho,\gamma)( \eta_{0}-\eta_{1})\), we rewrite the third term in Eq. (107) in the following way:
\[\left\langle\eta_{r}(t)\left[{\cal P}_{0}^{R}(t)-{\cal P}_{0}^{L}(t)\right] \right\rangle\simeq D(\rho,\gamma)\left[\langle\eta_{r}(t)\eta_{0}(t)\rangle- \langle\eta_{r}(t)\eta_{1}(t)\rangle\right]=D(\rho,\gamma)\Delta_{r}\left\langle \eta_{r}(t)\eta_{0}(t)\right\rangle. \tag{110}\]
Finally using the last three transformations, the time-evolution equation of \({\cal C}_{r}^{\eta Q}(t,t)\) can be written as the following inhomogeneous differential equation:
\[\frac{d}{dt}{\cal C}_{r}^{\eta Q}(t,t)\simeq D(\rho,\gamma)\Delta_{r}^{2}{\cal C }_{r}^{\eta Q}(t,t)+{\cal S}_{r}^{\eta Q}(t), \tag{111}\]
where the source term is given by
\[{\cal S}_{r}^{\eta Q}(t)=\sum_{l=1}^{\infty}\phi(l)\left[\left({\cal U}^{(l)}- {\cal U}^{(l+1)}\right)\sum_{k=1}^{l}(\delta_{r,k}-\delta_{r,-k+1})+\sum_{g=1 }^{l-1}{\cal V}^{(g+2)}\sum_{k=1}^{g}(\delta_{r,k}-\delta_{r,-k+1})\right]+D( \rho,\gamma)\Delta_{r}{\cal C}_{r}^{\eta\eta}(t,t). \tag{112}\]
Eq. (111) describes the time-evolution of \({\cal C}_{r}^{\eta Q}(t,t)\) in the real space, which in the Fourier space is simply transformed into the following equation:
\[\left(\frac{d}{dt}+D(\rho,\gamma)\lambda_{q}\right)\tilde{\cal C}_{q}^{\eta Q }(t,t)=\tilde{\cal S}_{q}^{\eta Q}(t). \tag{113}\]
Note that the above equation appears as Eq. (31) in the main text. Here \(-\lambda_{q}\) is the eigenvalue of the discrete Laplacian operator, which is given by
\[\lambda_{q}=2\left(1-\cos(q)\right), \tag{114}\]
and \(\tilde{\cal S}_{q}^{\eta Q}(t)\) is the source term in the Fourier space which is trivially obtained to be,
\[\tilde{\cal S}_{q}^{\eta Q}(t) = \frac{1}{(1-e^{-iq})}\left[D(\rho,\gamma)\lambda_{q}\tilde{\cal C} _{q}^{\eta\eta}(t,t)-\sum_{l=1}^{\infty}\phi(l)\Big{\{}\left({\cal U}^{(l)}-{ \cal U}^{(l+1)}\right)\lambda_{lq}+\sum_{g=1}^{l-1}{\cal V}^{(g+2)}\lambda_{gq }\Big{\}}\right]. \tag{115}\]
Finally, using the following identities that relate the correlators \({\cal U}^{(l)}\), \({\cal V}^{(g+2)}\) to the gap-distribution function \(P(g)\), given by
\[{\cal U}^{(l)}(t) = \rho\sum_{g=l-1}^{\infty}(g-l+1)P(g,t),\] \[{\cal V}^{(g+2)}(t) = \rho P(g,t), \tag{116}\]
we obtain a simpler structure of \(\tilde{\cal S}_{q}^{\eta Q}(t)\), which is given by
\[\tilde{\cal S}_{q}^{\eta Q}(t) = \frac{1}{(1-e^{-iq})}\left[D(\rho,\gamma)\lambda_{q}\tilde{\cal C }_{q}^{\eta\eta}(t,t)-f_{q}(t)\right], \tag{117}\]
where \(f_{q}(t)\) is given by
\[f_{q}(t)=\rho\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}\lambda_{gq}P(g,t) +\lambda_{lq}\sum_{g=l}^{\infty}P(g,t)\right]. \tag{118}\]
The last two equations together express the source term corresponding to the equal-time density-current correlation function, and they appear in the main text as Eqs. (32) and (33), respectively.
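As a numerical illustration of Eq. (118), the short sketch below evaluates \(f_{q}(t)\) at the steady state for an assumed exponential hop-length distribution \(\phi(l)\) and an assumed exponential gap distribution \(P(g)\) (both truncated); all parameter values are placeholders rather than quantities derived from the model.

```python
import numpy as np

# Sketch of the source amplitude f_q of Eq. (118), with lambda_q = 2(1 - cos q) from Eq. (114).
# phi(l) and P(g) are illustrative exponential forms, truncated for the numerics (assumptions).
rho, gamma, g_star = 0.3, 0.5, 2.0
lam = lambda q: 2.0 * (1.0 - np.cos(q))

l = np.arange(1, 101)
phi = np.exp(-gamma * l); phi /= phi.sum()          # normalised phi(l)
g = np.arange(1, 1001)
P = np.exp(-g / g_star); P /= P.sum()               # normalised P(g)
P_tail = P[::-1].cumsum()[::-1]                     # sum_{g' >= g} P(g')

def f_q(q):
    total = 0.0
    for li, ph in zip(l, phi):
        inner = np.sum(lam(g[: li - 1] * q) * P[: li - 1])   # sum_{g=1}^{l-1} lambda_{gq} P(g)
        inner += lam(li * q) * P_tail[li - 1]                # lambda_{lq} * sum_{g>=l} P(g)
        total += ph * inner
    return rho * total

print(f_q(2.0 * np.pi / 64.0))
```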
### Time evolution of equal time density-density correlation \(\mathcal{C}_{r}^{\eta\eta}(t,t)\)
In analogy with the previous subsection, we write down all of the possible ways in which the product \(\eta_{r}(t)\eta_{0}(t)\) can change in the infinitesimal time interval \([t,t+dt]\):

\[\eta_{r}(t+dt)\eta_{0}(t+dt)=\left\{\begin{array}{ll}1\times 1,&\mbox{ prob.}\quad\mathcal{P}_{r}^{7}(t)dt,\\ 0\times 0,&\mbox{ prob.}\quad\mathcal{P}_{r}^{8}(t)dt,\\ 1\times 0,&\mbox{ prob.}\quad\mathcal{P}_{r}^{9}(t)dt,\\ 0\times 1,&\mbox{ prob.}\quad\mathcal{P}_{r}^{10}(t)dt,\\ 1\times\eta_{0}(t),&\mbox{ prob.}\quad[\mathcal{P}_{r}^{+}(t)-\mathcal{P}_{r}^{7}(t)-\mathcal{P}_{r}^{9}(t)]dt,\\ 0\times\eta_{0}(t),&\mbox{ prob.}\quad[\mathcal{P}_{r}^{-}(t)-\mathcal{P}_{r}^{8}(t)-\mathcal{P}_{r}^{10}(t)]dt,\\ \eta_{r}(t)\times 1,&\mbox{ prob.}\quad[\mathcal{P}_{0}^{+}(t)-\mathcal{P}_{r}^{7}(t)-\mathcal{P}_{r}^{10}(t)]dt,\\ \eta_{r}(t)\times 0,&\mbox{ prob.}\quad[\mathcal{P}_{0}^{-}(t)-\mathcal{P}_{r}^{8}(t)-\mathcal{P}_{r}^{9}(t)]dt,\\ \eta_{r}(t)\eta_{0}(t),&\mbox{ prob.}\quad 1-\hat{\Sigma}(t)dt,\end{array}\right. \tag{119}\]
where \(\hat{\Sigma}(t)dt\) corresponds to the total probability with which the product of occupations at sites \(r\) and \(0\) changes in the infinitesimal time interval \(dt\) with
\[\hat{\Sigma}(t)=\mathcal{P}_{r}^{+}(t)+\mathcal{P}_{r}^{-}(t)+\mathcal{P}_{0}^ {+}(t)+\mathcal{P}_{0}^{-}(t)-\mathcal{P}_{r}^{7}(t)-\mathcal{P}_{r}^{8}(t)- \mathcal{P}_{r}^{9}(t)-\mathcal{P}_{r}^{10}(t), \tag{120}\]
and the operators \(\mathcal{P}_{r}^{7}(t)\), \(\mathcal{P}_{r}^{8}(t)\), \(\mathcal{P}_{r}^{9}(t)\) and \(\mathcal{P}_{r}^{10}(t)\) are defined as,
\[\mathcal{P}_{r}^{7}(t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}\left( \mathcal{V}_{r+1}^{(g+2)}+\mathcal{V}_{r+g}^{(g+2)}\right)\delta_{r,0}+\Big{\{} \left(\mathcal{U}_{r}^{(l)}-\mathcal{U}_{r}^{(l+1)}\right)+\left(\mathcal{U}_ {r+l-1}^{(l)}-\mathcal{U}_{r+l}^{(l+1)}\right)\Big{\}}\delta_{r,0}\right], \tag{121}\] \[\mathcal{P}_{r}^{8}(t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}\left( \mathcal{V}_{r}^{(g+2)}+\mathcal{V}_{r+g+1}^{(g+2)}\right)\delta_{r,0}+\Big{\{} \left(\mathcal{U}_{r+l}^{(l)}-\mathcal{U}_{r+l}^{(l+1)}\right)+\left(\mathcal{ U}_{r-1}^{(l)}-\mathcal{U}_{r}^{(l+1)}\right)\Big{\}}\delta_{r,0}\right],\] (122) \[\mathcal{P}_{r}^{9}(t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}\left( \mathcal{V}_{g+1}^{(g+2)}\delta_{r,g}+\mathcal{V}_{0}^{(g+2)}\delta_{r,-g} \right)+\Big{\{}\left(\mathcal{U}_{l}^{(l)}-\mathcal{U}_{l}^{(l+1)}\right) \delta_{r,l}+\left(\mathcal{U}_{-1}^{(l)}-\mathcal{U}_{0}^{(l+1)}\right) \delta_{r,-l}\Big{\}}\right],\] (123) \[\mathcal{P}_{r}^{10}(t) = \frac{1}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}\left( \mathcal{V}_{r+g+1}^{(g+2)}\delta_{r,-g}+\mathcal{V}_{r}^{(g+2)}\delta_{r,g} \right)+\Big{\{}\left(\mathcal{U}_{r+l}^{(l)}-\mathcal{U}_{r+l}^{(l+1)}\right) \delta_{r,-l}+\left(\mathcal{U}_{r-1}^{(l)}-\mathcal{U}_{r}^{(l+1)}\right) \delta_{r,l}\Big{\}}\right]. \tag{124}\]
Finally, using the above update rules in Eq. (119), we can straightforwardly write down the corresponding time-evolution equation, which is given by
\[\frac{d}{dt}\left\langle\eta_{r}(t)\eta_{0}(t)\right\rangle=\left( \left\langle\mathcal{P}_{r}^{7}(t)\right\rangle+\left\langle\mathcal{P}_{r}^{8} (t)\right\rangle-\left\langle\mathcal{P}_{r}^{9}(t)\right\rangle-\left\langle \mathcal{P}_{r}^{10}(t)\right\rangle\right)+\left\langle\eta_{r}(t)\left( \mathcal{P}_{0}^{+}(t)-\mathcal{P}_{0}^{-}(t)\right)\right\rangle+\left\langle \left(\mathcal{P}_{r}^{+}(t)-\mathcal{P}_{r}^{-}(t)\right)\eta_{0}(t)\right\rangle. \tag{125}\]
Using the concept of spatial homogeneity at the steady-state, we can now ignore the spatial dependence in the averages \(\left\langle\mathcal{U}^{(l)}\right\rangle\) and \(\left\langle\mathcal{V}^{(g+2)}\right\rangle\), which leads to the following simplification of the first term in Eq. (125):
\[\mathcal{S}_{r}^{\eta\eta}(t) = \left\langle\mathcal{P}_{r}^{7}(t)\right\rangle+\left\langle\mathcal{P}_{r}^{8}(t)\right\rangle-\left\langle\mathcal{P}_{r}^{9}(t)\right\rangle-\left\langle\mathcal{P}_{r}^{10}(t)\right\rangle, \tag{126}\] \[= \sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}\mathcal{V}^{(g+2)}(t)\left(2\delta_{r,0}-\delta_{r,g}-\delta_{r,-g}\right)+\left(\mathcal{U}^{(l)}(t)-\mathcal{U}^{(l+1)}(t)\right)\left(2\delta_{r,0}-\delta_{r,l}-\delta_{r,-l}\right)\right],\] \[= \rho\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}P(g,t)\left(2\delta_{r,0}-\delta_{r,g}-\delta_{r,-g}\right)+\sum_{g=l}^{\infty}P(g,t)\left(2\delta_{r,0}-\delta_{r,l}-\delta_{r,-l}\right)\right].\]
Note that, in the last line we have used the identities in Eq. (116) to replace the correlators \(\mathcal{V}^{(g+2)}(t)\) and \(\mathcal{U}^{(l)}(t)\) in terms of the gap-distribution function \(P(g,t)\). Furthermore, using the second identity, shown in Eq. (95), and the definition of \(\mathcal{C}_{r}^{\eta\eta}(t,t)\), we can write down the corresponding time-evolution equation in real space in the following manner:
\[\frac{d}{dt}\mathcal{C}_{r}^{\eta\eta}(t,t)=2D(\rho,\gamma)\Delta_{r}^{2} \mathcal{C}_{r}^{\eta\eta}(t,t)+\mathcal{S}_{r}^{\eta\eta}(t). \tag{127}\]
In fact, using Eq. (23) in the above equation, we immediately obtain the corresponding evolution equation in the Fourier space, which is given by,
\[\frac{d}{dt}\tilde{\mathcal{C}}_{q}^{\eta\eta}(t,t)+2D(\rho,\gamma)\lambda_{q} \tilde{\mathcal{C}}_{q}^{\eta\eta}(t,t)=\tilde{\mathcal{S}}_{q}^{\eta\eta}(t), \tag{128}\]
where \(\tilde{\mathcal{S}}_{q}^{\eta\eta}(t)\) is the corresponding source-term in the Fourier space, which is simply obtained by using Eq. (23) in Eq. (126) and is given by,
\[\tilde{\mathcal{S}}_{q}^{\eta\eta}(t)=\rho\sum_{l=1}^{\infty}\phi(l)\left[ \sum_{g=1}^{l-1}\lambda_{gq}P(g,t)+\lambda_{lq}\sum_{g=l}^{\infty}P(g,t)\right]. \tag{129}\]
It is worth noting that Eq. (128) is the desired equation, used in the main text as Eq. (36).
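For a time-independent source, Eq. (128) can be integrated in closed form mode by mode; the sketch below illustrates this relaxation, with a placeholder source amplitude standing in for the actual expression (129) and with assumed values of \(L\) and \(D(\rho,\gamma)\).

```python
import numpy as np

# Sketch: for a time-independent source amplitude S_q, Eq. (128) relaxes exponentially,
#   C_q(t) = C_q(0) exp(-2 D lambda_q t) + S_q / (2 D lambda_q) * (1 - exp(-2 D lambda_q t)).
# L, D and the placeholder source below are illustrative assumptions.
L, D = 128, 0.4
q = 2.0 * np.pi * np.arange(1, L) / L
lam_q = 2.0 * (1.0 - np.cos(q))            # Eq. (114)
S_q = 0.1 * lam_q                           # placeholder source amplitude (assumption)
C0 = np.zeros_like(q)                       # uncorrelated initial state

for t in (1.0, 10.0, 1000.0):
    C_q = C0 * np.exp(-2 * D * lam_q * t) + S_q / (2 * D * lam_q) * (1 - np.exp(-2 * D * lam_q * t))
    print(t, C_q[:3])                       # each mode approaches S_q / (2 D lambda_q)
```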
### Time evolution of equal time current-current correlation \(\mathcal{C}_{r}^{QQ}(t,t)\)
Here we provide the derivation details of Eq. (41) in the main text, which describes the time-evolution of \(\mathcal{C}_{r}^{QQ}(t,t)\). We begin with the update rules corresponding to \(Q_{r}(t)Q_{0}(t)\), as written below,
\[Q_{r}(t+dt)Q_{0}(t+dt)=\left\{\begin{array}{ll}(Q_{r}(t)+1)(Q_{0}(t)+1),& \mbox{prob.}\quad\mathcal{P}_{r}^{1}(t)dt,\\ (Q_{r}(t)+1)Q_{0}(t),&\mbox{prob.}\quad[\mathcal{P}_{r}^{R}(t)-\mathcal{P}_{r }^{1}(t)]dt,\\ Q_{r}(t)(Q_{0}(t)+1),&\mbox{prob.}\quad[\mathcal{P}_{0}^{R}(t)-\mathcal{P}_{r }^{1}(t)]dt,\\ (Q_{r}(t)-1)(Q_{0}(t)-1),&\mbox{prob.}\quad\mathcal{P}_{r}^{2}(t)dt,\\ (Q_{r}(t)-1)Q_{0}(t),&\mbox{prob.}\quad[\mathcal{P}_{r}^{L}(t)-\mathcal{P}_{r }^{2}]dt,\\ Q_{r}(t)(Q_{0}(t)-1),&\mbox{prob.}\quad[\mathcal{P}_{0}^{L}(t)-\mathcal{P}_{r }^{2}(t)]dt,\\ Q_{r}(t)Q_{0}(t),&\mbox{prob.}\quad 1-\hat{\Sigma}(t)dt,\end{array}\right. \tag{130}\]
where \(\hat{\Sigma}(t)dt\) corresponds to the total probability with which the product of currents across bonds \((r,r+1)\) and \((0,1)\) changes in the infinitesimal time interval \(dt\) with
\[\hat{\Sigma}=\mathcal{P}_{r}^{R}(t)+\mathcal{P}_{0}^{R}(t)+\mathcal{P}_{r}^{L}(t)+\mathcal{P}_{0}^{L}(t)-\mathcal{P}_{r}^{1}(t)-\mathcal{P}_{r}^{2}(t), \tag{131}\]
and the operators \(\mathcal{P}_{r}^{1}\) and \(\mathcal{P}_{r}^{2}\) are defined as,
\[\mathcal{P}_{r}^{1}=\frac{1}{2}\!\!\sum_{l=1}^{\infty}\phi(l)\Bigg{\{}\sum_{k =1}^{l}\left(\mathcal{U}_{r+k}^{(l)}-\mathcal{U}_{r+k}^{(l+1)}\right)\Theta(k +r)\Theta(l-r-k+1)+\sum_{g=1}^{l-1}\sum_{k=1}^{g}\mathcal{V}_{r+k+1}^{(g+2)} \Theta(k+r)\Theta(g-r-k+1)\Bigg{\}},\]
\[\mathcal{P}_{r}^{2}=\frac{1}{2}\!\!\sum_{l=1}^{\infty}\phi(l)\Bigg{\{}\sum_{k =1}^{l}\left(\mathcal{U}_{r+k-1}^{(l)}-\mathcal{U}_{r+k}^{(l+1)}\right)\Theta (k+r)\Theta(l-r-k+1)+\sum_{g=1}^{l-1}\sum_{k=1}^{g}\mathcal{V}_{r+k}^{(g+2)} \Theta(k+r)\Theta(g-r-k+1)\Bigg{\}}. \tag{132}\]
Here \(\Theta(r)\) is the Heaviside theta function, which is defined as
\[\Theta(r)=\left\{\begin{array}{ll}1,&\mbox{ for }r>0,\\ 0,&\mbox{ otherwise.}\end{array}\right. \tag{133}\]
Using the update rules shown in Eq. (130), we write the evolution of two point equal-time current-current correlation as,
\[\frac{d}{dt}\left\langle Q_{r}(t)Q_{0}(t)\right\rangle=\left[\left\langle \mathcal{P}_{r}^{1}\right\rangle+\left\langle\mathcal{P}_{r}^{2}\right\rangle \right]+\left\langle J_{r}^{(D)}(t)Q_{0}(t)\right\rangle+\left\langle J_{0}^{( D)}(t)Q_{r}(t)\right\rangle. \tag{134}\]
At the steady state, we can ignore the position dependence in the averages \(\left\langle\mathcal{P}_{r}^{1}\right\rangle\) and \(\left\langle\mathcal{P}_{r}^{2}\right\rangle\), which leads us to write the first term in a simplified manner through the following quantity:
\[\Gamma_{r} = \left\langle\mathcal{P}_{r}^{1}\right\rangle+\left\langle\mathcal{P}_{r}^{2}\right\rangle, \tag{135}\] \[= \sum_{l=|r|+1}^{\infty}\phi(l)\Bigg{\{}\left(l-\mid r\mid\right)\left(\mathcal{U}^{(l)}-\mathcal{U}^{(l+1)}\right)+\sum_{g=1}^{l-1}\mathcal{V}^{(g+2)}(g-\mid r\mid)\Bigg{\}}\] \[= \rho\sum_{l=|r|+1}^{\infty}\phi(l)\Bigg{\{}(l-\mid r\mid)\sum_{g=l}^{\infty}P(g)+\sum_{g=1}^{l-1}(g-\mid r\mid)P(g)\Bigg{\}}, \tag{136}\]
where, to arrive at the last equation, which is presented as Eq. (42) in the main text, we have used the identities shown in Eq. (116). Finally, using the definition of \(\mathcal{C}_{r}^{QQ}(t,t)\) and the closure approximation scheme, as shown in Eq. (19), we obtain the desired time-evolution equation,
\[\frac{d}{dt}\mathcal{C}_{r}^{QQ}(t,t)=\Gamma_{r}-D(\rho,\gamma)\Delta_{r} \mathcal{C}_{r}^{\eta Q}(t,t)-D(\rho,\gamma)\Delta_{-r}\mathcal{C}_{-r}^{\eta Q }(t,t). \tag{137}\]
Using the Fourier transform of Eq. (24) in the main text, we now express \(\mathcal{C}_{r}^{\eta Q}(t,t)\) and \(\mathcal{C}_{-r}^{\eta Q}(t,t)\) in Eq. (137) in terms of their Fourier components, and as a result, the corresponding steady-state time-evolution equation of \(\mathcal{C}_{r}^{QQ}(t,t)\) takes the following form:
\[\frac{d}{dt}\mathcal{C}_{r}^{QQ}(t,t)=\Gamma_{r}+\frac{D(\rho,\gamma)}{L}\sum _{q}\left(2-\lambda_{qr}\right)\left(1-e^{-iq}\right)\tilde{\mathcal{C}}_{q}^ {\eta Q}(t,t). \tag{138}\]
Eq. (138) is the resulting time-evolution equation, which is presented in the main text as Eq. (41) after integration.
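The quantity \(\Gamma_{r}\) of Eq. (136) is straightforward to evaluate numerically; the sketch below does so for the same illustrative exponential \(\phi(l)\) and \(P(g)\) used above (all parameter values are assumptions).

```python
import numpy as np

# Sketch of Gamma_r from Eq. (136) (Eq. (42) in the main text) for illustrative exponential
# phi(l) and P(g); rho, gamma and g_star are placeholder values.
rho, gamma, g_star = 0.3, 0.5, 2.0
l = np.arange(1, 101); phi = np.exp(-gamma * l); phi /= phi.sum()
g = np.arange(1, 1001); P = np.exp(-g / g_star); P /= P.sum()
P_tail = P[::-1].cumsum()[::-1]               # sum_{g' >= g} P(g')

def Gamma(r):
    a = abs(r)
    total = 0.0
    for li, ph in zip(l, phi):
        if li <= a:                           # the l-sum starts at l = |r| + 1
            continue
        term = (li - a) * P_tail[li - 1]      # (l - |r|) * sum_{g >= l} P(g)
        term += np.sum((g[: li - 1] - a) * P[: li - 1])   # sum_{g=1}^{l-1} (g - |r|) P(g)
        total += ph * term
    return rho * total

print([round(Gamma(r), 6) for r in range(4)])
```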
### Temporal correlation of the instantaneous bond-current
Our aim in this section is to derive the expression of the steady-state temporal correlation of the instantaneous current \(\mathcal{C}_{0}^{JJ}(t)\) in the long-time regime, which is presented in the main text in Eq. (56). Note that, for any time \(t>0\), we have already derived \(\mathcal{C}_{0}^{JJ}(t)\) to obey the following equation (Eq. (55) in the main text):
\[\mathcal{C}_{0}^{JJ}(t,0)=-\frac{D(\rho,\gamma)}{2L}\sum_{q}f_{q}e^{-\lambda_ {q}D(\rho,\gamma)t}, \tag{139}\]
where \(q=2\pi n/L\) with \(n=1,2,\ldots,L-1\) and the quantity \(f_{q}\) at the steady state is defined as,
\[f_{q}=\rho\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}\lambda_{gq}P(g)+\lambda_{lq}\sum_{g=l}^{\infty}P(g)\right], \tag{140}\]
where \(P(g)\) is the steady-state gap-distribution function of the system. Now, if we first take the infinite system size limit, i.e., \(L\rightarrow\infty\), we have the following transformations: \(q\to q(x)=2\pi x\), \(\lambda_{q}\rightarrow\lambda(x)\) and \(f_{q}\to f(x)\). As a result, we can convert the summation into an integration, as shown in the following:
\[\lim_{L\rightarrow\infty}\mathcal{C}_{0}^{JJ}(t)\simeq-D(\rho,\gamma)\int_{0} ^{1/2}dxf(x)e^{-\lambda(x)D(\rho,\gamma)t} \tag{141}\]
Interestingly, if we now take the large-time limit, i.e., \(L^{2}/D\gg t\gg 1/D\), the integrand in Eq. (141) contributes only in the limit \(x\to 0\), while it becomes vanishingly small for any other value of \(x\). This effectively allows us to perform a small-\(x\) analysis of Eq. (141). Note that, in this limit of \(x\to 0\), \(\lambda(x)\) is quadratic, i.e., \(\lambda(x)\to 4\pi^{2}x^{2}\), \(\lambda(lx)\to 4\pi^{2}l^{2}x^{2}\) and \(\lambda(gx)\to 4\pi^{2}g^{2}x^{2}\). These transformations straightforwardly yield \(f(x)\to 8\pi^{2}x^{2}\chi\), where \(\chi\) is defined in Eq. (44) in the main text. Following all of the aforementioned transformations, Eq. (141), in terms of the new variable \(z=4\pi^{2}x^{2}Dt\), reduces to the following in the limit of large \(t\):
\[\lim_{L\rightarrow\infty}\mathcal{C}_{0}^{JJ}(t)\simeq-\frac{\chi(\rho,\gamma)}{ 2\pi\sqrt{D(\rho,\gamma)}}t^{-3/2}\int_{0}^{\infty}dz\sqrt{z}e^{-z}. \tag{142}\]
Finally, using \(\int_{0}^{\infty}dz\sqrt{z}e^{-z}=\sqrt{\pi}/2\), we get the desired result presented in the main text in Eq. (56),
\[\mathcal{C}_{0}^{JJ}(t)\simeq-\frac{\chi(\rho,\gamma)}{4\sqrt{\pi D (\rho,\gamma)}}t^{-3/2}. \tag{143}\]
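The following sketch compares the integral representation (141) with the asymptotic law (143), using the same illustrative exponential forms of \(\phi(l)\) and \(P(g)\) and an assumed value of \(D(\rho,\gamma)\); the agreement improves with increasing \(t\), as expected.

```python
import numpy as np

# Compare Eq. (141) (numerical integral) with the asymptotic t^{-3/2} law of Eq. (143).
# phi(l), P(g), their truncations and the value of D(rho, gamma) are illustrative assumptions.
rho, gamma, g_star, D = 0.3, 0.5, 2.0, 0.4
l = np.arange(1, 61);  phi = np.exp(-gamma * l); phi /= phi.sum()
g = np.arange(1, 201); P = np.exp(-g / g_star); P /= P.sum()
P_tail = P[::-1].cumsum()[::-1]
lam = lambda x: 2.0 * (1.0 - np.cos(2.0 * np.pi * x))

def f(x):                                     # continuum analogue of f_q, Eq. (140)
    total = 0.0
    for li, ph in zip(l, phi):
        total += ph * (np.sum(lam(g[: li - 1] * x) * P[: li - 1]) + lam(li * x) * P_tail[li - 1])
    return rho * total

chi = 0.5 * rho * sum(ph * (np.sum(g[: li - 1] ** 2 * P[: li - 1]) + li ** 2 * P_tail[li - 1])
                      for li, ph in zip(l, phi))          # Eq. (156)

x = np.linspace(1e-4, 0.5, 5001)
fx = np.array([f(xi) for xi in x])
for t in (50.0, 200.0, 800.0):
    integral = -D * np.trapz(fx * np.exp(-lam(x) * D * t), x)
    print(t, integral, -chi / (4.0 * np.sqrt(np.pi * D)) * t ** -1.5)   # vs Eq. (143)
```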
### Derivation of the time-integrated bond-current fluctuation \(\mathcal{C}_{0}^{QQ}(t,t)\)
According to Eq. (76) in the main text, the steady-state bond-current fluctuation for LLG is given by,
\[\mathcal{C}_{0}^{QQ}(t,t)=\left[\Gamma_{0}-\frac{1}{L}\sum_{q} \left(\frac{f_{q}}{\lambda_{q}}\right)\right]t+\frac{1}{LD}\sum_{q}\left(\frac {f_{q}}{\lambda_{q}^{2}}\right)\left(1-e^{-\lambda_{q}Dt}\right), \tag{144}\]
where we have defined \(f_{q}\) and \(\lambda_{q}\) in the main text in Eqs. (33) and (28), respectively. Moreover, in order to obtain \(\Gamma_{0}\), we put \(r=0\) in Eq. (42) in the main text, and get
\[\Gamma_{0}=\rho\sum_{l=1}^{\infty}\phi(l)\Bigg{\{}l\sum_{g=l}^{ \infty}P(g)+\sum_{g=1}^{l-1}gP(g)\Bigg{\}}. \tag{145}\]
In order to simplify Eq. (144), we first expand \(f_{q}\) and write
\[\sum_{q}\left(\frac{f_{q}}{\lambda_{q}}\right)=\rho\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l-1}P(g)\sum_{q}\left(\frac{\lambda_{gq}}{\lambda_{q}}\right)+\left(\sum_{q}\frac{\lambda_{lq}}{\lambda_{q}}\right)\sum_{g=l}^{\infty}P(g)\right]. \tag{146}\]
Note that, the wave vector is given by \(q=2\pi n/L\) where \(n=1,2,3,\ldots,L-1\). Therefore, the above summation over \(q\), appearing at the R.H.S of the above equation, can be equivalently transformed over the integer variable \(n\), which can be solved easily using MATHEMATICA to have the following simplified form:
\[\sum_{q}\left(\frac{\lambda_{gq}}{\lambda_{q}}\right) = \sum_{n=1}^{L-1}\left(\frac{1-\cos\left(\frac{2\pi ng}{L}\right)}{1-\cos\left(\frac{2\pi n}{L}\right)}\right)=g(L-g), \tag{147}\] \[\sum_{q}\left(\frac{\lambda_{lq}}{\lambda_{q}}\right) = \sum_{n=1}^{L-1}\left(\frac{1-\cos\left(\frac{2\pi nl}{L}\right)}{1-\cos\left(\frac{2\pi n}{L}\right)}\right)=l(L-l). \tag{148}\]
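The closed forms (147) and (148) are easy to verify numerically; a quick check is sketched below.

```python
import numpy as np

# Numerical check of Eqs. (147)-(148): for integer 1 <= g < L,
#   sum_{n=1}^{L-1} [1 - cos(2 pi n g / L)] / [1 - cos(2 pi n / L)] = g (L - g).
L = 64
n = np.arange(1, L)
for g in (1, 5, 17, 40):
    s = np.sum((1 - np.cos(2 * np.pi * n * g / L)) / (1 - np.cos(2 * np.pi * n / L)))
    print(g, round(s, 6), g * (L - g))
```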
Applying these relations in Eq. (146) drastically simplifies it and the resulting equation is given by
\[\sum_{q}\left(\frac{f_{q}}{\lambda_{q}}\right)=L\Gamma_{0}-2\chi (\rho,\gamma). \tag{149}\]
Using the above equation in Eq. (144), we finally obtain the expression of \(\mathcal{C}_{0}^{QQ}(t,t)\) used in the main text in Eq. (77), which is given by
\[\mathcal{C}_{0}^{QQ}(t,t) = \frac{2\chi(\rho,\gamma)}{L}t+\frac{1}{D(\rho,\gamma)L}\sum_{q} \frac{f_{q}}{\lambda_{q}^{2}}\left(1-e^{-\lambda_{q}D(\rho,\gamma)t}\right). \tag{150}\]
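The finite-\(L\) expression (150) can be evaluated directly once \(f_{q}\) and \(\chi\) are known; the sketch below does so with the same illustrative exponential \(\phi(l)\) and \(P(g)\) as before (all parameter values are assumptions), exhibiting the crossover between the intermediate and late-time regimes discussed next.

```python
import numpy as np

# Sketch evaluating Eq. (150) on a finite ring; phi(l), P(g), L and D are illustrative choices.
rho, gamma, g_star, D, L = 0.3, 0.5, 2.0, 0.4, 256
l = np.arange(1, 61);  phi = np.exp(-gamma * l); phi /= phi.sum()
g = np.arange(1, 201); P = np.exp(-g / g_star); P /= P.sum()
P_tail = P[::-1].cumsum()[::-1]

q = 2.0 * np.pi * np.arange(1, L) / L
lam_q = 2.0 * (1.0 - np.cos(q))
f_q = np.zeros_like(q)
chi = 0.0
for li, ph in zip(l, phi):
    f_q += ph * (np.sum(2 * (1 - np.cos(np.outer(q, g[: li - 1]))) * P[: li - 1], axis=1)
                 + 2 * (1 - np.cos(li * q)) * P_tail[li - 1])
    chi += 0.5 * ph * (np.sum(g[: li - 1] ** 2 * P[: li - 1]) + li ** 2 * P_tail[li - 1])
f_q *= rho
chi *= rho

def C_QQ(t):                                  # Eq. (150)
    return 2 * chi / L * t + np.sum(f_q / lam_q ** 2 * (1 - np.exp(-lam_q * D * t))) / (D * L)

for t in (1.0, 1e2, 1e5):
    print(t, C_QQ(t))
```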
### Limiting cases of \(\mathcal{C}_{0}^{QQ}(t,t)\)
In this section, we are going to calculate \(\mathcal{C}_{0}^{QQ}(t,t)\) in three distinct time-regimes, which is shown in Eq. (80) in the main text.
#### Case I: \(t\ll 1/D\)
It is easy to check that in this particular time regime, the second and the third terms in Eq. (144) identically cancel each other, which ultimately results in the following:
\[\mathcal{C}_{0}^{QQ}(t,t)=\Gamma_{0}t. \tag{151}\]
#### Case II: \(1/D\ll t\ll L^{2}/D\)
In order to calculate \(\mathcal{C}_{0}^{QQ}(t,t)\) in the intermediate regime, we use the expression derived in Eq. (150) and follow in the footsteps of the analysis in Appendix F. As before, for infinitely large system size, i.e., \(L\to\infty\), one can convert the summation into the following integral form:
\[\mathcal{C}_{0}^{QQ}(t,t)=\frac{2\chi(\rho,\gamma)}{L}t+\frac{2}{D(\rho, \gamma)}\int_{0}^{1/2}\frac{f(x)dx}{\lambda^{2}(x)}\left(1-e^{-\lambda(x)D( \rho,\gamma)t}\right), \tag{152}\]
where we have used the transformations, \(q=2\pi n/L\equiv 2\pi x\), \(\lambda_{q}\to\lambda(x)\) and \(f_{q}\to f(x)\). Note that, the integrand in the above equation primarily contributes in the limit \(x\to 0\) in which case, following Eqs. (28) and (33) in the main text, we can write \(\lambda(x)\simeq 4\pi^{2}x^{2}\) and \(f(x)\simeq 8\pi^{2}x^{2}\chi\). Finally, using the aforementioned transformations the above equation in terms of a new variable \(z=4\pi^{2}x^{2}Dt\) can be written as,
\[\mathcal{C}_{0}^{QQ}(t,t)=\frac{2\chi(\rho,\gamma)}{L}t+\frac{\chi(\rho,\gamma)\sqrt{t}}{\pi\sqrt{D(\rho,\gamma)}}\int_{0}^{\infty}z^{-3/2}\left(1-e^{-z}\right)dz. \tag{153}\]
Finally using the relation \(\int_{0}^{\infty}z^{-3/2}\left(1-e^{-z}\right)dz=2\sqrt{\pi}\) and neglecting the first term which is a subleading contributor, the leading order contribution to \(\mathcal{C}_{0}^{QQ}(t,t)\) can be written as,
\[\mathcal{C}_{0}^{QQ}(t,t)\simeq\frac{2\chi(\rho,\gamma)}{\sqrt{\pi D(\rho, \gamma)}}\sqrt{t}+\mathcal{O}(t), \tag{154}\]
which is presented in the main text in Eq. (80).
#### Case III: \(L^{2}/D\ll t\)
From Eq. (152), it is straightforward to see that, in the limit of large \(t\) such that \(t\gg L^{2}/D\), the exponential term contributes nothing and the second term gives only a constant value, so the leading order contribution comes directly from the first term, which shows linear growth of \(\mathcal{C}_{0}^{QQ}(t,t)\) with \(t\); the resulting equation becomes
\[\mathcal{C}_{0}^{QQ}(t,t)=\frac{2\chi(\rho,\gamma)}{L}t. \tag{155}\]
### Scaling relation of the effective mobility \(\chi(\rho,\gamma)\)
In this section, we will obtain the scaling relation for \(\chi(\rho,\gamma)\) in the limit \(\rho\to 0\), \(\gamma\to 0\), such that the ratio \(\psi=\rho/\gamma\) is finite, and calculate the corresponding scaling function \(\mathcal{H}(\psi)\) shown in the main text in Eqs. (73) and (75). We begin our analysis with the expression \(\chi(\rho,\gamma)\) shown in Eq. (44) in the main text, i.e.,
\[\chi(\rho,\gamma)=\frac{\rho}{2}\sum_{l=1}^{\infty}\phi(l)\left[\sum_{g=1}^{l- 1}g^{2}P(g)+l^{2}\sum_{g=l}^{\infty}P(g)\right]. \tag{156}\]
Note that, the hop-length distribution \(\phi(l)\) is given in Eq. (1) in the main text as,
\[\phi(l)=Ae^{-\gamma l}, \tag{157}\]
where the normalization constant \(A=(1-e^{-1/l_{p}})\). Moreover, the steady-state gap distribution function \(P(g)\), which is assumed to be exponentially distributed for \(g>0\), has the following form:
\[P(g)\simeq N_{*}\exp(-g/g_{*}), \tag{158}\]
where the prefactor \(N_{*}\), as shown in the main text in Eq. (50), is given by,
\[N_{*}=\left(\frac{1}{\rho}-1\right)\frac{(e^{1/g_{*}}-1)^{2}}{e^{1/g_{*}}}. \tag{159}\]
Now using the above expression of \(P(g)\) in Eq. (156) and performing some algebraic manipulations, we obtain
\[\chi(\rho,\gamma)=\frac{(1-\rho)}{2}\frac{(e^{1/g_{*}}-1)(e^{\gamma+1/g_{*}}+1)}{ (e^{\gamma+1/g_{*}}-1)^{2}}. \tag{160}\]
Note that the above expression of \(\chi(\rho,\gamma)\) in Eq. (160) is valid for arbitrary \(\rho\) and \(\gamma\). In the following analysis, we look at two specific cases; a numerical check of the resulting limiting forms is sketched after the list:
* Case I, \(\rho\to 0\) and \(\gamma\rightarrow\infty:\) In this case, the typical gap-size \(g_{*}\) is given by, \[g_{*}=\frac{1}{\rho}.\] (161) Now, to calculate \(\chi(\rho,\gamma)\), we use Eq. (161) in Eq. (160), and with the limit \(\rho\to 0\) such that \(\gamma+1/g_{*}\simeq\gamma\gg 1\) in consideration, we obtain \[\chi(\rho,\gamma) = \frac{(e^{\rho}-1)(1-\rho)}{2}e^{-\gamma}\] (162) \[\simeq \frac{\rho(1-\rho)}{2}e^{-\gamma}=\frac{\chi^{(0)}e^{-\gamma}}{2},\] (163) where \(\chi^{(0)}=\rho(1-\rho)\) is the particle mobility in SSEP.
* Case II, \(\rho\to 0\) and \(\gamma\to 0:\) In the limit of \(\rho\to 0\), \(\gamma\to 0\), such that the ratio \(\psi=\rho/\gamma\) is finite, we make the following transformations in Eq. (160):
* the typical gap-size \(g_{*}\) obeys the following scaling relation: \[g_{*}\simeq\frac{1}{\rho}\mathcal{G}(\psi),\] (164) where \(\mathcal{G}(\psi)\) is the scaling function corresponding to \(g_{*}\), which upon consideration of the two limiting cases is assumed to be \(\mathcal{G}(\psi)=\sqrt{1+\psi}\) (see the paragraph before Eq. (73) in the main text),
* all the exponential terms are approximated upto the leading order contributions, i.e., \[e^{\gamma+1/g_{*}}-1 \simeq \gamma+1/g_{*}=\gamma+\rho/\mathcal{G}(\psi)=\gamma(1+\psi/ \mathcal{G}(\psi)),\] (165) \[e^{1/g_{*}}-1 \simeq 1/g_{*}=\gamma\psi/\mathcal{G}(\psi),\] (166) \[e^{\gamma+1/g_{*}}+1 \simeq 2.\] (167) Finally, by substituting the above transformation in Eq. (160), we get the leading order contribution to \(\chi(\rho,\gamma)\) in the limit \(\rho\to 0\) and \(\gamma\to 0\), as shown below, \[\chi(\rho,\gamma) \simeq \frac{\rho(1-\rho)}{\gamma^{2}}\frac{\mathcal{G}(\psi)}{(\psi+ \mathcal{G}(\psi))^{2}}.\] (168) Note that, by replacing \(\chi^{(0)}=\rho(1-\rho)\) in the above equation, we immediately obtain the scaling relation shown in Eq. (73) in the main text and the corresponding scaling function is given by, \[\mathcal{H}(\psi)=\frac{\mathcal{G}(\psi)}{(\psi+\mathcal{G}(\psi))^{2}}.\] (169)
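The two limiting forms derived above are easy to check numerically against the full expression (160); a short sketch (with illustrative parameter values) is given below.

```python
import numpy as np

# Check of the limits of chi(rho, gamma): Eq. (163) for rho -> 0 with gamma large (g_* = 1/rho),
# and the scaling form (168)-(169) for rho, gamma -> 0 at fixed psi = rho / gamma with the
# assumed G(psi) = sqrt(1 + psi) of Eq. (164). All parameter values are illustrative.
def chi_exact(rho, gamma, g_star):                      # Eq. (160)
    return 0.5 * (1 - rho) * (np.exp(1 / g_star) - 1) * (np.exp(gamma + 1 / g_star) + 1) \
           / (np.exp(gamma + 1 / g_star) - 1) ** 2

# Case I: rho -> 0, gamma large
rho, gamma = 0.01, 6.0
print(chi_exact(rho, gamma, 1 / rho), 0.5 * rho * (1 - rho) * np.exp(-gamma))   # vs Eq. (163)

# Case II: rho, gamma -> 0 at fixed psi
psi = 2.0
for gamma in (1e-2, 1e-3, 1e-4):
    rho = psi * gamma
    G = np.sqrt(1 + psi)                                # assumed scaling function G(psi)
    scaling = rho * (1 - rho) / gamma ** 2 * G / (psi + G) ** 2      # Eq. (168)
    print(gamma, chi_exact(rho, gamma, G / rho), scaling)
```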
|
2310.11037 | Sampling for Remote Estimation of the Wiener Process over an Unreliable
Channel | In this paper, we study a sampling problem where a source takes samples from
a Wiener process and transmits them through a wireless channel to a remote
estimator. Due to channel fading, interference, and potential collisions, the
packet transmissions are unreliable and could take random time durations. Our
objective is to devise an optimal causal sampling policy that minimizes the
long-term average mean square estimation error. This optimal sampling problem
is a recursive optimal stopping problem, which is generally quite difficult to
solve. However, we prove that the optimal sampling strategy is, in fact, a
simple threshold policy where a new sample is taken whenever the instantaneous
estimation error exceeds a threshold. This threshold remains a constant value
that does not vary over time. By exploring the structure properties of the
recursive optimal stopping problem, a low-complexity iterative algorithm is
developed to compute the optimal threshold. This work generalizes previous
research by incorporating both transmission errors and random transmission
times into remote estimation. Numerical simulations are provided to compare our
optimal policy with the zero-wait and age-optimal policies. | Jiayu Pan, Yin Sun, Ness B. Shroff | 2023-10-17T07:06:12Z | http://arxiv.org/abs/2310.11037v2 | # Sampling for Remote Estimation of the Wiener Process over
###### Abstract.
In this paper, we study a sampling problem where a source takes samples from a Wiener process and transmits them through a wireless channel to a remote estimator. Due to channel fading, interference, and potential collisions, the packet transmissions are unreliable and could take random time durations. Our objective is to devise an optimal causal sampling policy that minimizes the long-term average mean square estimation error. This optimal sampling problem is a recursive optimal stopping problem, which is generally quite difficult to solve. However, we prove that the optimal sampling strategy is, in fact, a simple threshold policy where a new sample is taken whenever the instantaneous estimation error exceeds a threshold. This threshold remains a constant value that does not vary over time. By exploring the structure properties of the recursive optimal stopping problem, a low-complexity iterative algorithm is developed to compute the optimal threshold. This work generalizes previous research by incorporating both transmission errors and random transmission times into remote estimation. Numerical simulations are provided to compare our optimal policy with the zero-wait and age-optimal policies.
Key words and phrases: Remote estimation, unreliable channel, optimal multiple stopping times
To that end, in this paper, we study a sampling problem in a wireless network, as is illustrated in Fig. 1. The sampler takes samples of a continuous-time source process and transmits them to a remote estimator. The continuous-time source process is modeled as the Wiener process \(W_{t}\), which helps describe the dynamics of sensors measuring quantities like movement, providing insights into how these quantities change over time. The Wiener process, also commonly referred to as Brownian motion [18], is one of the best-known Lévy processes, featuring stationary and independent increments. It finds widespread applications in various fields such as pure and applied mathematics, economics, quantitative finance, evolutionary biology, and physics. The Wiener process \(W_{t}\) has the following key properties: (i) \(W_{0}=0\); (ii) \(W_{t}\) is continuous; (iii) \(W_{t}\) has independent increments; (iv) \(W_{t}-W_{s}\sim\mathcal{N}(0,t-s)\) for \(0\leq s\leq t\), where \(\mathcal{N}\) denotes the normal distribution. The remote estimator, in turn, provides a minimum mean square estimation error (MMSE) estimate \(\hat{W}_{t}\) based on the received samples. The core objective is to control the sequence of sampling times to minimize the estimation error \(W_{t}-\hat{W}_{t}\), specifically, aiming at optimizing the long term average of MMSE.
Organized according to the sampling strategies and optimization metrics, our review of related works encompasses three distinct perspectives.
### Related Works
**Signal-aware sampling with reliable transmissions.** There have been several studies on sampling for remote estimation, e.g., in [19; 21; 30; 31; 32], where the sampling times depend on the source process (signal-aware). A nice survey paper is included in [13]. In [30], the authors consider the Wiener process as the source process and provide an exact solution to minimize the estimation error. According to the optimal solution, the sampler should wait until the instantaneous estimation error exceeds a threshold, and the threshold is given explicitly. A similar result was developed in [21] by extending [30] from the Wiener process to the Ornstein Uhlenbeck (OU) process. The optimal threshold retains its simplicity, remaining a root of a closed-form equation. Further exploration, as in [32], delves into an asymmetric sensor-controller remote system. In this scenario, there are random transmission times in both directions. At the sensor, the sampling time is a stopping time based on the evolution of the Wiener process, and at the controller, the sampling time depends on the information sent from the sensor. The authors yield precise optimal solutions, noting the potential existence of multiple thresholds for the sensor's optimal stopping time. Joint optimality designs on the sampling and the estimation, concerning the Wiener process or the autoregressive process, is investigated in [6; 11].
To summarize, these previous studies on sampling, with the exception of [6], assume reliable transmissions. However, in a variety of wireless systems, channel errors may occur due to fading, and the transmission times of a packet could be random. While packet drops are considered in [6], that work assumes a time-slotted system in which the total transmission time coincides with the transmission instance (one time slot). In contrast, in this paper, our model allows for both packet errors
Figure 1. System model.
and random transmission times. Moreover, we enable the selection of real-valued transmission instances.
**Signal-agnostic sampling.** When the sampling times are independent of the Wiener process (signal-agnostic), the MMSE is equal to the age of information [30]. More generally, the MMSE is a function of the age of information under a linear time invariant system [7, 15]. Thus, our study is closely related to numerous studies on age-based sampling, e.g., in [1, 2, 12, 22, 23, 24, 28]. Age of information, or simply age, is a metric to evaluate data freshness. Age at current time \(t\) is defined as \(\Delta(t)=t-U(t)\), where \(U(t)\) is the generation time of the latest delivered sample. Age has gained much popularity in the recent decade and has contributed to various remote control systems such as sensor networks, UAV navigation, and semantic communication. A recent literature review on the age is provided in [35].
In [2], the paper studies sampling energy harvesting sources with a unit battery buffer under an erasure channel. In the case of a single source, it provides an optimal sampling policy without feedback. With perfect feedback, an optimal policy is offered among the policies that may wait only when the previous transmission is successful. In [1], the paper solves explicit optimal solutions for an energy harvesting source with finite buffer sizes, where the arrived energy can fill up the whole buffer or fill up incrementally. In [28], the authors relate autocorrelation, remote estimation, and mutual information to the nonlinear age penalty functions, and provide an optimal sampling policy under sampling rate constraint. In [24], when the source process is a multidimensional Gaussian diffusion process, and the estimator is the Kalman Filter, the expected square estimation error is an increasing function of the age. For a general non-decreasing age penalty function, the optimal sampling policy has a threshold structure under unreliable channel conditions and random transmission delay. An extended sampling scenario where the sampler can transmit the sample before receiving the feedback is studied in [17].
However, compared to the signal-aware sampling policies, signal-agnostic counterparts exhibit suboptimal performance in terms of minimizing the estimation error. Numerical results in [30] validate that the optimal signal-aware sampling policy can achieve less than half of the long term average MMSE than that of the age-optimal sampling policy. This is intuitive, due to the criticality of the content of information within remote monitoring systems, such as the pedestrian intentions in vehicular networks or target locations in UAV navigations.
**AoII-optimal scheduling.** Recently, researchers have studied signal-aware policies to optimize a new metric: the age of incorrect information (AoII) [8, 14, 16]. AoII incorporates both the content of information (estimation error) and the freshness of information (data freshness). AoII was first advanced in [16], serving as a cornerstone for subsequent research. In the context of a finite symmetric Markov source, [16] provides the transmission strategy with a focus on minimizing the AoII, displaying low computational complexity. In [14], the authors employ dynamic programming to minimize the AoII under a binary Markovian source and exponential channel delay distribution. Meanwhile, the paper in [8] extends [14] to a general transmission time distribution, showing that it is optimal to always transmit whenever the channel is idle and the AoII is not zero.
Although these studies focus on content-aware transmission strategies, they all focus on a finite state Markov source under a discrete-time system. These scenarios restrict transmission choices between transmit and idle at the beginning of each time slot. Instead, we consider an unbounded and continuous-time Markov process, enabling the selection of real-valued transmission instances.
### Our Contributions
In comparison to these three prevailing perspectives, in this paper, we consider a scenario of minimizing the estimation error of the Wiener process. Specifically, we (i) embrace a signal-aware sampling policy and (ii) accommodate an unreliable channel with a random transmission time. Our
contributions expand on (Srivastava et al., 2017) by considering an unreliable channel, and (Srivastava et al., 2017) by allowing sampling time dependence on the content of the Wiener process. Our problem belongs to a semi-Markov decision problem and is difficult to solve. There have been solutions for some special cases. In the first case where the channel is reliable (e.g., (Srivastava et al., 2017; Srivastava et al., 2017; Srivastava et al., 2017)), the original problems are reduced to a single sample problem, which can be further solved by convex optimizations or optimal stopping rules. However, these methods do not hold in our case because our new problem is decoupled to a _recursive optimal stopping problems_ with multiple samples1. Similarly, our work is different from (Srivastava et al., 2017), because this problem is decoupled to a discounted MDP, and each action of the MDP is not a stopping time. Nonetheless, we are able to circumvent these challenges and solve the optimal sampling problem. The main contributions of this paper are stated as follows:
Footnote 1: Also, our problem is significantly different from that with instantaneous transmission time, e.g., (Garshan et al., 2017), because even if there is no sampling rate constraint, the zero-wait sampling policy is not optimal.
* We provide an exact solution to our optimal sampling problem. The optimal sampling strategy has a simple structure: each sampling time is a stopping time that takes the sample when the instantaneous estimation error exceeds a threshold. The optimal threshold remains the same, independent of the Wiener process value and whether the last transmission failed or not. Moreover, the optimal threshold can be solved efficiently, e.g., by using a two layer bisection search algorithm. Our results hold for general distributions of the transmission delay and arbitrary probability of the i.i.d. transmission failure. To solve our _recursive optimal stopping problems_, we developed new approaches. We provide an exact value function to the value iteration problem. Specifically, we solve a sequence of optimal stopping problems, where the action value function implies taking an action at the first sample and taking the optimal stopping times at the remaining samples. The technical tools used to establish the results include (a) the strong Markov property and Martingale properties of the Wiener process, (b) Shiryaev's free boundary method for solving optimal stopping problems.
* When the sampling time does not depend on the Wiener process, the expected square estimation error is equal to the age (Srivastava et al., 2017), and our original problem is equivalent to an age minimization problem. We provide the exact solution as well. The sampler takes the sample when the age first exceeds a threshold. This result also improves (Srivastava et al., 2017, Theorem 1) by removing the assumption of the regenerative process.
* Numerical simulations are provided to validate our results. An interesting observation is that when the channel is highly unreliable, our optimal policy still performs much better than the age-optimal and zero-wait policies.
## 2. Model and Formulation
### System Model and MMSE Estimator
We consider a continuous-time status update system as is depicted in Fig. 1, where a sampler takes the sample from the Wiener process \(W_{t}\) and transmits to a destination through an unreliable channel. The destination provides an estimate \(\hat{W}_{t}\) based on the samples that have been successfully delivered. The extended setting from a reliable channel to an unreliable channel is one of the key features of our study.
We use \(i\in\{1,2,\ldots\}\) to indicate the number of samples generated by the sampler. The \(i\)th sample is generated at time \(S_{i}\) and is transmitted through the unreliable channel. The sample contains the sampling time \(S_{i}\) and the sample value \(W_{S_{i}}\). The unreliable channel has an i.i.d. transmission failure, and we denote \(\alpha\in[0,1)\) as the probability of failure (i.e., the channel condition is _OFF_). The channel also has an i.i.d. transmission time \(Y_{i}\), and we have \(\mathbb{E}[Y_{i}^{2}]<\infty\). The transmission time and the channel condition are mutually independent. In this paper, we also assume that the
transmission time is lower bounded, i.e., there exists \(\epsilon>0\) (which can be sufficiently small) such that \(Y_{i}\geq\epsilon\). The \(i\)th sample is delivered to the destination at time \(D_{i}\), where \(D_{i}=S_{i}+Y_{i}\). At the delivery time \(D_{i}\), the destination knows the outcome of the transmission of the \(i\)th sample. Only if the transmission was successful does the destination receive the sample message (\(S_{i},W_{S_{i}}\)). In addition, at \(D_{i}\), the destination then sends an acknowledgment back to the sampler, informing whether the transmission of the \(i\)th sample was successful or not. We assume that the transmission process of the acknowledgment is instantaneous and error free. Note that the sampler always generates a sample after it receives feedback, i.e., \(S_{i+1}\geq D_{i}\). Otherwise, the generated sample would be queued while waiting to be transmitted, and the queued sample would be stale compared to a fresh sample.
The estimator (destination) also provides a minimum mean square error (MMSE) estimator \(\hat{W}_{t}\) based on the successfully received samples until time \(t\).
We denote the random variable \(\underline{i}\) as the index of the latest sample that is _successfully_ delivered to the destination by the time \(D_{i}\). In the special case of a reliable channel, each sample is successfully delivered, so we have \(\underline{i}=i\); otherwise, \(\underline{i}\leq i\). The latest (and thus freshest) sample the destination has received during \(t\in[D_{i},D_{i+1})\) is \((S_{\underline{i}},W_{S_{i}})\). Using the strong Markov property of the Wiener process [25, Eq. (4.3.27)], the MMSE estimator \(\hat{W}_{t}\) is expressed as
\[\hat{W}_{t}=\mathbb{E}[W_{t}|S_{\underline{i}},W_{S_{\underline{i}}}]=\mathbb{ E}[W_{t}|W_{S_{\underline{i}}}]=W_{S_{\underline{i}}},t\in[D_{i},D_{i+1}). \tag{1}\]
A sample path of \(W_{t}\), \(\hat{W}_{t}\), and the estimation error \(W_{t}-\hat{W}_{t}\) are depicted in Fig. 2. In this figure, the 2nd sample is not successfully delivered. Thus, when \(t\in[D_{2},D_{3})\), the estimator \(\hat{W}_{t}\) is still \(W_{S_{1}}\), not \(W_{S_{2}}\). In other words, \(i=2\), but \(\underline{i}=1\). This is one of the key differences from the previous studies with the reliable channel case, e.g., [30, 31, 21, 32].
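To make the model concrete, the following minimal simulation sketch generates a discretised Wiener path, sends periodically generated samples over a channel with i.i.d. failures and a constant delay, and maintains the estimate of Eq. (1). The periodic (signal-agnostic) sampling rule and all parameter values are purely illustrative; they are not the optimal policy studied below.

```python
import numpy as np

# Minimal simulation sketch of the model in Fig. 1 with the MMSE estimator of Eq. (1).
# The periodic sampling rule and the values of dt, T, alpha, Y, period are assumptions.
rng = np.random.default_rng(0)
dt, T, alpha, Y, period = 0.01, 200.0, 0.3, 0.5, 2.0
n = int(T / dt)
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n - 1))))  # Wiener path
est = 0.0                                   # current estimate \hat{W}_t
delivery_step, sample_value = None, None
next_sample_step = 0
sq_err_sum = 0.0

for k in range(n):
    if delivery_step is not None and k >= delivery_step:     # delivery instant D_i
        if rng.random() > alpha:                              # transmission succeeded
            est = sample_value                                # Eq. (1): \hat{W}_t = W_{S_i}
        delivery_step = None                                  # feedback received
        next_sample_step = max(next_sample_step, k)           # enforce S_{i+1} >= D_i
    if delivery_step is None and k >= next_sample_step:
        sample_value = W[k]                                   # take sample (S_i, W_{S_i})
        delivery_step = k + int(Y / dt)
        next_sample_step = k + int(period / dt)
    sq_err_sum += (W[k] - est) ** 2 * dt

print("empirical time-average MSE:", sq_err_sum / T)
```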
### Sampling Times and Problem Formulation
We will control the sequence of causal sampling times \(S_{i}\)'s to minimize the estimation error. We will consider two types of sampling time: (i) the sampling time depends on the Wiener process (signal-aware sampling) and (ii) the sampling time is independent of the Wiener process (signal-agnostic sampling).
Figure 2: A sample path of the Wiener process \(W_{t}\) and the MMSE \(\hat{W}_{t}\) over time \(t\). At \(D_{1},D_{3},D_{4}\), the sample is successfully delivered, so \(\hat{W}_{t}\) is updated to be \(W_{S_{1}},W_{S_{3}},W_{S_{4}}\), respectively. At \(D_{2}\), the sample is not successfully delivered, so \(\hat{W}_{t}\) remains unchanged.
#### 2.2.1. Signal-aware Sampling
When the sampling time \(S_{i}\) depends on the Wiener process, \(S_{i}\) is a _stopping time_, i.e., \(S_{i}\) satisfies:
\[\{S_{i}<t\}\in\mathcal{F}(t)^{+},\;\mathcal{F}(t)^{+}\triangleq\cap_{s>t} \sigma(W_{r},r\in[0,s]). \tag{2}\]
Here, \(\sigma(A_{1},\ldots,A_{n})\) is the \(\sigma-\)field generated by the random variables \(A_{1},\ldots,A_{n}\), and \(\mathcal{F}(t)^{+}\) is a filtration, i.e., a non-decreasing and right-continuous family of \(\sigma-\)field available to the sampler at time \(t\). Intuitively, the sampling time \(S_{i}\) not only depends on the history information prior to \(D_{i-1}\), but also depends on the evolution of the Wiener process starting from \(D_{i-1}\).
Then, we define the sampling policies. The policy space \(\Pi_{\text{signal-aware}}\) is defined as the collection of causal policies \(\pi=S_{1},S_{2},\ldots\) such that: (i) \(S_{i}\) satisfies the condition (2), and \(S_{i}\geq D_{i-1}\); (ii) For each \(i\), the waiting time \(S_{i}-D_{i-1}\) is bounded by a stopping time that is independent of the history information before \(S_{i-1}\)2. In addition, this bounded stopping time \(\tilde{\tau}\) satisfies \(\mathbb{E}\left[W_{\tilde{\tau}}^{4}\right]<\infty\)3.
Footnote 2: It is the upper bound stopping time that is independent of the history information before \(S_{i-1}\), not all of the stopping times. For example, the upper bound stopping time is to stop where the estimation error \(W_{t}-\hat{W}_{\tilde{\tau}}\) exceeds a sufficiently large value. Setting this stopping time as an upper bound is reasonable, because we want to minimize the estimation error. Then, this stopping time is independent of the history information before \(S_{i-1}\).
Footnote 3: If the condition (ii) does not hold, then the term \(\lim_{T\to\infty}\mathbb{E}[R_{N(T)}]/T\) may not be \(0\), where \(N(T)\) is the largest number \(n\) such that \(S_{n}<T\), and \(R_{n}=\int_{D_{n-1}}^{D_{n}}(W_{t}-\hat{W}_{\tilde{\tau}})^{2}dt\). If \(\lim_{T\to\infty}\mathbb{E}[R_{N(T)}]/T\neq 0\), \(R_{n}\) will diverge to infinity, which is not our concern.
#### 2.2.2. Signal-agnostic Sampling
When the sampling time is independent of the Wiener process, we then define the collection of policies \(\Pi_{\text{signal-agnostic}}\) as the collection of policies \(\pi=S_{1},S_{2},\ldots\) such that: (i) \(S_{i}\) satisfies \(S_{i}\geq D_{i-1}\); (ii) For each \(i\), \(S_{i}-D_{i-1}\) is bounded by a finite 2nd moment random variable that is independent of the history information before \(S_{i-1}\).
Note that for any finite 2nd moment random variable \(A\), we have \(\mathbb{E}\left[W_{A}^{4}\right]=3\mathbb{E}\left[A^{2}\right]<\infty\). Therefore, \(\Pi_{\text{signal-agnostic}}\subset\Pi_{\text{signal-aware}}\).
#### 2.2.3. Problem Formulation
Our objective in this paper is to optimize the long-term average mean square estimation error (MSE) for both signal-aware and signal-agnostic cases:
\[\text{mse}_{\text{opt}}=\inf_{\pi\in\Pi}\limsup_{T\to\infty}\frac{1}{T} \mathbb{E}\left[\int_{0}^{T}(W_{t}-\hat{W}_{t})^{2}dt\right]. \tag{3}\]
We aim to find a sampling policy \(\pi\) from the set \(\Pi\) of all causal policies, in order to minimize the MSE. The value \(\text{mse}_{\text{opt}}\) is also called the optimal objective value. Problem (3) is typically hard to solve due to the following reasons. (i) Problem (3) is an infinite horizon undiscounted semi-Markov decision problem with an uncountable state space. (ii) For the case of signal-aware sampling, each action (sampling time) is a stopping time.
## 3. Main Results
### Optimal Signal-aware Sampling Policy
We first break down the time-horizon problem (3) into a series of optimal sampling subproblems. Each of these subproblems determines the optimal sampling times between \(D_{\overline{j}}\) and \(D_{\overline{j+1}}\), where \(D_{\overline{j}}\) represents the time of the \(j\)th successful delivery.
Lemma 1 ().: _Solving the problem (3) is the same as solving a series of equivalent optimal sampling subproblems, where the \(j\)th subproblem is given by_
\[J(w,\beta)\triangleq\inf_{\pi\in\Pi}\mathbb{E}\left[\int_{D_{\overline{j}}}^{ D_{\overline{j+1}}}(W_{t}-\hat{W}_{t})^{2}dt-\beta(D_{\overline{j+1}}-D_{ \overline{j}})\Big{|}W_{D_{\overline{j}}}-\hat{W}_{D_{\overline{j}}}=w\right], \tag{4}\]
_where \(\beta=\text{mse}_{\text{opt}}\)._
Lemma 1 is a restatement of Lemma 4 in Section 5. We note that, by choosing \(\beta=\text{mse}_{\text{opt}}\), the sequence of linearized optimal stopping subproblems (4) has the same solution as the original problem (3). Note that these subproblems are independent and thus _equivalent_. In other words, we only need to solve one subproblem (4) regardless of \(j\), and \(J(w,\beta)\) remains the same for any given \(j\). This is because each \(W_{D_{\overline{j}}}-\hat{W}_{D_{\overline{j}}}\) is independent of any history information before \(D_{\overline{j}}\). Moreover, Lemma 1 improves similar results in e.g., [21, 24, 30, 32], by removing the assumption that the \(S_{i}\)'s form a regenerative process. Overall, to solve (3), we can first solve (4) with any given parameter \(\beta>0\).
However, problem (4) is still hard to solve. Let \(M_{j}\) be the total number of transmission attempts between \(D_{\overline{j}}\) and \(D_{\overline{j+1}}\). Then, \(\overline{j+1}=\overline{j}+M_{j}\). Problem (4) needs to determine a sequence of sampling times \(S_{\overline{j}+1},S_{\overline{j}+2},\ldots,S_{\overline{j}+M_{j}}\) until a successful packet delivery occurs at time \(D_{\overline{j+1}}\). Hence, problem (4) is a _repeated optimal stopping problem with continuous-time control and a continuous state space_. This is the key technical challenge of our study. To the best of our knowledge, this type of problem has not been addressed before. One limiting case of problem (4) was studied in [30, Eq. 47], where there are no transmission errors and hence \(M_{j}=1\).
We develop a value iteration algorithm that can find the optimal stopping times for solving problem (4). To that end, we define a sequence of optimal stopping problems:
\[J_{n}(w,\beta) \triangleq \inf_{\pi\in\Pi}\mathbb{E}\left[\int_{D_{\overline{j+1}-\min(M_{j},n)}}^{D_{\overline{j+1}}}(W_{t}-\hat{W}_{t})^{2}dt-\beta(D_{\overline{j+1}}-D_{\overline{j+1}-\min(M_{j},n)})\right. \tag{5}\] \[\left.\Big{|}\,W_{D_{\overline{j+1}-\min(M_{j},n)}}-\hat{W}_{D_{\overline{j+1}-\min(M_{j},n)}}=w\right],\ n=1,2,\ldots.\]
Hence, \(J_{n}(w,\beta)\) determines the optimal solution for at most the last \(n\) transmission attempts in problem 4. The principle of backward induction implies that \(J_{n}\) satisfies the _value iteration algorithm_:
\[J_{0}(w,\beta) \triangleq 0,\] \[J_{n+1}(w,\beta) \triangleq \inf_{\tau}\left\{g(w;\tau)+\alpha\mathbb{E}\left[J_{n}(w+W_{\tau+Y},\beta)\right]\right\},\ n=0,1,2,\ldots, \tag{6}\]
where \(w+W_{\tau+Y}\) is the estimation error after the stopping time \(\tau\) and the transmission time \(Y\). The per-stage cost function \(g(w;\tau)\) is defined as the squared estimation error minus \(\beta\), integrated from the last delivery time to the next delivery time under the stopping time \(\tau\):
\[g(w;\tau)=\mathbb{E}\left[\int_{0}^{\tau+Y}(w+W_{t})^{2}dt-\beta(\tau+Y)\right]. \tag{7}\]
The following theorem provides an exact solution to (6), which is the key contribution in this paper:
Theorem 1: _The sequence of optimal stopping times \(\tau_{n}\)'s to problem (6) is given as follows:_
\[\tau_{n}=\inf_{t}\left\{t\geq 0:|w+W_{t}|\geq v_{n}(\beta)\right\}, \tag{8}\]
\(v_{n}(\beta)\) _is the unique positive root of the free boundary differential equation:_
\[\frac{\partial}{\partial w}J_{n}(w,\beta)\Big{|}_{w=v_{n}(\beta)^{+}}=\frac{ \partial}{\partial w}J_{n}(w,\beta)\Big{|}_{w=v_{n}(\beta)^{-}}, \tag{9}\]
\(J_{n}(w,\beta)\) is updated as:_
\[J_{0}(w,\beta) =0,\] \[J_{n}(w,\beta) =g(w,v_{n}(\beta),\beta)+\alpha\mathbb{E}_{W_{Y}}\left[J_{n-1}(\max\{|w|,v_{n}(\beta)\}+W_{Y},\beta)\right],\ n=1,2,\ldots, \tag{10}\]
_the function \(g(w,v,\beta)\) is equal to_
\[g(w,v,\beta)=\frac{1}{2}\mathbb{E}\left[Y^{2}\right]+\mathbb{E} \left[Y\right]w^{2}-\mathbb{E}\left[Y\right]\beta+\frac{1}{6}\max(v^{4}-w^{4}, 0)-(\beta-\mathbb{E}[Y])\max(v^{2}-w^{2},0). \tag{11}\]
_Moreover, the sequence \(\{v_{n}(\beta)\}_{n}\) is decreasing and thus convergent._
The proof of Theorem 1 is provided in Section 5.3. Theorem 1 implies that each optimal stopping time \(\tau_{n}\) is a hitting time that will stop when the estimation error exceeds a threshold \(v_{n}(\beta)\). The threshold \(v_{n}(\beta)\) is chosen by the free boundary method [25], where the optimal value function \(J_{n}(w,\beta)\) should be continuously differentiable on \(w\in\mathbb{R}\). Since \(v_{n}(\beta)\) is decreasing and convergent, \(\tau_{n}\) is also convergent.
In addition, the optimal threshold \(o_{n}(\beta)\) can be solved efficiently. In Theorem 5 of Section 5.3, we showed that the root of the free boundary method in (9) is equivalent to:
\[G_{n}^{x}(w,\beta)+\frac{1}{3}w^{3}-\beta w=0. \tag{12}\]
Interestingly, \(G_{n}^{x}(w,\beta)=\frac{1}{2}\frac{\partial}{\partial w}J_{n}(w,\beta)|_{w=v_{n}(\beta)^{+}}\), and \(-\frac{1}{3}w^{3}+\beta w=\frac{1}{2}\frac{\partial}{\partial w}J_{n}(w,\beta)|_{w=v_{n}(\beta)^{-}}\).
\(G_{0}^{x}(w,\beta)=0\), and the function \(G_{n}^{x}(w,\beta)\) is updated as
\[G_{n}^{x}(w,\beta)=\mathbb{E}\left[Y\right]w\] \[+\alpha\mathbb{E}_{W_{Y}}\left[G_{n-1}^{x}(w+W_{Y},\beta)\mathds{1}_{|w+W_{Y}|\geq v_{n-1}(\beta)}+\left(\beta(w+W_{Y})-\frac{1}{3}(w+W_{Y})^{3}\right)\mathds{1}_{|w+W_{Y}|<v_{n-1}(\beta)}\right]. \tag{13}\]
Because (13) contains only an expectation over \(W_{Y}\) without derivatives, computing \(G_{n}^{x}(w,\beta)\) is easy. We also showed that \(G_{n}^{x}(w,\beta)+\frac{1}{3}w^{3}-\beta w\) is strongly convex for \(w>0\). Thus, we only need logarithmic time complexity to solve \(v_{n}(\beta)\) for each \(n\) in (12), e.g., by bisection search or Newton's method. Fig. 3 illustrates some intuitive properties of \(v_{n}(\beta)\) and its root function, \(G_{n}^{x}(w,\beta)+\frac{1}{3}w^{3}-\beta w\).
Figure 3: The evolution of the root function: \(G_{n}^{x}(w,\beta)+\frac{1}{3}w^{3}-\beta w\) over \(w\), with \(n=1,2,4\). In this example, we set \(\beta=11.0,\alpha=0.3\), and a constant transmission delay \(Y=6\). It is easy to see that \(v_{n}(\beta)\), which is the positive root of \(G_{n}^{x}(w,\beta)+\frac{1}{3}w^{3}-\beta w\), is decreasing in \(n\).
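To illustrate how the recursion (12)-(13) might be implemented, the sketch below tabulates \(G_{n}^{x}\) on a grid in \(w\) and finds each \(v_{n}(\beta)\) by bisection, for a constant transmission time \(Y\) (so that \(W_{Y}\sim\mathcal{N}(0,Y)\)) and the parameter values of Fig. 3. The grid, the quadrature over \(W_{Y}\) and the starting convention \(v_{0}=0\) are our own implementation choices rather than prescriptions from Theorem 1.

```python
import numpy as np

# Sketch of the recursion (12)-(13) for constant delay Y, with beta = 11.0, alpha = 0.3, Y = 6
# (the illustrative values of Fig. 3). G_n^x is tabulated on a w-grid; the expectation over
# W_Y ~ N(0, Y) uses a simple quadrature; v_n(beta) is found by bisection (the root function
# of Eq. (12) is strongly convex for w > 0). Starting convention v_0 = 0 (assumption).
beta, alpha, Y = 11.0, 0.3, 6.0
EY = Y                                                     # E[Y] for a deterministic delay
w = np.linspace(-40.0, 40.0, 4001)
z, dz = np.linspace(-6.0, 6.0, 241, retstep=True)
weights = np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi) * dz    # N(0,1) quadrature weights
W_Y = np.sqrt(Y) * z                                       # Wiener increment over the delay

def positive_root(fn, lo=1e-6, hi=35.0, iters=80):         # bisection for the unique root > 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if fn(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

G, v = np.zeros_like(w), 0.0                               # G_0^x = 0, v_0 = 0 (convention)
for n in range(1, 31):
    shifted = w[:, None] + W_Y[None, :]
    G_shift = np.interp(shifted, w, G)                     # G_{n-1}^x(w + W_Y)
    branch = np.where(np.abs(shifted) >= v, G_shift, beta * shifted - shifted ** 3 / 3.0)
    G = EY * w + alpha * branch @ weights                  # Eq. (13)
    v = positive_root(lambda x: np.interp(x, w, G) + x ** 3 / 3.0 - beta * x)   # Eq. (12)
    if n in (1, 2, 4, 30):
        print(f"v_{n}(beta) = {v:.4f}")
```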
Further, \(J_{n}\) converges linearly to \(J\). To make this precise, we first define a norm. Pick any value \(\rho\) with \(\alpha<\rho<1\), and define a weight function \(u(w)=\max(\tilde{b},w^{2})\), where \(\tilde{b}\) can take any positive value such that \(\mathbb{E}\left[1+\frac{2|W_{Y}|}{\sqrt{\tilde{b}}}+\frac{W_{Y}^{2}}{\tilde{b}}\right]\leq\frac{\rho}{\alpha}\). The weight function \(u(w)\) does not depend on \(\beta\). The weighted sup-norm \(\|\cdot\|\) of a function \(f(w)\) is defined as \(\|f\|=\sup_{w\in\mathbb{R}}\left|\frac{f(w)}{u(w)}\right|\). We have the following result:
Lemma 2: \(\|J_{n}(\cdot,\beta)-J(\cdot,\beta)\|\leq\rho\|J_{n-1}(\cdot,\beta)-J(\cdot, \beta)\|\)_._
Lemma 2 is restated as Lemma 9 in Section 5.4. Since \(\tau_{n}\) is convergent, each of the optimal stopping (waiting) times in (4) is also a hitting time, with threshold \(v(\beta)=\lim_{n\to\infty}v_{n}(\beta)\). We finally conclude the following result:
Theorem 2: _An optimal sequence of sampling times \(S_{i}\) for the series of problems (4) is:_
\[S_{i+1}=\inf_{t}\left\{t\geq D_{i}:|W_{t}-\hat{W}_{t}|\geq v(\beta)\right\}, \ i=0,1,2,\ldots, \tag{14}\]
_where \(v(\beta)\) is the limit of the sequence \(v_{n}(\beta)\)'s, and \(v_{n}(\beta)\) can be computed by solving (9), or more efficiently, by solving (12) and (13)._
The proof of Theorem 2 is provided in Section 5.4.
Theorem 2 illustrates an important property of an optimal sampling policy for a given parameter \(\beta\). Note that \(|W_{t}-\hat{W}_{t}|\) is the estimation error at the current time \(t\). Theorem 2 implies that the optimal sampling policy given in (14) has a simple structure. The optimal policy is of _threshold type_: the sampler waits until the instantaneous estimation error \(|W_{t}-\hat{W}_{t}|\) exceeds the threshold \(v(\beta)\). In particular, if the estimation error already exceeds \(v(\beta)\) at the delivery time \(D_{i}\), then it is optimal to transmit the next sample immediately. The optimal threshold \(v(\beta)\) is independent of the evolution of the Wiener process.
After solving (4) for a given \(\beta\), we finally determine the optimal objective value \(\beta=\text{mse}_{\text{opt}}\). Note that in (4), the estimation error at a delivery time, \(W_{D_{i}}-\hat{W}_{D_{i}}\), has the same distribution as \(W_{Y}\), where \(Y\) has the same distribution as the i.i.d. transmission delays \(Y_{i}\). Then, we have the following result:
Theorem 3: \(\beta=\text{mse}_{\text{opt}}\) _is the root of_
\[\mathbb{E}\left[J(W_{Y},\beta)\right]=0, \tag{15}\]
_where \(\text{mse}_{\text{opt}}\) is the optimal objective value of (3)._
Theorem 3 is shown in Lemma 3 of Section 5. Combining Theorem 2 and Theorem 3, we obtain the optimal solution to (3).
Moreover, we show that a low-complexity algorithm, such as bisection search, can be used to compute the root \(\beta\) of (15). In conclusion, we can efficiently compute \(v(\text{mse}_{\text{opt}})\) and \(\text{mse}_{\text{opt}}\) with low complexity, as provided in Algorithm 1 (a schematic sketch of its two-layer structure is given after the list below):
* Lines 4\(-\)10 in Algorithm 1 form an _inner layer_ that efficiently computes the optimal threshold \(v(\beta)\) and the function \(J(w,\beta)\) for a given \(\beta\) (corresponding to Theorem 1 and Theorem 2). In Line 5, due to Lemma 2, only a logarithmic number of iterations is needed. In Line 8, since the root function in (12) is strongly convex, a simple Newton's method suffices to obtain \(v_{n}(\beta)\).
* Lines 2, 3, and 11 serve as an _outer layer_ that uses a simple bisection method to determine the root \(\beta\) (corresponding to Theorem 3).
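The Python sketch below illustrates the two-layer structure described above. It is not the paper's actual implementation: the delay distribution, the Monte Carlo evaluation of the expectations over \(W_{Y}\), the grid/interpolation used to represent \(J_{n}(\cdot,\beta)\) (assumed even in \(w\), with \(G^{x}\) extended as an odd function), the choice \(v_{0}(\beta)=0\) (which makes (13) yield \(G_{1}^{x}(w,\beta)=\mathbb{E}[Y]w\), consistent with \(J_{0}=0\)), the bisection bracket for \(\beta\), and all tolerances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- illustrative model data (hypothetical choices) ----
ALPHA = 0.3                                       # probability of transmission failure
Y = rng.lognormal(0.0, 0.5, 10_000)               # i.i.d. transmission delay samples
EY, EY2 = Y.mean(), (Y**2).mean()
WY = np.sqrt(Y) * rng.standard_normal(Y.size)     # samples of W_Y
GRID = np.linspace(0.0, 12.0, 300)                # grid for |w|; values beyond are clamped

def g(w, v, beta):                                # per-stage cost, eq. (11)
    return (0.5 * EY2 + EY * w**2 - EY * beta
            + np.maximum(v**4 - w**4, 0.0) / 6.0
            - (beta - EY) * np.maximum(v**2 - w**2, 0.0))

def root_pos(f, hi=30.0, iters=60):               # bisection for the positive root of (12)
    lo = 1e-9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) <= 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

def inner_layer(beta, n_max=40, tol=1e-4):
    """Inner layer (Lines 4-10): value iteration (10)-(13) for a fixed beta.
    Returns the limiting threshold v(beta) and J(., beta) tabulated on GRID."""
    Jgrid = np.zeros_like(GRID)                   # J_0 = 0
    Gx = lambda w: EY * w                         # G_1^x, taking v_0(beta) = 0
    v_prev = np.inf
    for _ in range(n_max):
        v = root_pos(lambda w: Gx(w) + w**3 / 3.0 - beta * w)          # eq. (12)
        # J_n via eq. (10): Monte Carlo over W_Y plus interpolation on GRID
        nxt = np.abs(np.maximum(GRID, v)[:, None] + WY[None, :])
        Jgrid = g(GRID, v, beta) + ALPHA * np.interp(nxt, GRID, Jgrid).mean(axis=1)
        # G_{n+1}^x via eq. (13); G^x is extended to negative arguments as an odd function
        tab = np.array([Gx(w) for w in GRID])
        def Gx(w, v=v, tab=tab):
            x = w + WY
            old = np.sign(x) * np.interp(np.abs(x), GRID, tab)
            new = beta * x - x**3 / 3.0
            return EY * w + ALPHA * np.where(np.abs(x) >= v, old, new).mean()
        if abs(v - v_prev) < tol:
            break
        v_prev = v
    return v, Jgrid

def outer_layer(beta_lo=EY, beta_hi=60.0, iters=20):
    """Outer layer (Lines 2, 3, 11): bisection on beta for E[J(W_Y, beta)] = 0 (Theorem 3)."""
    for _ in range(iters):
        beta = 0.5 * (beta_lo + beta_hi)
        v, Jgrid = inner_layer(beta)
        h = np.interp(np.abs(WY), GRID, Jgrid).mean()   # estimate of E[J(W_Y, beta)]
        beta_lo, beta_hi = (beta, beta_hi) if h > 0.0 else (beta_lo, beta)
    return beta, v                                      # beta ~ mse_opt, v ~ v(mse_opt)
```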
In the special case where \(\alpha=0\), it is easy to observe that \(v_{n}(\beta)=v_{1}(\beta)\) and \(J_{n}(w,\beta)=J_{1}(w,\beta)=g(w,v_{1},\beta)\) for all \(n=1,2,\ldots\). As a result, the optimal threshold is \(v(\beta)=v_{1}(\beta)\), and the optimal value function is \(J(w,\beta)=g(w,v_{1},\beta)\). By (12), \(v_{1}(\beta)=\sqrt{3(\beta-\mathbb{E}[Y])}\). Therefore, Theorems 2 and 3 reduce to the following corollary:
**Corollary 1**.: _Suppose that \(\alpha=0\). Then an optimal sequence of sampling times \(S_{i}\) for problem (3) satisfies:_
\[S_{i+1}=\inf_{t}\left\{t\geq D_{i}:|W_{t}-\hat{W}_{t}|\geq\sqrt{3(\beta-\mathbb{ E}[Y])}\right\}, \tag{16}\]
_where \(\beta\) is the root of_
\[\mathbb{E}\left[g(W_{Y},\sqrt{3(\beta-\mathbb{E}[Y])},\beta)\right]=0. \tag{17}\]
_Moreover, \(\beta=\text{mse}_{\text{opt}}\) is the optimal objective value of (3)._
The optimal policy provided in Corollary 1 is the same as that of [30, Theorem 1]. In addition, we improve [30, Theorem 1] by removing the assumption of a regenerative process. The optimal sampling policy provided in Corollary 1 is a threshold policy on the instantaneous estimation error, and the optimal threshold is given in closed form.
There are several variations of Corollary 1 for the reliable channel case \(\alpha=0\). In [21], the source process is changed to an Ornstein-Uhlenbeck process, and the optimal threshold is shown to be the root of a closed-form equation. A model where the source can reset the Wiener process is described in [32]. Theorem 2 and Theorem 3 differ from these studies by generalizing to an i.i.d. unreliable channel scenario (\(\alpha\geq 0\)). Note that, for each sample, the last transmission may have succeeded or failed. Nevertheless, in Theorem 2 and Theorem 3, each sampling time follows the same threshold rule with the same threshold \(v(\beta)\), regardless of whether the last transmission failed or not.
The expression (14) in Theorem 2 implies that our optimal policy relies on the value of the Wiener process at the sampling time of the last successfully delivered sample (through the estimator \(\hat{W}_{t}\)), but not necessarily on its value at the most recent sampling time \(S_{i}\). This is also a key difference from the case of a reliable channel (\(\alpha=0\)), e.g., [21, 30, 32] and Corollary 1.
### Optimal Signal-agnostic Sampling Policy with Sampling Rate Constraint
Finally, we turn to the signal-agnostic case and provide the exact solution to Problem (3). Using [30], for any signal-agnostic policy, we have
\[\mathbb{E}\left[(W_{t}-\hat{W}_{t})^{2}\right]=\Delta_{t}=t-S_{i},\ t\in[D_{i},D_{i+1}). \tag{18}\]
In other words, when the sampling times do not depend on the Wiener process, the expected square estimation error (MMSE) is equal to the _age of information_. Hence, our MSE-optimal sampling problem (3) is equivalent to the age-optimal sampling problem:
\[\text{age}_{\text{opt}}=\inf_{\pi\in\Pi}\limsup_{T\rightarrow\infty}\frac{1}{T} \mathbb{E}\left[\int_{0}^{T}\Delta_{t}dt\right]. \tag{19}\]
Age of information \(\Delta_{t}\), or simply the age, is a metric for evaluating data freshness. As shown in (18), the age \(\Delta_{t}\) is the time elapsed since the freshest delivered sample was generated [29]. If a fresh sample is successfully delivered to the estimator, the age drops to the system time of that sample; otherwise, the age increases linearly in time. A sample path of the age \(\Delta_{t}\) is depicted in Fig. 4.
We then have the following result:
Theorem 4: _An optimal sequence of sampling times \(S_{i}\) for problem (19) is given by:_
\[S_{i+1}=\inf_{t}\left\{t\geq D_{i}:\Delta_{t}\geq\beta-\frac{\mathbb{E}\left[ Y\right]}{1-\alpha}\right\}. \tag{20}\]
_where \(\beta\) is the root of_
\[\mathbb{E}\left[\int_{Y}^{\max(\beta-\frac{\mathbb{E}\left[Y\right]}{1- \alpha},Y)+Y^{\prime}}tdt\right]-\beta\mathbb{E}\left[\max(\beta-Y,\frac{ \mathbb{E}\left[Y\right]}{1-\alpha})\right]=0, \tag{21}\]
_where \(Y^{\prime}=\sum_{k=1}^{M}Y_{j,k}\) with \(M\) geometrically distributed with parameter \(1-\alpha\), and \(Y\) and \(Y_{j,1},Y_{j,2},\ldots\) are i.i.d. with the same distribution as the transmission delays \(Y_{i}\)._
Theorem 4 provides the same sampling policy as [24, Theorem 1], but slightly improves it by removing the assumption of a regenerative process. The proof of this improvement is provided in Appendix H.
Different from Theorem 2, the optimal sampling policy here is a threshold policy on the age, or equivalently on the MMSE, instead of the instantaneous estimation error. Note that the age keeps increasing over time if there is no successful delivery. As a result, if the previous transmission failed, the age is always larger than the optimal threshold \(\beta-\frac{\mathbb{E}\left[Y\right]}{1-\alpha}\). Therefore, Theorem 4 tells us that if the previous transmission was successful, the sampler may wait for some time until the current
Figure 4: Evolution of the age \(\Delta_{t}\) over time \(t\).
age exceeds the threshold \(\beta-\frac{\mathbb{E}[Y]}{1-\alpha}\). If the previous transmission failed, the sampler chooses zero wait. This is another key difference from the optimal signal-aware sampling policy in Theorem 2: there, due to the randomness of the Wiener process, the sampler may need to wait regardless of the outcome of the previous transmission. In addition, since there is only one waiting time between two successful deliveries, the optimal objective value \(\beta\) is the root of the closed-form expression (21). In contrast, the root function of \(\beta\) for the signal-aware case in (15) is not in closed form; instead, as illustrated in Theorem 1 and Algorithm 1, we need to construct a sequence of functions \(J_{n}\) to approach it.
## 4. Simulation
In this section, we will compute the long term average MMSE (average MSE) of the following three sampling policies:
1. Our Results: our optimal sampling policy, i.e., the solution to problem (3) provided in Theorem 2. The average MSE is then computed by Algorithm 1. This policy waits until the estimation error exceeds a threshold.
2. Zero-wait: The source transmits a sample as soon as it receives the feedback, i.e., \(S_{i+1}=D_{i}\). This simple policy achieves the maximum throughput and the minimum delay. However, even in the case of a reliable channel, it may not optimize the age of information [34] or the
Figure 5: Average MSE versus \(\sigma\), where the channel delay is lognormally distributed with parameter \(\sigma\). As \(\sigma\) increases, the channel delay distribution becomes more heavy-tailed. The probability of i.i.d. transmission failure is \(\alpha=0.65\).
Figure 6: Average MSE versus the probability of i.i.d. transmission failure \(\alpha\), where the channel delay is lognormally distributed with parameter \(\sigma=1.5\).
estimation error [30]. In our study with an unreliable channel, Theorem 4 implies that the zero-wait policy does not optimize the age. Moreover, Theorems 1\(-\)3 imply that the zero-wait policy does not optimize the estimation error.
3. Age-optimal: This policy is provided in Theorem 4, restated from [24, Theorem 1], and its average MSE is computed by [24, Algorithm 1]. The age-optimal policy achieves the optimal average age. It waits until the age (i.e., the MMSE \(\mathbb{E}[(W_{t}-\hat{W}_{t})^{2}]\)) exceeds a threshold.
We follow the same network model as illustrated in Section 2 and Fig. 1. We consider two scenarios for the delay distribution of the unreliable channel: a heavy-tailed distribution (e.g., lognormal) and a short-tailed distribution (e.g., constant).
In the first scenario, we assume that the channel delay follows a lognormal distribution. The lognormal random variable with scale parameter \(\sigma\) is expressed as \(e^{\sigma A}/\mathbb{E}[e^{\sigma A}]\), where \(A\) is a standard normal random variable. Fig. 5 illustrates the average MSE of the sampling policies as a function of the parameter \(\sigma\) of the lognormal channel delay, for a given discount factor \(\alpha\) (the probability of channel failure). The numerical results validate that our proposed policy always achieves the lowest average MSE. Note that as \(\sigma\) increases, the lognormal delay distribution becomes more heavy-tailed. We observe that the zero-wait policy is far from optimality, and the average MSE of the age-optimal policy also grows much more quickly than that of our optimal policy. Therefore, our optimal policy substantially outperforms the age-optimal and zero-wait policies when the channel delay becomes heavy-tailed. Fig. 6 plots the average MSE as a function of \(\alpha\) for \(\sigma=1.5\). We observe that the zero-wait policy is always far from our optimal policy.
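For completeness, the normalized lognormal delay \(e^{\sigma A}/\mathbb{E}[e^{\sigma A}]\) can be sampled as in the short sketch below; since \(A\) is standard normal, \(\mathbb{E}[e^{\sigma A}]=e^{\sigma^{2}/2}\). The sample size and seed are arbitrary choices for illustration.

```python
import numpy as np

def sample_delays(sigma, size, rng=np.random.default_rng(0)):
    """Normalized lognormal delays e^{sigma*A} / E[e^{sigma*A}], with A ~ N(0,1).
    The normalization uses E[e^{sigma*A}] = exp(sigma**2 / 2), so the mean delay is 1."""
    A = rng.standard_normal(size)
    return np.exp(sigma * A) / np.exp(sigma**2 / 2.0)

delays = sample_delays(sigma=1.5, size=100_000)   # heavier tail for larger sigma
```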
In the second scenario, we assume that the channel delay is a constant. Fig. 7 depicts the average MSE of the different policies as \(\alpha\) varies. Note that the age-optimal policy is equivalent to the zero-wait policy when the delay is constant, as shown in [24, Corollary 3]. We observe that when the channel is more reliable (\(\alpha\) very small), the zero-wait policy is only slightly inferior to the optimal policy. However, as \(\alpha\) increases, the zero-wait policy moves far from optimality. The intuitive reason is as follows: since the Wiener process oscillates, with a nontrivial probability our optimal policy waits before each sample, no matter whether the last transmission failed or not. Compared to the zero-wait policy, this markedly different sampling strategy leads to a substantial improvement in the average MSE. This is a newly observed phenomenon that has not been reported in previous studies, e.g., [21, 24, 28, 30].
In summary, our optimal policy can perform much better than the zero-wait and age-optimal policies when either (i) the transmission time is heavy-tailed, or (ii) the transmission time is light-tailed but the channel is highly unreliable.
Figure 7: Average MSE versus \(\alpha\), where the channel delay is a constant with the delay \(Y=6\).
## 5. Proof of main results
In this section, we provide the proof for efficiently solving the optimal signal-aware policy for (3). In Section 5.1, we first show that there exists an optimal policy such that the inter-sampling times of the successfully delivered packets are i.i.d. Thus, the long term average MMSE in (3) is equal to the average MMSE between two successive successful delivery times. In Section 5.2, after linearizing, the reduced problem is equivalent to a discrete time discounted problem with multiple stopping times (27). This new problem is a strict generalization of a discrete time discounted MDP, where each action is extended to be a stopping time. To solve (27), in Section 5.3, we first speculate that the optimal policy and its optimal value function satisfy the Bellman equation. Then, we use a value iteration algorithm to approach the optimal value function, where each iteration is an optimal stopping problem. Interestingly, we analytically solve the optimal stopping time for each iteration, which is a key technical contribution of this paper. Finally, in Section 5.4, we use the contraction mapping property to show that the value functions produced by the value iteration algorithm converge linearly to the solution of the Bellman equation. Thus, we exactly solve (27). This completes the proof.
### Reducing to a Single-epoch Problem
#### 5.1.1. Replacing the subscript \(i\) by \((j,k)\)
The proof relies on the number of successfully delivered samples and the number of transmission attempts needed for a successful delivery. These quantities cannot be easily described in terms of \(\{S_{i},Y_{i},D_{i}\}\) using only one subscript \(i\). Therefore, for notational simplicity, throughout Section 5, we replace \(S_{i},Y_{i},D_{i}\) by \(S_{j,k},Y_{j,k},D_{j,k}\), respectively. Here, the \(j\)th epoch is the time interval between the \((j-1)\)th and the \(j\)th successful deliveries. Let \(M_{j}\) represent the total number of transmissions attempted during the \(j\)th epoch. Then, \(M_{j}\) has a geometric distribution with parameter \(1-\alpha\). Note that if the channel is reliable, then \(M_{j}=1\). In addition, \(k\in\{1,2,\ldots,M_{j}\}\) is the index of the transmission within the \(j\)th epoch, where \(k=1\) implies that the last transmission was successful. Note that the mapping from \(i\) to \((j,k)\) is one-to-one. For example, in Fig. 2, \(S_{1}=S_{1,1}\) with \(M_{1}=1\), \(S_{2}=S_{2,1},S_{3}=S_{2,2}\) with \(M_{2}=2\), and \(S_{4}=S_{3,1}\) with \(M_{3}=1\).
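To make the re-indexing concrete, the sketch below draws the epoch lengths \(M_{j}\) as geometric random variables with success probability \(1-\alpha\) and builds the one-to-one map from the flat index \(i\) to the pair \((j,k)\). The function name, seed, and sample values are purely illustrative.

```python
import numpy as np

def index_map(num_epochs, alpha, rng=np.random.default_rng(0)):
    """Illustrative construction of the one-to-one map i -> (j, k):
    epoch j, attempt k within the epoch.  M_j ~ Geometric(1 - alpha) is the
    number of transmissions in epoch j (M_j = 1 if the channel is reliable)."""
    M = rng.geometric(1.0 - alpha, size=num_epochs)
    mapping, i = {}, 1
    for j, m in enumerate(M, start=1):
        for k in range(1, m + 1):
            mapping[i] = (j, k)          # S_i corresponds to S_{j,k}
            i += 1
    return mapping, M

mapping, M = index_map(num_epochs=3, alpha=0.5)
# e.g. if M = [1, 2, 1], this reproduces S_1 = S_{1,1}, S_2 = S_{2,1},
# S_3 = S_{2,2}, S_4 = S_{3,1}, matching the example in the text.
```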
By (1), the MMSE estimator \(\hat{W}_{t}\) is expressed as
\[\hat{W}_{t}=W_{S_{j-1,M_{j-1}}},\quad t\in[D_{j-1,M_{j-1}},D_{j,M_{j}}). \tag{22}\]
#### 5.1.2. Reducing to a Single-epoch Problem
We aim to show that solving the original problem (3) reduces to optimizing the sampling times \(S_{j,1},S_{j,2},\ldots\) within an epoch \(j\) over a subset of the policy space \(\Pi_{\text{signal-aware}}\). We denote this subset by \(\Pi_{j}\): the collection of sampling times \(S_{j,1},S_{j,2},\ldots\) within epoch \(j\) such that each inter-sampling time \(\{S_{j,k}-S_{j-1,M_{j-1}},k=1,2,\ldots\}\) is independent of the history information before \(S_{j-1,M_{j-1}}\). The following result shows that our average cost problem (3) reduces to a single-epoch problem (with arbitrary index \(j\)) that contains possibly multiple samples from one successful delivery time until the next successful delivery time.
Proposition 1 ().: _There exists an optimal policy for the problem (3) such that \(\{S_{j,M_{j}}-S_{j-1,M_{j-1}}\}_{j}\) are i.i.d. Moreover, problem (3) is equivalent to_
\[mse_{opt}=\inf_{(S_{j,1},S_{j,2},\ldots)\in\Pi_{j}}\frac{\mathbb{E} \left[\int_{D_{j-1,M_{j-1}}}^{D_{j,M_{j}}}(W_{t}-W_{S_{j-1,M_{j-1}}})^{2}dt \right]}{\mathbb{E}\left[D_{j,M_{j}}-D_{j-1,M_{j-1}}\right]}. \tag{23}\]
Proof.: See Appendix B.
Proposition 1 implies that to solve the long term average MMSE problem (3), we can solve a problem over only a single epoch. Each sampling decision in this epoch is independent of the history information prior to the final sampling time of the previous epoch. Proposition 1 is motivated by [28, 30] for a reliable channel. In those studies, the original problem is reduced to an average MMSE problem between two delivery times (a single-sample problem). One key reason is that at each delivery time, the estimation error is reset and is independent of the history information before the last sampling time. In our unreliable case, however, at a failed delivery time the estimation error is not reset and remains correlated with that history information. Thus, our single-epoch problem cannot be further reduced to a single-sample problem. In addition, we improve [28, 30] by removing the assumption of a regenerative process. A result similar to Proposition 1 is presented in [2] for an unreliable channel and signal-agnostic sampling, without the regenerative-process assumption. We also generalize [2], since our sampling times depend on the Wiener process.
Although we have reformulated the long term average MMSE problem (3) as an average MMSE problem within a single epoch (23), problem (23) is still hard to solve, because its objective is a ratio and it is thus a repeated semi-MDP.
### Reformulating as a Multiple Stopping Times Problem: an Extension to a Discounted MDP
In this section, we will linearize problem (23) and reformulate it as a discounted cost and repeated Markov decision process (MDP), where each action is a stopping time.
Let us define a minimization problem with a parameter \(\beta\in\mathbb{R}\):
\[h(\beta)=\inf_{\pi\in\Pi_{j}}\mathbb{E}\left[\int_{D_{j-1,M_{j-1}}}^{D_{j,M_{ j}}}(W_{t}-W_{S_{j-1,M_{j-1}}})^{2}dt-\beta(D_{j,M_{j}}-D_{j-1,M_{j-1}}) \right]. \tag{24}\]
Here, \(\pi=(S_{j,1},S_{j,2},\ldots)\). By Dinkelbach's method [9], we have
**Lemma 3**.: _(i) \(h(\beta)\lesseqgtr 0\) if and only if \(\text{mse}_{opt}\lesseqgtr\beta\)._
_(ii) When \(\beta=\text{mse}_{opt}\), the solutions to (23) and (24) are identical._
Therefore, to solve (23), we will solve \(h(\text{mse}_{\text{opt}})=0\).
We denote \(Z_{j,k}\) as the waiting time for the \(k\)th sample in epoch \(j\). Then,
\[D_{j,M_{j}}-D_{j-1,M_{j-1}}=\sum_{k=1}^{M_{j}}(Z_{j,k}+Y_{j,k}). \tag{25}\]
Then, combined with (25) and the strong Markov property of the Wiener process, given that \(W_{D_{j-1,M_{j-1}}}-W_{S_{j-1,M_{j-1}}}=w\), \(w\in\mathbb{R}\), we have
\[\int_{D_{j-1,M_{j-1}}}^{D_{j,M_{j}}}(W_{t}-W_{S_{j-1,M_{j-1}}})^{2}dt=\int_{0 }^{\sum_{k=1}^{M_{j}}(Z_{j,k}+Y_{j,k})}(W_{t}+w)^{2}dt. \tag{26}\]
As a result, (25) and (26) give:
**Lemma 4**.: _An optimal solution to (23) given that \(W_{D_{j-1,M_{j-1}}}-W_{S_{j-1,M_{j-1}}}=w\), \(w\in\mathbb{R}\) satisfies_
\[J(w) \triangleq \inf_{\pi\in\Pi_{j}}J_{\pi}(w), \tag{27}\] \[J_{\pi}(w) \triangleq \mathbb{E}\left[\int_{0}^{\sum_{k=1}^{M_{j}}(Z_{j,k}+Y_{j,k})}(W_{ t}+w)^{2}dt-\text{mse}_{opt}\sum_{k=1}^{M_{j}}(Z_{j,k}+Y_{j,k})\right]. \tag{28}\]
Here, \(J(w)\) is the total cost of the optimal policy, also called the _optimal value function_, and \(J_{\pi}(w)\) is the total cost of a policy \(\pi\), also called the _action value function_ of \(\pi\).
For any policy \(\pi\), the action value function \(J_{\pi}(w)\) in (28) is further written as
\[J_{\pi}(w)=\mathbb{E}\left[\sum_{k=1}^{M_{j}}g(\tilde{W}_{k};Z_{j,k})|\tilde{W} _{1}=w\right], \tag{29}\]
where the state values \(\tilde{W}_{k},\ k=1,2\ldots\) satisfy
\[\tilde{W}_{k+1}=\tilde{W}_{k}+W_{Z_{j,k}+Y_{j,k}},k=1,2,\ldots, \tag{30}\]
\(g(w;\tau)\), also called the _per-stage cost function_, is the expected integral of the squared estimation error minus \(\text{mse}_{\text{opt}}\), accumulated from the last delivery time to the next delivery time,4 where the initial estimation error is \(w\) and the sampler's waiting time is \(\tau\). It is defined as:
Footnote 4: For comparison, \(J_{\pi}(w)\) is the expected integral of the squared estimation error minus \(\text{mse}_{\text{opt}}\), accumulated from the last delivery time to the next _successful_ delivery time.
\[g(w;\tau)=\mathbb{E}\left[\int_{0}^{\tau+Y}(w+W_{t})^{2}dt-\text{mse}_{\text{ opt}}(\tau+Y)\right], \tag{31}\]
where \(Y\) has the same distribution as the channel delay. The equation (29) holds because of the strong Markov property of the Wiener process.
Note that \(J_{\pi}(w)\) represents the expected cost of the squared estimation error minus the constant \(\text{mse}_{\text{opt}}\) within an epoch. In an epoch, if a transmission is successful, which happens with probability \(1-\alpha\), then the epoch ends: the system state enters a "stopping" set with \(0\) cost. If the transmission fails, which happens with probability \(\alpha\), the system proceeds to the next transmission with a per-stage cost \(g\). Therefore,
\[J_{\pi}(w)=\sum_{k=1}^{\infty}\alpha^{k-1}\mathbb{E}\left[g(\tilde{W}_{k};Z_{j,k})|\tilde{W}_{1}=w\right], \tag{32}\]
which is proven in [24, Appendix F]. Reaching the \(k\)th stage state \(\tilde{W}_{k}\) means that all of the previous \(k-1\) transmissions failed, and the coefficient \(\alpha^{k-1}\) is the probability of \(k-1\) consecutive failures.
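As a numerical illustration of (32), the expected discounted sum can be estimated by simulating one epoch at a time for a given stationary hitting-time policy. The sketch below relies on two facts from the surrounding text: the epoch length is geometric with parameter \(1-\alpha\), and for a hitting-time waiting rule the per-stage cost reduces to the closed form (11) (Lemma 5 in Section 5.3). The symmetry argument used for the next state (taking \(\max(|w|,v)\) and dropping its sign) and all numerical values are assumptions of this sketch, not statements from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def g(w, v, beta, EY, EY2):
    """Per-stage cost g(w, v, beta) of eq. (11)."""
    return (0.5 * EY2 + EY * w**2 - EY * beta
            + max(v**4 - w**4, 0.0) / 6.0
            - (beta - EY) * max(v**2 - w**2, 0.0))

def J_pi_estimate(w0, v, beta, alpha, delay_sampler, n_epochs=50_000):
    """Monte Carlo estimate of J_pi(w0) in (28)/(32) for the stationary
    hitting-time policy with threshold v.  Uses M ~ Geometric(1 - alpha) for the
    number of transmissions per epoch, |w + W_tau| = max(|w|, v) at the end of
    the waiting phase, and drops the sign of the post-hit state (a symmetry
    assumption of this sketch)."""
    pool = delay_sampler(10_000)
    EY, EY2 = pool.mean(), (pool**2).mean()
    total = 0.0
    for _ in range(n_epochs):
        w, cost = w0, 0.0
        M = rng.geometric(1.0 - alpha)
        for _ in range(M):
            cost += g(w, v, beta, EY, EY2)
            y = delay_sampler(1)[0]
            w = max(abs(w), v) + np.sqrt(y) * rng.standard_normal()
        total += cost
    return total / n_epochs

# example with an arbitrary constant delay and made-up parameters
est = J_pi_estimate(w0=0.0, v=4.0, beta=11.0, alpha=0.3,
                    delay_sampler=lambda n: np.full(n, 6.0))
```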
Equations (27)--(32) imply that problem (27) is a discounted cost problem with multiple stopping times, or in other words, a _repeated MDP_, because there are multiple waiting times \(Z_{j,1},Z_{j,2},\ldots\), and each waiting time is a stopping time. Suppose instead that each waiting time were not a stopping time, i.e., the waiting time policy chose a real value independent of the Wiener process. Then, problem (27) would reduce to a discrete time discounted cost MDP [3]. This is because: (i) the state at each stage \(k\) is the estimation error at the \((k-1)\)th delivery time, \(\tilde{W}_{k}\) (when \(k=1\), \(\tilde{W}_{1}=w\) (32)); (ii) the action at each stage \(k\) is the waiting time for the \(k\)th sample, \(Z_{j,k}\); (iii) the state transition is given in (30); (iv) the cost function is defined in (31).
Note that the waiting times \(Z_{j,1},Z_{j,2},\ldots\) are correlated. Thus, although we have linearized problem (23) into the multiple-stopping-time problem (27), problem (27) still faces the curse of dimensionality.
### Analytical Solution to the Value Iteration (35) for the Multiple Stopping Times Problem (27)
In the special case where each waiting time \(Z_{j,1},Z_{j,2},\ldots\) is not a stopping time, the optimal policy and the optimal value function of the discounted MDP satisfy the Bellman equation [5, Chapter 9]. The advantage of the Bellman equation is that it turns the MDP with correlated waiting times into an optimization problem over a single waiting time, and thus helps reduce the complexity of the MDP. Suppose that we can propose a waiting time decision \(z(w),w\in\mathbb{R}\), such that the action value function of the stationary policy \(z,z,\ldots\) is the unique solution to the Bellman equation. Then, the policy \(z,z,\ldots\) is an optimal policy.
Similar to the MDP case above, we believe that the optimal policy and the optimal value function of our repeated MDP (27) still satisfy the Bellman equation5, because, apart from each waiting time being extended to a stopping time, our repeated MDP (27) has the same components as a discounted MDP. The Bellman equation for our repeated MDP (27) is defined as follows:
Footnote 5: This statement is technically true if we can show that our action space is a Borel space (we call \(B\) a Borel space if there exists a complete separable metric space \(R\) and a Borel subset \(\tilde{B}\in\mathcal{B}_{R}\) such that \(B\) is homeomorphic to \(\tilde{B}\)) [5, Chapter 9]. Examples of Borel spaces are \(\mathbb{R}\) and any real intervals. We leave the proof that our action space is a Borel space to future studies.
\[J(w)=TJ(w)\triangleq\inf_{\tau\in\mathfrak{M}}g(w;\tau)+\alpha\mathbb{E}\left[J( w+W_{\tau+Y})\right], \tag{33}\]
where \(\mathfrak{M}\) is the set of stopping times on the Wiener process \(W_{t}\) such that
\[\mathfrak{M}=\left\{\tau:\{\tau<t\}\in\mathcal{F}(t)^{+},\mathbb{E}\left[ \tau^{2}\right]<\infty\right\}, \tag{34}\]
where \(\mathcal{F}(t)^{+}=\cap_{s>t}\sigma(W_{r},r\in[0,s])\). In (33), \(w+W_{\tau+Y}\) is the next state of estimation error, after a stopping time \(\tau\) and a channel delay \(Y\).
However, problem (33) is not an optimal stopping problem, because the function \(J\) appears on both sides. To overcome this issue and solve (33) exactly, our method is to use the value iteration algorithm [4] to convert (33) into multiple standard optimal stopping problems that are solvable. Specifically, we construct a sequence of optimal stopping problems that approach problem (33), where in each optimal stopping problem the action value functions are well defined.
We define the value iteration algorithm associated with problem (33) as follows:
\[J_{0}(w) \triangleq 0,\] \[J_{n+1}(w) \triangleq TJ_{n}(w)=\inf_{\tau\in\mathfrak{M}}g(w;\tau)+\alpha \mathbb{E}\left[J_{n}(w+W_{\tau+Y})\right],\ n=0,1,2,\ldots \tag{35}\]
We denote by \(\tau_{1},\tau_{2},\ldots\) the optimal stopping times of problem (35) for \(n=1,2,\ldots\), respectively. Then, \(J_{n}(w)=T^{n}0(w)\) is the discounted integrated cost from the first delivery time (i.e., just after a successful transmission) until at most the \(n\)th delivery time, where reaching the \(n\)th transmission implies that the previous \(n-1\) transmissions failed. In addition, the waiting times for the \(n\) transmissions are \(\tau_{1},\tau_{2},\ldots,\tau_{n}\), respectively. Note that \(J(w)\) is the discounted cost over an infinite number of transmissions. Thus, our objective is to solve (35) exactly by determining \(\tau_{1},\tau_{2},\ldots\), and to show that \(T^{n}0(w)\to J(w)\) as \(n\to\infty\).
#### 5.3.1. Candidate Solutions to (35)
We speculate that each optimal stopping time \(\tau_{1},\tau_{2},\ldots\) for (35) is a _hitting time_, or in other words, _threshold type_, defined as follows:
\[\tau_{n}=\inf_{t\geq 0}\{t:|w+W_{t}|\geq v_{n}\},\ n=1,2,\ldots, \tag{36}\]
where \(w\), called the initial state, is the estimation error at the \((n-1)\)th delivery time \(D_{j,n-1}\) (\(n=1\) implies that the last transmission was successful, and the delivery time is \(D_{j-1,M_{j-1}}\)). Next, we aim to find the sequence of optimal thresholds \(v_{1},v_{2},\ldots\).
Let us define a function \(G_{n}(w)\) as follows:
\[G_{n}(w)=g(w;0)+\alpha\mathbb{E}\left[J_{n-1}(w+W_{Y})\right]. \tag{37}\]
Intuitively, \(G_{n}(w)\) is the action value function that chooses \(0\) waiting time at the first stage, incurs the cost \(g(w;0)\), and chooses the optimal waiting times at the remaining \(n-1\) stages. Since the
speculated optimal waiting time (36) is a hitting time, \(J_{n}(w)=G_{n}(w)\) if \(|w|\geq v_{n}\). In addition, we provide an alternative expression of \(g(w;\tau)\):
Lemma 5.: \[g(w;\tau)=\mathbb{E}\left[\int_{0}^{\tau}\left((w+W_{t})^{2}-mse_{opt}\right)dt+\mathbb{E} \left[Y\right](w+W_{\tau})^{2}\right]+\frac{1}{2}\mathbb{E}\left[Y^{2}\right]- \mathbb{E}\left[Y\right]mse_{opt}.\] (38)
_Moreover, if \(\tau\) is a hitting time with threshold \(v\) given the initial value \(w\), i.e., \(\tau=\inf_{t\geq 0}\{t:|w+W_{t}|\geq v\}\), then we have_
\[g(w;\tau)=g(w,v,mse_{opt}), \tag{39}\]
_where \(g(w,v,mse_{opt})\) is defined in (11)._
Proof.: See Appendix A.
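The hitting-time form (39) can also be checked numerically against the definition (31) by simulating discretized Wiener paths. The snippet below is purely illustrative (Euler discretization, constant delay, made-up parameters; here `beta` plays the role of \(mse_{opt}\) in (31)) and is not part of the paper's proofs.

```python
import numpy as np

rng = np.random.default_rng(2)

def g_closed_form(w, v, beta, EY, EY2):
    """g(w, v, beta) from eq. (11)."""
    return (0.5 * EY2 + EY * w**2 - EY * beta
            + max(v**4 - w**4, 0.0) / 6.0
            - (beta - EY) * max(v**2 - w**2, 0.0))

def g_simulated(w, v, beta, Y, dt=1e-2, n_paths=1000):
    """Direct estimate of g(w; tau) in (31) for tau = inf{t : |w + W_t| >= v},
    with a constant delay Y and an Euler-discretized Wiener path (step dt)."""
    total = 0.0
    for _ in range(n_paths):
        x, t, integral = w, 0.0, 0.0
        while abs(x) < v:                      # waiting phase, until the hit
            integral += x**2 * dt
            x += np.sqrt(dt) * rng.standard_normal()
            t += dt
        for _ in range(int(Y / dt)):           # transmission phase of length Y
            integral += x**2 * dt
            x += np.sqrt(dt) * rng.standard_normal()
        total += integral - beta * (t + Y)
    return total / n_paths

Y = 6.0
print(g_closed_form(0.0, 4.0, 11.0, EY=Y, EY2=Y**2))   # constant delay: E[Y]=Y, E[Y^2]=Y^2
print(g_simulated(0.0, 4.0, 11.0, Y))   # should agree up to Monte Carlo/discretization error
```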
Our problem (35) can then be augmented into a sequence of standard optimal stopping problems [25, Chapter 1]:
\[\tilde{J}_{n}(w,q)=\inf_{\tau\in\mathfrak{M}}\mathbb{E}\left[\tilde{ G}_{n}(w+W_{\tau},q+Q_{\tau})\right],\text{ for all }w,q\in\mathbb{R}, \tag{40}\]
where
\[\tilde{G}_{n}(w+W_{t},q+Q_{t}) \triangleq\tilde{g}(w+W_{t},q+Q_{t})+\alpha\mathbb{E}\left[J_{n-1 }(w+W_{t}+W_{Y})\right], \tag{41}\] \[\tilde{g}(w+W_{t},q+Q_{t}) \triangleq q+Q_{t}+\mathbb{E}\left[Y\right](w+W_{t})^{2}+\frac{1}{2} \mathbb{E}\left[Y^{2}\right]-\mathbb{E}\left[Y\right]mse_{opt},\] (42) \[Q_{t} \triangleq\int_{0}^{t}\left((w+W_{r})^{2}-mse_{opt}\right)dr. \tag{43}\]
By Lemma 5, for any \(\tau\), we have \(g(w;\tau)=\mathbb{E}\left[\tilde{g}(w+W_{\tau},Q_{\tau})\right]=\mathbb{E}\left[ \tilde{g}(w+W_{\tau},q+Q_{\tau})\right]-q\).
According to [20, Chapter 10] and [25, Section 8], the free boundary method implies that the optimal objective function \(\tilde{J}_{n}(w,q)\) should satisfy
\[\frac{1}{2}\frac{\partial^{2}}{\partial w^{2}}\tilde{J}_{n}(w,q)+ w^{2}-mse_{opt}=0,w\in(-v_{n},v_{n}), \tag{44}\] \[\tilde{J}_{n}(w,q)=\tilde{G}_{n}(w,q),w\in(-\infty,-v_{n}]\cup[v_ {n},\infty),\] (45) \[\frac{\partial}{\partial w}\tilde{J}_{n}(w,q)\Big{|}_{w=\pm v_{n }}=\frac{\partial}{\partial w}\tilde{G}_{n}(w,q)\Big{|}_{w=\pm v_{n}}. \tag{46}\]
The first equation (44) states that in the continuation set \((-v_{n},v_{n})\), the infinitesimal operator applied to \(\tilde{J}_{n}(w,q)\) vanishes. The second equation (45) states that on the stopping set \((-\infty,-v_{n}]\cup[v_{n},\infty)\), the stopping time \(\tau_{n}\) is zero. The third equation (46) requires \(\tilde{J}_{n}(w,q)\) to be continuously differentiable at the boundary points \(w=\pm v_{n}\). These three equations then simplify to:
\[\frac{1}{2}J_{n}^{\prime\prime}(w)+w^{2}-mse_{opt}=0,w\in(-v_{n},v _{n}), \tag{47}\] \[J_{n}(w)=G_{n}(w),w\in(-\infty,-v_{n}]\cup[v_{n},\infty),\] (48) \[J_{n}^{\prime}(w)\Big{|}_{w=\pm v_{n}}=G_{n}^{\prime}(w)\Big{|} _{w=\pm v_{n}}. \tag{49}\]
By (47)\(-\)(49), \(v_{n}\) is the positive solution to \(J_{n}^{\prime}(v_{n}^{-})=G_{n}^{\prime}(v_{n})\). Combined with Lemma 5, we provide the following results for deriving the sequence \(v_{1},v_{2},\ldots\):
**Lemma 6**: _For all \(n=1,2,\ldots\) we have that:_
_(a) If \(|w|<v_{n}\), then_
\[J_{n}^{\prime}(w)=\frac{\partial}{\partial w}g(w,v_{n},\,mse_{opt})=-\frac{2}{3} w^{3}+2\,mse_{opt}w. \tag{50}\]
_If \(|w|>v_{n}\), then_
\[J_{n}^{\prime}(w)=G_{n}^{\prime}(w)=\frac{\partial}{\partial w}g(w,0,\,mse_{ opt})+\alpha\mathbb{E}\left[J_{n-1}^{\prime}(w+W_{Y})\right]=2\mathbb{E}\left[Y \right]w+\alpha\mathbb{E}\left[J_{n-1}^{\prime}(w+W_{Y})\right]. \tag{51}\]
_The optimal threshold \(v_{n}\) is the positive solution to_
\[G_{n}^{\prime}(w)+\frac{2}{3}w^{3}-2\,mse_{opt}w=0. \tag{52}\]
_Moreover, \(G_{n}^{\prime\prime}(w)\) and \(G_{n}^{\prime\prime\prime}(w)\) are continuous._
_(b) \(G_{n}^{\prime}(0)=0\), and \(G_{n}^{\prime\prime}(w)+2w^{2}-2mse_{opt}\geq 0\) for all \(w\in[v_{n},\infty)\)._
_(c) \(G_{n}^{\prime\prime\prime}(w)\geq 0\), and \(G_{n}^{\prime\prime\prime}(w)+4w\geq 0\) for all \(w\geq 0\)._
_(d) The sequence of thresholds \(v_{1},v_{2},\ldots\) is bounded with \(v_{n}\leq\sqrt{3\,mse_{opt}}\) and is decreasing, thus converges._
Proof.: See Appendix D.
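As a quick consistency check of the first identity in (50): integrating (47) once, and using the symmetry \(J_{n}(w)=J_{n}(-w)\) (so that \(J_{n}^{\prime}(0)=0\); this symmetry is a natural consequence of the symmetric dynamics and thresholds, though it is not spelled out above), gives, for \(w\in(-v_{n},v_{n})\),

\[J_{n}^{\prime\prime}(w)=2\,mse_{opt}-2w^{2}\quad\Longrightarrow\quad J_{n}^{\prime}(w)=\int_{0}^{w}\left(2\,mse_{opt}-2u^{2}\right)du=2\,mse_{opt}\,w-\frac{2}{3}w^{3},\]

and evaluating this expression at \(w=v_{n}^{-}\) and matching it with \(G_{n}^{\prime}(v_{n})\) via (49) recovers the root equation (52).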
#### 5.3.2. Optimality of the Candidate Solution to (35)
We finally validate that the hitting time (36) is the optimal solution. Combined with Lemma 6, we have the following result:
**Theorem 5**: _(a) An optimal sequence of waiting times \(\tau_{1},\tau_{2},\ldots\) for (35) satisfies (36), and each threshold \(v_{n}\) is the positive root of (52), where \(G_{0}^{\prime}(w)=0\), \(G_{n}^{\prime}(w)\) is updated by (51), \(J_{0}^{\prime}(w)=0\), and \(J_{n}^{\prime}(w)\) is updated by (50) and (51)._
_(b) The function \(G_{n}^{\prime}(w)+\frac{2}{3}w^{3}-2\,mse_{opt}w\) in (52) is convex for \(w\geq 0\) and strongly convex for \(w>0\). Therefore, the positive root of \(v_{n}\) is unique. In addition, \(v_{n}\) decreases and thus converges._
Theorem 5 (b) is directly shown by Lemma 6. It remains to show that the exact solution provided in Theorem 5 (a) is optimal to the value iteration problem (35).
Proof of Theorem 5 (a).: We first establish the following two results:
**Lemma 7**: _We have \(\tilde{J}_{n}(w,q)\leq\tilde{G}_{n}(w,q)\) for any \((w,q)\in\mathbb{R}^{2}\) and the iteration number \(n=1,2,\ldots\)._
See Appendix E.
**Definition 1**: _A function \(\tilde{f}(w,q)\) is excessive if \(\mathbb{E}[\tilde{f}(w+W_{t},q+Q_{t})]\leq\tilde{f}(w,q)\) for all \(t\geq 0\) and \((w,q)\in\mathbb{R}^{2}\)._
**Lemma 8**: _The negative value function \(-\tilde{J}_{n}(w,q)\) is excessive for any \((w,q)\in\mathbb{R}^{2}\) and the iteration number \(n=1,2,\ldots\)._
See Appendix F.
By Lemma 7 and Lemma 8, using the Corollary to Theorem 1 in [27, Section 3.3.1], the stopping times \(\tau_{1},\tau_{2},\ldots\) in (36) are optimal for (40), and thus optimal for (35). This completes the proof of Theorem 5 (a).
### Linear Convergence of Value Iteration to the Repeated MDP
In this section, we will show that the optimal value functions \(J_{n}\) of the value iteration algorithm (35) converge linearly to the optimal value function \(J\) of our problem in (27). We have the following result:
Lemma 9 ().: _(a) Suppose that the continuation set of \(\tau\) is bounded by \(\tilde{b}\), i.e., if \(w^{2}\geq\tilde{b}\), then \(\tau=0\). Then, \(\frac{\mathbb{E}[u(w+W_{\tau+Y})]}{u(w)}\leq\frac{\rho}{\alpha}\)._
_(b) The function \(J_{n}(w)=T^{n}0(w)\) satisfies the contraction mapping property, i.e., \(\|T^{n}0\|<\infty\), \(\|T^{\infty}0\|<\infty\), and \(\|T^{n+1}0-T^{n}0\|\leq\rho^{n}\|T0\|\). The Bellman operator \(T\) is defined in (33)._
_(c) \(J(w)=T^{\infty}0(w)\) is the unique solution to the Bellman equation \(J=TJ\) (33) (with \(\|J\|<\infty\)). Further, \(\|T^{n}0-J\|\leq\rho\|T^{n-1}0-J\|\)._
Proof.: See Appendix G.
By Lemma 9(c), \(J\) is the unique solution to the Bellman equation \(J=TJ\). Therefore, \(J\) is the optimal value function for problem (27). Due to the linear convergence of \(J_{n}\) to \(J\), Lemma 6(d) implies that the optimal stopping time for (27) is also a hitting time, with optimal threshold \(v=\lim_{n\to\infty}v_{n}\). This completes the proof of Theorem 2. In addition, Lemma 3 implies that \(\operatorname{mse}_{\mathrm{opt}}\) is the solution to \(\mathbb{E}[J(W_{Y})]=0\). These statements, combined with Theorem 5, complete the solution to problem (3).
### Discussion
In this section, we compare our proof and technical contributions with some related works and discuss some interesting future directions.
#### 5.5.1. Special Case \(1\): Reliable Channel [30]
In the special case of a reliable channel (\(\alpha=0\)), \(M_{j}=1\), and problem (27) reduces to:
\[J(w)\triangleq\inf_{\tau\in\mathfrak{M}}g(w;\tau). \tag{53}\]
The problem (27) for general \(\alpha\geq 0\) is a repeated MDP, because we need to determine multiple correlated waiting times in an epoch, and each waiting time is a stopping time. However, when \(\alpha=0\), the problem (53) reduces to an MDP, or in other words, an optimal stopping problem with a single waiting time. Note that solving (53) is still nontrivial. We speculate that the optimal waiting time \(\tau\) is a hitting time. Using Lemma 6 (a), the optimal threshold \(v\) is the positive root of
\[\frac{2}{3}v^{3}-2(\operatorname{mse}_{\mathrm{opt}}-\mathbb{E}[Y])v=0, \tag{54}\]
which is \(v=\sqrt{3(\operatorname{mse}_{\mathrm{opt}}-\mathbb{E}[Y])}\). By Theorem 5, the speculated waiting time is optimal. This implies the final result Corollary 1 ([30, Theorem 1]).
Similar studies with a reliable channel also appear, e.g., in [21, 28, 32]. The key insight there is to solve an optimal stopping problem like (53). Our study with an unreliable channel is different from these studies, because we need to solve a problem with multiple correlated stopping times (27). To do so, we analytically solve a value iteration algorithm (35) that comprises a sequence of optimal stopping problems. Compared to (53), the optimal stopping problem at each iteration \(n\) is more challenging to solve, because the value function is a more complicated expression that involves a sum over \(n\) correlated stages.
#### 5.5.2. Special Case \(2\): Signal-agnostic Sampling [24]
When the sampling time is independent of the Wiener process, each waiting time takes a nonnegative value based on the timing history information, but not the evolution of the Wiener process. The previous problem (27) is reduced
from a discounted and repeated MDP to a discounted MDP. The study in [24] has shown that the optimal policy is a threshold policy on the age (i.e., MMSE).
Since the optimal signal-aware sampling policy is different from the optimal signal-agnostic sampling policy, the proof for solving our problem (27) differs from that of the discounted MDP in [24]. The authors in [24] solve their problem as follows: (i) they first propose a threshold-based waiting decision \(\mu(\delta)=\max(\text{age}_{\text{opt}}-\delta-\mathbb{E}[Y]/(1-\alpha),0)\), where \(\delta\) is the age state, and \(\text{age}_{\text{opt}}\) is the optimal average age; (ii) they then show that \(\mu\) and its value function are the unique solution to the Bellman equation: \(J_{\text{agnostic}}(\delta)=\inf_{z\geq 0}g_{\text{agnostic}}(\delta,z)+ \mathbb{E}[J_{\text{agnostic}}(\delta+z+Y)]\), where \(g_{\text{agnostic}}(\delta,z)\triangleq\mathbb{E}[\int_{\delta}^{\delta+z+Y }(t-\text{age}_{\text{opt}})dt]\).
However, these proof ideas cannot be applied to our case, due to the following challenges that do not appear in [24]: (i) Since each waiting time is a stopping time, solving (27) faces the curse of dimensionality. For example, when \(\alpha=0\), (27) reduces to (53), but (53) is still an optimal stopping problem; in the signal-agnostic case, (53) reduces to a convex optimization problem [28, Lemma 7] and is thus much easier to solve. (ii) In [24], the Bellman equation is solvable: since \(\mu\) is of threshold type on the age, it is optimal to wait (\(\mu>0\)) only when the last transmission was successful. Thus, the optimal value function is a closed-form expression: \(J_{\text{agnostic}}(\delta)=\mathbb{E}\left[\int_{\delta}^{\delta+\mu(\delta )+Y^{\prime}}t\,dt-\text{age}_{\text{opt}}(\mu(\delta)+Y^{\prime})\right]\), where \(Y^{\prime}\) is given in Theorem 4. Since the Bellman equation is a minimization over nonnegative values, solving it amounts to comparing a few closed-form expressions. In our case, however, such a comparison is hard, because the optimal value function \(J(w)\) is not in closed form. This is due to the randomness of the Wiener process, and to the fact that the sampler may need to wait before every sample.
#### 5.5.3. Future Direction \(1\): Non i.i.d. Channel Failure
When the channel failure process is extended from i.i.d. to Markovian, we believe that the statements in Section 5.1 and Section 5.2 remain correct. However, there is a key difference in Section 5.3: the problem (27), (32) becomes
\[J(w)=\inf_{\pi=Z_{j,1},Z_{j,2},\ldots}g(w;Z_{j,1})+(1-\alpha^{ \prime})\mathbb{E}\left[g(\tilde{W}_{2};Z_{j,2})|\tilde{W}_{1}=w\right]+\sum_{ k=3}^{\infty}\alpha^{k-1}\mathbb{E}\left[g(\tilde{W}_{k};Z_{j,k})|\tilde{W}_{1}=w \right], \tag{55}\]
where \(\alpha\) is the self transition probability from \(OFF\) state to \(OFF\) state, and \(\alpha^{\prime}\) is the self transition probability from \(ON\) state to \(ON\) state. In the non i.i.d. case where \(1-\alpha^{\prime}\neq\alpha\), Problem (55) has a changing discount factor. Thus, the Bellman equation and the value iteration algorithm are not well-defined, making this new problem challenging to solve.
#### 5.5.4. Future Direction \(2\): Non i.i.d. Transmission Delay
Suppose that we consider a Markovian transmission delay. Then, the waiting time should depend not only on the evolution of the Wiener process, but also on the last transmission delay, because the last transmission delay affects the next transmission delay. Therefore, the value iteration (35) should be extended as:
\[J_{n+1,\text{markov}}(w,y)=\inf_{\tau\in\mathfrak{M}}g(w;\tau)+ \alpha\mathbb{E}\left[J_{n,\text{markov}}(w+W_{\tau+Y},Y)\right],\ n=0,1,2,\ldots, \tag{56}\]
where \(y\) is the last transmission delay, and the distribution of \(Y\) depends on \(y\). Due to space limitations, we leave this extended problem to future work.
## 6. Conclusion
In this paper, we provide a sampling policy that minimizes the mean-squared estimation error, where the sampler generates samples at the source and transmits them to the remote estimator over an unreliable channel. We show that the optimal sampling policy is a threshold policy on the instantaneous estimation error, and that the threshold can be computed efficiently. The curse of dimensionality that originates from the randomness of the Wiener process, the channel conditions, and the channel delay is thereby circumvented. We believe that the proof of our main results provides insight into how to solve problems with discounting and multiple stopping times.
## Acknowledgment
This work has been supported in part by NSF grants NSF AI Institute (AI-EDGE) CNS-2112471, CNS-2106933, CNS-2106932, CNS-2312836, CNS-1955535, CNS-1901057, and CNS-2239677, by Army Research Office under Grant W911NF-21-1-0244, and was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-23-2-0225. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
We thank Tasmeen Zaman Ornee, Md Kamran Chowdhury Shisher, and Yining Li for their valuable suggestions for this paper.
|
2302.05253 | Cloud on-demand emulation of quantum dynamics with tensor networks | We introduce a tensor network based emulator, simulating a programmable
analog quantum processing unit (QPU). The software package is fully integrated
in a cloud platform providing a common interface for executing jobs on a HPC
cluster as well as dispatching them to a QPU device. We also present typical
emulation use cases in the context of Neutral Atom Quantum Processors, such as
evaluating the quality of a state preparation pulse sequence, and solving
Maximum Independent Set problems by applying a parallel sweep over a set of
input pulse parameter values, for systems composed of a large number of qubits. | Kemal Bidzhiev, Aleksander Wennersteen, Mourad Beji, Mario Dagrada, Mauro D'Arcangelo, Sebastian Grijalva, Anne-Claire Le Henaff, Anton Quelle, Alvin Sashala Naik | 2023-02-10T14:08:05Z | http://arxiv.org/abs/2302.05253v1 | # Cloud on-demand emulation of quantum dynamics with tensor networks
###### Abstract
We introduce a tensor network based emulator, simulating a programmable analog quantum processing unit (QPU). The software package is fully integrated in a cloud platform providing a common interface for executing jobs on a HPC cluster as well as dispatching them to a QPU device. We also present typical emulation use cases in the context of Neutral Atom Quantum Processors, such as evaluating the quality of a state preparation pulse sequence, and solving Maximum Independent Set problems by applying a parallel sweep over a set of input pulse parameter values, for systems composed of a large number of qubits.
## I Introduction
Quantum technology proposes leveraging the laws of quantum mechanics to process information in a framework that can enable solving hard computational problems more efficiently than classical alternatives. A new scientific age of quantum information has been precipitated by this idea, and constructing the first performant quantum processing units (QPUs) has become a key goal in recent years. As QPUs gain in effectiveness and reliability, experimental demonstrations have come forward solving specific computational tasks which may show an advantage over classical methods, such as sampling problems [1, 2, 3, 4], and quantum simulation implementations [5, 6]. Platforms for QPUs have been proposed based on diverse physical architectures, including neutral atoms, superconducting circuits, trapped ions and photons.
As well as continuing the experimental development of QPUs, it is becoming increasingly important to simulate the behavior of particular physical QPU architectures, referred to as _emulation_. Emulators encode realistic physical constraints including hardware-specific time dynamics and noise effects. Furthermore, they can inform about the expected performance of the QPU and benchmarking [7]. This allows the user to circumvent overhead by performing test jobs e.g. on a High-Performance Computing (HPC) backend before forwarding the full job on to the QPU. Moreover, integrating QPU and emulator backends through cloud access, offering the choice of backend to the user, allows efficient workflows as part of a full-stack quantum computer. The hybrid workflow would send suitable routines to the quantum processor, while executing larger computational tasks on the emulator backend. Systems supporting such hybrid workflows with emulated quantum devices already provide valuable services to researchers [8, 9, 10, 11, 12].
In most cases, the simulation of quantum systems has a complexity that grows exponentially with the system size. A significant instrument which has emerged to tackle this challenge is the tensor network [13], providing a powerful structure to represent complex systems efficiently. The framework of tensor networks has allowed the efficient simulation and study of quantum systems, most importantly for our purposes the numerical study of the time evolution of quantum states which are ground states of gapped local Hamiltonians. Further, tensor networks have found uses in the identification of new phases of matter [14; 15], as a structure for processing data [16; 17], and in the numerical benchmarking of state-of-the-art QPU experiments with large qubit numbers [5; 18].
The paper is structured as follows. In Section II we review the elementary setup for the numerical simulation of quantum dynamics and introduce relevant tensor-network methods. Then, in Section III we describe the framework for integrating HPC-supported numerical algorithms into the cloud. We demonstrate applications of the emulator in IV, simulating a neutral atom quantum processor in three examples: producing 1D \(\mathbb{Z}_{n}\)-ordered states, preparing 2D antiferromagnetic states and per
Figure 1: Role of emulation in a workflow for large quantum systems. Pulse-level design tools are complemented with tensor network representations of the corresponding quantum states. Algorithms performing time evolution take into account Hamiltonian dynamics as well as realistic hardware conditions, which are simulated on a HPC backend. These can be used to prepare optimized instructions on a QPU. We present a cloud-based platform integrating these tasks on-demand.
forming a parallel sweep of pulse parameters to search for Maximum Independent Sets on a particular graph. We conclude with comments about extensions to the system and a discussion about emulated environments in heterogeneous computing.
## II Numerical simulation of quantum dynamics
### Time-evolution of a Quantum System
_Qubits_, or 2-level quantum systems, are the fundamental building blocks of the quantum systems that we will study. A _quantum state_ of an \(N\)-qubit system is an element of a \(2^{N}\)-dimensional Hilbert space. It can be represented by a complex vector \(|\psi\rangle=\sum_{x}c_{x}|x\rangle\) of size \(2^{N}\), where we choose \(\{|x\rangle\}\) to be an orthonormal basis of the space. We concentrate on the orthonormal basis built from the eigenstates of the \(\hat{\sigma}^{z}\) operator on each site, which is called the _computational basis_. Its elements can be written as \(|s_{1}s_{2}\cdots s_{N}\rangle\), where each \(s_{i}\in\{0,1\}\). The strings \(s_{1}s_{2}\cdots s_{N}\) are often called _bitstrings_. A general quantum state is thus written as:
\[|\psi\rangle=\sum_{s_{1}\cdots s_{N}}c_{s_{1}\cdots s_{N}}|s_{1}\cdots s_{N}\rangle, \tag{1}\]
where each \(c_{s_{1}\cdots s_{N}}\) is a complex number, the probability amplitude associated with the bitstring \(s_{1}s_{2}\cdots s_{N}\) representing a possible measurement outcome. The QPU takes an initial state and evolves it through a series of gates and/or analog pulses. To extract information from the final state, it performs a projective measurement with respect to a certain measurement basis, repeating the cycle until a certain condition is met. Of these operations, the most computationally expensive is computing the final time-evolved state, which we describe next.
The time evolution of a quantum state is governed by the _Schrodinger Equation_
\[i\frac{d|\psi\rangle}{dt}=\hat{H}(t)|\psi\rangle, \tag{2}\]
where we consider a particular time-dependent _Hamiltonian_ operator \(\hat{H}(t)\) which is a sum of \(J\) time-dependent _control_ terms and \(N(N-1)/2\) two-body _interaction_ terms \(V_{ij}=V(\mathbf{r}_{ij})\) that depend on the relative positions \(\mathbf{r}_{ij}\) of each pair of qubits \(i,j\) in \(\{1,\ldots,N\}\):
\[\hat{H}(t)=\sum_{j=1}^{J}\hat{H}_{\text{ctrl}}^{(j)}(t)+\sum_{i>j}V_{ij}\hat{ P}_{i}\hat{P}_{j}. \tag{3}\]
with \(\hat{P}_{i}\) a local operator constructed with Pauli matrices1 and \(\hat{H}_{\text{ctrl}}^{(j)}(t)\) the \(j\)-th control term, a time-dependent function representing the action of the experimental parameters affecting the system, usually constructed using waveform generators and applied through laser or microwave fields.
Footnote 1: The Pauli matrices are \(\hat{\sigma}^{x}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\hat{\sigma}^{y}=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\text{ and }\hat{\sigma}^{z}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\). We represent their action on the \(i^{th}\) site of the quantum system by a Kronecker product of matrices: \(\hat{P}_{i}=\mathbb{I}\otimes\cdots\otimes\hat{P}\otimes\cdots\otimes\mathbb{I}\), for \(\hat{P}\in\{\mathbb{I},\hat{\sigma}^{x},\hat{\sigma}^{y},\hat{\sigma}^{z}\}\) occupying the \(i\)-th entry. Composite operators such as \(\hat{P}_{i}\hat{P}_{j}\) are then formed via the matrix product.
The solution of the differential equation (2) can be obtained by a variety of methods [19]: one can approximate the time-evolution operator by \(\mathcal{U}(t)\approx\prod\hat{U}(\delta t)\), with \(\hat{U}(\delta t)=\exp(-i\hat{H}\delta t)\) for small time intervals \(\delta t\), so that \(\hat{H}\) can be considered time independent during \(\delta t\). However, computing the exponential of an operator can be prohibitive for large systems [20]. One can also expand \(U(\delta t)\) into a product of simpler terms up to a desired degree of accuracy, depending on the form of \(\hat{H}\)[21]. In addition, one can try to numerically solve the full differential equation, such as by using Runge-Kutta methods [22]. Another family of methods consists of implementing an algorithm simulating the time evolution in a quantum computer, for example, by decomposing the exponential into evolution blocks using the Trotter product formulas [23; 24; 25], or by using tools from quantum machine learning [26; 27]. One could also try to _learn_ the final state vector using tools from machine learning [28], or apply Monte Carlo techniques [29; 30; 31].
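To make the first of these approaches concrete, here is a minimal sketch for a toy two-qubit system: the state is propagated with \(\hat{U}(\delta t)=\exp(-i\hat{H}\delta t)\) over small steps, treating \(\hat{H}(t)\) as constant during each step. The control waveform, the interaction strength, and the step size are all made-up illustrative values, not parameters of any specific hardware or of the emulator.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and the two-qubit identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def H(t):
    """Toy time-dependent Hamiltonian: a global drive Omega(t) on sigma^x plus a
    fixed two-body sigma^z sigma^z interaction (all values are arbitrary)."""
    omega = np.sin(np.pi * t)                       # illustrative control waveform
    drive = omega * (np.kron(sx, I2) + np.kron(I2, sx))
    interaction = 1.0 * np.kron(sz, sz)
    return drive + interaction

def evolve(psi0, t_final, dt=1e-3):
    """Piecewise-constant propagation psi -> exp(-i H(t) dt) psi."""
    psi, t = psi0.astype(complex), 0.0
    while t < t_final:
        psi = expm(-1j * H(t) * dt) @ psi
        t += dt
    return psi

psi0 = np.zeros(4, dtype=complex); psi0[0] = 1.0    # |00>
psi_final = evolve(psi0, t_final=1.0)
```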
Finally, a very successful approach is given by tensor network techniques, which is the approach we choose to implement as part of our emulation framework. For readers unfamiliar with tensor networks, we briefly review them below. An in-depth detailed description can be found for example in [32; 33; 16; 13].
### Representing quantum states and their evolution using tensor networks
Simulating the time evolution of non-equilibrium interacting systems is challenging due to the growth of the entanglement entropy with time [34]. Tensor network operations are a matter of linear algebra, and the size of the matrices involved scales with the entanglement of the system under consideration. Different types of tensor networks exhibit different scaling, depending on the geometry and dimension of the physical system [35; 36; 37; 38]; however, for all of these methods, the matrices involved eventually become prohibitively large. Furthermore, only some existing time evolution algorithms can efficiently encode the dynamics of long-range Hamiltonians [39; 40], which are necessary to simulate two-body interactions (3) whose Hamiltonians contain terms of the form \(V_{ij}\sim 1/|\mathbf{r}_{ij}|^{\alpha}\).
In order to simulate a quantum many-body system (1), we encode the wave function into a _matrix-product state_ (MPS) [13; 41]. MPS allow efficient representations of the \(N\)th-order tensor \(c_{s_{1}\cdots s_{N}}\) as a product of \(N\) smaller tensors \(A[\sigma]^{s_{\sigma}}_{i_{\sigma-1}i_{\sigma}}\):
\[|\psi\rangle=\sum_{\{s\}}\sum_{\{i\}}A[1]^{s_{1}}_{i_{1}}A[2]^{s_{2}}_{i_{1}i_{2 }}\cdots A[N]^{s_{N}}_{i_{N-1}}|s_{1}\cdots s_{N}\rangle, \tag{4}\]
where each tensor index \(i_{\sigma}\) runs from 1 to \(\chi_{\sigma}\). The bond dimension \(\chi_{\sigma}\) controls the size of each \(A[\sigma]\), which determines the computational complexity of the simulation [34].
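For illustration, a dense state vector can be cast into the form (4) by a sweep of successive singular value decompositions, as in the minimal sketch below. It keeps all singular values above a numerical cutoff and performs no further truncation of the bond dimension, which a production emulator would of course add; the GHZ example and the cutoff value are arbitrary choices.

```python
import numpy as np

def to_mps(state, n_qubits):
    """Decompose a 2**n state vector into MPS tensors A[k] of shape
    (chi_{k-1}, 2, chi_k) by sweeping SVDs from left to right."""
    tensors = []
    psi = state.reshape(1, -1)          # virtual bond of dimension 1 on the left
    for _ in range(n_qubits - 1):
        chi_left = psi.shape[0]
        psi = psi.reshape(chi_left * 2, -1)
        U, S, Vh = np.linalg.svd(psi, full_matrices=False)
        keep = S > 1e-12                # discard numerically-zero singular values
        U, S, Vh = U[:, keep], S[keep], Vh[keep]
        tensors.append(U.reshape(chi_left, 2, -1))
        psi = np.diag(S) @ Vh           # carry the remaining weight to the right
    tensors.append(psi.reshape(psi.shape[0], 2, 1))
    return tensors

# example: a 6-qubit GHZ state (|000000> + |111111>)/sqrt(2) has bond dimension 2
n = 6
ghz = np.zeros(2**n); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
mps = to_mps(ghz, n)
print([A.shape for A in mps])
```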
One of the most successful algorithms able to treat long-range interactions, while maintaining a sufficiently
small bond dimension, is the Time-Dependent Variational Principle (TDVP) [42]. TDVP constrains the time evolution to the MPS manifold \(\mathcal{M}(\psi[A])\) with a given \(\chi\) by projecting the right hand side of the Schrodinger equation onto the tangent space \(T\mathcal{M}(\psi[A])\) :
\[\frac{d|\psi[A]\rangle}{dt}=-i\hat{\Pi}_{T\mathcal{M}(\psi[A])}\hat{H}|\psi\rangle. \tag{5}\]
The resulting equations (5) are nonlinear and couple all degrees of freedom in the MPS. Approaches to solving (5) have different ways of controlling accuracy and convergence. In the emulator presented here, we implement 2-site TDVP to deal with the core numerical simulations. Details of the implementation are given in Appendix B.
## III Platform architecture
In this section we discuss the full architecture of our platform. We describe how the HPC cluster is integrated with cloud services to provide our quantum device emulation. The emulator, which we will refer to as EMU-TN, includes the constraints of a particular QPU as detailed in Section II.1, implementing the tensor network algorithms of Section II.2 as the main numerical backend. Below, we describe the input to the emulator, the preprocessing, as well as post-processing before an output is returned. We finally discuss the cloud infrastructure, including orchestration, scheduling and dispatching.
### Encoding the dynamics information
Our platform takes as input an encoded abstract representation of the control parameters \(\hat{H}_{\text{ctrl}}(t)\). It then performs the required quantum dynamic evolution, applies the measurements to the final state, and returns the readout data to the design tool.
EMU-TN includes a JSON parser, which is important for standardizing the information sent to the cluster. We take as initial expression the Hamiltonian in eq. (3), with a set of \(J\) control fields and the positions \(\{\mathbf{r}_{ij}\}\) of the qubits in the register. Each control field \(\hat{H}^{(j)}_{\text{ctrl}}(t)\) acts on a subset \(I^{(j)}\) of qubits in the register,
\[\hat{H}^{(j)}_{\text{ctrl}}(t)=\Omega^{(j)}(t)\sum_{i\in I^{(j)}}\hat{P}^{(j )}_{i}. \tag{6}\]
For example, if the \(j\)-th control is a global field of magnitude \(\Omega^{(j)}(t)\) along the \(z\)-axis, then \(\hat{P}^{(j)}_{i}=\hat{\sigma}^{z}_{i}\) and \(\hat{H}^{(j)}_{\text{ctrl}}=\Omega^{(j)}(t)\sum_{i\in\{1\ldots N\}}\hat{\sigma}^{z}_{i}\). To parse this information, a design tool has to specify \(\Omega^{(j)}(t)\) as either an array of numbers, the initial and final values plus a total duration, or a generating function. Additionally, locality has to be specified with the indices \(I^{(j)}\).
Finally, one needs to specify the initial state preparation and the measurement of the state. The initial state is usually one that is experimentally convenient to produce, such as a _product state_, typically indicated by a bitstring. Platforms such as neutral atom quantum processors further allow choosing the positions of the qubits, determining the interaction strength between them. More generically, initial state conditions can be provided as labels produced by the design tool. The measurement protocol can consist of a basis (or a set of bases) in which the measurements take place and a number of repetitions of the experiment, in order to generate statistics to estimate observables. This can be specified through a number of _runs_ of the job submission, together with a label for the desired basis, as long as it is compatible with the emulated QPU device.
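As an illustration of the kind of payload the parser consumes, a design tool could serialize this information along the following lines. All field names here are hypothetical and chosen only for readability; they are not the actual EMU-TN schema.

```
import json

# Hypothetical job payload: one global control field Omega^(j)(t), the register
# geometry, the initial product state and the measurement specification.
job_payload = {
    "register": {"positions_um": [[0, 0], [7, 0], [14, 0]]},
    "controls": [
        {
            "operator": "sigma_z",           # P^(j)
            "targets": [0, 1, 2],            # indices I^(j)
            "amplitude": {                   # Omega^(j)(t) given as sampled values
                "times_ns": [0, 500, 1000],
                "values_rad_per_us": [0.0, 2.0, 0.0],
            },
        }
    ],
    "initial_state": "000",                  # product state as a bitstring
    "measurement": {"basis": "ground-rydberg", "runs": 1000},
}
encoded = json.dumps(job_payload)
```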
There is an increasing list of software packages that offer tools for the design and customization of pulse sequences for different types of qubit architectures and quantum processing tasks [43; 44; 45; 46; 47; 48]. These have been used in the context of quantum algorithm design, quantum optimal control, and research into the dynamical properties of many-body quantum systems. A great majority of them directly or indirectly use a state-vector representation of the quantum state, and are available as open source tools for personal use. EMU-TN can be included as an alternative solver, thereby extending the types of tasks that can be handled by these design tools.
### Cloud infrastructure
We present how the cloud platform dispatches jobs to the classical emulation and quantum computing backends.
#### iii.2.1 Orchestration and scheduling
The infrastructure, illustrated in Fig. 2, is split between public cloud services, hosted on OVHCloud and orchestrated through Kubernetes, and an HPC cluster hosted in a private data center running a Slurm scheduler. We describe below their integration for providing quantum computing as a service.
Kubernetes [49] is a container orchestration system centered around cloud computing, public, private, or hybrid. It simplifies the deployment and operation of containerised applications on a set of resources, as well as the scaling of said resources. Slurm [50], on the other hand, was created with HPC workloads in mind. It provides prioritization aware scheduling, job queues, topology aware job allocations, reservations and backfill policies. These features led to software like Slurm often being chosen to manage HPC clusters over newer solutions bespoke for other compute paradigms. Slurm submits a general Bash script to run its job, which can also include a sequence of containerised applications.
It is possible to run the same workloads on a Kubernetes pod, either as a long-running service responsible for receiving and executing a job or scheduled using Kubernetes-compatible schedulers. However, a downside of these approaches is that it would be challenging to integrate the on-premise HPC cluster with the Kubernetes scheduler. Not scheduling the jobs on the cluster, which would mean just running a job execution service in a Kubernetes pod, would put limitations both on how effectively the resources in the cluster are used and, ultimately, on the performance of the HPC applications.
As cloud computing and HPC become increasingly mixed, extensions between Kubernetes and Slurm have
recently seen a lot of progress; see for example [51; 52; 53]. However, these approaches have generally centered around submitting Kubernetes pods, i.e. containers, to Slurm systems. This involves relying on Kubernetes to schedule jobs. The Kubernetes scheduler does not provide out-of-the-box support to work with often changing and highly heterogeneous environments consisting of several QPU types, CPUs and GPUs. In this work we use a custom scheduler for our pulse sequences. Thus, rather than relying on mechanisms similar to those proposed in [51; 52; 53], a service that directly submits regular Slurm jobs was chosen, as this integrates better with our existing hardware and is more similar to the way jobs are scheduled on the QPU.
Moreover, we think that for the foreseeable future hybrid quantum-classical workloads will constitute a large part of quantum jobs. The classical part of these can often be GPU-accelerated, motivating the need to be able to request such compute resources when needed. For example, NVIDIA recently introduced a platform for hybrid quantum-classical computing, known as Quantum Optimized Device Architecture (QODA) [54].
#### iii.2.2 Job dispatching
To connect our two clusters, we used the recently introduced Slurm REST API [55] to submit jobs to Slurm from the regular Backend service. A "Slurm connector" service has been developed to act as a bridge between the existing PASQAL cloud Services and the HPC cluster. The service creates a control script on-demand for the job which is submitted to Slurm as the batch script. This script then takes care of executing all the higher-level logic of the job. The service is also responsible for setting up the job with appropriate resources and inserting all necessary information for future communication with the cloud service.
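As a rough illustration of this submission path, the sketch below posts a control script to a slurmrestd endpoint with the `requests` library. The endpoint path, API version, header names and payload fields vary between Slurm releases and deployments, so they, together with the token handling and the container invocation, should be read as assumptions rather than the exact interface used by the Slurm connector.

```
import requests

# Hedged sketch of a job submission through the Slurm REST API (slurmrestd).
# URL, API version and header names are assumptions; consult the slurmrestd
# documentation of the target deployment for the exact schema.
SLURM_URL = "http://slurm-head-node:6820/slurm/v0.0.38/job/submit"
HEADERS = {
    "X-SLURM-USER-NAME": "emu-service",   # placeholder service account
    "X-SLURM-USER-TOKEN": "<JWT token>",  # placeholder auth token
    "Content-Type": "application/json",
}

# The script plays the role of the on-demand control script described above:
# pre-processing, running the containerised EMU-TN simulation, post-processing.
payload = {
    "job": {
        "name": "emu-tn-job",
        "partition": "gpu",
        "nodes": 1,
        "current_working_directory": "/scratch/emu-tn",
    },
    "script": "#!/bin/bash\nsingularity run emu_tn.sif --config job_config.json\n",
}

response = requests.post(SLURM_URL, json=payload, headers=HEADERS, timeout=30)
print(response.status_code, response.json() if response.ok else response.text)
```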
Each pulse sequence is encoded in a JSON format and sent to PASQAL cloud Services by making an HTTP POST request to the job submission endpoint. The pulse sequence format is the same for QPUs and emulators. We extend the body of the HTTP request with a Configuration argument that allows the user to control the parameters of the emulator, see Code Sample 1. This design allows executing the same quantum program seamlessly on both QPUs and emulators, while also allowing control of the numerical simulation parameters such as the maximum bond dimension and the number of cores.
The internal logic of the job submission endpoint validates the pulse sequence and the emulation configuration. The validation step checks that the request and the underlying data, such as the sequence and emulator configuration, are correctly formatted, that the requested resources are accessible to the user and, for a real device, that the sequence is physically valid. It then dispatches the request to the appropriate scheduler. In Code Sample 1 below, we show an example of sending a serialized pulse sequence to PASQAL Cloud Services for execution. To build the sequences we have used Pulser [46], an open source package for designing neutral atom QPU sequences. To communicate with cloud services we have used the PASQAL Cloud Services Python SDK [56], which provides helper functions for the end user to call the APIs.
Code Sample 1. Running jobs through the cloud service.
```
from pulser import Pulse, Sequence, Register
from pulser.devices import MockDevice
from sdk import Configuration, Endpoints, SDK
import json

# 1. Define Qubit Register
N = 6  # Qubits per side
reg = Register.square(side=N, spacing=7, prefix="q")

# 2. Define Pulse Sequence and encode to JSON
seq = Sequence(reg, MockDevice)
seq.declare_channel(
    name="ch0",
    channel_id="rydberg_global"
)
seq.add(
    Pulse.ConstantPulse(
        duration=1000,  # ns
        amplitude=2,    # rad/us
        detuning=-6,    # rad/us
        phase=0),
    channel="ch0"
)
encoded_seq = seq.to_abstract_repr()  # JSON file

# 3. Send encoded sequence to PASQAL Cloud Services
# Get API keys at
# https://portal.pasqal.cloud/api-keys

endpoints = Endpoints(
    core="https://api.pasqal.cloud/core",
    account="https://apis.pasqal.cloud/account"
)

sdk = SDK(
    client_id="my_client_id",
    client_secret="my_client_secret",
    endpoints=endpoints,
)

# Configure Job
my_job = {"runs": 1000}  # Number of shots
config = Configuration(  # Configuration of emulator
    dt=10.0,
    precision="normal",
    extra_config={"max-bond-dim": 100}
)

# Create batch on the Cloud service, wait until the execution completes, and get results
batch = sdk.create_batch(
    encoded_seq,
    device_type="EMU_TN",
    jobs=[my_job],
    configuration=config,
    fetch_results=True,
)
res = [job.result for job in batch.jobs.values()]
```

Figure 2: Architecture of the cloud platform providing quantum computing as a service. The cloud services are deployed on a Kubernetes cluster and use a microservice architecture, each microservice exposing a public REST API to end users. The HPC cluster where EMU-TN runs consists of 10 DGX A100 systems from NVIDIA (see Appendix A for more details) and uses Slurm, an HPC cluster management and job scheduling system, for resource management and job submission.
In this context, EMU-TN is built as a Singularity container and executed as a Slurm job. The Slurm job consists of the numerical simulation and is preceded and followed by custom-made scripts to communicate with the cloud platform. Since the pulse sequence has been previously validated, the pre-processing script only translates the user configuration into the format required by EMU-TN. After the completion of the simulation, the post-processing script collects the results and communicates with the cloud service to ensure the results are correctly stored and accessible by the frontend. This architecture allows the emulators to be stand-alone and also to be easily executable outside the overall platform. It also allows for easy extension to more frequent two-way communication between the platform and the emulator, should it be desirable in the future.
#### iii.2.3 Other considerations
Storing bitstring results or heavily compressed tensor states in a columnar database works very well, as they can be serialised to short text strings. For larger datasets this can cause performance degradation in the database, and eventually the dataset will be too large for the specific database implementation. Thus, object storage may be useful in such cases, such as for the tensor network representations produced by the emulator. Since obtaining such a state is computationally costly, the platform makes this state available as a serialised object that can be de-serialised for further processing as required by the numerical algorithm.
A second consideration is that not every user request constitutes a feasible computational objective. Some pulse sequences (e.g. those that introduce steep variations) usually require a larger matrix size to simulate. It could also be that the algorithm does not converge to the desired accuracy. There is an unavoidable iterative process in which the user studies a candidate pulse sequence and quantum system on physical grounds; successive verification with exact diagonalization methods for smaller systems is crucial. Once a promising job is selected, the data is deserialized in the cloud core service, where a validation script checks that it describes a valid quantum program for a given emulated device based on existing QPUs. This ensures that the pulse sequence is physically implementable on the hardware. The Slurm connector also performs some simple validation of the Slurm configuration.
## IV Applications
In this section we discuss some applications of the EMU-TN framework, motivated by typical tasks that can be performed with a Neutral Atom Quantum Processor [57; 58; 59]. The native Hamiltonian will be taken as the following particular form of (3):
\[\hat{H}(t)=\frac{\Omega(t)}{2}\sum_{i=1}^{N}\hat{\sigma}_{i}^{x}-\delta(t) \sum_{i=1}^{N}\hat{n}_{i}+\sum_{i>j}\frac{C}{|\mathbf{r}_{ij}|^{6}}\hat{n}_{i} \hat{n}_{j} \tag{7}\]
where \(\hat{n}=|1\rangle\langle 1|=(\mathbb{I}-\hat{\sigma}^{z})/2\) is the projector onto the excited state and \(C\) is a constant that depends on the properties of the targeted excited state. After the quantum evolution and readout, one obtains a bitstring of 0's and 1's representing excited and ground-state qubits, respectively. To calculate the expectation value of a given observable, many cycles need to be performed in order to generate enough statistics. The output of the emulator provides bitstrings, which simulate the actual output of the QPU.
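As a small illustration of the ingredients above, the following sketch (plain NumPy; the numbers are illustrative, not device values) assembles the interaction matrix \(V_{ij}=C/|\mathbf{r}_{ij}|^{6}\) entering Eq. (7) and estimates the mean Rydberg density per qubit from a set of sampled bitstrings:

```
import numpy as np

# Interaction matrix V_ij = C / |r_ij|^6 and a simple observable estimate.
# Convention: one bit value marks an excited atom (taken here to be '1';
# adapt to the readout convention of the device being emulated).
C = 5.42e6                                                 # rad/us * um^6 (illustrative)
coords = np.array([[0.0, 0.0], [7.0, 0.0], [14.0, 0.0]])  # atom positions in um

dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
i_upper = np.triu_indices(len(coords), k=1)
V = np.zeros_like(dist)
V[i_upper] = C / dist[i_upper] ** 6                        # pairwise couplings for i < j

samples = ["010", "010", "000", "011"]                     # example readout bitstrings
bits = np.array([[int(b) for b in s] for s in samples])
mean_density = bits.mean(axis=0)                           # estimate of <n_i> per qubit
print(V[i_upper], mean_density)
```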
We remark that EMU-TN can be applied to other Hamiltonians, for example long-range "XY" Hamiltonians that can be constructed with neutral atom [60] and trapped ion [61] arrays.
### 1D Regular Lattice: Rydberg Crystals
One notable property of the Hamiltonian (7) is that, in certain regimes, the simultaneous excitation of two neighbouring atoms is suppressed. This phenomenon is called Rydberg blockade [62; 63], and it takes place when the interaction term \(C/r^{6}\) exceeds the Rabi frequency \(\Omega\) (it is therefore distance dependent). The Rydberg blockade mechanism is the main source of entanglement in neutral atom systems, since the maximally entangled state \(|01\rangle+|10\rangle\) becomes energetically favored over the product state \(|11\rangle\).
The Rydberg blockade has been used in [64] to create highly regular excitation patterns in 1D chains of neutral atoms. These structures are known as Rydberg crystals and, for experimentally reasonable values of atomic spacing and Rabi frequency, they can be prepared in such a
way as to display \(\mathbb{Z}_{n}\) order for \(n=2,3,4\). The situation is depicted schematically in Fig. 3.
We first present the results of a test simulation on a chain of 16 atoms. For the pulse sequence shown in the top part of Fig. 4, with \(\Omega_{\max}=6.3\) rad\(/\mu s\) and a spacing of 4 \(\mu m\), we expect the final ground state of the system to be in the \(\mathbb{Z}_{3}\) order. The excitation probability for each qubit in the final state is also reported for both EMU-TN and an exact Schrödinger equation solver in Fig. 4, together with heatmaps representing the time evolution of the excitation probability for each qubit given by the exact solver and EMU-TN, respectively.
Next, we present the results for the same simulation but with a chain of 37 atoms, which is intractable by an exact solver. The pulse in this case is chosen to be twice as long in order to allow correlations to spread from the borders all the way to the center of the chain and observe a clearer alternation of the excitations. The excitation probability in the final state and its time evolution are reported in Fig. 5.
### 2D Regular Lattice: Antiferromagnetic State Preparation
One of the most common tasks one can try to simulate is the preparation of particular states with a regular register. This has been used in milestone implementations in programmable neutral-atom arrays of hundreds of qubits [5, 6].
A typical pulse sequence (Fig. 6, above) represents a path through the phase diagram in the thermodynamic limit that ends in a point of the expected antiferromagnetic phase, which has been analytically studied before. We present the results from the sampling of the evolved MPS as well as from a straightforward implementation of exact diagonalization solved numerically on a local computer. Using exact diagonalization it is possible to run simulations just above the 20-qubit range, but it soon becomes impractical once the number of qubits increases. On the other hand, adjusting the bond dimension of the tensor network algorithm, one can aim to explore the behavior of sequences in the range of 20 to 60 qubits in a comparably short time. Moreover, the flexibility of the cluster approach allows for parallelization of tasks, providing information about parameter sweeps without the user having to spend time adapting their code. In Figure 7, we include information about the elapsed time of the performed simulations, for different array sizes and bond dimensions, with and without access to GPUs.
### Maximum Independent Set on Unit Disk graphs
We present a typical use case of neutral atom devices for two-dimensional registers. The possibility of placing atoms in arbitrary 2D configurations makes it possible to solve certain classes of hard graph problems on the QPU. An interesting application is solving the Maximum Independent Set (MIS) problem on Unit Disk (UD) graphs [66].
A graph is an object with nodes and connections between nodes. A UD graph is a graph where two nodes are connected if and only if they are closer than a certain minimal distance. By representing the nodes of a graph with neutral atoms and matching the Rydberg blockade
Figure 4: Comparison between an exact Schrödinger equation solver (using QuTiP [65]) and the emulator (EMU-TN) for \(\mathbb{Z}_{3}\) Rydberg crystal preparation on a 1D chain of 16 atoms placed 4 \(\mu m\) apart. The driving sequence represents a pulse of \(t_{tot}=10\mu s\) with \(\Omega_{\max}=6.3\) rad\(/\mu s\). The histogram represents the excitation probability for each qubit in the final state, while the heatmaps represent the evolution over time of the excitation probability obtained with the exact solver and EMU-TN respectively.
Figure 3: Examples of 1D Rydberg crystals for a chain of 13 atoms. A solid circle represents an excited atom, while an empty one represents an atom in the ground state. The \(\mathbb{Z}_{2}\), \(\mathbb{Z}_{3}\) and \(\mathbb{Z}_{4}\) orders are characterized by excitations separated by respectively 1, 2, or 3 atoms in the ground state.
radius with the UD graph minimal distance, one can establish a direct mapping where a connection exists between atoms that are within a blockade distance of each other. The quantum evolution of such a system is naturally restricted to those sectors of the Hilbert space where excitations of connected atoms are forbidden. These configurations translate to independent sets of the graph, i.e. subsets of nodes that are not directly connected to each other. By driving the quantum evolution so as to produce as many excitations as possible, a measurement yields with high probability an independent set of high cardinality, representing a good candidate solution to the MIS problem.
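The mapping can be made concrete with a few lines of code. The sketch below (illustrative numbers; `networkx` used only for bookkeeping) builds the UD graph whose connection radius is the blockade radius \(r_{b}=(C/\Omega)^{1/6}\) and checks whether a measured bitstring is an independent set:

```
import numpy as np
import networkx as nx

# Unit-disk graph from atom coordinates, with the connection radius set by the
# Rydberg blockade radius r_b = (C / Omega)^(1/6). Numbers are illustrative.
C, Omega = 5.42e6, 12.6                 # rad/us * um^6 and rad/us
r_b = (C / Omega) ** (1 / 6)            # blockade radius in um

coords = np.array([[0, 0], [6, 0], [12, 0], [6, 6]], dtype=float)
G = nx.Graph()
G.add_nodes_from(range(len(coords)))
for i in range(len(coords)):
    for j in range(i + 1, len(coords)):
        if np.linalg.norm(coords[i] - coords[j]) <= r_b:
            G.add_edge(i, j)

def is_independent(bitstring, graph):
    """True if no two excited atoms ('1' here) share an edge of the graph."""
    excited = {i for i, b in enumerate(bitstring) if b == "1"}
    return not any(u in excited and v in excited for u, v in graph.edges)

print(round(r_b, 2), is_independent("1010", G))
```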
The best pulse sequence to find the MIS of a graph is based on the same adiabatic pulses already shown in Figs. 4 and 6, partly since Rydberg crystals and antiferromagnetic states can be seen as the MIS of regular 1D and 2D lattices. As shown in [67], however, for arbitrary graphs it is often necessary to optimize the pulse shape in order to find the MIS with a high enough probability. We present in Fig. 8 a one-parameter family of pulses where the parameter \(t_{c}\) controls the time at which the detuning goes from negative to positive. Rather than sweeping over the free parameter sequentially, one can send multiple simultaneous jobs, one for each value to be tested, and finally select the parameter giving the best result. Parallelizing this task is a direct advantage of the cluster infrastructure, and the cloud service provides a dedicated interface in which all jobs can be organized and retrieved, saving large amounts of time. We performed a sweep over 9 parameter values for a pulse of 5 \(\mu s\) on a graph with 30 nodes. The results are shown in Fig. 8, below, where the optimal zero-crossing for the detuning is found to be at around 1.6 \(\mu s\).
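A sketch of this dispatch pattern is shown below. It reuses the `sdk` and `config` objects defined in Code Sample 1 and assumes a hypothetical helper `build_sequence(t_c)` that returns the serialized pulse sequence for a given zero-crossing time, so it is not self-contained; it only illustrates how the simultaneous jobs are created.

```
import numpy as np

# Parallel sweep over the zero-crossing time t_c (build_sequence is a hypothetical
# helper; sdk and config are the objects from Code Sample 1).
t_c_values = np.linspace(0.5, 4.5, 9)       # nine trial values in us
batches = [
    sdk.create_batch(
        build_sequence(t_c),                # serialized sequence for this t_c
        device_type="EMU_TN",
        jobs=[{"runs": 1000}],
        configuration=config,
        fetch_results=False,                # submit now, collect results later
    )
    for t_c in t_c_values
]
# Afterwards, poll each batch and compare the average independent-set size per t_c.
```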
While the example shown here is centered on pulse parameters, other tasks suited for parallelization are also implementable, e.g. averages over noise and disorder re
Figure 5: Same simulation as Fig. 4, but with a chain of 37 qubits and a pulse of 20 \(\mu s\).
Figure 6: Comparison of sampling results (\(10^{3}\) samples) from the prepared quantum state between exact diagonalization (ED) and the emulator (EMU-TN) for increasing array sizes. The driving sequence (same structure as the one in Fig. 4, above) represents a pulse of \(t_{\text{tot}}=2.7\mu s\) moving in the phase diagram of the Hamiltonian towards the antiferromagnetic phase. The \(5\times 7\) and \(5\times 9\) arrays required \(10^{4}\) samples to resolve better the peaks (1% and 0.4% respectively), since the short pulse breaks adiabaticity for large arrays.
Figure 7: Computation time for increasing array sizes. We compare CPU-only use against an added GPU backend, for different maximal bond dimensions \(\chi_{\text{max}}\).
alizations. Being able to quickly set up and run studies of this kind for systems of many qubits opens the door for creative algorithm design that is at the same time compatible with a QPU. We remark that in 1D systems, the TDVP algorithm has been used to study dynamics for \(\mathcal{O}(10^{2})\) spins [68, 69].
## V Discussion
Due to the rising interest in developing and using quantum processing units integrated in a modern computational infrastructure, we envision the emergence of workflows requiring intensive use of emulators. They are not just a placeholder for the actual quantum device, but serve for diagnostics, resource estimation and management. Given the current price and availability of QPUs, emulation is crucial in the initial phases of an application, providing a faster feedback loop during the R&D phase, as well as enforcing a realistic level of constraints based on a given quantum computing architecture. This is also relevant for designing upgrades to a quantum device, as the relevant figures of merit for expected behavior can be compared quickly.
By providing a seamless interface between emulators running locally, emulators of large systems in an HPC environment and the QPUs, the researcher can distribute work to the most convenient compute option. By further centralising this in a workflow and allowing the scheduler to manage the optimal software resource, one can reduce both the total time and cost to solution. Workflows will also have to take into account whether the computational task should be executed in the shortest walltime, the most energy efficient way, the most cost-effective way or some weighted combination. The computational community has taken great strides in workflow management, and the ever increasing use of machine learning has created new fields like MLOps (Machine Learning Operations). It is expected that the quantum computing community strives for such state-of-the-art workflows, and to be able to integrate into existing ones, to achieve real industrial advantage. This will include not only the execution of algorithms, but also providing applicable training and testing.
In this paper we described the components at the basis of such workflows. By leveraging tensor network algorithms for quantum dynamics that can exploit the resources provided by an HPC cluster, we discussed how a cloud on-demand service can make use of a cluster backend to simplify access to the required computational operations. The numerics can be easily adapted to different architectures and devices, separating the role of design tools from the computation of quantum dynamics. Other techniques for the study of dynamical quantum systems can be adapted as well: future objectives include considering the low qubit (\(<20\)) range, where state-vector evolution methods allow implementing a microscopic model of hardware noise effects and other state-preparation and readout errors [70]. This includes implementing parallelization routines on the algorithm management side, controlled exploitation of the cluster resources for heavy computational loads and efficient solvers for other dynamical equations (quantum master equations and the stochastic Schrödinger equation). A suite of benchmarks for common tasks in the low qubit range is also desirable for understanding speed bottlenecks, and will be the subject of a follow-up work. In the case of larger arrays, tensor network structures like PEPS or Tree Tensor Networks provide other convenient representations for two-dimensional systems, and their application to time evolution algorithms is under active study [71, 72, 73, 74, 75].
Developing workflows is much easier with a fully fledged cloud backend to orchestrate the tasks. This enables creating workflows that automatically assess the validity of the submitted pulse sequence for the QPU. Simulating the dynamics from a particular pulse sequence requires attention to tuning and understanding how to set the algorithm parameters. The cloud platform can provide the user suggestions for future hyperparameters based on live monitoring of the simulation. Finally, the worker nodes on the cluster have large amounts of local storage in a so-called "scratch" directory. This allows storing large amounts of data cheaply on a temporary basis. A future extension of the platform could allow subse
Figure 8: Parameter sweep. We arrange 30 qubits in 2D (above), where the dashed circles show half the blockade radius. The pulse sequence (middle, \(t_{\mathrm{tot}}=5\mu s\)) contains a tunable parameter \(t_{c}\) controlling the time when the detuning goes from negative to positive. We sweep in parallel over 9 values of \(t_{c}\). The results (below) show the average size of the MIS found for each value of \(t_{c}\). The entire sweep took about 2 hours of cloud compute time.
quent job-steps to use previous results, recycling previous computational work for new tasks.
We believe the platform presented here can greatly improve reproducibility and benchmarking of research in the developing quantum industry, as well as allowing a broader scope of researchers, engineers and developers to understand, explore and design the behavior of quantum systems and quantum devices.
###### Acknowledgements.
We thank Loic Henriet, Caroline de Groot and Matthieu Moreau for valuable conversations, suggestions for this manuscript and collaboration in related work.
|
2308.10163 | New Solutions of Coupled Nonlocal NLS and Coupled Nonlocal mKdV
Equations | We provide several novel solutions of the coupled Ablowitz-Musslimani (AM)
version of the nonlocal nonlinear Schr\"odinger (NLS) equation and the coupled
nonlocal modified Korteweg-de Vries (mKdV) equations. In each case we compare
and contrast the corresponding solutions of the relevant coupled local
equations. Further, we provide new solutions of the coupled local NLS and the
coupled local mKdV equations which are not the solutions of the corresponding
nonlocal equations. | Avinash Khare, Avadh Saxena | 2023-08-20T05:12:30Z | http://arxiv.org/abs/2308.10163v1 | ###### Abstract
###### Abstract
We provide several novel solutions of the coupled Ablowitz-Musslimani (AM) version of the nonlocal nonlinear Schrodinger (NLS) equation and the coupled nonlocal modified Korteweg-de Vries (mKdV) equations. In each case we compare and contrast the corresponding solutions of the relevant coupled local equations. Further, we provide new solutions of the coupled local NLS and the coupled local mKdV equations which are not the solutions of the corresponding nonlocal equations.
**New Solutions of Coupled Nonlocal NLS**
**and Coupled Nonlocal mKdV Equations**
**Avinash Khare**
Physics Department, Savitribai Phule Pune University
Pune 411007, India
**Avadh Saxena**
Theoretical Division and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
## 1 Introduction
In recent years the nonlocal nonlinear equations have received considerable attention in the literature. It primarily started with the papers of Ablowitz and Musslimani (AM) [1, 2] about the nonlocal NLS equation and its integrability [3]. Subsequently, several other nonlocal equations have been proposed [4, 5, 6, 7, 8]. It is hoped that some of these nonlocal equations may have applications in areas of physics such as optics, photonics, etc. [9]. Besides one hopes to understand the nature of the relevant nonlocality. It is clearly possible that in some cases coupled nonlocal equations could be relevant in a number of physical settings.
It is with this motivation that we have recently studied coupled AM type nonlocal NLS as well as nonlocal coupled mKdV equations [10, 11, 12] and obtained a large number of solutions of these coupled equations. We thought that perhaps one had essentially covered most of the allowed solutions. Of course, we were aware that, nonlinear equations being rich, one can never be sure about it. Recently, we were able to obtain new solutions of the uncoupled symmetric \(\phi^{4}\) equation [13] as well as the coupled \(\phi^{4}\) equations [14], and that has inspired us to look for similar solutions of the
coupled nonlocal equations. The purpose of this paper is to report on several new solutions of the coupled AM variant of the nonlocal NLS as well as the coupled nonlocal mKdV equations [4, 5]. In each case we also enquire if the corresponding local equations also admit such a solution, and if yes, under what conditions.
The plan of the paper is the following. In Sec. II we present new solutions of the coupled nonlocal AM variant of the NLS equation. In each case we compare and contrast with the solutions of the (local) coupled NLS equations (in case they are admitted). Particular mention may be made of a truncated coupled nonlocal NLS equation which admits novel nonreciprocal solutions where one field is in terms of the Lame polynomial of order two while the other field is in terms of the Lame polynomial of order one. It may be noted that the full coupled nonlocal or local NLS equations do not admit similar solutions. In Appendix A we present those solutions of the local coupled NLS equations which are not the solutions of the nonlocal coupled NLS equations. In Sec. III we present new solutions of the nonlocal coupled mKdV equations and compare and contrast them with the corresponding solutions of the local coupled mKdV equations. Particular mention may be made of the plane wave solutions as well as the soliton solutions multiplied by the plane wave factor of the coupled nonlocal mKdV equations. In contrast, the corresponding local coupled mKdV equations do not admit either the plane wave or the soliton solutions multiplied by the plane wave factor. In Appendix B we present those solutions of the local coupled mKdV equations which are not the solutions of the nonlocal coupled mKdV.
## 2 Novel Solutions of a Coupled nonlocal Ablowitz-Musslimani NLS Model
Let us consider the following coupled nonlocal Ablowitz-Musslimani NLS model [11, 12]
\[iu_{t}(x,t)+u_{xx}(x,t)+[g_{11}u(x,t)u^{*}(-x,t)+g_{12}v(x,t)v^{*}(-x,t)]u(x,t) =0\,, \tag{1}\]
\[iv_{t}(x,t)+v_{xx}(x,t)+[g_{21}u(x,t)u^{*}(-x,t)+g_{22}v(x,t)v^{*}(-x,t)]v(x,t) =0\,. \tag{2}\]
In order to find exact solutions to the coupled Eqs. (1) and (2), we start with the ansatz
\[u(x,t)=e^{i\omega_{1}(t+t_{0})}u(x)\,,\ \ v(x,t)=e^{i\omega_{2}(t+t_{1})}v(x)\,, \tag{3}\]
where \(t_{0},t_{1}\) are arbitrary real constants. In that case Eqs. (1) and (2) take the form
\[u_{xx}(x)=\omega_{1}u(x)-[g_{11}u(x)u^{*}(-x)+g_{12}v(x)v^{*}(-x)]u(x)\,, \tag{4}\]
\[v_{xx}(x)=\omega_{2}v(x)-[g_{21}u(x)u^{*}(-x)+g_{22}v(x)v^{*}(-x)]v(x)\,. \tag{5}\]
For simplicity, from now on we will put \(t_{0}=t_{1}=0\). It is understood that such a shift in time \(t\) is always allowed; however, in view of the nonlocality, a similar shift in \(x\) is not allowed. This is unlike the local NLS case, where such a shift is allowed.
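Although every solution below can be verified by direct substitution, a quick numerical check of Eqs. (4) and (5) is often convenient. The following sketch (plain NumPy/SciPy; the function name, grid and finite-difference step are our own choices and not part of the analysis) evaluates the residuals of the reduced equations for any candidate profiles \(u(x)\), \(v(x)\) and parameter set:

```
import numpy as np
from scipy.special import ellipj  # Jacobi sn, cn, dn, useful for building profiles

def residuals(u, v, omega1, omega2, g11, g12, g21, g22, x, h=1e-5):
    """Residuals of Eqs. (4)-(5); u and v are callables returning (complex) profiles."""
    U, V = u(x), v(x)
    Um, Vm = np.conj(u(-x)), np.conj(v(-x))         # the nonlocal factors u*(-x), v*(-x)
    Uxx = (u(x + h) - 2 * U + u(x - h)) / h**2      # central finite differences
    Vxx = (v(x + h) - 2 * V + v(x - h)) / h**2
    r1 = Uxx - omega1 * U + (g11 * U * Um + g12 * V * Vm) * U
    r2 = Vxx - omega2 * V + (g21 * U * Um + g22 * V * Vm) * V
    return np.max(np.abs(r1)), np.max(np.abs(r2))

# Usage: build u and v from a chosen ansatz, e.g. sn(beta*x, m) = ellipj(beta*x, m)[0],
# insert the corresponding parameter relations, and check that both residuals are small.
x = np.linspace(-2.0, 2.0, 401)
```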
We now discuss new solutions of these coupled equations and point out whether the corresponding coupled local NLS equations
\[iu_{t}(x,t)+u_{xx}(x,t)+[g_{11}u^{2}(x,t)+g_{12}v^{2}(x,t)]u(x,t)=0\,, \tag{6}\]
\[iv_{t}(x,t)+v_{xx}(x,t)+[g_{21}u^{2}(x,t)+g_{22}v^{2}(x,t)]v(x,t)=0\,, \tag{7}\]
also admit these solutions and if yes under what conditions. In Appendix A, we discuss those solutions which are admitted by the local coupled Eqs. (6) and (7) but not by the coupled nonlocal Eqs. (1) and (2).
**Solution I**
It is not difficult to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\sqrt{m}{\rm sn}(\beta x,m)}{B+{\rm dn}(\beta x,m)}\,,\ \ v(x,t)=e^{i\omega_{2}t}\frac{D\sqrt{m}{\rm cn}(\beta x,m)}{B+{\rm dn}(\beta x,m)}\,, \tag{8}\]
with \(B>0\), is an exact solution of the coupled Eqs. (1) and (2) provided
\[\omega_{1}=\omega_{2}=-\frac{(2-m)\beta^{2}}{2}\,,\ \ g_{12}D^{2}=3g_{22}D^{2 }=\frac{3(B^{2}-1)\beta^{2}}{2}\,,\] \[3g_{11}A^{2}=g_{21}A^{2}=\frac{3(B^{2}-1+m)\beta^{2}}{2}\,. \tag{9}\]
Thus \(g_{12},g_{22}>(<)\) 0 provided \(B^{2}>(<)\) 1 while \(g_{11},g_{21}>(<)\) 0 provided \(B^{2}<(>)\) 1 \(-m\). On the other hand \(\omega_{1},\omega_{2}<0\). Here \(m\) is the modulus of the Jacobi elliptic functions [15]. It may be noted that \(B>0\) is true for all the thirteen solutions given below.
It is worth pointing out that the local coupled NLS Eqs. (6) and (7) also admit the solution (8) provided relations as given by Eq. (9) are satisfied except that \(g_{11}\) and \(g_{21}\) have opposite values compared to those given in (9).
**Solution II**
In the limit \(m=1\), the solution I goes over to the hyperbolic solution
\[u(x,t)=e^{i\omega_{1}t}\frac{A\tanh(\beta x)}{B+\mbox{sech}(\beta x)}\,,\;\;v(x,t )=e^{i\omega_{2}t}\frac{D\mbox{sech}(\beta x)}{B+\mbox{sech}(\beta x)}\,, \tag{10}\]
provided
\[\omega_{1}=\omega_{2}=-\frac{\beta^{2}}{2}\,,\;\;g_{12}D^{2}=3g_{2 2}D^{2}=\frac{3(B^{2}-1)\beta^{2}}{2}\,,\] \[B>0\,,\;\;3g_{11}A^{2}=g_{21}A^{2}=\frac{3B^{2}\beta^{2}}{2}\,. \tag{11}\]
Thus \(g_{12},g_{22}>(<)\) 0 provided \(B^{2}>(<)\) 1 while \(g_{11},g_{21}>0\). On the other hand \(\omega_{1},\omega_{2}<0\).
It is worth pointing out that the local coupled NLS Eqs. (6) and (7) also admit the hyperbolic solution (10) provided relations as given by Eq. (11) are satisfied except that \(g_{11}\) and \(g_{21}\) have opposite values compared to those given in (11).
**Solution III**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\mbox{sn}(\beta x,m)}{B+\mbox{cn}(\beta x,m)}\,, \;\;v(x,t)=e^{i\omega_{2}t}\frac{D\mbox{dn}(\beta x,m)}{B+\mbox{cn}(\beta x,m) }\,, \tag{12}\]
is an exact solution of the coupled Eqs. (1) and (2) provided
\[\omega_{1}=\omega_{2}=-\frac{(2m-1)\beta^{2}}{2}\,,\;\;g_{12}D^{2 }=3g_{22}D^{2}=\frac{3(B^{2}-1)\beta^{2}}{2}\,,\] \[B>1\,,\;\;3g_{11}A^{2}=g_{21}A^{2}=\frac{3(mB^{2}+1-m)\beta^{2}} {2}\,. \tag{13}\]
Thus \(g_{12},g_{22},g_{11},g_{21}\) are all \(>0\). On the other hand \(\omega_{1},\omega_{2}<(>)\) 0 depending on whether \(m>(<)\) 1/2.
It is worth pointing out that the local coupled NLS Eqs. (6) and (7) also admit the solution (12) provided relations as given by Eq. (13) are satisfied except that \(g_{11}\) and \(g_{21}\) have opposite values compared to those given in (13).
**Solution IV**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A}{B+\cos(\beta x)}\,,\;\;v(x,t)=e^{i\omega_{2}t }\frac{D\sin(\beta x)}{B+\cos(\beta x)}\,, \tag{14}\]
with \(B>1\), is an exact solution of the coupled Eqs. (1) and (2) provided
\[\omega_{1}=\omega_{2}=\frac{\beta^{2}}{2}\,,\ \ 3g_{11}A^{2}=g_{21}A^{2}=3(B^{2}-1 )\frac{\beta^{2}}{2}\,,\] \[g_{12}D^{2}=3g_{22}D^{2}=\frac{3\beta^{2}}{2}\,. \tag{15}\]
It is worth pointing out that the local coupled NLS Eqs. (6) and (7) also admit the solution (14) provided relations as given by Eq. (15) are satisfied except that \(g_{12}\) and \(g_{22}\) have opposite values compared to those given in (15).
**Solution V**
It is straightforward to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\cos(\beta x)}{1+B\cos^{2}(\beta x)}\,,\ \ v(x,t)=e^{i\omega_{2}t}\frac{D\sin(\beta x)}{1+B\cos^{2}(\beta x)}\,,\ \ B>0\,, \tag{16}\]
is an exact solution of the nonlocal coupled NLS Eqs. (1) and (2) provided
\[\omega_{1}=\omega_{2}=-\beta^{2}\,,\ \ g_{12}D^{2}=3g_{22}D^{2}=-6B\beta^{2}\,, \ g_{21}A^{2}=3g_{11}A^{2}=-6B(B+1)\beta^{2}\,. \tag{17}\]
Note that the condition \(B>0\) is also valid for the solutions VI to XIII presented below and we will not mention it again.
It is worth noting that the solution (16) is also a solution of the coupled local NLS Eqs. (6) and (7) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite to those given in Eq. (17).
**Solution VI**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\cos(\beta x)}{1+B\cos^{2}(\beta x)}\,,\ \ v(x,t)=e^{i\omega_{2}t}\frac{D}{1+B\cos^{2}(\beta x)}\,, \tag{18}\]
is an exact solution of the nonlocal coupled Eqs. (1) and (2) provided
\[\omega_{1}=-\beta^{2}\,,\ \ \omega_{2}=-4\beta^{2}\,,\ \ g_{12}D^{2}=-6 B\beta^{2}\,,\ \ g_{11}A^{2}=-2B(B+4)\beta^{2}\,,\] \[g_{22}D^{2}=-2(2-B)\beta^{2}\,,\ \ g_{21}A^{2}=-6B(B+2)\beta^{2}\,. \tag{19}\]
It is worth noting that the solution (18) is also a solution of the coupled local NLS Eqs. (6) and (7).
**Solution VII**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A}{1+B\cos^{2}(\beta x)}\,,\ \ v(x,t)=e^{i\omega_{2}t} \frac{D\sin(\beta x)}{1+B\cos^{2}(\beta x)}\,, \tag{20}\]
is an exact solution of the nonlocal coupled Eqs. (1) and (2) provided
\[\omega_{1}=-4\beta^{2}\,,\ \ g_{12}D^{2}=-6B(B+2)\beta^{2}\,,g_{11}A^ {2}=-2(B+1)(3B+2)\beta^{2}\,,\] \[\omega_{2}=-\beta^{2}\,,\ \ g_{22}D^{2}=-2B(3B+4)\beta^{2}\,,\ \ g_{21}A^ {2}=-6B(B+1)\beta^{2}\,. \tag{21}\]
It is worth noting that the solution (20) is also a solution of the coupled local NLS Eqs. (6) and (7) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite to those given in Eq. (21).
**Solution VIII**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A}{1+B\cos^{2}(\beta x)}\,,\ \ v(x,t)=e^{i\omega_{2}t}\frac{D \sin(\beta x)\cos(\beta x)}{1+B\cos^{2}(\beta x)}\,, \tag{22}\]
is an exact solution of the nonlocal coupled Eqs. (1) and (2) provided
\[\omega_{1}=\omega_{2}=2\beta^{2}\,,\ \ g_{12}D^{2}=3g_{22}D^{2}=6B^{2 }\beta^{2}\,,\] \[g_{21}A^{2}=3g_{11}A^{2}=6(B+1)\beta^{2}\,. \tag{23}\]
It is worth noting that the solution (22) is also a solution of the coupled local NLS Eqs. (6) and (7) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite to those given in Eq. (23).
**Solution IX**
It is not difficult to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\cos(\beta x)}{1+B\cos^{2}(\beta x)}\,,\ \ v(x,t)=e^{i\omega_{2}t}\frac{D\sin(\beta x)\cos(\beta x)}{1+B\cos^{2}(\beta x )}\,, \tag{24}\]
is an exact solution of the nonlocal coupled Eqs. (1) and (2) provided
\[\omega_{1}=-6(B+1)\beta^{2}\,,\ \ g_{12}D^{2}=-6B^{3}\beta^{2}\,,\] \[g_{11}A^{2}=-2B(B+1)(3B+4)\beta^{2}\,,\ \ \omega_{2}=-6(B+4)\beta^{2}\,,\] \[g_{21}A^{2}=-6B(B+1)(B+2)\beta^{2}\,,\ \ g_{22}D^{2}=-2(3B+2)\beta^{2}\,. \tag{25}\]
It is worth noting that the solution (24) is also a solution of the coupled local NLS Eqs. (6) and (7) except the signs of \(g_{12}\) and \(g_{22}\) are opposite to those given in Eq. (25).
**Solution X**
It is easy to check that
\[u(x,t)=\frac{A\sin(\beta x)}{1+B\cos^{2}(\beta x)}\,,\ \ v(x,t)=\frac{D\sin( \beta x)\cos(\beta x)}{1+B\cos^{2}(\beta x)}\,, \tag{26}\]
is an exact solution of coupled Eqs. (1) and (2) provided
\[(1+B)a_{1}=(5B-1)\beta^{2}\,,\ \ (1+B)d_{1}D^{2}=6B^{3}\beta^{2}\,,\] \[(1+B)b_{1}A^{2}=-2B(B+4)\beta^{2}\,,\ \ (1+B)a_{2}=2(B-2)\beta^{2}\,,\] \[(1+B)d_{2}D^{2}=2B^{2}(B-2)\beta^{2}\,,\ \ (1+B)b_{2}A^{2}=-6B(B+2)\beta^{2}\,. \tag{27}\]
It is worth noting that the solution (26) is also a solution of the coupled local NLS Eqs. (6) and (7) except the signs of \(g_{11},g_{12},g_{21}\) and \(g_{22}\) are all opposite to those given in Eq. (27).
**Solution XI**
We now present three soliton solutions with a power law tail. It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A(F+x^{2})}{B+x^{2}}\,,\ \ v(x,t)=e^{i\omega_{2}t} \frac{Dx}{B+x^{2}}\,, \tag{28}\]
is an exact solution of the coupled Eqs. (1) and (2) provided
\[\omega_{1}=\omega_{2}<0\,,\ \ 3g_{11}=4g_{21}<0\,,\ \ 5g_{12}=6g_{22}>0\,,\ \ \omega_{1}=-\frac{9}{B}\,,\] \[A^{2}=\frac{|\omega_{1}|}{|g_{11}|}\,,\ \ 6d_{2}=5d_{1}\,,\ \ D^{2}= \frac{24}{g_{12}}\,,\ \ F=-B/3\,. \tag{29}\]
It is worth pointing out that the local coupled NLS Eqs. (6) and (7) also admit the solution (28) provided the relations as given by Eq. (29) are satisfied except that \(g_{12}\) and \(g_{22}\) have opposite values compared to those given in (29).
**Solution XII**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A(F+x^{2})}{B+x^{2}}\,,\ \ v(x,t)=e^{i\omega_{2}t} \frac{Dx}{\sqrt{B+x^{2}}}\,, \tag{30}\]
is an exact solution of the coupled Eqs. (1) and (2) provided
\[\omega_{1}=8\omega_{2}<0\,,\ \ 3g_{11}=8g_{21}>0\,,\ \ 7g_{12}=32g_{22}>0\,,\ \ \omega_{1}=-\frac{15}{2B}\,,\] \[5A^{2}=\frac{3|\omega_{1}|}{|g_{11}|}\,,\ \ D^{2}=\frac{12}{Bg_{12}}\,,\ \ F=-B/3\,. \tag{31}\]
It is worth pointing out that the local coupled NLS Eqs. (6) and (7) also admit the solution (30) provided the relations as given by Eq. (31) are
satisfied except that \(g_{12}\) and \(g_{22}\) have opposite values compared to those given in (31).
**Solution XIII**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{Ax}{B+x^{2}}\,,\ \ v(x,t)=e^{i\omega_{2}t}\frac{Dx}{ \sqrt{B+x^{2}}}\,, \tag{32}\]
is an exact solution of the coupled Eqs. (1) and (2) provided
\[\omega_{1}=2\omega_{2}<0\,,\ \ 3g_{11}=8g_{21}>0\,,\ \ 7g_{12}=32g_{22}>0\,,\ \ \omega_{1}=-\frac{6}{B}\,,\]
\[g_{11}A^{2}=8\,,\ \ D^{2}=\frac{6}{Bg_{12}}\,. \tag{33}\]
It is worth pointing out that the local coupled NLS Eqs. (6) and (7) also admit the solution (32) provided the relations as given by Eq. (33) are satisfied except that \(g_{11},g_{12},g_{21}\) and \(g_{22}\) have opposite values compared to those given in (33).
### Nonreciprocal Solutions of The Model
We now show that the coupled nonlocal NLS Eqs. (1) and (2) admit rather novel _nonreciprocal_ solutions. Without any loss of generality, if we always choose \(u(x,t)\) to be a Lame (inverse Lame) polynomial of order two and \(v(x,t)\) to be a Lame (inverse Lame) polynomial of order one, then nonreciprocal solutions exist provided \(g_{11}=g_{21}=0\). Thus in this case one is effectively solving simpler coupled equations
\[iu_{t}(x,t)+u_{xx}(x,t)+g_{12}v(x,t)v^{*}(-x,t)u(x,t)=0\,, \tag{34}\]
\[iv_{t}(x,t)+v_{xx}(x,t)+g_{22}v^{2}(x,t)v^{*}(-x,t)=0\,. \tag{35}\]
Notice that while \(v(x,t)\) satisfies the uncoupled Eq. (35), \(u(x,t)\) satisfies the coupled Eq. (34). It is worth remembering that if instead we consider the full coupled Eqs. (1) and (2), then \(u(x,t)\) being a Lame (inverse Lame) polynomial of order two and \(v(x,t)\) being a Lame (inverse Lame) polynomial of order one is _not_ a solution of these coupled equations. On the other hand, if both \(u(x,t)\) and \(v(x,t)\) are Lame (inverse Lame) polynomials of order one and are solutions of the full coupled Eqs. (1) and (2), then they are also the solutions of the truncated coupled Eqs. (34) and (35). Hence we will only consider those nonreciprocal solutions where \(u\) is in terms of a Lame (inverse Lame) polynomial of order two while \(v\) is in terms of a Lame (inverse Lame) polynomial of order one. We first present twelve nonreciprocal periodic solutions and the corresponding six hyperbolic nonreciprocal solutions in terms of Lame polynomials of order two in \(u\) and order one in \(v\). Later we will present 12 periodic solutions in terms of inverse Lame polynomials of order two in \(u\) and inverse Lame polynomials of order one in \(v\).
**Solution XIV**
It is not difficult to check that
\[u(x,t)=Ae^{i\omega_{1}t}\sqrt{m}{\rm sn}(\beta x,m){\rm dn}(\beta x,m)\,,\ \ v(x,t)=Be^{i\omega_{2}t}\sqrt{m}{\rm sn}(\beta x,m)\,, \tag{36}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=\beta^{2}\,,\ \ 3g_{22}B^{2}=\beta^{2}\,, \tag{37}\]
\[\omega_{1}=-(4m+1)\beta^{2}\,,\ \ \omega_{2}=-(1+m)\beta^{2}\,. \tag{38}\]
What is remarkable about this solution is that it is independent of the value of \(A\). It is also independent of whether the \(u\) field is odd or even under parity-time reversal (\(PT\)) symmetry. In fact this feature is common to all the nonreciprocal solutions presented below and hence will not be repeated.
It is worth pointing out that the solution (36) is also a solution of the local NLS Eqs.
\[iu_{t}(x,t)+u_{xx}(x,t)+6g_{12}|v(x,t)|^{2}u(x,t)=0\,, \tag{39}\]
\[iv_{t}(x,t)+v_{xx}(x,t)+6g_{22}|v(x,t)|^{2}v(x,t)=0\,, \tag{40}\]
except that the signs of \(g_{12}\) and \(g_{22}\) are opposite compared to those given in Eq. (37).
**Solution XV**
It is easy to check that
\[u(x,t)=Ae^{i\omega_{1}t}m{\rm sn}(\beta x,m){\rm cn}(\beta x,m)\,,\ \ v(x,t)=Be^{i\omega_{2}t}\sqrt{m}{\rm sn}(\beta x,m)\,, \tag{41}\]
is an exact solution of coupled Eqs. (34) and (35) provided relations (37) are satisfied and further
\[\omega_{1}=-(4+m)\beta^{2}\,,\ \ \ \omega_{2}=-(1+m)\beta^{2}\,. \tag{42}\]
It is worth pointing out that the solution (41) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite compared to those given in Eq. (37).
**Solution XVI**
In the limit \(m=1\), both the solutions XIV and XV go over to the hyperbolic nonreciprocal solution
\[u(x,t)=Ae^{i\omega_{1}t}{\rm sech}(\beta x)\tanh(\beta x)\,,\ \ v(x,t)=Be^{i\omega_{2}t}\tanh( \beta x)\,, \tag{43}\]
provided relations (37) are satisfied and further
\[\omega_{1}=-5\beta^{2}\,,\ \ \ \omega_{2}=-2\beta^{2}\,. \tag{44}\]
It is worth pointing out that the solution (43) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite compared to those given in Eq. (37).
**Solution XVII**
It is straightforward to check that
\[u(x,t)=Ae^{i\omega_{1}t}\sqrt{m}{\rm sn}(\beta x,m){\rm dn}(\beta x,m)\,,\ \ v(x,t)=Be^{i\omega_{2}t}\sqrt{m}{\rm cn}(\beta x,m)\,, \tag{45}\]
is an exact solution of coupled Eqs. (34) and (35) provided relations (37) are satisfied and further
\[\omega_{1}=\omega_{2}=(2m-1)\beta^{2}\,. \tag{46}\]
It is worth pointing out that the solution (45) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XVIII**
It is not difficult to check that
\[u(x,t)=Ae^{i\omega_{1}t}\sqrt{m}{\rm sn}(\beta x,m){\rm dn}(\beta x,m)\,,\ \ v(x,t)=Be^{i\omega_{2}t}{\rm dn}(\beta x,m)\,, \tag{47}\]
is an exact solution of coupled Eqs. (34) and (35) provided the relations (37) are satisfied and further
\[\omega_{1}=(5-4m)\beta^{2}\,,\ \ \omega_{2}=(2-m)\beta^{2}\,. \tag{48}\]
It is worth pointing out that the solution (47) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XIX**
It is easy to check that
\[u(x,t)=Ae^{i\omega_{1}t}m{\rm sn}(\beta x,m){\rm cn}(\beta x,m)\,,\ \ v(x,t)=Be^{i\omega_{2}t}\sqrt{m}{\rm cn}(\beta x,m)\,, \tag{49}\]
is an exact solution of coupled Eqs. (34) and (35) provided the relations (37) are satisfied and further
\[\omega_{1}=(5m-4)\beta^{2}\,,\ \ \omega_{2}=(2m-1)\beta^{2}\,. \tag{50}\]
It is worth pointing out that the solution (49) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XX**
It is easy to check that
\[u(x,t)=Ae^{i\omega_{1}t}m{\rm sn}(\beta x,m){\rm cn}(\beta x,m)\,,\ \ v(x,t)=Be^{i \omega_{2}t}{\rm dn}(\beta x,m)\,, \tag{51}\]
is an exact solution of coupled Eqs. (34) and (35) provided the relations (37) are satisfied and further
\[\omega_{1}=\omega_{2}=(2-m)\beta^{2}\,. \tag{52}\]
It is worth pointing out that the solution (51) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXI**
In the limit \(m=1\), all the four solutions XVII to XX go over to the hyperbolic solution
\[u(x,t)=Ae^{i\omega_{1}t}{\rm sech}(\beta x)\tanh(\beta x)\,,\ \ v(x,t)=Be^{i \omega_{2}t}{\rm sech}(\beta x)\,, \tag{53}\]
provided the relations (37) are satisfied and further
\[\omega_{1}=\omega_{2}=\beta^{2}\,. \tag{54}\]
It is worth pointing out that the solution (53) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXII**
It is easy to check that
\[u(x,t)=Ae^{i\omega_{1}t}\sqrt{m}{\rm cn}(\beta x,m){\rm dn}(\beta x,m)\,,\ \ v(x,t)=Be^{i\omega_{2}t}\sqrt{m}{\rm sn}(\beta x,m)\,, \tag{55}\]
is an exact solution of coupled Eqs. (34) and (35) provided the relations (37) are satisfied and further
\[\omega_{1}=\omega_{2}=-(1+m)\beta^{2}\,. \tag{56}\]
It is worth pointing out that the solution (55) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite compared to those given in Eq. (37).
**Solution XXIII**
It is easy to check that
\[u(x,t)=Ae^{i\omega_{1}t}[{\rm dn}^{2}(\beta x,m)+y]\,,\ \ v(x,t)=Be^{i\omega_{2}t} \sqrt{m}{\rm sn}(\beta x,m)\,, \tag{57}\]
is an exact solution of coupled Eqs. (34) and (35) provided relations (37) are satisfied and further
\[\omega_{1}=2(3y+1-2m)\beta^{2}\,,\ \ \omega_{2}=-(1+m)\beta^{2}\,, \tag{58}\]
where
\[y=\frac{-(2-m)\pm\sqrt{1-m+m^{2}}}{3}\,. \tag{59}\]
It is worth pointing out that the solution (57) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite compared to those given in Eq. (37).
**Solution XXIV**
In the limit \(m=1\), \(y=0\) or \(y=-2/3\). In case \(y=0\) and \(m=1\), both the solutions XXII and XXIII go over to the hyperbolic solution
\[u(x,t)=Ae^{i\omega_{1}t}{\rm sech}^{2}(\beta x)\,,\ \ v(x,t)=B\sqrt{m}\tanh( \beta x)\,, \tag{60}\]
provided relations (37) are satisfied and further
\[\omega_{1}=\omega_{2}=-2\beta^{2}\,. \tag{61}\]
It is worth pointing out that the solution (60) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite compared to those given in Eq. (37).
**Solution XXV**
On the other hand, in case \(y=-2/3\) and \(m=1\), the solution XXIII goes over to the hyperbolic solution
\[u(x,t)=Ae^{i\omega_{1}t}[{\rm sech}^{2}(\beta x)-2/3]\,,\ \ v(x,t)=B\sqrt{m} \tanh(\beta x)\,, \tag{62}\]
provided relations (37) are satisfied and further
\[\omega_{1}=-6\beta^{2}\,,\ \ \omega_{2}=-2\beta^{2}\,. \tag{63}\]
It is worth pointing out that the solution (62) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite compared to those given in Eq. (37).
**Solution XXVI**
It is easy to check that
\[u(x,t)=Ae^{i\omega_{1}t}\sqrt{m}{\rm cn}(\beta x,m){\rm dn}(\beta x,m)\,,\ \ v(x,t)=Be^{i\omega_{2}t}\sqrt{m}{\rm cn}(\beta x,m)\,, \tag{64}\]
is an exact solution of coupled Eqs. (34) and (35) provided the relations (37) are satisfied and further
\[\omega_{1}=(5m-1)\beta^{2}\,,\ \ \omega_{2}=(2m-1)\beta^{2}\,. \tag{65}\]
It is worth pointing out that the solution (64) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXVII**
It is straightforward to check that
\[u(x,t)=Ae^{i\omega_{1}t}\sqrt{m}{\rm cn}(\beta x,m){\rm dn}(\beta x,m)\,,\ \ v(x,t)=Be^{i\omega_{2}t}{\rm dn}(\beta x,m)\,, \tag{66}\]
is an exact solution of coupled Eqs. (34) and (35) provided the relations (37) are satisfied and further
\[\omega_{1}=(5-m)\beta^{2}\,,\ \ \omega_{2}=(2-m)\beta^{2}\,. \tag{67}\]
It is worth pointing out that the solution (66) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXVIII**
It is easy to check that
\[u(x,t)=Ae^{i\omega_{1}t}[{\rm dn}^{2}(\beta x,m)+y]\,,\ \ v(x,t)=Be^{i\omega_{2} t}\sqrt{m}{\rm cn}(\beta x,m)\,, \tag{68}\]
is an exact solution of coupled Eqs. (34) and (35) provided the relations (37) are satisfied and further
\[\omega_{1}=2[(1+m)+3y]\beta^{2}\,,\ \ \omega_{2}=(2m-1)\beta^{2}\,, \tag{69}\]
where \(y\) is again as given by Eq. (59).
It is worth pointing out that the solution (68) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXIX**
It is easy to check that
\[u(x,t)=Ae^{i\omega_{1}t}[{\rm dn}^{2}(\beta x,m)+y]\,,\ \ v(x,t)=Be^{i\omega_{2} t}{\rm dn}(\beta x,m)\,, \tag{70}\]
is an exact solution of coupled Eqs. (34) and (35) provided the relations (37) are satisfied and further
\[\omega_{1}=2[2(2-m)+3y]\beta^{2}\,,\ \ \omega_{2}=(2-m)\beta^{2}\,, \tag{71}\]
where \(y\) is again as given by Eq. (59).
It is worth pointing out that the solution (70) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXX**
In the limit \(m=1\), \(y=0\) or \(y=-2/3\). In case \(y=0\) and \(m=1\), all the four solutions XXVI to XXIX go over to the hyperbolic solution
\[u(x,t)=Ae^{i\omega_{1}t}{\rm sech}^{2}(\beta x)\,,\ \ v(x,t)=Be^{i\omega_{2}t}{\rm sech }(\beta x)\,, \tag{72}\]
provided relations (37) are satisfied and further
\[\omega_{1}=4\beta^{2}\,,\ \ \omega_{2}=\beta^{2}\,. \tag{73}\]
It is worth pointing out that the solution (72) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXXI**
On the other hand, in case \(y=-2/3\) and \(m=1\), the solutions XXVIII and XXIX go over to the solution
\[u(x,t)=Ae^{i\omega_{1}t}[{\rm sech}^{2}(\beta x)-2/3]\,,\ \ v(x,t)=Be^{i\omega_{2}t}{\rm sech }(\beta x)\,, \tag{74}\]
provided relations (37) are satisfied and further
\[\omega_{1}=0\,,\ \ \omega_{2}=\beta^{2}\,. \tag{75}\]
It is worth pointing out that the solution (74) is also a solution of the local NLS Eqs. (39) and (40).
We now present two novel superposed solutions.
**Solution XXXII**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}[A{\rm dn}^{2}(\beta x,m)+Ay+B\sqrt{m}{\rm cn}(\beta x, m){\rm dn}(\beta x,m)]\,, \tag{76}\]
\[v(x,t)=De^{i\omega_{2}t}[{\rm dn}(\beta x,m)+E\sqrt{m}{\rm cn}(\beta x,m)]\,, \tag{77}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[4g_{12}D^{2}=\beta^{2}\,,\ \ \omega_{1}=[\frac{(7+m)}{2}+3y]\beta^{2}\,,\ \ E=\pm D\,,\ \ B=\pm A\,,\] \[\omega_{2}=\frac{(1+m)\beta^{2}}{2}\,,\ \ 12g_{22}D^{2}=\beta^{2}\,,\] \[y=\frac{-(5-m)\pm\sqrt{1+14m+m^{2}}}{6}\,. \tag{78}\]
Note that the \(\pm\) signs in \(D,E\) are correlated with the signs in \(B\) and \(A\). Further, the solution is independent of the values of \(B\) and \(A\) except that they are constrained by \(B=\pm A\). In the limit \(m=1\), this solution goes over to the solution XXX.
It is worth pointing out that the solution (76) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXXIII**
It is easy to check that
\[u(x,t)=Ae^{i\omega_{1}t}[\sqrt{m}{\rm dn}(\beta x,m){\rm sn}(\beta x,m)+Bm{ \rm cn}(\beta x,m){\rm sn}(\beta x,m)]\,, \tag{79}\]
\[v(x,t)=e^{i\omega_{2}t}[D{\rm dn}(\beta x,m)+E\sqrt{m}{\rm cn}(\beta x,m)]\,, \tag{80}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[4g_{12}D^{2}=\beta^{2}\,,\ \ \omega_{1}=\omega_{2}=\frac{(1+m) \beta^{2}}{2}\,,\ \ E=\pm D\,,\ \ B=\pm A\,,\] \[12g_{22}D^{2}=\beta^{2}\,. \tag{81}\]
Note that the \(\pm\) signs in \(D,E\) are correlated with the signs in \(B\) and \(A\). Further, the solution is independent of the values of \(A\) and \(B\) except that they must satisfy the constraint \(B=\pm A\). In the limit \(m=1\), this solution goes over to the hyperbolic solution XXI.
It is worth pointing out that the solution (79) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite to those given in Eq. (81).
Remarkably, these coupled equations also admit several solutions in terms of inverse Lame polynomial solutions of order two in \(u(x,t)\) and of inverse Lame polynomial solutions of order one in \(v(x,t)\). We list these solutions one by one. It may be noted that all these solutions are only valid for \(0<m<1\).
**Solution XXXIV**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\sqrt{m}{\rm cn}(\beta x,m)}{{\rm dn}^{2}(\beta x,m)}\,,\ \ v(x,t)=e^{i\omega_{2}t}\frac{B\sqrt{m}{\rm cn}(\beta x,m)}{{\rm dn}(\beta x, m)}\,, \tag{82}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=-6\beta^{2}\,,\ \ g_{22}B^{2}=-2\beta^{2}\,, \tag{83}\]
\[\omega_{1}=-(4m+1)\beta^{2}\,,\ \ \omega_{2}=-(1+m)\beta^{2}\,. \tag{84}\]
It is worth pointing out that the solution (82) is also a solution of the local NLS Eqs. (39) and (40). Note that this solution is independent of the magnitude of \(A\). In fact this feature is common to all the solutions given below.
**Solution XXXV**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\sqrt{m}{\rm cn}(\beta x,m)}{{\rm dn}^{2}(\beta x,m)}\,,\ \ v(x,t)=e^{i\omega_{2}t}\frac{B\sqrt{m}{\rm sn}(\beta x,m)}{{\rm dn}(\beta x,m)}\,, \tag{85}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=-6(1-m)\beta^{2}\,,\ \ g_{22}B^{2}=-2(1-m)\beta^{2}\,, \tag{86}\]
\[\omega_{1}=\omega_{2}=(2m-1)\beta^{2}\,. \tag{87}\]
It is worth pointing out that the solution (85) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite to those given in Eq. (86).
**Solution XXXVI**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\sqrt{m}{\rm cn}(\beta x,m)}{{\rm dn}^{2}(\beta x,m)}\,,\ \ v(x,t)=e^{i\omega_{2}t}\frac{B}{{\rm dn}(\beta x,m)}\,, \tag{88}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=6(1-m)\beta^{2}\,,\ \ g_{22}B^{2}=2(1-m)\beta^{2}\,, \tag{89}\]
\[\omega_{1}=(5-4m)\beta^{2}\,,\ \ \omega_{2}=(2-m)\beta^{2}\,. \tag{90}\]
It is worth pointing out that the solution (88) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXXVII**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\sqrt{m}{\rm sn}(\beta x,m)}{{\rm dn}^{2}(\beta x,m)}\,,\ \ v(x,t)=e^{i\omega_{2}t}\frac{B{\rm cn}(\beta x,m)}{{\rm dn}(\beta x,m)}\,, \tag{91}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=-6\beta^{2}\,,\ \ g_{22}B^{2}=-2\beta^{2}\,, \tag{92}\]
\[\omega_{1}=\omega_{2}=-(1+m)\beta^{2}\,. \tag{93}\]
It is worth pointing out that the solution (91) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXXVIII**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\sqrt{m}{\rm sn}(\beta x,m)}{{\rm dn}^{2}(\beta x, m)}\,,\;\;v(x,t)=e^{i\omega_{2}t}\frac{B{\rm sn}(\beta x,m)}{{\rm dn}(\beta x,m)}\,, \tag{94}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=-6(1-m)\beta^{2}\,,\;\;g_{22}B^{2}=-2(1-m)\beta^{2}\,, \tag{95}\]
\[\omega_{1}=(5m-1)\beta^{2}\,,\;\;\omega_{2}=(2m-1)\beta^{2}\,. \tag{96}\]
It is worth pointing out that the solution (94) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite to those given in Eq. (95).
**Solution XXXIX**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\sqrt{m}{\rm sn}(\beta x,m)}{{\rm dn}^{2}(\beta x,m)}\,,\;\;v(x,t)=e^{i\omega_{2}t}\frac{B}{{\rm dn}(\beta x,m)}\,, \tag{97}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=6(1-m)\beta^{2}\,,\;\;g_{22}B^{2}=2(1-m)\beta^{2}\,, \tag{98}\]
\[\omega_{1}=-(5-m)\beta^{2}\,,\;\;\omega_{2}=(2-m)\beta^{2}\,. \tag{99}\]
It is worth pointing out that the solution (97) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXXX**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\sqrt{m}{\rm sn}(\beta x,m){\rm cn}(\beta x,m)} {{\rm dn}^{2}(\beta x,m)}\,, \tag{100}\]
\[v(x,t)=e^{i\omega_{2}t}\frac{B\sqrt{m}{\rm cn}(\beta x,m)}{{\rm dn}(\beta x,m )}\,, \tag{101}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=-6\beta^{2}\,,\;\;g_{22}B^{2}=-2\beta^{2}\,, \tag{102}\]
\[\omega_{1}=-(4+m)\beta^{2}\,,\;\;\omega_{2}=-(1+m)\beta^{2}\,. \tag{103}\]
It is worth pointing out that the solution (100) and (101) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXXXI**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\sqrt{m}{\rm sn}(\beta x,m){\rm cn}(\beta x,m)}{{ \rm dn}^{2}(\beta x,m)}\,, \tag{104}\]
\[v(x,t)=e^{i\omega_{2}t}\frac{B\sqrt{m}{\rm sn}(\beta x,m)}{{\rm dn}(\beta x,m) }\,, \tag{105}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=-6(1-m)\beta^{2}\,,\ \ g_{22}B^{2}=-2(1-m)\beta^{2}\,, \tag{106}\]
\[\omega_{1}=-(5m-4)\beta^{2}\,,\ \ \omega_{2}=(2m-1)\beta^{2}\,. \tag{107}\]
It is worth pointing out that the solution (104) and (105) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite to those given in Eq. (106).
**Solution XXXXII**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A\sqrt{m}{\rm sn}(\beta x,m){\rm cn}(\beta x,m)} {{\rm dn}^{2}(\beta x,m)}\,, \tag{108}\]
\[v(x,t)=e^{i\omega_{2}t}\frac{B}{{\rm dn}(\beta x,m)}\,, \tag{109}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=6(1-m)\beta^{2}\,,\ \ g_{22}B^{2}=2(1-m)\beta^{2}\,, \tag{110}\]
\[\omega_{1}=\omega_{2}=(2-m)\beta^{2}\,. \tag{111}\]
It is worth pointing out that the solution (108) and (109) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXXXIII**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}A\left[\frac{1}{{\rm dn}^{2}(\beta x,m)}+y\right]\,, \tag{112}\]
\[v(x,t)=e^{i\omega_{2}t}\frac{B\sqrt{m}{\rm cn}(\beta x,m)}{{\rm dn}(\beta x,m )}\,, \tag{113}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=-6\beta^{2}\,,\;\;g_{22}B^{2}=-2\beta^{2}\,, \tag{114}\]
\[\omega_{1}=-2(3+\frac{1}{y})\beta^{2}\,,\;\;\omega_{2}=(2-m)\beta^{2}\,, \tag{115}\]
where
\[y=\frac{-(2-m)\pm\sqrt{1-m+m^{2}}}{3(1-m)}\,. \tag{116}\]
It is worth pointing out that the solution (112) and (113) is also a solution of the local NLS Eqs. (39) and (40).
**Solution XXXXIV**
It is not difficult to check that
\[u(x,t)=e^{i\omega_{1}t}A\left[\frac{1}{{\rm dn}^{2}(\beta x,m)}+y\right]\,, \tag{117}\]
\[v(x,t)=e^{i\omega_{2}t}\frac{B\sqrt{m}{\rm sn}(\beta x,m)}{{\rm dn}(\beta x,m )}\,, \tag{118}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=-6(1-m)\beta^{2}\,,\;\;g_{22}B^{2}=-2(1-m)\beta^{2}\,, \tag{119}\]
\[\omega_{1}=-2[3(1-m)+\frac{1}{y}]\beta^{2}\,,\;\;\omega_{2}=(2m-1)\beta^{2}\,, \tag{120}\]
while \(y\) is again as given by Eq. (116).
It is worth pointing out that the solution (117) and (118) is also a solution of the local NLS Eqs. (39) and (40) except that the signs of \(g_{12}\) and \(g_{22}\) are opposite to those given in Eq. (119).
**Solution XXXXV**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}A\left[\frac{1}{{\rm dn}^{2}(\beta x,m)}+y\right]\,, \tag{121}\]
\[v(x,t)=e^{i\omega_{2}t}\frac{B}{{\rm dn}(\beta x,m)}\,, \tag{122}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}B^{2}=6(1-m)\beta^{2}\,,\;\;g_{22}B^{2}=(2-m)\beta^{2}\,, \tag{123}\]
\[\omega_{1}=-\frac{2}{y}\beta^{2}\,,\ \ \omega_{2}=(2-m)\beta^{2}\,, \tag{124}\]
while \(y\) is again as given by Eq. (116).
It is worth pointing out that the solution (121) and (122) is also a solution of the local NLS Eqs. (39) and (40).
Finally, we present a novel solution in which \(\phi_{1}\) is a combination of Lame polynomials of order two and inverse Lame polynomials of order two, while \(\phi_{2}\) is a combination of Lame polynomials of order one and inverse Lame polynomials of order one.
**Solution XXXXVI**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\bigg{(}A[{\rm dn}^{2}(\beta x,m)+\sqrt{1-m}y]+\frac{B( 1-m)}{{\rm dn}^{2}(\beta x,m)}\bigg{)}\,, \tag{125}\]
\[v(x,t)=e^{i\omega_{2}t}[D{\rm dn}(\beta x,m)+\frac{E\sqrt{(1-m)}}{{\rm dn}( \beta x,m)}]\,, \tag{126}\]
is an exact solution of coupled Eqs. (34) and (35) provided
\[g_{12}D^{2}=6\beta^{2}\,,\ \ \omega_{2}=(2-m\pm 6\sqrt{1-m})\beta^{2} \,,\ \ E=\pm D\,,\ \ B=\pm A\,,\] \[g_{22}D^{2}=2\beta^{2}\,,\ \ \omega_{1}=2[2(2-m)+3y\pm 6\sqrt{1-m}] \beta^{2}\,,\] \[y=\frac{-(2-m)\pm 6\sqrt{1-m}}{3}\,,\ \ 0<m<1\,. \tag{127}\]
Note that the \(\pm\) signs in the various relations are correlated. Further, the solution is independent of the values of \(A\) and \(B\) except that they must satisfy the constraint \(B=\pm A\).
It is worth pointing out that the solution (125) and (126) is also a solution of the local NLS Eqs. (39) and (40).
## 3 New Solutions of a Coupled Nonlocal mKdV Model
Let us consider the following nonlocal coupled mKdV model [12]
\[u_{t}(x,t)+u_{xxx}(x,t)+6[g_{11}u(x,t)u(-x,-t)+g_{12}v(x,t)v(-x,-t)]u_{x}(x,t)= 0\,, \tag{128}\]
\[v_{t}(x,t)+v_{xxx}(x,t)+6[g_{21}u(x,t)u(-x,-t)+g_{22}v(x,t)v(-x,-t)]v_{x}(x,t)= 0\,. \tag{129}\]
First of all, notice that unlike the coupled local mKdV, the nonlocal coupled mKdV Eqs. (128) and (129) admit a plane wave solution
\[u(x,t)=Ae^{i(k_{1}x-\omega_{1}t)}\,,\ \ v(x,t)=Be^{i(k_{2}x-\omega_{2}t)}\,, \tag{130}\]
provided
\[\omega_{1}+k_{1}^{3}=6k_{1}[g_{11}A^{2}+g_{12}B^{2}]\,,\ \ \omega_{2}+k_{2}^{3}=6k_{2}[g_{21}A^{2}+g_{22}B^{2}]\,. \tag{131}\]
It is obvious from here that even the uncoupled nonlocal mKdV equation
\[u_{t}(x,t)+u_{xxx}(x,t)+6gu(x,t)u(-x,-t)u_{x}(x,t)=0\,, \tag{132}\]
admits the plane wave solution
\[u(x,t)=Ae^{i(kx-\omega t)}\,, \tag{133}\]
provided
\[\omega+k^{3}=6gkA^{2}\,. \tag{134}\]
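The dispersion relation (134) can be confirmed symbolically. The following is a minimal SymPy sketch (illustrative, not part of the original derivation; variable names are ours) that substitutes the plane wave (133) into the uncoupled nonlocal mKdV Eq. (132) and checks that the residual vanishes once Eq. (134) is imposed.

```python
# Minimal SymPy check (illustrative, not part of the paper) that the plane wave (133)
# solves the uncoupled nonlocal mKdV Eq. (132) exactly when Eq. (134) holds.
import sympy as sp

x, t, A, g, k, w = sp.symbols('x t A g k w', real=True)

u = A * sp.exp(sp.I * (k * x - w * t))     # plane wave ansatz, Eq. (133)
u_nl = u.subs({x: -x, t: -t})              # nonlocal partner u(-x, -t)

# Left-hand side of Eq. (132): u_t + u_xxx + 6 g u(x,t) u(-x,-t) u_x
lhs = sp.diff(u, t) + sp.diff(u, x, 3) + 6 * g * u * u_nl * sp.diff(u, x)

# Impose the dispersion relation of Eq. (134) and simplify the residual
print(sp.simplify(lhs.subs(w, 6 * g * k * A**2 - k**3)))   # prints 0
```

The same substitution applied to the coupled Eqs. (128) and (129) reproduces the two conditions in Eq. (131).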
We now show that unlike the coupled local mKdV equations
\[u_{t}(x,t)+u_{xxx}(x,t)+6[g_{11}u^{2}(x,t)+g_{12}v^{2}(x,t)]u_{x}(x,t)=0\,, \tag{135}\]
\[v_{t}(x,t)+v_{xxx}(x,t)+6[g_{21}u^{2}(x,t)+g_{22}v^{2}(x,t)]v_{x}(x,t)=0\,, \tag{136}\]
the nonlocal mKdV Eqs. (128) and (129) admit solutions similar to those obtained by Zakharov and Shabat [16] for the local uncoupled NLS case.
**Solution I**
It is straightforward to check that
\[u(x,t)=\sqrt{n_{1}}[B\tanh(\xi)+iA]e^{i(k_{1}x-\omega_{1}t)}\,,\] \[v(x,t)=\sqrt{n_{2}}[D\tanh(\xi)+iE]e^{i(k_{2}x-\omega_{2}t)}\,, \ \ \xi=\beta(x-ct)\,, \tag{137}\]
is an exact solution of the nonlocal coupled Eqs. (128) and (129) provided
\[\beta^{2}=g_{11}n_{1}B^{2}+g_{12}n_{2}D^{2}=g_{21}n_{1}B^{2}+g_{22}n_{2}D^{2}\,, \tag{138}\]
\[\omega_{1}+k_{1}^{3}=-6\beta^{2}k_{1}-6k_{1}[g_{11}n_{1}A^{2}+g_{12}n_{2}E^{2} ]\,, \tag{139}\]
\[\omega_{2}+k_{2}^{3}=-6\beta^{2}k_{2}-6k_{2}[g_{21}n_{1}A^{2}+g_{22}n_{2}E^{2} ]\,, \tag{140}\]
\[Bck_{1}=-2Bk_{1}^{3}+4k_{1}B\beta^{2}-6k_{1}^{2}A\beta+B\omega_{1}\,, \tag{141}\]
\[Dck_{2}=-2Dk_{2}^{3}+4k_{2}D\beta^{2}-6k_{2}^{2}E\beta+D\omega_{2}\,. \tag{142}\]
**Solution II**
It is easy to check that
\[u(x,t)=\sqrt{n_{1}}[B\tanh(\xi)+iA]e^{i(k_{1}x-\omega_{1}t)}\,,\] \[v(x,t)=\sqrt{n_{2}}[E+iD\tanh(\xi)]e^{i(k_{2}x-\omega_{2}t)}\,, \tag{143}\]
is an exact solution of the nonlocal coupled Eqs. (128) and (129) provided
\[\beta^{2}=g_{11}n_{1}B^{2}-g_{12}n_{2}D^{2}=g_{21}n_{1}B^{2}-g_{22}n_{2}D^{2}\,, \tag{144}\]
\[\omega_{1}+k_{1}^{3}=-6\beta^{2}k_{1}-6k_{1}[g_{11}n_{1}A^{2}-g_{12}n_{2}E^{2} ]\,, \tag{145}\]
\[\omega_{2}+k_{2}^{3}=-6\beta^{2}k_{2}-6k_{2}[g_{21}n_{1}A^{2}-g_{22}n_{2}E^{2} ]\,, \tag{146}\]
\[Bck_{1}=-2Bk_{1}^{3}+4k_{1}B\beta^{2}-6k_{1}^{2}A\beta+B\omega_{1}\,, \tag{147}\]
\[Dck_{2}=-2Dk_{2}^{3}+4k_{2}D\beta^{2}+6k_{2}^{2}E\beta+D\omega_{2}\,. \tag{148}\]
**Solution III**
It is not difficult to check that
\[u(x,t)=\sqrt{n_{1}}[A+iB\tanh(\xi)]e^{i(k_{1}x-\omega_{1}t)}\,,\] \[v(x,t)=\sqrt{n_{2}}[E+iD\tanh(\xi)]e^{i(k_{2}x-\omega_{2}t)}\,, \tag{149}\]
is an exact solution of the nonlocal coupled Eqs. (128) and (129) provided
\[\beta^{2}=-[g_{11}n_{1}B^{2}+g_{12}n_{2}D^{2}]=-[g_{21}n_{1}B^{2}+g_{22}n_{2}D^{2}]\,, \tag{150}\]
\[\omega_{1}+k_{1}^{3}=-6\beta^{2}k_{1}+6k_{1}[g_{11}n_{1}A^{2}+g_{12}n_{2}E^{2} ]\,, \tag{151}\]
\[\omega_{2}+k_{2}^{3}=-6\beta^{2}k_{2}+6k_{2}[g_{21}n_{1}A^{2}+g_{22}n_{2}E^{2} ]\,, \tag{152}\]
\[Bck_{1}=-2Bk_{1}^{3}+4k_{1}B\beta^{2}+6k_{1}^{2}A\beta+B\omega_{1}\,, \tag{153}\]
\[Dck_{2}=-2Dk_{2}^{3}+4k_{2}D\beta^{2}+6k_{2}^{2}E\beta+D\omega_{2}\,. \tag{154}\]
Note that the coupled local mKdV Eqs. (135) and (136) do not admit the solutions I, II and III.
We now show that corresponding to any solution of the coupled nonlocal Eqs. (128) and (129) of the form
\[u(x,t)=u(\xi)\,,\ \ v(x,t)=v(\xi)\,,\ \ \xi=\beta(x-ct)\,, \tag{155}\]
where \(u(\xi)\) and \(v(\xi)\) are real, there are always solutions of the form
\[u(x,t)=u(\xi)e^{i(k_{1}x-\omega_{1}t)}\,,\ \ v(x,t)=v(\xi)e^{i(k_{2}x-\omega_{2}t)}\,,\ \ \xi=\beta(x-ct)\,. \tag{156}\]
The proof is straightforward. On substituting the ansatz as given by Eq. (156) in the coupled Eqs. (128) and (129) and equating the real and imaginary parts we obtain the following four relations
\[(\omega_{1}+k_{1}^{3})u(\xi)=3\beta^{2}k_{1}u^{\prime\prime}(\xi)+6k_{1}[g_{11}u (\xi)u(-\xi)+g_{12}v(\xi)v(-\xi)]u(\xi)\,, \tag{157}\]
\[(c+3k_{1}^{2})u^{\prime}(\xi)=\beta^{2}u^{\prime\prime\prime}(\xi)+6[g_{11}u( \xi)u(-\xi)+g_{12}v(\xi)v(-\xi)]u^{\prime}(\xi)\,, \tag{158}\]
\[(\omega_{2}+k_{2}^{3})v(\xi)=3\beta^{2}k_{2}v^{\prime\prime}(\xi)+6k_{2}[g_{21}u(\xi)u(-\xi)+g_{22}v(\xi)v(-\xi)]v(\xi)\,, \tag{159}\]
\[(c+3k_{2}^{2})v^{\prime}(\xi)=\beta^{2}v^{\prime\prime\prime}(\xi)+6[g_{21}u (\xi)u(-\xi)+g_{22}v(\xi)v(-\xi)]v^{\prime}(\xi)\,. \tag{160}\]
In the limit \(k_{1}=k_{2}=\omega_{1}=\omega_{2}=0\), Eqs. (157) and (159) disappear, while the other two equations lead to the two coupled equations for \(u(\xi)\) and \(v(\xi)\)
\[cu^{\prime}(\xi)=\beta^{2}u^{\prime\prime\prime}(\xi)+6[g_{11}u(\xi)u(-\xi)+g_ {12}v(\xi)v(-\xi)]u^{\prime}(\xi)\,, \tag{161}\]
\[cv^{\prime}(\xi)=\beta^{2}v^{\prime\prime\prime}(\xi)+6[g_{21}u(\xi)u(-\xi)+g_{22}v(\xi)v(-\xi)]v^{\prime}(\xi)\,. \tag{162}\]
We reemphasize that the corresponding local coupled mKdV Eqs. (135) and (136) admit neither the plane wave type solutions of the form (130) nor the solutions of the form (156). As an illustration, we now present six well-known solutions of the nonlocal coupled mKdV Eqs. (128) and (129), previously obtained without the plane wave factor, here reconsidered with the extra plane wave factor included.
**Solution IV**
It is not difficult to check that
\[u(x,t)=A\sqrt{m}{\rm sn}(\xi,m)e^{i(k_{1}x-\omega_{1}t)}\,,\ \ v(x,t)=B\sqrt{m}{\rm cn }(\xi,m)e^{i(k_{2}x-\omega_{2}t)}\,, \tag{163}\]
is an exact solution of the coupled Eqs. (128) and (129) provided
\[\begin{array}{ll}\beta^{2}=g_{11}A^{2}+g_{12}B^{2}=g_{21}A^{2}+g_{22}B^{2} \,,\\ c+3k_{1}^{2}=(5m-1)\beta^{2}-6mg_{11}A^{2}\,,&\omega_{2}+k_{2}^{3}=3(2m-1)k_{ 2}\beta^{2}-6mk_{2}g_{21}A^{2}\,,\\ c+3k_{2}^{2}=(2m-1)\beta^{2}-6mg_{21}A^{2}\,,&\omega_{1}+k_{1}^{3}=-3(1-m)k_{ 1}\beta^{2}-6mk_{1}g_{11}A^{2}\,.\end{array} \tag{164}\]
**Solution V**
It is easy to check that
\[u(x,t)=A\sqrt{m}{\rm sn}(\xi,m)e^{i(k_{1}x-\omega_{1}t)}\,,\ \ v(x,t)=B{\rm dn }(\xi,m)e^{i(k_{2}x-\omega_{2}t)}\,, \tag{165}\]
is an exact solution of the coupled Eqs. (128) and (129) provided
\[\beta^{2}=g_{11}A^{2}+g_{12}B^{2}=g_{21}A^{2}+g_{22}B^{2}\,,\] \[c+3k_{1}^{2}=(5-m)\beta^{2}-6g_{11}A^{2}\,,\ \ \omega_{2}+k_{2}^{3}=3(2-m)k_{2}\beta^{2}-6k_{2}g_{21}A^{2}\,,\] \[c+3k_{2}^{2}=(2-m)\beta^{2}-6g_{21}A^{2}\,,\ \ \omega_{1}+k_{1}^{3}=3(1-m)k_{1}\beta^{2}-6k_{1}g_{11}A^{2}\,. \tag{166}\]
**Solution VI**
In the limit \(m=1\), both the solutions IV and V go over to the hyperbolic solution
\[u(x,t)=A\tanh(\xi)e^{i(k_{1}x-\omega_{1}t)}\,,\ \ v(x,t)=B{\rm sech}(\xi)e^{i(k_{2}x- \omega_{2}t)}\,, \tag{167}\]
provided
\[\beta^{2}=g_{11}A^{2}+g_{12}B^{2}=g_{21}A^{2}+g_{22}B^{2}\,,\ \ \omega_{1}+k_{1}^{3}=-6k_{1}g_{11}A^{2}\,,\] \[c+3k_{1}^{2}=4\beta^{2}-6g_{11}A^{2}\,,\ \ \omega_{2}+k_{2}^{3}=3k_{2}\beta^{2}-6k_{2}g_{21}A^{2}\,,\] \[c+3k_{2}^{2}=\beta^{2}-6g_{21}A^{2}\,. \tag{168}\]
**Solution VII**
It is easy to check that
\[u(x,t)=A{\rm dn}(\xi,m)e^{i(k_{1}x-\omega_{1}t)}\,,\ \ v(x,t)=B\sqrt{m}{\rm cn }(\xi,m)e^{i(k_{2}x-\omega_{2}t)}\,, \tag{169}\]
is an exact solution of the coupled Eqs. (128) and (129) provided
\[\beta^{2}=g_{11}A^{2}+g_{12}B^{2}=g_{21}A^{2}+g_{22}B^{2}\,,\] \[c+3k_{1}^{2}=(2-m)\beta^{2}-6(1-m)g_{12}B^{2}\,,\] \[\omega_{2}+k_{2}^{3}=3k_{2}\beta^{2}-6(1-m)k_{2}g_{22}B^{2}\,,\] \[c+3k_{2}^{2}=(5-4m)\beta^{2}-6(1-m)g_{22}B^{2}\,,\] \[\omega_{1}+k_{1}^{3}=3(2-m)k_{1}\beta^{2}-6(1-m)k_{1}g_{12}B^{2}\,. \tag{170}\]
**Solution VIII**
In the limit \(m=1\), the solution VII goes over to the hyperbolic solution
\[u(x,t)=A{\rm sech}(\xi)e^{i(k_{1}x-\omega_{1}t)}\,,\ \ v(x,t)=B{\rm sech}(\xi)e^{i(k_{2}x- \omega_{2}t)}\,, \tag{171}\]
provided
\[\beta^{2}=g_{11}A^{2}+g_{12}B^{2}=g_{21}A^{2}+g_{22}B^{2}\,,\ \ \omega_{1}+k_{1}^{3}=3k_{1}\beta^{2}\,,\] \[c+3k_{1}^{2}=\beta^{2}\,,\ \ k_{2}=\pm k_{1}\,,\ \ \omega_{2}=\pm\omega_{1}\,. \tag{172}\]
Note that the \(\pm\) signs in Eq. (172) are correlated.
**Solution IX**
It is straightforward to check that
\[u(x,t)=A\tanh(\xi)e^{i(k_{1}x-\omega_{1}t)}\,,\ \ v(x,t)=B\tanh(\xi)e^{i(k_{2}x- \omega_{2}t)}\,, \tag{173}\]
is an exact solution of the coupled Eqs. (128) and (129) provided
\[\beta^{2}=g_{11}A^{2}+g_{12}B^{2}=g_{21}A^{2}+g_{22}B^{2}\,,\ \ \omega_{1}+k_{1}^{3}=-6k_{1}\beta^{2}\,,\] \[c+3k_{1}^{2}=-2\beta^{2}\,,\ \ k_{2}=\pm k_{1}\,,\ \ \omega_{2}=\pm \omega_{1}\,. \tag{174}\]
Note that the \(\pm\) signs in Eq. (174) are correlated. As mentioned above, the corresponding local coupled mKdV Eqs. (135) and (136) do not admit any of the above six solutions unless, of course, \(k_{1}=k_{2}=\omega_{1}=\omega_{2}=0\).
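As a sanity check of Solution IX (not taken from the paper; the parameter values below are arbitrary choices satisfying Eq. (174) and \(\beta^{2}=g_{11}A^{2}+g_{12}B^{2}\)), one can substitute the ansatz (173) into Eq. (128) with SymPy and evaluate the residual at a few sample points:

```python
# Numerical sanity check (illustrative, not from the paper) of Solution IX: substitute
# the ansatz (173) into Eq. (128) with arbitrary parameters chosen to satisfy Eq. (174)
# and beta^2 = g11*A^2 + g12*B^2, and evaluate the residual at a few sample points.
import sympy as sp

x, t = sp.symbols('x t', real=True)

A, B, g11, g12 = 1.0, 1.0, 0.5, 0.5        # then beta^2 = g11*A^2 + g12*B^2 = 1
beta = 1.0
k1 = 0.3
w1 = -6 * k1 * beta**2 - k1**3             # from omega_1 + k_1^3 = -6 k_1 beta^2
c = -2 * beta**2 - 3 * k1**2               # from c + 3 k_1^2 = -2 beta^2
k2, w2 = k1, w1                            # upper signs in Eq. (174)

xi = beta * (x - c * t)
u = A * sp.tanh(xi) * sp.exp(sp.I * (k1 * x - w1 * t))
v = B * sp.tanh(xi) * sp.exp(sp.I * (k2 * x - w2 * t))
u_nl = u.subs({x: -x, t: -t})
v_nl = v.subs({x: -x, t: -t})

# Residual of Eq. (128); Eq. (129) is checked in the same way with g21, g22
res = sp.diff(u, t) + sp.diff(u, x, 3) + 6 * (g11 * u * u_nl + g12 * v * v_nl) * sp.diff(u, x)
for xv, tv in [(0.2, 0.1), (1.5, -0.7), (-2.0, 0.4)]:
    print(sp.Abs(res.subs({x: xv, t: tv})).evalf())   # ~1e-15, i.e. zero up to round-off
```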
It easily follows from here that corresponding to any solution of the uncoupled nonlocal Eq. (132) of the form
\[u(x,t)=u(\xi)\,,\ \ \xi=\beta(x-ct)\,, \tag{175}\]
where \(u(\xi)\) is real, there is always a solution of the form
\[u(x,t)=u(\xi)e^{i(k_{1}x-\omega_{1}t)}\,. \tag{176}\]
We now show that the coupled Eqs. (128) and (129) admit several new solutions which we present one by one. For simplicity, we present these solutions without the plane wave factor. However, we emphasize that all the coupled solutions discussed below are also valid with the plane wave factor as discussed in the above six solutions.
**Solution X**
It is easy to check that
\[u=\frac{A[F+{\rm sech}(\xi)]}{[D+{\rm sech}(\xi)]}\,,\ \ v=\frac{B{\rm sech}(\xi)}{[D+{\rm sech}(\xi)]}\,,\ \ D>0\,, \tag{177}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c(D-F)=(D+2F)\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}F(D-F)=D^{2}\beta^{2}\,,\ \ 2g_{12}B^{2}F=(2D^{2}F-D-F)\beta^{2}\,. \tag{178}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (177) provided the relations (178) are satisfied.
**Solution XI**
It is easy to check that
\[u=\frac{A[F+{\rm sech}(\xi)]}{[D+{\rm sech}(\xi)]}\,,\ \ v=\frac{B}{[D+{\rm sech}(\xi)]}\,,\ \ D>0\,, \tag{179}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c(D-F)=(6FD^{2}-2D-F)\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}(D-F)=3D(1-2D^{2})\beta^{2}\,,\ \ 2g_{12}B^{2}=D(2D^{2}F-D-F)\beta^{2}\,. \tag{180}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (179) provided the relations (180) hold good.
**Solution XII**
It is not difficult to check that
\[u=\frac{A{\rm sech}(\xi)}{[D+{\rm sech}(\xi)]}\,,\ \ v=\frac{B}{[D+{\rm sech}(\xi)]}\,,\ \ D>0\,, \tag{181}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c=-2\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}=3(2D^{2}-1)\beta^{2}\,,\ \ 2g_{12}B^{2}=-D^{2}\beta^{2}\,. \tag{182}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (181) provided the relations (182) hold good.
**Solution XIII**
It is easy to check that
\[u=\frac{A[F+{\rm cos}(\xi)]}{[D+{\rm cos}(\xi)]}\,,\ \ v=\frac{B\,{\rm cos}(\xi)}{[D+{ \rm cos}(\xi)]}\,,\ \ D>1\,, \tag{183}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[D(F-D)c=(D^{2}-6+2DF)\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}F(F-D)=(D^{2}-2)\beta^{2}\,,\ \ 2g_{12}B^{2}DF=(D^{2}+DF-2)\beta^{2}\,. \tag{184}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (183) provided the relations (184) hold good.
**Solution XIV**
It is straightforward to check that
\[u=\frac{A[F+\cos(\xi)]}{[D+\cos(\xi)]}\,,\ \ v=\frac{B}{[D+\cos(\xi)]}\,,\ \ D>1\,, \tag{185}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[(D-F)c=(2D+F)\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}(D-F)=D\beta^{2}\,,\ \ 2g_{12}B^{2}=(D^{2}+DF-2)\beta^{2}\,. \tag{186}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (185) provided the relations (186) hold good.
**Solution XV**
It is easy to check that
\[u=\frac{A\cos(\xi)}{[D+\cos(\xi)]}\,,\ \ v=\frac{B}{[D+\cos(\xi)]}\,,\ \ D>1\,, \tag{187}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c=2\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}=\beta^{2}\,,\ \ 2g_{12}B^{2}=(D^{2}-2)\beta^{2}\,. \tag{188}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (187) provided the relations (188) hold good.
**Solution XVI**
It is not difficult to check that
\[u=\frac{A[F+\tanh(\xi)]}{[D+\tanh(\xi)]}\,,\ \ v=\frac{B\tanh(\xi)}{[D+\tanh( \xi)]}\,,\ \ D>1\,, \tag{189}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[cD(F-D)=2(2DF+D^{2}-3)\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[g_{11}A^{2}F(F-D)=(D^{2}-1)\beta^{2}\,,\ \ g_{12}B^{2}DF=(DF+D^{2}-1-D^{3}F)\beta^{2}\,. \tag{190}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (189) provided the relations (190) hold good.
**Solution XVII**
It is easy to check that
\[u=\frac{A[F+\tanh(\xi)]}{[D+\tanh(\xi)]}\,,\,\,\,\,v=\frac{B}{[D+\tanh(\xi)]}\,, \,\,\,\,D>1\,, \tag{191}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c(D-F)=2(2D+F-3FD^{2})\beta^{2}\,,\,\,\,\,g_{11}=g_{21}\,,\,\,\,g _{12}=g_{22}\,,\] \[g_{11}A^{2}(D-F)=D(1-D^{2})\beta^{2}\,,\,\,\,\,g_{12}B^{2}=(D^{2} -1)(1-DF)\beta^{2}\,. \tag{192}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (191) provided the relations (192) hold good.
**Solution XVIII**
It is straightforward to check that
\[u=\frac{A[F+\mbox{dn}(\xi,m)]}{[D+\mbox{dn}(\xi,m)]}\,,\,\,\,v=\frac{B\mbox{dn }(\xi,m)}{[D+\mbox{dn}(\xi,m)]}\,,\,\,\,D>0\,, \tag{193}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[cD(D-F)=[(2-m)D(D+2F)-6(1-m)]\beta^{2}\,,\,\,\,\,g_{11}=g_{21}\,, \,\,\,g_{12}=g_{22}\,,\] \[2g_{11}A^{2}F(D-F)=[(2-m)D^{2}-2(1-m)]\beta^{2}\,,\] \[2g_{12}B^{2}DF=[2(1-m)-(2-m)D(D+F)+2D^{3}F]\beta^{2}\,. \tag{194}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (193) provided the relations (194) hold good.
**Solution XIX**
It is easy to check that
\[u=\frac{A[F+\mbox{dn}(\xi,m)]}{[D+\mbox{dn}(\xi,m)]}\,,\,\,\,\,v=\frac{B}{[D +\mbox{dn}(\xi,m)]}\,,\,\,\,D>0\,, \tag{195}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c(D-F)=[6FD^{2}-(2-m)(2D+F)]\beta^{2}\,,\,\,\,\,g_{11}=g_{21}\,, \,\,\,g_{12}=g_{22}\,,\] \[2g_{11}A^{2}(D-F)=D[2D^{2}-(2-m)]\beta^{2}\,,\] \[2g_{12}B^{2}=[2(1-m)-(2-m)D(D+F)-2D^{3}F]\beta^{2}\,. \tag{196}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (195) provided the relations (196) hold good.
**Solution XX**
It is not difficult to check that
\[u=\frac{A{\rm dn}(\xi,m)}{[D+{\rm dn}(\xi,m)]}\,,\ \ v=\frac{B}{[D+{\rm dn}(\xi,m)]}\,, \ \ D>0\,, \tag{197}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c=-2(2-m)\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}=[2D^{2}-(2-m)]\beta^{2}\,,\ \ 2g_{12}B^{2}=[2(1-m)-(2-m)D^{2}]\beta^{2}\,. \tag{198}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (197) provided the relations (198) hold good.
**Solution XXI**
It is easy to check that
\[u=\frac{A[F+{\rm cn}(\xi,m)]}{[D+{\rm cn}(\xi,m)]}\,,\ \ v=\frac{B{\rm cn}(\xi,m)}{[D+{\rm cn }(\xi,m)]}\,,\ \ D>1\,, \tag{199}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[D(D-F)c=[6(1-m)+(2m-1)D(D+2F)]\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}F(D-F)=[2(1-m)+(2m-1)D^{2}]\beta^{2}\,,\] \[2g_{12}B^{2}DF=[2mD^{3}F-(2m-1)D(D+F)-2(1-m)]\beta^{2}\,. \tag{200}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (199) provided the relations (200) hold good.
**Solution XXII**
It is easy to check that
\[u=\frac{A[F+{\rm cn}(\xi,m)]}{[D+{\rm cn}(\xi,m)]}\,,\ \ v=\frac{B}{[D+{\rm cn }(\xi,m)]}\,,\ \ D>1\,, \tag{201}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[(D-F)c=[6mFD^{2}-(2m-1)(2D+F)]\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}(D-F)=D[2m-1-2mD^{2}]\beta^{2}\,,\] \[2g_{12}B^{2}=-[2(1-m)+D(F+D)+2mD^{3}F]\beta^{2}\,. \tag{202}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (201) provided the relations (202) hold good.
**Solution XXIII**
It is easy to check that
\[u=\frac{A{\rm cn}(\xi,m)}{[D+{\rm cn}(\xi,m)]}\,,\ \ v=\frac{B}{[D+{\rm cn}(\xi,m)]}\,,\ \ D>1\,, \tag{203}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c=-2(2m-1)\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}=[2mD^{2}-(2m-1)]\beta^{2}\,,\ \ 2g_{12}B^{2}=-[(2m-1)D^{2}+2(1-m)]\beta^{2}\,. \tag{204}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (203) provided the relations (204) hold good.
We now present six soliton solutions with a power law tail.
**Solution XXIV**
It is not difficult to check that
\[u=\frac{A(F+x^{2})}{D+x^{2}}\,,\ \ v=\frac{Bx^{2}}{D+x^{2}}\,,\ \ D>0\,, \tag{205}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[cD(F-D)=6(F+2D)\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[F(F-D)g_{11}A^{2}=3D\,,\ \ FDg_{12}B^{2}=F+3D\,. \tag{206}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (205) provided the relations (206) hold good.
**Solution XXV**
It is straightforward to check that
\[u=\frac{A(F+x^{2})}{D+x^{2}}\,,\ \ v=\frac{B}{D+x^{2}}\,,\ \ D>0\,, \tag{207}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c(D-F)=6\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[(F-D)g_{11}A^{2}=1\,,\ \ (D-F)g_{12}B^{2}=F+3D\,. \tag{208}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (207) provided the relations (208) hold good.
**Solution XXVI**
It is easy to check that
\[u=\frac{Ax^{2}}{D+x^{2}}\,,\ \ v=\frac{B}{D+x^{2}}\,,\ \ D>0\,, \tag{209}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[cD=6\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[Dg_{11}A^{2}=1\,,\ \ g_{12}B^{2}=3D\,. \tag{210}\]
It is worth pointing out that the corresponding local coupled mKdV Eqs. (135) and (136) also admit the solution (209) provided the relations (210) hold good.
**Solution XXVII**
It is not difficult to check that
\[u=\frac{A(F+x^{2})}{D+x^{2}}\,,\ \ v=\frac{Bx}{\sqrt{D+x^{2}}}\,,\ \ D>0\,, \tag{211}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[cD(F-D)^{2}=12(F^{2}+2DF-D^{2})=3D(4F^{2}-D^{2}+2FD)\,,\] \[(F-D)^{2}g_{11}A^{2}=4D\,,\ \ D(F-D)g_{12}B^{2}=2(F+3D)\,,\] \[(F-D)^{2}g_{21}A^{2}=(5/2)D^{2}\,,\ \ (F-D)g_{22}B^{2}=2F+3D\,. \tag{212}\]
On equating the two expressions for \(c\) we find that \(D\neq 4\) and further if \(D=1\) then \(F=1/2\). Moreover, if \(D\neq 1,4\), then \(F\) is given in terms of \(D\) by
\[F=\frac{D(4-D)\pm D\sqrt{32-28D+5D^{2}}}{4(D-1)}\,. \tag{213}\]
Notice that \(F\) is real provided \(D<8/5\) or \(D>4\).
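The reality condition quoted above follows from the discriminant \(32-28D+5D^{2}\) appearing in Eq. (213); a short SymPy check (illustrative only) recovers the two intervals:

```python
# Quick SymPy check (illustrative) of the reality condition for F in Eq. (213):
# F is real when the discriminant 32 - 28*D + 5*D**2 is non-negative.
import sympy as sp

D = sp.symbols('D', real=True)
disc = 32 - 28 * D + 5 * D**2
print(sp.solve(sp.Eq(disc, 0), D))                    # [8/5, 4]
print(sp.solve_univariate_inequality(disc >= 0, D))   # the intervals D <= 8/5 and D >= 4
```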
**Solution XXVIII**
It is easy to check that
\[u=\frac{Ax^{2}}{D+x^{2}}\,,\ \ v=\frac{Bx}{\sqrt{D+x^{2}}}\,,\ \ D>0\,, \tag{214}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c=-3\,,\ \ D=4\,,\ \ g_{11}A^{2}=1\,,\ \ g_{12}B^{2}=-3/2\,,\] \[g_{21}A^{2}=5/2\,,\ \ g_{22}B^{2}=-3\,. \tag{215}\]
**Solution XXIX**
It is straightforward to check that
\[u=\frac{A}{D+x^{2}}\,,\ \ v=\frac{B}{\sqrt{D+x^{2}}}\,,\ \ D>0\,, \tag{216}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (128) and (129) provided
\[c=12\,,\ \ D=1\,,\ \ g_{11}A^{2}=4\,,\ \ g_{12}B^{2}=2\,,\] \[g_{21}A^{2}=5/2\,,\ \ g_{22}B^{2}=2\,. \tag{217}\]
## 4 Conclusion And Open Problems
In this paper we have obtained several new solutions of the coupled AM variant of the nonlocal NLS equation as well as the coupled mKdV equations. Moreover, we compared and contrasted these solutions with the solutions (in case they are admitted) of the corresponding coupled NLS or mKdV equations, respectively. In the nonlocal NLS case, particular mention may be made of the host of the periodic as well as the hyperbolic nonreciprocal solutions in a special subclass of the coupled nonlocal NLS solutions. In the nonlocal mKdV case, we have shown that in contrast to the local mKdV case, the nonlocal coupled mKdV equations admit plane wave solutions as well as moving soliton solutions. This paper raises several open questions, some of which are as follows.
1. While we have obtained a large number of nonreciprocal solutions for the coupled nonlocal (as well as local) NLS equations, we have so far not been able to obtain similar solutions for either the coupled nonlocal or the local mKdV equations. It is worthwhile asking if nonreciprocal solutions can also be obtained for the mKdV or other coupled nonlocal or even local nonlinear equations.
2. Unlike the local mKdV equations we have shown that the nonlocal mKdV equations admit plane wave solutions as well as soliton solutions multiplied by the plane wave factor. It is worth asking if similar solutions can also be obtained for the other nonlocal equations.
3. For the full coupled nonlocal NLS equations as well as coupled nonlocal mKdV equations we have obtained novel periodic as well as hyperbolic solutions. One obvious question is, can one similarly obtain exact solutions of the other coupled nonlocal equations, e.g., coupled Hirota or coupled KdV equations or coupled Yang-type nonlocal NLS? Further, in the coupled nonlocal KdV case do we again get plane wave solutions as well as soliton solutions multiplied by the plane wave factor?
## 5 Acknowledgment
One of us (AK) is grateful to Indian National Science Academy (INSA) for the award of INSA Honorary Scientist position at Savitribai Phule Pune University. The work at Los Alamos National Laboratory was carried out under the auspices of the US DOE and NNSA under contract No. DEAC52-06NA25396.
## 6 Appendix A: Novel Periodic Superposed Solutions of a Coupled Local NLS Model
We now present those solutions of the coupled local NLS Eqs. (6) and (7) which are, however, not the solutions of the nonlocal coupled Eqs. (1) and (2).
In this context, it is interesting to observe that unlike the coupled nonlocal AM Eqs. (1) and (2), the (local) coupled NLS Eqs. (6) and (7) admit the plane wave solution
\[u(x,t)=Ae^{i(\omega_{1}t-k_{1}x)}\,,\;\;v(x,t)=Be^{i(\omega_{2}t-k_{2}x)}\,, \tag{218}\]
provided
\[\omega_{1}+k_{1}^{2}=[g_{11}A^{2}+g_{12}B^{2}]\,,\] \[\omega_{2}+k_{2}^{2}=[g_{21}A^{2}+g_{22}B^{2}]\,. \tag{219}\]
We now show that corresponding to any stationary soliton solution of the coupled local NLS Eqs. (6) and (7), one always has a corresponding moving soliton solution. The proof is straightforward. Let us assume that there is a stationary soliton solution of the form
\[u(x,t)=u(\beta x)e^{i\omega_{1}t}\,,\,\,\,v(x,t)=v(\beta x)e^{i\omega_{2}t}\,, \tag{220}\]
which is an exact solution of the coupled Eqs. (6) and (7). Using the ansatz (220) in Eqs. (6) and (7), we find that it is an exact solution provided
\[\beta^{2}u^{\prime\prime}(\eta)=\omega_{1}u+[g_{11}|u|^{2}+g_{12} |v|^{2}]u\,,\] \[\beta^{2}v^{\prime\prime}(\eta)=\omega_{2}v+[g_{21}|u|^{2}+g_{22} |v|^{2}]v\,. \tag{221}\]
Let us now consider the moving soliton solutions. Let
\[u(x,t)=u(\xi)e^{i(\omega_{1}t-k_{1}x)}\,,\,\,\,\,v(x,t)=v(\xi)e^{i(\omega_{2}t -k_{2}x)}\,,\,\,\,\,\xi=\beta(x-ct)\,, \tag{222}\]
be an exact solution of the coupled local NLS Eqs. (6) and (7). On substituting this ansatz in Eqs. (6) and (7), we find that it is an exact solution provided
\[k_{1}=k_{2}\,,\,\,\,\,c=-2k_{1}\,,\,\,\,\,\beta^{2}u^{\prime \prime}(\xi)=(\omega_{1}+k_{1}^{2})u+[g_{11}|u|^{2}+g_{12}|v|^{2}]u\,,\] \[\beta^{2}v^{\prime\prime}(\xi)=(\omega_{2}+k_{1}^{2})v+[g_{21}|u |^{2}+g_{22}|v|^{2}]v\,. \tag{223}\]
Now notice that in the limit \(k_{1}=k_{2}=c=0\), the coupled Eqs. (223) indeed go over to Eqs. (221).
For simplicity, we now only present the new stationary soliton solutions of the local coupled NLS Eqs. (6) and (7). But as proved above, it is understood that there are corresponding moving soliton solutions.
**Solution IA**
It is not difficult to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A{\rm dn}(\beta x,m)}{D+{\rm sn}(\beta x,m)}\,, \,\,\,\,v(x,t)=e^{i\omega_{2}t}\frac{B{\rm cn}(\beta x,m)}{D+{\rm sn}(\beta x,m)}\,, \tag{224}\]
with \(A,B>0\,,D>1\), is an exact solution of the coupled Eqs. (6) and (7) provided
\[\omega_{1}=\omega_{2}=\frac{(1+m)\beta^{2}}{2}\,,\,\,\,\,g_{12}B ^{2}=\frac{3(mD^{2}-1)\beta^{2}}{2}\,,\,\,\,g_{11}A^{2}=\frac{(D^{2}-1)\beta^{ 2}}{2}\,,\] \[g_{22}B^{2}=\frac{(mD^{2}-1)\beta^{2}}{2}\,,\,\,\,\,g_{21}A^{2}= \frac{3(D^{2}-1)\beta^{2}}{2}\,. \tag{225}\]
Thus \(g_{12},g_{22}<0\) if \(1<D^{2}<1/m\) and \(g_{12},g_{22}>0\) if \(D^{2}>1/m\), while \(g_{11},g_{21}>0\). On the other hand, \(\omega_{1},\omega_{2}>0\).
**Solution IIA**
It is easy to check that
\[u(x,t)=e^{i\omega_{1}t}\frac{A[F+\tanh(\beta x)]}{D+\tanh(\beta x)}\,,\;\;v(x,t) =e^{i\omega_{2}t}\frac{B{\rm sech}(\beta x)}{D+\tanh(\beta x)}\,, \tag{226}\]
with \(A,B>0,D>1\), is an exact solution of the coupled local NLS Eqs. (6) and (7) (but not nonlocal coupled Eqs. (1) and (2)) provided
\[\omega_{1}=4\beta^{2}\,,\;\;\omega_{2}=\beta^{2}\,,\;\;F=\pm 1 \,,\;\;g_{11}A^{2}=(D\pm 1)^{2}\beta^{2}\,,\] \[g_{12}B^{2}=2(D^{2}-1)\beta^{2}\,,\;\;g_{22}=g_{12}\,,\;\;g_{21}= 0\,. \tag{227}\]
Note that the \(\pm\) signs here are correlated.
**Solution IIIA**
It is straightforward to check that
\[\phi_{1}=A\left[\frac{1-m}{{\rm dn}^{2}(\beta x,m)}+y\right]+\frac{B\sqrt{m}( 1-m){\rm sn}(\beta x,m)}{{\rm dn}^{2}(\beta x,m)}\,, \tag{228}\]
\[\phi_{2}=\frac{D\sqrt{1-m}}{{\rm dn}(\beta x,m)}+\frac{E\sqrt{m(1-m)}{\rm sn}(\beta x,m)}{{\rm dn}(\beta x,m)}\,, \tag{229}\]
is an exact nonreciprocal solution of the unusual coupled local NLS Eqs. (39) and (40) (but not the nonlocal Eqs. (34) and (35)) provided \(0<m<1\) and further
\[d_{1}D^{2}=-\frac{3\beta^{2}}{2}\,,\;\;a_{1}=[\frac{(7+m)}{2}+3(1-m)y]\beta^{2}\,,\;\;E=\pm D\,,\;\;B=\pm A\,,\] \[a_{2}=\frac{(1+m)\beta^{2}}{2}\,,\;\;d_{2}D^{2}=-\frac{\beta^{2}}{2}\,,\;\;y=\frac{-(5-m)\pm\sqrt{1+14m+m^{2}}}{6}\,. \tag{230}\]
Note that the \(\pm\) signs in \(D,E\) are correlated with the signs in \(B\) and \(A\). Further, the solution is valid for arbitrary values of \(A\).
**Solution IVA**
It is easy to check that
\[\phi_{1}=\frac{A\sqrt{m}{\rm cn}(\beta x,m)}{{\rm dn}^{2}(\beta x,m)}+\frac{ Bm{\rm cn}(\beta x,m){\rm sn}(\beta x,m)}{{\rm dn}^{2}(\beta x,m)}\,, \tag{231}\]
\[\phi_{2}=\frac{D\sqrt{1-m}}{{\rm dn}(\beta x,m)}+\frac{E\sqrt{m(1-m)}{\rm sn }(\beta x,m)}{{\rm dn}(\beta x,m)}\,, \tag{232}\]
is an exact nonreciprocal solution of the unusual coupled local NLS Eqs. (39) and (40) (but not the solutions of the coupled nonlocal Eqs. (34) and (35)) provided
\[d_{1}D^{2}=-\frac{3\beta^{2}}{2}\,,\ \ a_{1}=a_{2}=\frac{(1+m) \beta^{2}}{2}\,,\ \ E=\pm D\,,\ \ B=\pm A\,,\] \[d_{2}D^{2}=-\frac{\beta^{2}}{2}\,,\ \ 0<m<1\,. \tag{233}\]
Note that the \(\pm\) signs in \(D,E\) are correlated with the signs in \(B\) and \(A\). Further, the solution is valid for arbitrary values of \(A\).
We now present three solutions of the coupled local NLS where in general \(k_{1}\neq k_{2}\). Further, in these cases solutions with \(c=0\) are also allowed even though \(k_{1},k_{2}\) are still nonzero and in general unequal.
**Solution VA**
It is not difficult to check that
\[u(x,t)=\sqrt{n_{1}}[\cos(\theta_{1})\tanh(\xi)+i\sin(\theta_{1}) ]e^{i(k_{1}x-\omega_{1}t)}\,,\] \[v(x,t)=\sqrt{n_{2}}[\cos(\theta_{2})\tanh(\xi)+i\sin(\theta_{2}) ]e^{i(k_{2}x-\omega_{2}t)}\,,\ \ \xi=\beta(x-ct)\,, \tag{234}\]
is an exact solution of the local NLS coupled Eqs. (6) and (7) provided
\[n_{1}g_{11}\cos^{2}(\theta_{1})+n_{2}g_{12}\cos^{2}(\theta_{2})=n _{1}g_{21}\cos^{2}(\theta_{1})+n_{2}g_{22}\cos^{2}(\theta_{2})=-2\beta^{2}\,, \tag{235}\] \[\omega_{1}-k_{1}^{2}+n_{1}g_{11}+n_{2}g_{12}=0\,,\] (236) \[\omega_{2}-k_{2}^{2}+n_{1}g_{21}+n_{2}g_{22}=0\,,\] (237) \[c=2k_{1}+2\beta\tan(\theta_{1})=2k_{2}+2\beta\tan(\theta_{2})\,. \tag{238}\]
**Solution VIA**
It is easy to check that
\[u(x,t)=\sqrt{n_{1}}[\cos(\theta_{1})\tanh(\xi)+i\sin(\theta_{1}) ]e^{i(k_{1}x-\omega_{1}t)}\,,\] \[v(x,t)=\sqrt{n_{2}}[\sin(\theta_{2})+i\cos(\theta_{2})\tanh(\xi )]e^{i(k_{2}x-\omega_{2}t)}\,,\ \ \xi=\beta(x-ct)\,, \tag{239}\]
is an exact solution of the local NLS coupled Eqs. (6) and (7) provided Eqs. (235) to Eqs. (237) are satisfied and further
\[c=2k_{1}+2\beta\tan(\theta_{1})=2k_{2}-2\beta\tan(\theta_{2})\,. \tag{240}\]
**Solution VIIA**
It is straightforward to check that
\[u(x,t)=\sqrt{n_{1}}[\sin(\theta_{1})+i\cos(\theta_{1})\tanh(\xi)]e^{i(k_{1}x-\omega_{1}t)}\,,\] \[v(x,t)=\sqrt{n_{2}}[\sin(\theta_{2})+i\cos(\theta_{2})\tanh(\xi)]e^{i(k_{2}x-\omega_{2}t)}\,,\ \ \xi=\beta(x-ct)\,, \tag{241}\]
is an exact solution of the local NLS coupled Eqs. (6) and (7) provided Eqs. (235) to Eqs. (237) are satisfied and further
\[c=2k_{1}-2\beta\tan(\theta_{1})=2k_{2}-2\beta\tan(\theta_{2})\,. \tag{242}\]
## 7 Appendix B: Novel Periodic Superposed Solutions of a Coupled Local mKdV Model
We now present those solutions of the coupled local mKdV Eqs. (135) and (136) which are however not the solutions of the coupled nonlocal Eqs. (128) and (129).
**Solution IB**
It is easy to check that
\[u(x,t)=\frac{A\tanh(\xi)}{[D+\tanh(\xi)]}\,,\ \ v(x,t)=\frac{B}{[D+\tanh(\xi) ]}\,,\ \ D>1\,, \tag{243}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (135) and (136) provided
\[c=4\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[g_{11}A^{2}=(1-D^{2})\beta^{2}\,,\ \ g_{12}B^{2}=(D^{2}-1)\beta^{2}\,. \tag{244}\]
**Solution IIB**
It is not difficult to check that
\[u(x,t)=\frac{A[F+\mbox{sn}(\xi,m)]}{[D+\mbox{sn}(\xi,m)]}\,,\ \ v(x,t)=\frac{B\mbox{sn}(\xi,m)}{[D+\mbox{sn}(\xi,m)]}\,,\ \ D>1\,, \tag{245}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (135) and (136) provided
\[cD(F-D)=2[(1+m)D(D+2F)-3]\beta^{2}\,,\ \ g_{11}=g_{21}\,,\ \ g_{12}=g_{22}\,,\] \[2g_{11}A^{2}F(F-D)=[(1+m)D^{2}-2]\beta^{2}\,,\] \[2g_{12}B^{2}DF=[(1+m)D(D+F)-2-2mD^{3}F]\beta^{2}\,. \tag{246}\]
**Solution IIIB**
It is easy to check that
\[u(x,t)=\frac{A[F+\mbox{sn}(\xi,m)]}{[D+\mbox{sn}(\xi,m)]}\,,\;\;v(x,t)=\frac{B}{ [D+\mbox{sn}(\xi,m)]}\,,\;\;A,B>0\,,\;\;D>1\,, \tag{247}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (135) and (136) provided
\[c(D-F)=[(2D+F)(1+m)-6mFD^{2}]\beta^{2}\,,\;\;g_{11}=g_{21}\,,\;\;g_{12}=g_{22}\,,\] \[2g_{11}A^{2}(D-F)=D[1+m-2mD^{2}]\beta^{2}\,,\] \[2g_{12}B^{2}=[(1+m)(D^{2}+DF)-2-2mFD^{3}]\beta^{2}\,. \tag{248}\]
**Solution IVB**
It is straightforward to check that
\[u(x,t)=\frac{A\mbox{sn}(\xi,m)}{[D+\mbox{sn}(\xi,m)]}\,,\;\;v(x,t)=\frac{B}{ [D+\mbox{sn}(\xi,m)]}\,,\;\;D>1\,, \tag{249}\]
where \(\xi=\beta(x-ct)\) is an exact solution of the coupled Eqs. (135) and (136) provided
\[c=2(1+m)\beta^{2}\,,\;\;g_{11}=g_{21}\,,\;\;g_{12}=g_{22}\,,\] \[2g_{11}A^{2}=(1+m-2mD^{2})\beta^{2}\,,\;\;2g_{12}B^{2}=[(1+m)D^{2 }-2]\beta^{2}\,. \tag{250}\]
|
2303.02615 | Estimating Extreme 3D Image Rotation with Transformer Cross-Attention | The estimation of large and extreme image rotation plays a key role in
multiple computer vision domains, where the rotated images are related by a
limited or a non-overlapping field of view. Contemporary approaches apply
convolutional neural networks to compute a 4D correlation volume to estimate
the relative rotation between image pairs. In this work, we propose a
cross-attention-based approach that utilizes CNN feature maps and a
Transformer-Encoder, to compute the cross-attention between the activation maps
of the image pairs, which is shown to be an improved equivalent of the 4D
correlation volume, used in previous works. In the suggested approach, higher
attention scores are associated with image regions that encode visual cues of
rotation. Our approach is end-to-end trainable and optimizes a simple
regression loss. It is experimentally shown to outperform contemporary
state-of-the-art schemes when applied to commonly used image rotation datasets
and benchmarks, and establishes a new state-of-the-art accuracy on these
datasets. We make our code publicly available. | Shay Dekel, Yosi Keller, Martin Cadik | 2023-03-05T09:07:26Z | http://arxiv.org/abs/2303.02615v2 | # Estimating Extreme 3D Image Rotation with Transformer Cross-Attention
###### Abstract
The estimation of large and extreme image rotation plays a key role in multiple computer vision domains, where the rotated images are related by a limited or a non-overlapping field of view. Contemporary approaches apply convolutional neural networks to compute a 4D correlation volume to estimate the relative rotation between image pairs. In this work, we propose a cross-attention-based approach that utilizes CNN feature maps and a Transformer-Encoder, to compute the cross-attention between the activation maps of the image pairs, which is shown to be an improved equivalent of the 4D correlation volume, used in previous works. In the suggested approach, higher attention scores are associated with image regions that encode visual cues of rotation. Our approach is end-to-end trainable and optimizes a simple regression loss. It is experimentally shown to outperform contemporary state-of-the-art schemes when applied to commonly used image rotation datasets and benchmarks, and establishes a new state-of-the-art accuracy on these datasets. We make our code publicly available1.
Footnote 1: [https://anonymous.4open.science/r/AttExtremeRotation-A467/](https://anonymous.4open.science/r/AttExtremeRotation-A467/)
## 1 Introduction
Estimating the relative pose between a pair of images is a crucial task in computer vision, which is used in various applications such as indoor navigation, augmented reality, autonomous driving, 3D reconstruction [43, 39], camera localization [5, 44, 48], simultaneous localization and mapping [12, 37], and novel view synthesis [34, 41]. The current approach to image registration involves extracting features, matching them, and establishing correspondences between them. However, this approach is ineffective for input pairs with little or no overlap, making it difficult to establish sufficient feature correspondences for matching, such as in the images shown in Fig. 1.
Many applications [47, 31, 1] require an accurate estimation of the rotations between images. The common approach to estimating extreme 3D rotations for images with
Figure 1: The estimation of extreme 3D image rotations. First row: Image pairs with a small overlap. Second row: non-overlapping image pairs. The proposed scheme estimates the relative rotation between the image pairs.
very limited or no-overlap as in Fig. 1 relates to the seminal work by Coughlan and Yuille [10] who introduced a technique that is premised on linear structures within an image, arising primarily from three directions that are mutually orthogonal, one vertical (building walls), and two horizontal (ground-level pavements, roads, etc.). Similarly, the "Single View Metrology" by Criminisi et al. [11] and its extensions [59, 25, 40], use parallel lines in the image and their corresponding vanishing points [20] for camera calibration. Moreover, the relative rotation of a camera can also be estimated using illumination cues [2], by analyzing the directions of the lighting and cast shadows.
In this work, we propose a novel deep learning-based method for determining the relative rotation between two images that are related by significant and extreme rotations. Our approach does not follow the classical formulations [10, 11] that explicitly detect handcrafted image cues such as lines, shadows and their corresponding vanishing points. Instead, our method involves designing a deep neural network that performs direct regression of the relative rotation from the input images.
Inspired by the recent successful applications of Transformers [51] in computer vision tasks such as object detection [8] and image recognition [23], we adapt a Transformer-Encoder to compute a stacked multihead attention to encode the _cross-attention_ between the two latent representations of the image pairs. Thus, a Transformer-Encoder [51] computes an improved stacked representation of the 4D correlation volume (4DCV) used in previous works [49, 38, 29, 22, 15, 16], where the 4DCVs were calculated by inner products. Instead of using a single layer of \(N^{2}\) inner products as in the 4DCV, the proposed Transformer-Encoder-based approach leverages multi-head attention with its advanced network architecture and additional parameters to better encode the interactions between the entries of the activation maps. Interestingly, the attention maps computed by our scheme, shown in Section 4.5, show that the Transformer-Encoder assigns high attention scores to image regions containing rotation-informative image cues, emphasizing vertical and horizontal lines. We also observe that the proposed approach can predict the rotation of non-overlapping image pairs with state-of-the-art (SOTA) accuracy. Our framework is end-to-end trainable and optimizes a regression loss. It is evaluated on three data-set benchmarks: StreetLearn [35], SUN360 [52] and InteriorNet [28], with different classes of overlap in indoor and outdoor locations and under varying illumination. The experimental results in Section 4 show our model to provide state-of-the-art (SOTA) accuracy.
In summary, our contributions are as follows:
* We address the problem of estimating rotations in extreme scenarios, even with little or no overlap.
* We propose a cross-attention approach based on a Transformer-Encoder to encode the pairwise attentions between the two latent spaces of the image pairs.
* We found that the relative rotation can be estimated based on the cues from a single image.
* Our approach has been experimentally shown to compare favorably with state-of-the-art rotation estimation schemes when applied to indoor and outdoor datasets.
## 2 Related Work
Our rotation estimation approach is a particular instance of relative pose estimation in general, and relative pose regression (RPR) in particular. The common relative rotation estimation approach is to detect and match 2D feature points [33, 4] in both images. In pose localization tasks [48], PnP schemes are used to estimate the relative 3D rotation, and the camera pose of the query image is estimated given the 3D coordinates and pose of the anchor image. Recent approaches apply an end-to-end trainable deep network to both images [3, 14]. Graph neural networks (GNNs) were applied to multi-image RPR by aggregating the localization cues from multiple video frames [53, 50]. Recently, neural radiance fields (NeRFs) were also applied to encode scenes for RPR, instead of storing images or feature points. Some schemes only estimate relative 3D rotations using rotation-specific parametrizations such as quaternions and Euler angles [57], by formulating the problem as an \(L_{2}\) regression using neural networks [57, 26]. Quaternions are used to resolve the inherent discontinuities in the rotation representation due to their double-cover property. Levinson et al. [26]
studied the use of SVD orthogonalization to estimate 3D rotations using neural networks. The SVD is applied to project the estimated rotation matrices onto the rotation group. Their analysis demonstrates that simply replacing traditional representations with the SVD orthogonalization method leads to state-of-the-art results in various deep-learning applications. Probability distributions over the set of 3D rotations were estimated by Mohlin et al. [36], who proposed to use a neural network to estimate the parameters of the matrix Fisher distribution. The optimization of the negative log-likelihood loss for this distribution leads to improved results, outperforming the previous state-of-the-art on several practical datasets. Applying a negative log-likelihood loss for this distribution allows for improved rotation estimation and was successfully used for head pose estimation by Liu et al. [32]. Rockwell et al. [42] present a baseline to directly estimate the relative pose between two images by training a Vision Transformer (ViT) to bring its computations close to the eight-point algorithm. They achieve competitive results in multiple settings.
The methods discussed above require a significant overlap in the pair of input images, where a large rotation (and consequently a small overlap) may lead to inaccurate estimates. In particular, non-overlapping images cannot be aligned using such schemes. Caspi and Irani [9] showed that two image sequences with non-overlapping fields of view can be aligned both in time and in space, when the two cameras are closely attached, based on common temporal changes within the two sequences. Similarly, Shakil [45] showed that two (or more) non-overlapping videos recorded by uncalibrated video cameras can be aligned using the temporal changes and frame-to-frame motion within the sequences. The registration of non-overlapping RGB-D scans [46, 21] is a common extension of the image-only problem, where a global representation of a scene is often estimated.
Our work follows the setup studied by Cai et al. [7] where the extreme relative 3D rotation is estimated given only a pair of input images. Cai et al. propose to apply a CNN to embed the input images and a 4D correlation volume of the image embeddings is computed. The 3D rotation is estimated by applying an MLP to the correlation volume and optimizing a cross-entropy loss. They achieved SOTA accuracy among non-overlapping images. 4D correlation volumes (4DCV) are 4D representations of Bilinear Pooling [30, 24] that encode _all_ of the pairwise correlation between all entries of the 2D embedding maps of a pair of images. Therefore, 4DCVs have been applied in tasks that require long-range (spatial) correspondences, such as the RAFT SOTA optical flow [49], as well as other optical flow schemes [22, 16, 54]. Similarly, 3D correlation volumes were applied to deep stereo matching [38, 29, 19, 27], where each pixel in one of the images is matched to limited spatial support in the other image. In contrast, we propose to compute the equivalent of the 4DCV by computing the cross-attention between the activation maps of the pair of input images using a stacked multi-head Transformer-Encoder and a corresponding activation mask. In particular, the Transformer-Encoder computes the equivalent of multiple stacked correlation volumes [51] using MHA.
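For contrast with the cross-attention formulation introduced in the next section, the following is a minimal sketch (illustrative, not the authors' code) of how such a 4D correlation volume is typically formed from a pair of CNN activation maps by taking all pairwise inner products:

```python
# Minimal sketch (illustrative, not the authors' code) of a 4D correlation volume as used
# in [49, 7]: all pairwise inner products between the entries of two activation maps.
import torch

def correlation_volume_4d(f1: torch.Tensor, f2: torch.Tensor) -> torch.Tensor:
    # f1, f2: (B, C, H, W) feature maps of the two images
    corr = torch.einsum('bchw,bcij->bhwij', f1, f2)   # (B, H, W, H, W)
    return corr / f1.shape[1] ** 0.5                  # scale by sqrt(C), as in dot-product attention

f1, f2 = torch.randn(2, 64, 8, 8), torch.randn(2, 64, 8, 8)
print(correlation_volume_4d(f1, f2).shape)            # torch.Size([2, 8, 8, 8, 8])
```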
## 3 Rotation Estimation Using Cross-Attention
The proposed scheme estimates the relative 3D rotation \(\mathbf{R}\in\mathbb{R}^{3\times 3}\) between a pair of input images \(\{I_{1},I_{2}\}\), and is outlined in Fig. 2. Each of the input images is encoded into the activation maps \(\{\widehat{I}_{1},\widehat{I}_{2}\}\)\(\in\mathbb{R}^{c\times K_{1}\times K_{2}}\) using the weight-sharing Siamese Residual-Unet (ResUNet) [55] network, where \(c\) is the number of channels of the activation maps, while \(K_{1}\) and \(K_{2}\) are their spatial dimensions. We compute an improved equivalent of the 4D correlation volume used in previous works [49, 7] between the input images \(I_{1}\) and \(I_{2}\), by computing the cross-attention using a Transformer-Encoder and a corresponding attention mask \(\mathbf{M}\), as detailed in Section 3.1. For that, we first row-vectorize \(\widehat{I}_{1}\) and \(\widehat{I}_{2}\) into two sequences \(\in\mathbb{R}^{c\times K_{1}K_{2}}\) by flattening the leading spatial dimension of each activation map. The sequences are concatenated to a single tensor \(T\)\(\in\mathbb{R}^{c\times 2K_{1}K_{2}}\) that is input to the Transformer-Encoder. The output sequence \(\widehat{\mathbf{T}}\) of the Transformer-Encoder is further analyzed using an MLP, and the 3D rotation is regressed using its quaternion representation \(\mathbf{q}\), as discussed in Section 3.2.
### Cross-Attention computation using a Transformer-Encoder
The cross-attention between the pair of input images \(I_{1}\) and \(I_{2}\) is computed using a Transformer-Encoder with \(l=2\) layers and \(h=4\) attention heads for each layer. An ablation study of this configuration is given in Section 4.7. The cross-attention maps computed by the Transformer-Encoder are an improved equivalent of the 4D correlation volumes [49, 7], encoding the interactions (inner-products) between _all_ the image cues in the activation maps. By default, a Transformer-Encoder computes the self-attention maps of the input sequence. Hence, the cross-attention of the vectorized and concatenated activation maps \(T\) is computed by applying the attention mask \(\mathbf{M}\) given in Eq. 1. The mask \(\mathbf{M}\) nullifies the self-attention terms in the attention maps computed throughout the Transformer-Encoder, while retaining the cross-attention terms,
\[\mathbf{M}=\begin{bmatrix}-\infty&-\infty&\cdots&0&0\\ -\infty&-\infty&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&-\infty&-\infty\\ 0&0&\cdots&-\infty&-\infty\end{bmatrix} \tag{1}\]
The use of the mask \(\mathbf{M}\) and the corresponding structure of the attention maps is shown in Fig. 3.
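The following is a minimal PyTorch sketch of this masked cross-attention computation (illustrative only; the feature dimension, spatial size and tensor names are assumptions, while the two layers and four heads follow the text above):

```python
# Minimal PyTorch sketch (illustrative, not the authors' implementation) of the masked
# cross-attention of Eq. (1) and Fig. 3: the two vectorized activation maps are concatenated
# and passed through a Transformer-Encoder whose attention mask blocks within-image
# (self) attention, so only cross-image attention terms survive.
import torch
import torch.nn as nn

def cross_attention_mask(n: int) -> torch.Tensor:
    # n = K1*K2 tokens per image; -inf suppresses within-image attention, 0 keeps cross terms
    m = torch.zeros(2 * n, 2 * n)
    m[:n, :n] = float('-inf')
    m[n:, n:] = float('-inf')
    return m

c, K1, K2 = 256, 8, 8                      # assumed channel and spatial sizes of the activation maps
n = K1 * K2
layer = nn.TransformerEncoderLayer(d_model=c, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)    # l = 2 layers, h = 4 heads (see above)

I1_hat = torch.randn(1, n, c)              # vectorized activation map of image 1
I2_hat = torch.randn(1, n, c)              # vectorized activation map of image 2
T = torch.cat([I1_hat, I2_hat], dim=1)     # concatenated sequence of 2*K1*K2 tokens
T_hat = encoder(T, mask=cross_attention_mask(n))
print(T_hat.shape)                         # torch.Size([1, 128, 256])
```

Since every row of the mask leaves the other image's tokens unmasked, the attention softmax remains well defined for all tokens.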
Any pair of image patches could hold valuable information about the overall geometric relationships in an image, and the Transformer-Encoder is able to uncover these hints implicitly. The regions in the input images that contain rotation-related cues, explicitly or implicitly, receive higher attention scores, as seen in Section 4.5 and Fig. 5. This leads to a more meaningful and concise input for the subsequent MLP layer, ultimately resulting in improved estimation accuracy. As in [7], even when the image pairs are non-overlapping, the Transformer-Encoder formulation can predict the rotation using straight lines present in only a single image, much like human perception. For example, in the extreme scenario of non-overlapping image pairs the roll angle can be estimated
Figure 2: An overview of our method. The of input images \((I_{1},I_{2})\in\mathbb{R}^{H\times W}\) are encoded by a weight-sharing Siamese CNN to extract the feature maps \((\hat{I}_{1},\hat{I}_{2})\). These feature maps are concatenated as \(T\), and fed into a Transformer-Encoder alongside the mask \(M\) to compute their cross-attention \(\hat{T}\). The first half of \(\hat{T}\) is reshaped and analyzed by the two residual blocks in Table 1. Finally, a fully connected layer predicts the relative rotation encoded by the Quaternion \(q\).
from a single image, by implicitly assuming that buildings and their edges are perpendicular to ground level. Similarly, the relative elevation angle can be estimated by assuming that the streets and pavements are parallel to the ground plane or computing the corresponding vanishing points. Most training and test datasets in this domain depict urban scenes, adhering to these assumptions.
### Relative Rotation Regression
The refined representations \(\hat{T}\) are used as input to the MLP regressor outlined in Table 1 to determine the output rotation as the quaternion \(\mathbf{q}=[q_{w},q_{x},q_{y},q_{z}]\). The training loss is calculated as follows:
\[\mathbf{L}=||q_{0}-\frac{q}{||q||}||_{2}. \tag{2}\]
Here \(q_{0}\) and \(q\) are the groundtruth and predicted quaternions, respectively. The predicted quaternion is normalized to ensure that it is a valid (unit-norm) representation of a 3D rotation.
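A minimal PyTorch sketch of the loss in Eq. (2), assuming batched quaternions (illustrative only):

```python
# Minimal PyTorch sketch (illustrative) of the regression loss in Eq. (2) for a batch of
# predicted and ground-truth quaternions of shape (B, 4).
import torch

def rotation_loss(q_pred: torch.Tensor, q_gt: torch.Tensor) -> torch.Tensor:
    q_pred = q_pred / q_pred.norm(dim=-1, keepdim=True)   # normalize the prediction, q / ||q||
    return (q_gt - q_pred).norm(dim=-1).mean()            # L2 distance of Eq. (2), averaged over the batch
```

The sketch follows Eq. (2) literally; it does not attempt to account for the quaternion double cover \(q\equiv-q\).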
## 4 Experimental Results
The proposed scheme was experimentally verified by applying it to contemporary benchmark datasets with overlapping and non-overlapping image pairs. We followed the comprehensive experimental setup proposed by Cai et al. [7], using the same datasets and image overlap classes as their study. In particular, we used their source code3 to create perspective views from panoramic images, ensuring that the input images were the same in both studies, allowing fair comparisons with previous SOTA results and other contemporary schemes. For that, we also used the same Residual-Unet backbone network [56] as in [7]. Section 4.1 details the image datasets we used and their processing, in accordance with Cai et al. [7], to derive the training and test datasets. Training details are given in Section 4.2. We compare with the recent SOTA schemes listed in Section 4.3 using the geodesic error measure used in previous works [7]
Footnote 3: [https://github.com/RuojinCai/ExtremeRotation_code](https://github.com/RuojinCai/ExtremeRotation_code)
\[\mathbf{E}=\arccos\left(\frac{tr(\mathbf{R}^{T}\mathbf{R}^{*})-1}{2}\right), \tag{3}\]
where \(\mathbf{R}\) is the predicted rotation matrix and \(\mathbf{R}^{*}\) is the groundtruth relative rotation matrix for each image pair. The experimental comparisons are reported in Section 4.4 and the attention maps are visualized in Section 4.5 to provide an intuitive interpretation of the cross-attention scores computed by the Transformer-Encoder. We studied the cross-dataset generalization properties of the proposed scheme in Section 4.6, while ablation studies of the different parameters, design choices, and parameters are reported in Section 4.7.
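For reference, a small NumPy sketch (illustrative only) of the geodesic error of Eq. (3), reported in degrees:

```python
# Small NumPy sketch (illustrative) of the geodesic error of Eq. (3), in degrees.
import numpy as np

def geodesic_error_deg(R: np.ndarray, R_star: np.ndarray) -> float:
    cos_angle = (np.trace(R.T @ R_star) - 1.0) / 2.0
    cos_angle = np.clip(cos_angle, -1.0, 1.0)     # guard against round-off outside [-1, 1]
    return float(np.degrees(np.arccos(cos_angle)))
```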
\begin{table}
\begin{tabular}{l c c} \hline \hline Layers & Residual1 & Residual2 \\ & \((k,s,p,N)\) & \((k,s,p,N)\) \\ \hline Batch norm & & \\ Conv & \((1,1,0,128)\) & \((1,1,0,64)\) \\ Batch norm & & \\ Conv & \((3,2,1,128)\) & \((3,2,1,64)\) \\ Batch norm & & \\ Conv & \((1,1,0,512)\) & \((1,1,0,256)\) \\ Average pooling & \((2,2)\) & \((2,2)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The architecture of the residual blocks in Fig. 2. For the convolution layers \((k,s,p,N)\) relate to the filter supports \((k,k)\), strides \((s,s)\), padding \((p,p)\) and the number of filters \(N\). For the average pooling layer, we report the pooling support \((k,k)\).
Figure 3: Computing the cross-attention using a Transformer-Encoder and the input mask, \(\mathbf{M}\). The mask \(\mathbf{M}\) zeros the self-attention terms, retaining only the cross-attention terms.
### Image Datasets and their Processing
We used the following datasets and train/test splits used in previous works:
**InteriorNet**[28] is a synthetic dataset for understanding and mapping interior scenes. A subset of 10,050 panoramas from 112 different houses was used, with 82 images used for training and 32 for testing.
**StreetLearn**[35] is an outdoor dataset consisting of close to 140,000 panoramic views of the cities of Pittsburgh and Manhattan. The dataset was divided into 50K panoramic views for training and 3K panoramic views for testing.
**SUN360**[52] is an indoor collection of high-resolution panoramas that cover a full view of \(360^{\circ}\times 180^{\circ}\) for a variety of environmental scenes downloaded from the Internet. It also provides location category labels. We used 7K and 2K panoramas for training and testing, respectively.
As these datasets contain panoramic images, we generated 200 perspective \(128\times 128\) images by randomly cropping 200 different locations in each panoramic image. Using such sampling allows us to generate uniformly distributed ground-truth image pairs within a pitch range of \([0^{\circ},90^{\circ}]\) and a yaw range of \([-180^{\circ},180^{\circ}]\). We estimate only the yaw and the pitch angles, assuming zero roll between the image pairs. We avoided generating textureless image pairs, that is, images that mainly contain ceilings or floors in a house or skies in outdoor scenes, by limiting the pitch range to \([-45^{\circ},45^{\circ}]\) for the outdoor dataset and \([-30^{\circ},30^{\circ}]\) for the indoor datasets. There is no overlap between the train and test datasets. To assess our outcomes in relation to prior research and to analyze the influence of camera translation on our rotation estimation approach, we partitioned the InteriorNet and StreetLearn datasets into two groups: images with and without camera translations. The non-translated images were acquired by randomly selecting pairs of cropped images from a single panorama. In contrast, the datasets that include translations (known as StreetLearn-T and InteriorNet-T) were generated by randomly selecting pairs of cropped images from different panoramas, where translations are less than \(3m\). However, our method did not estimate these translations. We evaluated our performance on overlapping and non-overlapping pairs, using the setup of Cai et al. [7] to divide the datasets into three overlap classes:
**Large**: pairs with high overlap, with relative rotations of up to \(45^{\circ}\).
**Small**: pairs that partially overlap, with relative rotation angles in \([45^{\circ},90^{\circ}]\).
**None**: pairs without overlap, with relative rotations \(>90^{\circ}\); a minimal sketch of this classification rule is given below.
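The rule above can be applied directly to the ground-truth relative rotation angle of a pair; the thresholds follow the class definitions and the function name is illustrative.

```python
def overlap_class(relative_rotation_deg):
    """Assign an image pair to an overlap class from its relative rotation angle (degrees)."""
    if relative_rotation_deg <= 45.0:
        return "Large"
    elif relative_rotation_deg <= 90.0:
        return "Small"
    return "None"
```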
### Training Details
We use a pre-trained Residual-Unet [56] (same as in Cai et al. [7]) as a backbone to compute the feature maps of the two input images \((\hat{I}_{1},\hat{I}_{2})\in\mathbb{R}^{c\times k_{1}\times k_{2}}=\mathbb{R}^{128\times 32\times 32}\). In line with Fig. 2, the embeddings were reshaped and concatenated along the axis of the samples to form the tensor \(\mathbf{T}\in\mathbb{R}^{(2\cdot 32\cdot 32)\times 128}=\mathbb{R}^{2048\times 128}\). \(\mathbf{T}\) was the input to the Transformer-Encoder, consisting of \(l=2\) layers with ReLU non-linearity and a dropout of \(p=0.1\). Each encoder layer uses \(h=4\) MHA heads and a hidden dimension of \(C_{h}=768\). An ablation study of the Transformer-Encoder parameters is given in Section 4.7. The Transformer-Encoder's output \(\hat{T}\) is used by the MLP regressor detailed in Table 1, which computes the quaternion representation for the regression loss in Eq. 2. Throughout all experiments, the model is optimized using an Adam optimizer with an initial learning rate of \(\lambda=5\times 10^{-4}\), with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-10}\), and a batch size of \(20\). Our model is implemented in PyTorch, it is end-to-end trainable, and all experiments were performed on an 8GB NVIDIA GeForce GTX 2080 GPU.
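For illustration, a minimal PyTorch sketch of the masked Transformer-Encoder of Figs. 2-3 under the shapes and hyperparameters stated above is given below. It is not the authors' implementation: interpreting the hidden dimension \(C_h=768\) as the feed-forward width, the small batch size, and all module and variable names are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

B, C, K = 4, 128, 32                       # small batch for illustration; channels and spatial size per feature map
n_tok = K * K                              # 1024 tokens per image, 2048 per pair

def to_tokens(f1, f2):
    """Concatenate two CNN feature maps (B, C, K, K) into a token sequence T of shape (B, 2048, C)."""
    t1 = f1.flatten(2).transpose(1, 2)     # (B, 1024, C)
    t2 = f2.flatten(2).transpose(1, 2)
    return torch.cat([t1, t2], dim=1)      # (B, 2048, C)

# Mask M: True entries are blocked. Blocking the same-image (diagonal) blocks zeros the
# self-attention terms and retains only cross-attention between the two images.
same_image = torch.zeros(2 * n_tok, 2 * n_tok, dtype=torch.bool)
same_image[:n_tok, :n_tok] = True
same_image[n_tok:, n_tok:] = True

layer = nn.TransformerEncoderLayer(d_model=C, nhead=4, dim_feedforward=768,
                                   dropout=0.1, activation="relu", batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

f1, f2 = torch.randn(B, C, K, K), torch.randn(B, C, K, K)           # stand-ins for the CNN feature maps
T_hat = encoder(to_tokens(f1, f2), mask=same_image)                  # (B, 2048, C)
first_half = T_hat[:, :n_tok].transpose(1, 2).reshape(B, C, K, K)    # reshaped input to the residual blocks of Table 1
```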
### Comparative baselines
In line with Cai et al. [7], we compare our method with contemporary schemes using the datasets in Section 4.1:
**A SIFT-based approach [6]**. A method for matching SIFT features [33] using RANSAC [18] in image pairs of the same panorama, and estimating the relative rotation matrix using Homography equations or the Essential matrix.
**CNN-based methods [13, 17]**. Deep learning schemes that detect and encode local image features using SuperPointNet [13] and D2-Net [17].
**Continuous rotation representations [57].** A scheme by Zhou et al. [57] (Reg6D) that applies a CNN to approximate the mappings between various rotation representations and fits continuous 5D and 6D rotation representations, instead of the commonly used Euler and quaternion representations.
**Extreme rotation estimation [7]**. A deep learning technique to estimate the relative 3D rotation of image pairs in an extreme setting [7] where the images have little or no overlap. They proposed a network that automatically learns implicit visual cues by computing a 4D correlation volume.
**Attention-based methods.** We also compare with a recent work by Rockwell et al. [42] (8PointViT) that uses a Vision Transformer (ViT) to estimate the relative pose. Although Rockwell et al. achieve competitive results in multiple settings, their approach is less suited for extreme view changes.
### Experimental comparisons
The comparison results of the proposed scheme with baselines and SOTA schemes are reported in Table 2. We report the mean and median of the geodesic error given in Eq. 3 and the percentage of image pairs whose estimated relative rotation error was under \(10^{\circ}\). We compared the accuracy of our proposed model to the schemes detailed in Section 4.3. The proposed approach is shown to be accurate for both indoor and outdoor scenes and significantly outperforms the baseline schemes in all overlap categories. For non-overlapping pairs, correspondence-based methods such as SIFT [6], SuperPointNet [13], Reg6D [57] and 8PointViT [42] failed to provide any estimates, as they require feature correspondences. The DenseCorrVol approach [7] provides accurate results in extreme cases, but our approach outperforms it.
The qualitative results of the rotation estimation are shown in Fig. 4 for the StreetLearn and SUN360 datasets, for the large, small, and non-overlapping cases. We show the full panoramas, the footprints of the cropped images that were used as inputs for the proposed scheme, and the footprint of the estimated image crop based on the estimated rotation. In all cases, we achieve high estimation accuracy.
### Attention Maps Visualization
As described in Section 1, our scheme, together with that of Cai et al. [7], takes advantage of rotation-informative image signals by assigning them high attention scores. These cues can be both implicit (e.g. shadows, straight lines, analytic shapes, lighting angles) and explicit (e.g.
\begin{table}
\begin{tabular}{l l c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Overlap Method} & \multicolumn{3}{c}{InteriorNet} & \multicolumn{3}{c}{InteriorNet-T} & \multicolumn{3}{c}{SUN360} & \multicolumn{3}{c}{StreetLearn} & \multicolumn{3}{c}{StreetLearn-T} \\ \cline{3-14} & \multicolumn{3}{c}{Avg(\({}^{\perp}\)) Med(\({}^{\circ}\)) \(\downarrow\)) \({}^{\text{To}}(\%\))} & Avg(\({}^{\perp}\)) Med(\({}^{\circ}\)) \(\downarrow\) \({}^{\text{To}}(\%\)) & Avg(\({}^{\perp}\)) Med(\({}^{\circ}\)) \(\downarrow\) \({}^{\text{To}}(\%\)) & Avg(\({}^{\perp}\)) Med(\({}^{\circ}\)) \(\downarrow\) \({}^{\text{To}}(\%\)) & Avg(\({}^{\perp}\)) Med(\({}^{\circ}\)) \(\downarrow\) \({}^{\text{To}}(\%\)) \\ \hline \multirow{7}{*}{\begin{tabular}{l} Large \\ \end{tabular} } & SIFT* [33] & 6.09 & 4.00 & 84.86 & 7.78 & 2.95 & 55.52 & 5.46 & 3.88 & 93.10 & 5.84 & 3.16 & 91.18 & 18.86 & 3.13 & 22.37 \\ & SuperPoint* [13] & 5.40 & 3.53 & 87.10 & 5.46 & 2.79 & 65.97 & 4.69 & 3.18 & 92.12 & 6.23 & 3.61 & 91.18 & 6.38 & 1.79 & 16.45 \\ & Reg6D [57] & 9.05 & 5.90 & 68.49 & 17.00 & 11.95 & 41.79 & 16.51 & 12.43 & 40.39 & 11.70 & 8.87 & 58.24 & 36.71 & 24.79 & 23.03 \\ & DenseCorrVo [7] & 1.53 & 1.10 & 99.26 & 2.89 & 1.10 & 97.61 & 1.00 & 0.94 & **100.00** & 1.19 & 1.02 & 99.41 & 9.12 & 2.91 & 87.50 \\ & 8PointViT [42] & 0.48 & **0.40** & **1000.00** & 2.90 & 1.83 & 97.91 & - & - & - & 0.62 & **0.52** & **1000.00** & **4.08** & 2.43 & **90.13** \\ & Ours & **0.45** & **0.49** & 99.55 & **1.88** & **0.63** & **97.95** & **0.57** & **0.48** & **99.95** & **0.60** & **0.65** & 99.3 & 5.45 & **1.76** & **86.50** \\ \hline \multirow{7}{*}{\begin{tabular}{l} Small \\ \end{tabular} } & SIFT* [33] & 24.18 & 8.57 & 39.73 & 18.16 & 10.01 & 18.52 & 13.71 & 6.33 & 56.77 & 16.22 & 7.35 & 55.81 & 38.78 & 13.81 & 5.68 \\ & SuperPoint* [13] & 16.72 & 8.43 & 21.58 & 11.61 & 5.82 & 11.73 & 17.63 & 7.70 & 26.69 & 19.29 & 7.60 & 24.58 & 6.80 & 6.85 & 0.95 \\ & Reg6D [57] & 25.71 & 15.56 & 33.56 & 42.93 & 28.92 & 23.15 & 42.55 & 32.11 & 9.40 & 24.77 & 15.11 & 30.56 & **46.61** & **34.33** & **13.88** \\ & DenseCorrVol [7] & 6.45 & 1.61 & 95.89 & 10.24 & 1.38 & 89.81 & 3.09 & 1.41 & 98.50 & 2.32 & 1.41 & 98.67 & 13.04 & 3.49 & 84.23 \\ & 8PointViT [42] & **1.84** & **0.94** & **99.32** & **4.48** & **2.38** & 96.30 & - & - & - & 1.46 & **1.09** & **1000.00** & 9.19 & 3.25 & **87.7** \\ & Ours & 3.27 & **0.881** & 96.11 & 6.45 & **0.81** & **97.72** & **2.11** & **0.852** & **98.83** & **1.26** & **0.723** & 99.12 & **7.51** & **1.88** & **88.66** \\ \hline \multirow{7}{*}{
\begin{tabular}{l} None \\ \end{tabular} } & SIFT* [33] & 109.30 & 92.86 & 0.00 & 93.79 & 113.86 & 0.00 & 127.61 & 129.07 & 0.00 & 83.49 & 90.00 & 0.38 & 85.90 & 106.84 & 0.38 \\ & SuperPoint* [13] & 120.28 & 120.28 & 0.00 & – & – & 0.00 & 149.80 & 165.24 & 0.00 & – & – & 0.00 & – & – & 0.00 \\ \cline{1-1} & Reg6D [57] & 48.36 & 32.93 & 10.82 & 60.91 & 51.26 & 11.14 & 64.74 & **56.55** & 3.77 & 28.48 & 18.86 & 24.39 & 49.23 & 35.66 & 11.86 \\ \cline{1-1} & SpatioViT [42] & - & - & - & - & - & - & - & - & - & - & - & - & - & - \\ \cline{1-1} & DenseCorrVol [7] & 37.69 & 3.15 & 61.97 & 49.44 & 4.17 & **58.36** & 34.92 & 4.43 & **61.39** & 5.77 & 1.53 & **96.41** & 30.98 & 3.50 & **72.69** \\ \cline{1-1} & Ours & **37.42** & **30.4** & **63.90** & **48.35** & **4.12** & 57.24 & **34.28** & **4.33** & 60.14 & **5.55** & **1.35** & 95.18 & **30.12** & **3.44** & 72.01 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Relative rotation estimation results. We utilized the InteriorNet, SUN360, and StreetLearn datasets and show the average and median of geodesic errors. We also present the percentage of image pairs with relative rotation error below \(10^{\circ}\), for the overlap categories in Section 4.1. The gray numbers indicate errors exceeding \(50\%\). The asterisk \(*\) marks methods that failed to produce pose estimates for some image pairs; their mean and median errors are computed only over the successful pairs.
corners used for feature correspondences). The visualization of these attention scores is shown in Fig. 5, where we have overlaid the attention scores on the input images following the approach of Kolesnikov et al. [23]. We applied our model to each pair of images, obtained the activation maps, and superimposed them onto the input images. The attention maps emphasize vertical and horizontal lines for both the overlapping and non-overlapping image pairs. Our experimental datasets (Section 4.1) consist of interior and urban outdoor scenes showing horizontal and vertical lines. This relates to the seminal single-image camera calibration approaches of Coughlan and Yuille [10] and Criminisi et al. [11] and their extensions, which detect parallel lines in an image to estimate vanishing points [20] and relative rotations [10]. Therefore, we postulate that our approach implicitly solves the rotation estimation in the same way as [10] and [11], by learning both the low-level informative task-specific image cues and the higher-level algorithmic computational flow.
### Cross-Dataset Generalization
The cross-dataset generalization properties of our approach were evaluated using the Holicity dataset [58]. The Manhattan dataset was used to train the models, while the London dataset was used for testing. The test images were divided into three overlap classes according to Section 4.1. Additionally, we compared the generalization of our approach with Cai et al.'s [7]. The generalization results, summarized in Table 3, indicate that our approach outperformed Cai et al.'s in all overlap classes.
### Ablation Study
We performed ablation studies to assess the effect of different hyperparameters and design choices, evaluated on the StreetLearn dataset.
Figure 4: Rotation estimation results. The panoramic images and the cropped ground-truth images are marked in green and yellow-dot lines. The predicted footprint of one of the cropped images is marked by the red-dot line. The first row shows the matching results of images with large-overlaps. The second and last rows show the matching of small-overlap and non-overlapping images.
\begin{table}
\begin{tabular}{l c c} \hline \hline Overlap & Ours [\({}^{\circ}\)] & Cai et al. [7] [\({}^{\circ}\)] \\ \hline Large & **10.62** & 11.23 \\ Small & **19.43** & 20.87 \\ None & **38.65** & 40.82 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluating cross-dataset generalization. We trained the models on the Manhattan dataset and tested them on the London dataset. The average geodesic error, as defined in Eq. 3, was reported.
**Transformer-Encoder parameters.** Table 4 summarizes multiple Transformer-Encoder configurations for each overlap category. The expressive power of the Transformer-Encoder depends on the number of heads and layers: the more are used, the greater the expressive power. However, using an excessive number might lead to overfitting, and the optimal configuration, in terms of the accuracy in Table 4, is given by \(h=4\), \(l=2\). In particular, this configuration is a sweet spot such that increasing the number of heads or layers results in reduced accuracy.
**3D rotation encoding and training losses.** The ablations of different rotation encodings and their corresponding training losses were evaluated and are presented in Table 6. The evaluation was performed by applying the Residual-Unet backbone [56] and the proposed Transformer-Encoder-based cross-attention method to image pairs from the StreetLearn dataset with large overlaps. For the discrete formulation, in line with Cai et al. [7], the pitch and yaw angles were discretized into 360 bins \(\in[-180^{\circ},180^{\circ}]\), and a cross-entropy loss was used to train the network. These results were compared to those obtained using the \(L_{2}\) regression loss, as described in Eq. 2 in our scheme. The results in Table 6 demonstrate that our \(L_{2}\) regression approach outperforms the discrete Euler angle approach proposed by Cai et al. [7].
## 5 Conclusion
We presented a novel formulation for estimating the relative rotation between a pair of images. In particular, we study the estimation of rotations between images with small and no overlap. We propose an attention-based approach using a Transformer-Encoder to calculate the cross-attention between image pair embedding maps, which outperforms the previous use of 4D correlation volumes [49, 7]. Our framework is end-to-end trainable and optimizes a regression loss. It has been experimentally shown to outperform previous SOTA schemes [7] on multiple datasets used in contemporary works, in particular for the challenging small and non-overlapping cases.
Figure 5: Visualizations of the attention maps. The attention maps generated by our model are overlaid on the image pairs, for the overlapping (upper) and non-overlapping (bottom) pairs. The input crops are represented by solid lines, and the crops are aligned with the estimated rotation as indicated by the dotted lines. |
2301.03372 | Machine-Learning Prediction of the Computed Band Gaps of Double
Perovskite Materials | Prediction of the electronic structure of functional materials is essential
for the engineering of new devices. Conventional electronic structure
prediction methods based on density functional theory (DFT) suffer from not
only high computational cost, but also limited accuracy arising from the
approximations of the exchange-correlation functional. Surrogate methods based
on machine learning have garnered much attention as a viable alternative to
bypass these limitations, especially in the prediction of solid-state band
gaps, which motivated this research study. Herein, we construct a random forest
regression model for band gaps of double perovskite materials, using a dataset
of 1306 band gaps computed with the GLLBSC (Gritsenko, van Leeuwen, van Lenthe,
and Baerends solid correlation) functional. Among the 20 physical features
employed, we find that the bulk modulus, superconductivity temperature, and
cation electronegativity exhibit the highest importance scores, consistent with
the physics of the underlying electronic structure. Using the top 10 features,
a model accuracy of 85.6% with a root mean square error of 0.64 eV is obtained,
comparable to previous studies. Our results are significant in the sense that
they attest to the potential of machine learning regressions for the rapid
screening of promising candidate functional materials. | Junfei Zhang, Yueqi Li, Xinbo Zhou | 2023-01-04T08:19:18Z | http://arxiv.org/abs/2301.03372v1 | # Machine-Learning Prediction of the Computed Band Gaps of Double Perovskite Materials
###### Abstract
Prediction of the electronic structure of functional materials is essential for the engineering of new devices. Conventional electronic structure prediction methods based on density functional theory (DFT) suffer from not only high computational cost, but also limited accuracy arising from the approximations of the exchange-correlation functional. Surrogate methods based on machine learning have garnered much attention as a viable alternative to bypass these limitations, especially in the prediction of solid-state band gaps, which motivated this research study. Herein, we construct a random forest regression model for band gaps of double perovskite materials, using a dataset of 1306 band gaps computed with the GLLBSC (Gritsenko, van Leeuwen, van Lenthe, and Baerends solid correlation) functional. Among the 20 physical features employed, we find that the bulk modulus, superconductivity temperature, and cation electronegativity exhibit the highest importance scores, consistent with the physics of the underlying electronic structure. Using the top 10 features, a model accuracy of 85.6% with a root mean square error of 0.64 eV is obtained, comparable to previous studies. Our results are significant in the sense that they attest to the potential of machine learning regressions for the rapid screening of promising candidate functional materials.
Machine Learning, Random Forest Regression, Electronic Structure, Computational Material Science
## 1 Introduction
In quantum mechanics, the energy of bound electrons becomes quantized [1], and electrons at the ground state can be excited to higher energy levels by absorbing photons with the corresponding wavelengths. In solid structures, the superposed electronic states form continuous energy bands. In insulators and semiconductors, the band gap is the energy gap across the valence and conduction band where electrons are forbidden to occupy. The magnitude of the band gap plays an important role in many functional materials, such as transistors, photovoltaics, light-emitting diodes, and sensors [2]. For instance, optoelectronic materials are generally wide-band gap semiconductors, while thermoelectric materials are narrow-band gap semiconductors [3]. Hence, accurate and efficient prediction of band gaps of solid materials is crucial for the design and engineering of new devices.
One of the most widely used electronic structure methods for evaluating band gaps is density functional theory (DFT) [4]. In the Kohn-Sham formalism [5], the multielectron wavefunction is replaced by fictitious noninteracting states that give rise to the true electron density [6], which enables the iterative solution of the single-particle Hamiltonian. However, the exchange-correlation energy, which contains all the quantum mechanical interactions of the electrons, does not have an exact expression in terms of the electron density and as such requires an approximation, such as the local density approximation (LDA) [7] or the generalized gradient approximation (GGA) [8]. Such approximations have limited accuracy, most notably the underestimation of the band gap of semiconductors and insulators [9]. Various approaches have been proposed to address this limitation, such as the on-site Hubbard \(U\) correction [10], hybrid functionals using fractional exact exchange [11], and quasiparticle methods such as the GW approximation [12]. However, these methods do not always guarantee an accurate description of the system, and they can be much more computationally expensive than conventional DFT [13].
An alternative strategy for band gap prediction is machine learning. For example, a support vector regression model was constructed for inorganic solids using experimentally measured band gaps [14], thereby bypassing the limitations of DFT. Another study trained a kernel ridge regression model [15] using band gaps computed with the GLLBSC (Gritsenko, van Leeuwen, van Lenthe, and Baerends solid correlation) functional [16], which demonstrated reasonable agreement with experimental values. These studies attest to the potential of machine learning methods, provided that robust datasets are available for training [17]. The importance of band gap prediction of functional materials and the above-mentioned limitation of DFT serve as the motivation for this research study, which attests to the potential of machine learning regression for band gap prediction.
We employ a dataset of GLLBSC-computed band gaps of 1306 double perovskites in this study. Double perovskites (\(AA^{\prime}BB^{\prime}X_{6}\)) have double the unit cell of single perovskites (\(ABX_{3}\)) with chemically distinct _A_/_A'_ and _B_/_B'_ sites [18]. A variety of physical and chemical properties can be engineered by doping the cations with species of different valence states or radii [19]. Due to their stable crystal structure, unique electromagnetic properties, and high catalytic activities, these compounds have much potential as functional materials for environmental protection [20], the chemical industry [21], photovoltaics [22], and catalysis [23]. In this regard, optimization and engineering in the above-mentioned fields require a proper description of the underlying electronic structure of double perovskites [24], which attests to the significance of choosing the band gaps of double perovskites as our dataset.
Previous studies have shown that random forest regression is well-suited to capturing nonlinearity, as seen across the band gap and the extracted physical features such as the highest occupied energy level [25]. As such, we construct a random forest regression model for predicting the band gap of double perovskite compounds, building upon a previous kernel ridge regression study [15]. We find that the bulk modulus, superconductivity temperature, and cation electronegativity exhibit the highest importance scores among the 20 physical descriptors employed, consistent with the physics of the underlying electronic structure. A model accuracy of 85.6% with a root mean square error of 0.64 eV is obtained using the top 10 features, comparable to previous studies [1].
The succeeding part of the paper is structured as follows: The literature review is given in section 2; the research methodology is presented in section 3; section 4 presents the results and discussion, including an evaluation of the performance of our model as well as our limitations; finally, section 5 gives the concluding remarks of this work.
## 2 Literature Review
This research study focuses on the prediction of the band gaps of double perovskite materials using machine learning, as a surrogate method for the conventional prediction yielded by the DFT. The limitation of the DFT, notably the lack of expression of the exchange-correlation energy, and the potential of machine learning in solving the issue have urged computer scientists to try various machine learning models for band gap prediction. This section will review recently proposed machine learning models for band gap prediction.
### Tuplewise Graph Neural Networks (TGNN)
Na, G. S. et al. [26] conducted a research study using modified TGNN (Tuplewise Graph Neural Networks) to predict the band gap of a crystalline compound. TGNN is designed to automatically generate crystal representation using crystal structures and to include the crystal-level properties as an input feature. In this study, the prediction of the band gap using TGNN is shown to have higher accuracy than the standard DFT. The results of two out of four datasets that the study employed are of interest in our research: 1345 organic-inorganic perovskite materials of which the targeting band gap is the hybrid screened exchange functional (HSE06) and 2233 materials for solar cells with the targeting band gap as GLLBSC-computed band gap. Using the proposed TGNN model, the experiment of the former dataset achieved an MAE of 0.045 eV and that of the latter dataset achieved an MAE of 0.295 eV.
### Alternating Conditional Expectations (ACE)
ACE (Alternating Conditional Expectations) is a machine learning algorithm designed to find the optimal transformation between the two sets of variables, and performs well on small data sets; its advantage is that the results are represented in graphic form. The limitation of ACE is that if the dependence of the response variable on the predictors is slightly different than the transformation that the algorithm estimated, the analytic formulas are very difficult to discover. Gladkikh, V. et al. [27] conducted a study exploring the mappings between the band gap and the properties of the constituent elements using ACE. The study employs a dataset containing a large number of single perovskite materials (\(ABX_{3}\)). The best result achieved using ACE has an RMSE of 0.836 eV and an MAE of 0.602 eV.
### Kernel Ridge Regression (KRR)
Regonia, P.R. et al. [28] trained a KRR (Kernel Ridge Regression) model for the prediction of the optical band gap of zinc oxide (\(ZnO\)). Kernel ridge regression is a variant of ridge regression that is suitable for small datasets and is usually used for the prediction of the band gap of organic crystal structures. The model is trained using two empirical features: the experimental time and temperature conditions during \(ZnO\) fabrication. Quadratic features are generated to increase the model's complexity and prevent the dataset's underfitting. The result presents an RMSE of 0.0849 eV.
## 3 Methods
### Random Forest Regression
Random forest regression is a regression method that utilizes multiple decision trees, which are constructed by a simple supervised algorithm consisting of a series of if-then-else statements. The randomness is manifested through random sampling of data subsets or random selection of
features. Multiple uncorrelated decision trees construct a random forest, where all trees are granted free growth without any pruning. The random forest algorithm can be employed for both classification and regression. For classification, the result is the outcome with the highest turnout among all trees; for regression, the forest takes the average of all trees. The steps to generate a random forest are as follows (**Fig. 1** illustrates a flow chart of the algorithm):
1. From a sample with capacity \(N\), conduct bootstrap sampling \(K\) times. The resulting \(K\) samples are used as the node samples of decision trees.
2. Choose a constant \(m\) smaller than the dataset feature number \(M\).
3. When splitting each node of a decision tree, select \(m\) features from the original \(M\) features, and choose one of them as the splitting feature of the node. The Gini index is used to calculate the information gain and determine the splitting.
4. Repeat steps 2 and 3 for each node until no further splitting can occur, i.e., when the candidate feature was already used by the parent node in the previous split. The tree is always left unpruned to ensure free growth.
5. Repeat steps 1-4 to generate a random forest.
A random forest can manage data with a high dimension of features without performing dimension reduction or feature selection. This is beneficial for the dataset of this study, which involves multiple atomic descriptors of double perovskites. The mutual effects of different features and their significance are also quantified. Although random forest regression is computationally efficient and accurate when using a large number of generated trees, the risk of overfitting still exists for data with a large noise. We perform random forest regression as implemented in _scikit-learn_, using the _double_perovskites_gap_ dataset available in the _matminer_ package [29]. Comparing previous literature, which is normally trained using 5 to 10 atomic features [30], our result is unique in the sense that we use a total of 20 atomic features to achieve a more comprehensive result, of which the dimension is then reduced to 10 features. The selected important features are also consistent with the underlying physics, making the results more credible.
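For concreteness, the overall pipeline described above can be sketched as follows with scikit-learn and matminer. The target-column name and the `build_atomic_features` helper are assumptions used only to illustrate the workflow (a featurization sketch follows the feature list in the next subsection), and the reported "accuracy" is assumed here to correspond to the coefficient of determination returned by `score()`; this is not the authors' code.

```python
from matminer.datasets import load_dataset
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

df = load_dataset("double_perovskites_gap")       # 1306 GLLBSC-computed band gaps
X = build_atomic_features(df)                     # placeholder: the 20 averaged elemental features (see "Features")
y = df["gap gllbsc"]                              # assumed column name; check df.columns

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=14)

rf = RandomForestRegressor(n_estimators=700, random_state=14)   # settings from Section "Model Selection"
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
print("accuracy (R^2):", rf.score(X_test, y_test))
print("MAE  (eV):", mean_absolute_error(y_test, pred))
print("RMSE (eV):", mean_squared_error(y_test, pred) ** 0.5)
```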
### Features
20 atomic features are obtained from the _periodic_table_ and _composition_ modules of the _Pymatgen_[31] (Python Materials Genomics) package:
Average electronegativity
Average cation electronegativity
Figure 1: Flow chart of random forest regression
Average atomic radius
Average van der Waals radius
Average Mendeleev number
Average electrical resistivity
Average molar volume
Average thermal conductivity
Average boiling point
Average melting point
Average critical temperature
Average superconduction temperature
Average bulk modulus
Average Young's modulus
Average Brinell hardness
Average rigidity modulus
Average mineral hardness
Average Vickers hardness
Average density of solid phase
Average first ionization energy
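A minimal sketch of how such composition-averaged elemental descriptors could be assembled with pymatgen is given below. Only a few of the 20 features are shown; the attribute names and the stoichiometry-weighted averaging convention are assumptions that may differ from the study's exact implementation, and missing elemental values would still need to be imputed.

```python
from pymatgen.core import Composition

# A few of the averaged elemental descriptors; attribute names are assumptions and may
# differ between pymatgen versions (inspect dir(pymatgen.core.Element("Fe")) to confirm).
PROPS = ["X", "atomic_radius", "mendeleev_no", "boiling_point", "melting_point", "bulk_modulus"]

def averaged_features(formula):
    comp = Composition(formula)
    total = comp.num_atoms
    feats = {}
    for prop in PROPS:
        weighted = []
        for el, amt in comp.items():
            v = getattr(el, prop, None)
            if v is not None:
                weighted.append(float(v) * amt)   # weight by stoichiometric amount
        feats["avg_" + prop] = sum(weighted) / total if weighted else None  # impute None later
    return feats

print(averaged_features("La2NiMnO6"))   # hypothetical double-perovskite formula for illustration
```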
## 4 Results and Discussion
### Model Selection
Random forest regression has two parameters to be optimized: the number of estimators (_n_estimator_) referring to the number of trees to be built before taking the maximum voting or averages of predictions; and the random seed (_random_state_) for the random generator. Both the accuracy and the computational cost of the model increase with the number of estimators [32]. The cost scales as \(O\big{(}n_{\text{tree}}*m_{\text{try}}*n\log(n)\big{)}\), where \(n_{\text{tree}}\) is the number of estimators, \(m_{\text{try}}\) is the number of variables to sample at each node, and \(n\) is the number of records [33]. As such, an optimal number of estimators is needed to ensure a satisfactory model performance.
As shown in **Fig. 2**, the model accuracy reaches a maximum at around 700 estimators and decreases afterward, which is attributed to overfitting. As such, the _n_estimator_ is set to 700. On the other hand, the random seed determines the random sampling for the train-test split and may subtly affect the accuracy due to the randomization of the training pipeline. An optimal _random_state_ value of 14 is selected.
The corresponding parity plot of the model prediction is shown in **Fig. 3**. Using a test/training ratio of 0.25 and all 20 physical descriptors, the model accuracy is 85.1% with a mean absolute error (MAE) of 0.47 eV and a root mean squared error (RMSE) of 0.62 eV, which is comparable to the RMSE value of 0.5 eV reported in a previous kernel ridge regression study of the same dataset.
### Feature Selection
The feature importance plot is shown in **Fig. 4**. The top three features with the highest importance scores are average bulk modulus, superconductivity temperature, and cation electronegativity:
1) Bulk modulus quantifies the elastic property of a solid or fluid under pressure, specifically its resistance to compression [34]. Microscopically, bulk modulus depends on the compressibility of atoms, which affects the extent of the overlap of valence atomic orbitals, and therefore the band gap of the material [35].
Figure 3: Parity plot of the predicted vs. GLLBSC-computed band gaps, obtained using all 20 physical descriptors and a test/training ratio of 25/75. The parity line is shown in red.
Figure 2: The accuracy of the random forest regression model as a function of (**a**) the number of estimators and (**b**) the random seed, using all 20 physical descriptors.
2) Superconductivity is the state of matter with no electrical resistance and with expulsion of magnetic fields [36]. Given that the magnitude of the band gap determines the electrical conductivity, a material with a relatively small band gap is expected to more easily achieve a superconducting state [37].
3) Electronegativity quantifies the ability of an atom to attract an electron pair in a chemical bond [38]. The cation electronegativity here refers to the electronegativity difference between the oxygen anions and the metal cations. A larger elemental electronegativity difference leads to a larger degree of electron localization around the more electronegative element, which makes it harder for electrons to be excited to the conduction band [39].
The low importance scores of some features, such as average electrical resistivity and molar volume, indicate that the dataset contains a large amount of noise, which necessitates feature selection. **Table 1** summarizes the model performance using different numbers of top features. The performance remains optimal up to the top 10 features, which yields an accuracy of 85.6% with an RMSE of 0.64 eV. Given the marginal difference in accuracy using 20, 15, and 10 top features, the remainder of the study employs the top 10 features only.
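The feature-selection step itself reduces to ranking the columns by the forest's impurity-based importances and retraining on the top \(k\); a minimal sketch, assuming a fitted scikit-learn forest `rf` and a feature matrix `X` as in the earlier pipeline sketch, is:

```python
import numpy as np

def keep_top_features(rf, X, k=10):
    """Rank columns of X by the fitted forest's feature importances and keep the top k."""
    order = np.argsort(rf.feature_importances_)[::-1]          # most important first
    top_idx = order[:k]
    X_top = X.iloc[:, top_idx] if hasattr(X, "iloc") else X[:, top_idx]
    return X_top, top_idx
```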
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Number of top features** & **20** & **15** & **10** & **5** & **3** & **1** \\ \hline Accuracy (\%) & 85.1 & 85.5 & 85.6 & 82.3 & 82.4 & 65.2 \\ \hline MAE (eV) & 0.47 & 0.46 & 0.46 & 0.56 & 0.57 & 1.12 \\ \hline RMSE (eV) & 0.62 & 0.62 & 0.64 & 0.79 & 0.81 & 1.43 \\ \hline NRMSE & 0.08 & 0.07 & 0.08 & 0.10 & 0.10 & 0.17 \\ \hline \end{tabular}
\end{table}
Table 1: The model performance obtained using different numbers of features with the highest feature importance scores (MAE = mean absolute error; RMSE = root mean squared error; NRMSE = normalized RMSE).
Figure 4: Feature importance of all 20 physical descriptors, obtained from a test/training ratio of 25/75.
The corresponding importance scores and parity plots are shown in **Figs. 5** & 6, respectively. The model constructed using the top 10 features exhibits the least deviation of the data points from the parity line. Moreover, the models overall tend to show a larger underestimation for larger band gap values, which can potentially be attributed to the limited accuracy of the GLLBSC functional itself [40].
### Testing and Training Set partition
**Table 2** summarizes the model performance as a function of the different test-to-training set partitions, ranging from 10/90 to 75/25. As expected, the test set accuracy decreases as the number of training set data points decreases. The corresponding parity plots in **Fig. 7** also demonstrate a larger extent of deviation from the parity line as the proportion of the training set decreases. Based on these results, we validate that the test/training ratio of 25/75 is sufficient to provide satisfactory accuracy (85.6%) and reasonable RMSE (0.64 eV).
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Test/training set ratio** & **10/90** & **20/80** & **25/75** & **40/60** & **50/50** & **75/25** \\ \hline Number of test set data points & 131 & 262 & 327 & 523 & 653 & 980 \\ \hline Number of training set data points & 1175 & 1044 & 979 & 783 & 653 & 326 \\ \hline Test set accuracy (\%) & 87.9 & 86.8 & 85.6 & 82.6 & 82.5 & 76.2 \\ \hline MAE (eV) & 0.41 & 0.45 & 0.46 & 0.5 & 0.53 & 0.67 \\ \hline RMSE (eV) & 0.57 & 0.63 & 0.64 & 0.7 & 0.74 & 0.88 \\ \hline NRMSE & 0.07 & 0.08 & 0.08 & 0.08 & 0.09 & 0.11 \\ \hline \end{tabular}
\end{table}
Table 2: Model performance obtained with different test-to-training set partitions.
Figure 5: Feature importance scores for models constructed using a number of features with the highest importance scores.
Figure 6: Parity plots of the predicted vs. GLLBSC-computed band gaps obtained using different numbers of features with the highest importance scores. The parity line is shown in red.
Figure 7: Parity plots of the predicted vs. GLLBSC-computed band gaps obtained using different test-to-training set partitions. The parity line is shown in red.
### Model Performances
**Table 3** summarizes the results of previous studies. The best performance is achieved by the KRR model of P. R. Regonia et al. [28], with an RMSE of 0.09 eV. Our random forest regression model is comparable to the linear regression and XGBoost models of G. S. Na et al. [26] and has a lower MAE than the ACE and ET models of V. Gladkikh et al. [27].
### Limitations and Recommendations
This study is limited by the relatively small sample size. We use 1306 data points to generate all the results, which may reduce the power of the study and cause a large margin of error. Future research studies can focus on using larger datasets, which we suppose will improve the model fitting. In this study, the missing values are filled by the mean value of that feature. This preprocessing step can be taken more carefully by trying various imputation strategies for the missing values. Another limitation of the study is that we lack a more interpretable understanding of random forest regression in statistical learning theory. A single decision tree is interpretable because it follows several decision steps, whereas a forest lacks this step-by-step interpretability. Hence, using interpretability tools such as the RF Visualization Toolkits [41] to generate a "Decision Path View" may help to understand the forest. This is essential since feature importance is related to the underlying physics.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Model** & **Study** & **Material type** & **Number of materials** & **Band gap** & **Accuracy (\%)** & **MAE (eV)** & **RMSE (eV)** \\ \hline Random forest & J. Zhang et al. & Double perovskites & 1306 & GLLBSC & 85.6 & 0.46 & 0.64 \\ \hline TGNN & G. S. Na et al. [26] & Materials for solar cells & 2233 & GLLBSC & - & 0.30 & - \\ \hline Linear regression & G. S. Na et al. [26] & Materials for solar cells & 2233 & GLLBSC & & 0.44 & - \\ \hline XGBoost & G. S. Na et al. [26] & Materials for solar cells & 2233 & GLLBSC & - & 0.44 & - \\ \hline ACE & V. Gladkikh et al. [27] & Single perovskites & - & HSE & - & 0.60 & 0.84 \\ \hline ET & V. Gladkikh et al. [27] & Single perovskites & - & HSE & - & 0.54 & 0.75 \\ \hline KRR & P. Regonia et al. [28] & _ZnO_ & - & Optical band gap & 98.0 & - & 0.09 \\ \hline ANN & P. Regonia et al. [28] & _ZnO_ & - & Optical band gap & 97.8 & - & 0.09 \\ \hline GBR & M. Guo et al. [8] & Binary compounds & 4096 & DFT- calculated band gap & 81.0 & - & 0.26 \\ \hline \end{tabular}
\end{table}
Table 3: Results of the models for band gap prediction (TGNN = tuplewise graph neural networks; XGBoost = extreme gradient boosting; ACE = alternating conditional expectations; ET = extremely randomized trees; KRR = kernel ridge regression; ANN = artificial neural network; GBR = gradient boosting regression).
## 5 Conclusion
Despite the widespread use of first-principles methods based on density functional theory (DFT) in materials science, such methods remain computationally costly and limited in accuracy due to the approximation of the exchange-correlation functional. In this regard, machine learning presents a viable alternative for the rapid prediction of materials' electronic properties while retaining reasonable fidelity to DFT. This study has implemented random forest regression for the prediction of the band gap of double perovskite compounds, employing a dataset of 1306 GLLBSC-computed band gaps. Among the 20 physical descriptors, average bulk modulus, superconductivity temperature, and cation electronegativity exhibited the highest importance scores, which provide a physically interpretable description in terms of the underlying electronic structure. Optimal model performance is obtained with the top 10 features and a test/training partition of 25/75, yielding a model accuracy of 85.6% and an RMSE of 0.64 eV, comparable to previous studies. Our results highlight the potential of machine learning regression for rapid and physically interpretable prediction of the electronic properties of functional materials.
###### Acknowledgements.
This work was supported by Touch Education Technology Inc. We acknowledge scientific and editorial support from the Project Lead, J. S. Lim of Harvard University; technical support from the Project Support C. Zhang; and administrative support from C. Ding of Touch Education Technology Inc. This work was led by J.Z. with support from Y.L. and X.Z. J.Z. performed machine learning, literature review, and drafted the manuscript. Y.L. performed parameter optimization, visualization, and literature review. X.Z. assisted with literature review and writing.
|
2303.15369 | Constraint on Early Dark Energy from Isotropic Cosmic Birefringence | Polarization of the cosmic microwave background (CMB) is sensitive to new
physics violating parity symmetry, such as the presence of a pseudoscalar
"axionlike" field. Such a field may be responsible for early dark energy (EDE),
which is active prior to recombination and provides a solution to the so-called
Hubble tension. The EDE field coupled to photons in a parity-violating manner
would rotate the plane of linear polarization of the CMB and produce a
cross-correlation power spectrum of $E$- and $B$-mode polarization fields with
opposite parities. In this paper, we fit the $EB$ power spectrum predicted by
the photon-axion coupling of the EDE model with a potential $V(\phi)\propto
[1-\cos(\phi/f)]^3$ to polarization data from Planck. We find that the unique
shape of the predicted $EB$ power spectrum is not favored by the data and
obtain a first constraint on the photon-axion coupling constant, $g=(0.04\pm
0.16)M_{\text{Pl}}^{-1}$ (68% CL), for the EDE model that best fits the CMB and
galaxy clustering data. This constraint is independent of the miscalibration of
polarization angles of the instrument or the polarized Galactic foreground
emission. Our limit on $g$ may have important implications for embedding EDE in
fundamental physics, such as string theory. | Johannes R. Eskilt, Laura Herold, Eiichiro Komatsu, Kai Murai, Toshiya Namikawa, Fumihiro Naokawa | 2023-03-27T16:35:37Z | http://arxiv.org/abs/2303.15369v2 | # Constraint on Early Dark Energy from Isotropic Cosmic Birefringence
###### Abstract
Polarization of the cosmic microwave background (CMB) is sensitive to new physics violating parity symmetry, such as the presence of a pseudoscalar "axionlike" field. Such a field may be responsible for early dark energy (EDE), which is active prior to recombination and provides a solution to the so-called Hubble tension. The EDE field coupled to photons in a parity-violating manner would rotate the plane of linear polarization of the CMB and produce a cross-correlation power spectrum of \(E\)- and \(B\)-mode polarization fields with opposite parities. In this paper, we fit the \(EB\) power spectrum predicted by the photon-axion coupling of the EDE model with a potential \(V(\phi)\propto\left[1-\cos(\phi/f)\right]^{3}\) to polarization data from _Planck_. We find that the unique shape of the predicted \(EB\) power spectrum is not favored by the data and obtain a first constraint on the photon-axion coupling constant, \(g=(0.04\pm 0.16)M_{\rm Pl}^{-1}\) (68% CL), for the EDE model that best fits the CMB and galaxy clustering data. This constraint is independent of the miscalibration of polarization angles of the instrument or the polarized Galactic foreground emission. Our limit on \(g\) may have important implications for embedding EDE in fundamental physics, such as string theory.
## I Introduction
The standard cosmological model, called \(\Lambda\)CDM, includes new physics beyond the standard model of elementary particles and fields, such as dark matter and dark energy [1]. Clues to their physical nature may be found in possible deviations from the \(\Lambda\)CDM model. In recent years, a growing number of such deviations, or "tensions," have been reported [2], which may point towards new physics. In this paper, we study a fascinating connection between two hints of new physics: early dark energy (EDE) as a solution to the so-called Hubble tension (see Refs. [3; 4] for reviews) and cosmic birefringence, a rotation of the plane of linear polarization of photons (see Ref. [5] for a review).
EDE, which was active prior to the epoch of recombination at a redshift of \(z\simeq 1090\)[6; 7; 8], can resolve the Hubble tension [9; 10] by modifying the value of the Hubble constant, \(H_{0}\), inferred from the cosmic microwave background (CMB) data and brings it into agreement with \(H_{0}\) inferred from the local distance ladder [11]. But, is EDE _the_ solution to the Hubble tension? To make progress, one must look elsewhere for corroborating evidence.
In this paper, we look for a signature of EDE in polarization of the CMB. If the EDE field, \(\phi\), is a pseudoscalar "axionlike" field, it could couple to electromagnetism in a parity-violating manner in the Lagrangian density, \(\mathcal{L}\). We write [12; 13]
\[\mathcal{L}=-\frac{1}{2}(\partial\phi)^{2}-V(\phi)-\frac{1}{4}F_{\mu\nu}F^{ \mu\nu}-\frac{1}{4}g\phi F_{\mu\nu}\tilde{F}^{\mu\nu}\,, \tag{1}\]
where \(g\) is the photon-axion coupling constant, and \(F_{\mu\nu}\) and \(\tilde{F}^{\mu\nu}\) are the field strength tensor of the photon field and its dual tensor, respectively.
The last term in Eq. (1) is a Chern-Simons term, which violates parity symmetry in the presence of a spacetime-dependent condensate of \(\phi\). This term appears naturally for an axionlike field with \(g=c_{\phi\gamma}\alpha_{\rm em}/(2\pi f)\), where \(c_{\phi\gamma}\) is an anomaly coefficient, \(\alpha_{\rm em}\simeq 1/137\) the electromagnetic fine-structure constant, and \(f\) the axion decay constant (see, e.g., Eq. (24) of Ref. [14]), and has been considered for EDE models in Refs. [15; 16; 17; 18; 19].
We assume a "canonical" EDE potential, \(V(\phi)=V_{0}[1-\cos(\phi/f)]^{3}\), where \(V_{0}\) is the normalization. This model can resolve the Hubble tension [10; 11; 20; 21; 22; 23; 24; 25; 26]. See Refs. [27; 28; 29; 30; 31; 32; 33; 34] for other constraints on this model. For other EDE models that can resolve the Hubble tension, see Ref. [4] and references therein.
To probe violation of parity symmetry in the polarization pattern of the CMB, one can decompose a pixelized map of the observed Stokes parameters into eigenstates of parity called \(E\) and \(B\) modes [35; 36]:
\[Q(\hat{\mathbf{n}})\pm iU(\hat{\mathbf{n}})=-\sum_{\ell=2}^{\ell_{\rm max}}\sum_{m=-\ell}^{\ell}\left(E_{\ell m}\pm iB_{\ell m}\right){}_{\pm 2}Y_{\ell}^{m}(\hat{\mathbf{n}})\,, \tag{2}\] |
2303.14077 | Improved Adversarial Training Through Adaptive Instance-wise Loss
Smoothing | Deep neural networks can be easily fooled into making incorrect predictions
through corruption of the input by adversarial perturbations:
human-imperceptible artificial noise. So far adversarial training has been the
most successful defense against such adversarial attacks. This work focuses on
improving adversarial training to boost adversarial robustness. We first
analyze, from an instance-wise perspective, how adversarial vulnerability
evolves during adversarial training. We find that during training an overall
reduction of adversarial loss is achieved by sacrificing a considerable
proportion of training samples to be more vulnerable to adversarial attack,
which results in an uneven distribution of adversarial vulnerability among
data. Such "uneven vulnerability", is prevalent across several popular robust
training methods and, more importantly, relates to overfitting in adversarial
training. Motivated by this observation, we propose a new adversarial training
method: Instance-adaptive Smoothness Enhanced Adversarial Training (ISEAT). It
jointly smooths both input and weight loss landscapes in an adaptive,
instance-specific, way to enhance robustness more for those samples with higher
adversarial vulnerability. Extensive experiments demonstrate the superiority of
our method over existing defense methods. Noticeably, our method, when combined
with the latest data augmentation and semi-supervised learning techniques,
achieves state-of-the-art robustness against $\ell_{\infty}$-norm constrained
attacks on CIFAR10 of 59.32% for Wide ResNet34-10 without extra data, and
61.55% for Wide ResNet28-10 with extra data. Code is available at
https://github.com/TreeLLi/Instance-adaptive-Smoothness-Enhanced-AT. | Lin Li, Michael Spratling | 2023-03-24T15:41:40Z | http://arxiv.org/abs/2303.14077v2 | # Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing
###### Abstract
Deep neural networks can be easily fooled into making incorrect predictions through corruption of the input by adversarial perturbations: human-imperceptible artificial noise. So far adversarial training has been the most successful defense against such adversarial attacks. This work focuses on improving adversarial training to boost adversarial robustness. We first analyze, from an instance-wise perspective, how adversarial vulnerability evolves during adversarial training. We find that during training an overall reduction of adversarial loss is achieved by sacrificing a considerable proportion of training samples to be more vulnerable to adversarial attack, which results in an uneven distribution of adversarial vulnerability among data. Such "uneven vulnerability" is prevalent across several popular robust training methods and, more importantly, relates to overfitting in adversarial training. Motivated by this observation, we propose a new adversarial training method: Instance-adaptive Smoothness Enhanced Adversarial Training (ISEAT). It jointly smooths both input and weight loss landscapes in an adaptive, instance-specific, way to enhance robustness more for those samples with higher adversarial vulnerability. Extensive experiments demonstrate the superiority of our method over existing defense methods. Noticeably, our method, when combined with the latest data augmentation and semi-supervised learning techniques, achieves state-of-the-art robustness against \(\ell_{\infty}\)-norm constrained attacks on CIFAR10 of 59.32% for Wide ResNet34-10 without extra data, and 61.55% for Wide ResNet28-10 with extra data. Code is available at [https://github.com/TreeLLi/Instance-adaptive-Smoothness-Enhanced-AT](https://github.com/TreeLLi/Instance-adaptive-Smoothness-Enhanced-AT).
adversarial robustness, adversarial training, loss smoothing, instance adaptive regularization
## 1 Introduction
Deep neural networks (DNNs) are well known to be very vulnerable to adversarial attacks [1]. Adversarial attacks modify (or "perturb") natural images (clean examples) to craft adversarial examples in such a way as to fool the target network to make predictions that are dramatically different from those for the corresponding clean examples even when the class of the perturbed input appears unchanged to a human observer. This raises severe security concerns about DNNs, especially as more and more real-world applications are dependent upon such models.
To date, adversarial training (AT) has been found to be the most effective defense against adversarial attacks [2]. It is typically formulated as a min-max problem:
\[\min_{\mathbf{\theta}}\mathbb{E}[\max_{\mathbf{\delta}}\mathcal{L}(\mathbf{x}+\mathbf{\delta};\mathbf{\theta})] \tag{1}\]
where the inner maximization searches for the strongest adversarial perturbation \(\mathbf{\delta}\) and the outer optimization searches for the model parameters \(\mathbf{\theta}\) to minimize the loss on the generated adversarial examples. One particular limit of vanilla AT [3] is that all samples in the data set are treated equally during training, neglecting individual differences between samples. Several previous works have made improvements to AT by customizing regularization in an instance-adaptive way. Regularization here can be implemented either by modifying the method used to generate the training-time adversarial sample or by modifying the strength of regularization applied to the loss function. For example, one popular instance-adaptive strategy is to enhance the strength of the training-time adversarial attack1 for hard-to-attack (adversarially benign) samples and/or to diminish the strength of the attack at those easy-to-attack samples [4, 5, 6, 7, 8]. Other strategies are discussed in detail in Section 2. We extend this line of work to improve AT, but propose a different strategy to distinguish instances (that particularly contrasts with the aforementioned popular method), and propose a new regularizer to jointly smooth both input and weight loss landscapes.
Footnote 1: The strength of the attack can be modified by scaling the magnitude of the perturbations found by the attack, or by changing the perturbation budget used by the attack, or by changing the number of steps used by the attack.
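As a schematic illustration of the min-max objective in Eq. (1), one outer-minimization step of adversarial training could be written as follows in PyTorch; `attack` stands for any inner-maximization procedure (e.g., FGSM or PGD, described in Section 2), and the names are illustrative rather than the authors' code.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, attack):
    """One outer-minimization step of Eq. (1): update parameters on worst-case perturbed inputs."""
    model.eval()
    delta = attack(model, x, y)            # inner maximization: find an adversarial perturbation
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()                        # outer minimization with respect to the parameters
    optimizer.step()
    return loss.item()
```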
The proposed approach of improving AT is motivated by an analysis, from an instance-wise perspective, of how adversarial vulnerability (formally defined as in Eq. (3)) evolves during AT. We first discuss the theoretical possibility that during AT the reduction in the overall adversarial loss may be produced by sacrificing some samples' losses (allowing them to increase) so that the others can decrease more. We then demonstrate, empirically, the prevalence of this issue in practice across various datasets and adversarial training methods. We observed that during AT there is an increase in the number of samples that have low- and, more surprisingly, high-vulnerability to adversarial attack. This produces an uneven distribution of robustness among instances, or "uneven vulnerability" for short. Furthermore, we show that for a large proportion of samples with low vulnerability, their margin along the adversarial direction is excessive in the extreme. Specifically, even if modified with a
perturbation with a magnitude about 30 times the perturbation budget, such samples can still be correctly classified. We describe such samples as having "disordered robustness". Next, we demonstrate that the uneven vulnerability issue is related to overfitting in AT and cannot be mitigated by the popular robust overfitting regularization methods like AWP [9] and RST [10]. Therefore, we hypothesize that enforcing an even distribution of adversarial vulnerability among data can improve the robust generalization of AT. Importantly, the proposed method is complementary to other regularization techniques.
The above insights lead to the proposal of a novel AT approach named Instance-adaptive Smoothness Enhanced Adversarial Training (ISEAT). It jointly enhances both input and weight loss landscape smoothness in an instance-adaptive way. Specifically, it enforces, in addition to standard AT, logit stability against both adversarial input and weight perturbations, and the strength of regularization for each instance is adapted to its adversarial vulnerability. Extensive experiments were conducted to evaluate the performance of the proposed method on various datasets and models. It significantly improves the baseline AT method and outperforms previous related methods in terms of adversarial robustness. Using standard benchmarks, the proposed method produces new state-of-the-art robustness of 59.32% and 61.55% respectively for settings that do not, and do, use extra data during training. A detailed ablation study was conducted to illuminate the mechanism behind the effectiveness of the proposed method.
## 2 Related Works
Adversarial training is usually categorized as single-step or multi-step AT according to the number of iterations used for crafting training adversarial examples. The common single-step and multi-step adversaries are the Fast Gradient Sign Method (FGSM) [1] and Projected Gradient Descent (PGD) [3], respectively. FGSM uses the sign of the loss gradients w.r.t. the input as the adversarial direction. PGD can be considered an iterative version of FGSM where the perturbation is projected back onto the \(\ell_{\infty}\)-norm \(\epsilon\)-ball at the end of each iteration. For brevity, a number is appended to the abbreviation PGD to indicate the number of steps performed when searching for an adversarial image; for example, PGD50 denotes a PGD attack with 50 steps throughout the text. AT is prone to overfitting, which is known as robust overfitting [11]. Specifically, test adversarial robustness degenerates while training adversarial loss declines during the later stage of training. Robust overfitting significantly impairs the performance of AT.
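For concreteness, a minimal PyTorch-style sketch of an \(\ell_{\infty}\) PGD attack as described above is given below; the function name, signature and defaults are illustrative rather than taken from any of the cited implementations, and inputs are assumed to lie in \([0,1]\).

```python
import torch


def pgd_linf(model, x, y, eps, alpha, steps,
             loss_fn=torch.nn.functional.cross_entropy):
    """Craft l_inf-bounded adversarial examples by iterated FGSM steps with projection."""
    delta = torch.empty_like(x).uniform_(-eps, eps)  # random start inside the eps-ball
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = loss_fn(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta = delta + alpha * grad.sign()      # FGSM step along the gradient sign
            delta = delta.clamp(-eps, eps)           # project back onto the eps-ball
            delta = (x + delta).clamp(0, 1) - x      # keep the perturbed image in [0, 1]
    return (x + delta).detach()
```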
It has been shown previously that adversarial robustness is related to the smoothness of the model's loss landscape w.r.t. the input [12, 13] and the model weights [9]. Therefore, we summarize existing methods for adversarial robustness in two categories: input loss smoothing and weight loss smoothing. For input loss smoothing, one approach is to explicitly regularize the logits [14] or the gradients [13] of each training sample to be the same as those of its neighboring samples within a certain distance. In addition, AT can be regarded as an implicit input loss smoothness regularizer whose strength is controlled by the direction and the size of the perturbation [14]. Regarding weight loss smoothing, Adversarial Weight Perturbation (AWP) [9] injects adversarial perturbation into the model weights to implicitly smooth the weight loss landscape. RWP [15] found that applying adversarial weight perturbation to only small-loss samples leads to improved robustness compared to AWP. Alternatively, Stochastic Weight Averaging (SWA) [16] smooths the weights by exponentially averaging checkpoints along the training trajectory. Our regularizer combines logit regularization and AWP to jointly smooth both the input and weight loss landscapes in a more effective way.
Many strategies have been proposed to improve AT by regularizing instances unequally. One popular strategy is to adapt the size of the adversarial input perturbation, and so the strength of regularization, to the difficulty of crafting successful adversarial examples. Typically, large perturbations are assigned to hard-to-attack samples in order to produce successful adversarial examples for more effective AT. In tandem, small perturbations are assigned to easy-to-attack samples in order to alleviate over-regularization for a better trade-off between accuracy and robustness. The size of the perturbation can be controlled by the number of steps [6], the perturbation budget [4, 7, 8] or an extra scaling factor [5]. Furthermore, this strategy has also been applied to weight loss smoothing in RWP [15]. We argue that this strategy contradicts our finding that hard-to-attack (low-vulnerability, in our terms) samples have already been over-regularized, so their regularization strength should not be further enlarged.
The most similar methods to ours are MART [17] and GAIRAT [18]. MART regularizes the KL-divergence between the logits of clean examples and the corresponding adversarial examples, weighted by one minus the prediction confidence on the clean examples. Hence, instances with lower clean prediction confidence receive stronger regularization. GAIRAT directly weights the adversarial loss of instances based on their geometric distance to the decision boundary, which is measured by the number of iterations required for a successful attack. Instances closer to the decision boundary (fewer iterations) are weighted more. Although there is a connection among prediction confidence, geometric distance and adversarial vulnerability (ours), they are essentially different metrics, and the weight schemes based on them thus perform differently. Regarding GAIRAT, the computation of geometric distance depends heavily on the training adversary, which severely limits its application; e.g., it cannot be applied to single-step AT without using an extra multi-step adversary. Another similar work is InfoAT [19], which, like the proposed method, uses logit stability regularization but weights the regularization according to the mutual information of clean examples.
In contrast to the manually crafted strategies described above, LAS-AT [20] customizes the adversarial attack automatically for each instance in a generative adversarial style. The parameters of the attacker, such as the perturbation budget for each instance, are generated on-the-fly by a separate strategy network, which is jointly trained alongside the classification network to maximize adversarial loss, i.e., produce stronger adversarial examples. This approach is more complicated than the alternatives, including ours, and potentially suffers from instability issues similar to those of GANs [21].
AT can benefit from using more training data to enhance robust generalization. This was theoretically proved for simplified settings such as Gaussian models [22]. For complicated but realistic settings, training with extra data from either real [10] or synthetic [23] sources dramatically boosts the robust performance of AT and has thus become a necessary ingredient for achieving state-of-the-art results. However, extra data is usually very expensive or even infeasible to acquire in many tasks, so [24] proposes a new data augmentation method, IDBH, specifically designed for AT. Our method is complementary to using extra data and data augmentation, and a further boost in robustness is observed when they are combined (see Section 5.1).
## 3 Uneven Vulnerability and Disordered Robustness
This section describes two issues of AT: "uneven vulnerability" and "disordered robustness". We first propose, theoretically, an alternative optimization path for AT which leads to the uneven vulnerability issue. We then demonstrate, empirically, that AT in practice is prone to optimize in this manner, and thus to producing models that become increasingly vulnerable at some samples while increasingly robust at others. We find that the robustness for a considerable proportion of training samples is actually disordered, because the model is insensitive to dramatic perturbations applied to them that should significantly affect its output. Last, we relate the uneven vulnerability issue to overfitting in AT and show that robust generalization can be improved by alleviating the unevenness.
We adopt a similar notation to that used in [14]. \(\mathbf{x}\in\mathbb{R}^{d}\) is a \(d\)-dimensional sample whose ground truth label is \(y\). \(\mathbf{x}_{i}\) refers to \(i\)-th sample in dataset \(D\) and \(x_{i,j}\) refers to the \(j\)-th dimension of \(\mathbf{x}_{i}\). Similarly, \(x_{j}\) refers to the \(j\)-th dimension of an arbitrary sample \(\mathbf{x}\). \(D\) is split into two subsets, \(D_{t}\) and \(D_{e}\), for training and testing respectively. \(\mathbf{\delta}_{i}\in\mathcal{B}(\mathbf{x}_{i},\epsilon)\) is the adversarial perturbation applied to \(\mathbf{x}_{i}\) to fool the network. \(\mathcal{B}(\mathbf{x},\epsilon)\) denotes the \(\epsilon\)-ball around the example \(\mathbf{x}\) defined for a specific distance measure (the \(\ell_{\infty}\)-norm in this paper). \(\delta_{i,j}\) is the perturbation applied along the dimension \(x_{i,j}\), and \(\delta_{j}\) is the perturbation applied along the \(j\)-th dimension of an arbitrary sample \(\mathbf{x}\). The network is parameterized by \(\mathbf{\theta}\). The output of the network before the final softmax layer (i.e., the logits) is \(f(\mathbf{x};\mathbf{\theta})\). The class predicted by the network, i.e., the class associated with the highest logit value, is \(F(\mathbf{x};\mathbf{\theta})\). The predictive loss is \(\mathcal{L}(\mathbf{x},y;\theta)\) or \(\mathcal{L}(\mathbf{x})\) for short (Cross-Entropy loss was used in all experiments). The size of a batch of samples used in one step of training is \(m\). \(M\) denotes the collection of indexes of the samples in a batch.
The experiments in this section were conducted using the default settings with CIFAR10 as defined in Section 5 unless specified otherwise. The model architecture was PreAct ResNet18 [25]. Robustness was evaluated against PGD50.
### _Adversarial Robustness and Loss Landscape_
Adversarial loss can be approximated following [14] by the sum of clean loss and adversarial vulnerability as
\[\mathcal{L}(\mathbf{x}+\mathbf{\delta})\approx\mathcal{L}(\mathbf{x})+\sum_{i}^{d}\nabla _{x_{i}}\mathcal{L}(\mathbf{x})\delta_{i}+\frac{1}{2}\sum_{i,j}^{d}\nabla_{x_{i} x_{j}}^{2}\mathcal{L}(\mathbf{x})\delta_{i}\delta_{j} \tag{2}\]
where \(\nabla_{x_{i}}\mathcal{L}(\mathbf{x})\), or \(\nabla_{x_{i}}\) for short, is the first-order gradient of \(\mathcal{L}(\mathbf{x})\) w.r.t. the input dimension \(x_{i}\) corresponding to the slope of the input loss landscape. Similarly, \(\nabla_{x_{i}x_{j}}^{2}\mathcal{L}(\mathbf{x})\), or \(\nabla_{x_{i}x_{j}}^{2}\) for short, denotes the second-order gradient w.r.t. \(x_{i}\) and \(x_{j}\) corresponding to the curvature of the loss landscape.
We define the adversarial vulnerability (AV) of \(\mathbf{x}\) against a particular attack as the loss increment caused by the attack:
\[\text{AV}=\mathcal{L}(\mathbf{x}+\mathbf{\delta})-\mathcal{L}(\mathbf{x}) \tag{3}\]
A higher vulnerability means that an adversarial attack has a greater impact on the loss value for that sample, and hence is more likely to corrupt the model's prediction for that sample. From this perspective, the loss gradients in Eq. (2) constitute the source of AV. Adversarial attacks exploit input loss gradients to enlarge the adversarial loss by aligning the sign of the perturbation with that of the corresponding gradient, i.e., \(sign(\delta_{i})=sign(\nabla_{x_{i}})\). Gradients with small magnitude (a flat loss landscape) therefore indicate low AV.
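As a simple illustration, the per-sample adversarial vulnerability of Eq. (3) could be computed as in the following PyTorch-style sketch; the function name is ours, and Cross-Entropy is assumed as the loss, as in our experiments.

```python
import torch
import torch.nn.functional as F


def adversarial_vulnerability(model, x, x_adv, y):
    """Per-sample AV = L(x + delta) - L(x) for a given attack (Eq. (3))."""
    model.eval()
    with torch.no_grad():
        loss_adv = F.cross_entropy(model(x_adv), y, reduction='none')
        loss_clean = F.cross_entropy(model(x), y, reduction='none')
    return loss_adv - loss_clean
```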
AT works by shrinking the magnitude of gradients, or flattening the loss landscape from the geometric perspective. Taking a toy example of training with two samples, adversarial loss can be written (second-order gradients are ignored for simplicity) as
\[\mathcal{L}(\mathbf{x}+\mathbf{\delta})\approx\mathcal{L}(\mathbf{x})+\sum_{i}^{d}\nabla_{ x_{1,i}}\delta_{1,i}+\sum_{i}^{d}\nabla_{x_{2,i}}\delta_{2,i} \tag{4}\]
Supposing that the training adversary is theoretically optimal, i.e., \(\forall i,j:sign(\nabla_{x_{i,j}})=sign(\delta_{i,j})\) and \(|\delta_{i,j}|=\epsilon\), the objective of AT can be rewritten as
\[\arg\min_{\mathbf{\theta}}\ \mathcal{L}(\mathbf{x})+\epsilon\sum_{i}^{d}\left|\nabla_{x_{1,i }}\right|+\epsilon\sum_{i}^{d}\left|\nabla_{x_{2,i}}\right| \tag{5}\]
Note \(|\cdot|\) denotes the absolute value. Ideally, the magnitude of every gradient is supposed to be shrunk by AT with the decrease of training loss resulting in a flat loss landscape over the input space for all training data.
However, Eq. (5) can also be reduced by sacrificing some gradients (allowing them to become large) in order to shrink other gradients or the clean loss, as long as the overall reduction is greater than the enlargement. From the geometric view, the loss landscape becomes steep around some samples so that it can be flat and/or low around others. If some gradients are consistently sacrificed (enlarged), the model may eventually converge to yield an uneven distribution of AV among the training data.
We argue that it is even more likely for the model to converge to such an uneven state if the training adversary is weaker than the theoretical optimum. A suboptimal adversary causes misalignment between the sign of the perturbation and the corresponding gradient, such that \(sign(\nabla_{x_{i,j}})\neq sign(\delta_{i,j})\) in Eq. (4), in which case training encourages the misaligned gradients to grow larger. If some
gradients are consistently misaligned by their corresponding perturbations, they accumulate to be large.
### _Uneven Distribution of Adversarial Vulnerability_
AT in practice is prone to minimize training loss in the alternative way described in the final paragraphs of the previous section. Consequently, AV is unevenly distributed among the data. To demonstrate this we track how the instance-wise AV of the training data varies as training progresses. Specifically, we measure the degree of unevenness for AV via its standard deviation (SD) over the entire training set:
\[\text{AV SD}=\sqrt{\mathbb{E}_{i\in D_{t}}[(\text{AV}_{i}-\mathbb{E}_{j\in D_ {t}}[\text{AV}_{j}])^{2}]} \tag{6}\]
Higher AV SD indicates that AV is spread out more among instances, i.e., higher unevenness. In addition, we compute the mean AV of the 10% of samples with the lowest AV ("bottom 10%") and the mean AV of the 10% of samples with the highest AV ("top 10%"). We also assess the unevenness of adversarial vulnerability by calculating the percentage of samples with AV \(\geq 1\) and \(\leq 0\) within the whole training set. \(1\) and \(0\) are selected as the thresholds for high and low AV respectively as the model's prediction, after adversarial attack, is observed to be significantly affected if AV \(\geq 1\) and hardly changed if AV \(\leq 0\).
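For reference, the statistics used here can be computed from a vector of per-sample AV values along the following lines (an illustrative sketch; the thresholds and the 10% fraction follow the text, while the function name is ours).

```python
import torch


def vulnerability_statistics(av):
    """Summary statistics of per-sample adversarial vulnerability over a training set."""
    av_sorted, _ = torch.sort(av)
    k = max(1, int(0.10 * av.numel()))
    return {
        'av_sd': av.std().item(),                       # AV SD, Eq. (6)
        'bottom_10_mean': av_sorted[:k].mean().item(),  # mean AV of the 10% least vulnerable samples
        'top_10_mean': av_sorted[-k:].mean().item(),    # mean AV of the 10% most vulnerable samples
        'frac_ge_1': (av >= 1).float().mean().item(),   # proportion of samples with AV >= 1
        'frac_le_0': (av <= 0).float().mean().item(),   # proportion of samples with AV <= 0
    }
```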
In Fig. 1(a) it can be observed that AV SD, as well as the number of high- and low-vulnerability samples, increases over the course of training. The average AV of the top 10% increases, while the average AV of the bottom 10% decreases to below 0. Although the overall training adversarial loss decreases, AV is distributed unevenly among training samples: some become extremely vulnerable to attack while others become very robust.
Fig. 1: Illustration of the phenomenon “uneven vulnerability”. **Fig. 1(a)** shows how instance-wise AV evolves with the training epoch. “AV (bot. 10%)” and “AV (top 10%)” measure the average AV of the 10% of training samples with the lowest and highest AV respectively. “AV SD” is the standard deviation of AV across all training samples. “AV \(\leq 0\)” and “AV \(\geq 1\)” measure the proportion of samples with AV \(\leq 0\) and \(\geq 1\) among the entire training set. **Fig. 1(b)** shows the distribution of margin \(\mu\) over the entire training set for the checkpoint with the highest robustness found in the experiment of Fig. 1(a). The yellow dashed line indicates the margin corresponding to the training perturbation budget \(\epsilon\). The red solid line represents the cumulative distribution.
Fig. 2: Visualization of disordered robustness. This figure shows clean samples and the corresponding adversarial samples for various levels of margin. Adversarial examples in each row were created using \(\mathbf{x}+\mu\frac{\mathbf{\delta}}{\|\mathbf{\delta}\|_{2}}\) with the value of \(\mu\) given at the left of each row. \(\mu=1.5\) approximately corresponds to the perturbation budget \(\epsilon\). The caption of each image describes the ground-truth class and the prediction confidence (the output of the softmax layer at the index of the ground-truth class). Although the images are modified dramatically by adversarial perturbation, to even a detrimental degree when \(\mu\geq 20\), they can still be correctly classified by the model without much variation in confidence, or even with higher confidence. In the cases with very large \(\mu\), the perturbed images become extremely hard to recognize or become meaningless to a human observer, while the model is able to recognize them correctly with high confidence.
To further examine this phenomenon, the margin (\(\mu\)) along the adversarial direction for each sample in the training data was computed from the adversarially-trained model using the method defined in [26]:
\[\operatorname*{arg\,min}_{\mu}\text{ s.t. }F(\mathbf{x}+\mu\frac{\mathbf{\delta}}{\|\mathbf{ \delta}\|_{2}};\mathbf{\theta})\neq F(\mathbf{x};\mathbf{\theta}) \tag{7}\]
where \(\mathbf{\delta}\) is computed using PGD50 and \(\|\cdot\|_{2}\) is the \(\ell_{2}\)-norm. As shown in Fig. 1(b), about 20% of the training data can be successfully attacked within the \(\epsilon\)-ball to fool the model into changing its prediction, and among them, around 5% of training instances can be successfully attacked using only a third of the perturbation budget \(\epsilon\). In contrast, a large proportion of samples exhibit an excessive margin along the adversarial direction. The prediction of the model remains constant under an attack with double the perturbation budget for about half the training data. More surprisingly, about 14% of the training samples exhibit the theoretically maximal effective margin (\(\mu=50\))2, which indicates that no perturbation along the adversarial direction can change the model's prediction. We name this property of extremely excessive margin "disordered robustness" because a reasonable model should be sensitive to noticeable or even devastating perturbations of the input (see Fig. 2 for examples of perturbed inputs for samples with disordered robustness).
Footnote 2: The margin value corresponding to the perturbation budget \(\epsilon\) is about 1.5. A margin value of 50 is, hence, equivalent to perturbing along the adversarial direction by approximately \(\frac{50}{1.5}\epsilon\approx 33\epsilon\), which is greater than 1. For our model, input images are normalized to have pixel values between zero and one, and the perturbed input is clipped to remain in this range, so increasing the magnitude of any perturbation beyond a value of one will have no additional effect on the input.
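The margin itself is computed with the method of [26]; purely as an illustration of Eq. (7), a coarse linear search along the normalized adversarial direction could look as follows. This is an illustrative sketch of ours (the step size and cap are arbitrary choices), assuming inputs in \([0,1]\) and a model in evaluation mode.

```python
import torch


def margin_along_direction(model, x, delta, mu_max=50.0, step=0.5):
    """Smallest scaling mu of the normalized adversarial direction that flips the prediction (cf. Eq. (7))."""
    direction = delta / delta.flatten(1).norm(dim=1).view(-1, *([1] * (delta.dim() - 1)))
    with torch.no_grad():
        base_pred = model(x).argmax(dim=1)
        margin = torch.full((x.size(0),), mu_max, device=x.device)
        mu = step
        while mu <= mu_max:
            pred = model((x + mu * direction).clamp(0, 1)).argmax(dim=1)
            newly_flipped = (pred != base_pred) & (margin == mu_max)
            margin[newly_flipped] = mu  # record the first mu at which the prediction changes
            mu += step
    return margin
```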
To further verify our claim, the loss landscapes around some representative samples are visualized in Fig. 3. The loss was calculated using:
\[L=\mathcal{L}(\mathbf{x}+\alpha\frac{\mathbf{\delta}}{\|\mathbf{\delta}\|_{2}}+\beta\frac {\mathbf{u}}{\|\mathbf{u}\|_{2}},y;\mathbf{\theta}) \tag{8}\]
where \(\mathbf{\delta}\) was generated by PGD50 and \(\mathbf{u}\) was randomly sampled from a uniform distribution \(\mathcal{U}(-\epsilon,\epsilon)^{d}\). The loss landscape was visualized along the adversarial and the random directions by varying \(\alpha\) and \(\beta\) respectively.
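A minimal sketch of how such a surface could be evaluated for a single example (with a leading batch dimension) is shown below; the grid resolution and ranges are illustrative choices of ours rather than the exact settings used for the figures.

```python
import torch
import torch.nn.functional as F


def loss_landscape(model, x, y, delta, eps, n=21, alpha_max=10.0, beta_max=10.0):
    """Loss surface L(x + alpha*d_adv + beta*d_rand, y) on an (alpha, beta) grid (cf. Eq. (8))."""
    d_adv = delta / delta.norm()                       # normalized adversarial direction
    u = torch.empty_like(x).uniform_(-eps, eps)
    d_rand = u / u.norm()                              # normalized random direction
    alphas = torch.linspace(0.0, alpha_max, n)
    betas = torch.linspace(0.0, beta_max, n)
    surface = torch.zeros(n, n)
    with torch.no_grad():
        for i, a in enumerate(alphas):
            for j, b in enumerate(betas):
                x_pert = (x + a * d_adv + b * d_rand).clamp(0, 1)
                surface[i, j] = F.cross_entropy(model(x_pert), y)
    return alphas, betas, surface
```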
For samples with disordered robustness, lower loss values are produced for values of \(\alpha>0\) than when \(\alpha=0\) (see Fig. 3(a) and Fig. 3(b)). This confirms that this particular kind of robustness is disordered, because adversarial examples are more benign, i.e., easier to classify correctly than clean examples in this case. In fact, perturbations in the random direction are more malicious than those in the adversarial direction, since the loss increases as \(\beta\) increases. Nevertheless, the highest loss value achieved by perturbations along the random direction is still very small, i.e., the perturbed sample remains easy to classify correctly. Therefore, such disordered robust samples are very resistant to input perturbations within the budget.
In contrast, the loss landscapes for (normal) robust samples (Fig. 3(c)) and vulnerable samples (Fig. 3(d)) are quite different. It can be observed that both robust and vulnerable samples exhibit an increasing loss as \(\alpha\) is increased (i.e., as the magnitude of the perturbation along the adversarial direction is increased), which is in stark contrast to the loss landscape at samples with disordered robustness.
We acknowledge that excessive margin has been observed before in AT [26]. However, our finding differs regarding the direction along which excessive margin is observed. [26] observed excessive margin for an adversarially-trained model along the adversarial direction found by PGD on a standardly-trained model (i.e. they used different models for the adversarial direction and the margin evaluation), while we observed it along the PGD adversarial direction generated for an adversarially-trained model (same model for adversarial direction and margin evaluation). Furthermore, [26] did not find that the direction along which the excessive margin is observed is in fact adversarially benign.
### _Connection of Uneven Adversarial Vulnerability to Robust Overfitting_
We hypothesize that the uneven distribution of AV accounts for overfitting in AT. To evaluate this hypothesis, we test how robust generalization varies with the unevenness of AV. Unevenness is controlled by the strength, \(\eta\), of a logit stability regularization applied to the 10% of samples with the highest AV:
\[\mathbb{E}_{i\in M}[\mathcal{L}(\mathbf{x}_{i}+\mathbf{\delta}_{i})+\eta\|f(\mathbf{x}_{i }+\mathbf{\delta}_{i})-f(\mathbf{x}_{i})\|_{2}^{2}1(\mathbf{x}_{i},\mathbf{\delta}_{i})] \tag{9}\]
where \(1(\cdot)\) is an indicator function to select the samples with highest AV within each training batch. \(\|\cdot\|_{2}^{2}\) is the squared \(\ell_{2}\)-norm. Unevenness is supposed to be reduced as \(\eta\) increases since those highly vulnerable samples are regularized to be more robust. We use this regularizer to
Fig. 3: Visualization of loss landscapes at samples with disordered robustness (Fig. 3(a) and Fig. 3(b)), and for a robust sample (Fig. 3(c)) and a vulnerable sample (Fig. 3(d)). \(\alpha\approx 1.5\) corresponds to a perturbation size of \(\epsilon\). Loss increases as the color of the plane changes from blue to red. Note that the scale of the loss is dramatically different for these three categories of data.
fine-tune a pre-adversarially-trained model for 10 epochs with an initial learning rate of 0.01, decayed by a factor of 0.1 halfway through training. Experiments are performed using different values of \(\eta\) from 0 to 4 with a step size of 0.5. Note that \(\eta=0\) means no regularization is applied.
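As an illustrative sketch of the objective in Eq. (9), one batch-level implementation could be the following; the function and variable names are ours, and the indicator selects the 10% of samples in the batch with the highest AV.

```python
import torch
import torch.nn.functional as F


def topk_stability_loss(model, x, x_adv, y, eta, frac=0.10):
    """Adversarial loss plus logit-stability regularization on the most vulnerable samples (Eq. (9))."""
    logits_adv = model(x_adv)
    logits_clean = model(x)
    loss_adv = F.cross_entropy(logits_adv, y, reduction='none')
    with torch.no_grad():
        av = loss_adv - F.cross_entropy(logits_clean, y, reduction='none')
        k = max(1, int(frac * x.size(0)))
        mask = torch.zeros_like(av)
        mask[av.topk(k).indices] = 1.0                         # indicator over the highest-AV samples
    stability = ((logits_adv - logits_clean) ** 2).sum(dim=1)  # squared l2 logit discrepancy
    return (loss_adv + eta * mask * stability).mean()
```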
As shown in Fig. 4, the AV SD on the training data drops with increasing regularization strength. The AV of the test data decreases even more dramatically, and test robustness increases, indicating improved robust generalization with stronger regularization. The increase in test robustness is less significant than the decrease in test AV because clean accuracy degrades with stronger regularization. Overall, this result verifies that the unevenness of AV is related to the degree of robust overfitting. Note that this does not imply that uneven AV is necessary for robust overfitting. For example, it is possible for a model with extremely large capacity to perfectly overfit every training sample in AT so that all training samples have nearly 0 AV. Nevertheless, uneven vulnerability might still occur at an intermediate stage during learning, and reduce as learning continues to produce perfect robust overfitting. Similarly, we observe that unevenness declines after the second decay of the learning rate in Fig. 1(a).
### _Prevalence of Uneven Adversarial Vulnerability Across Robust Training Schemes_
Finally, we show that the issue of unevenly distributed AV is prevalent across various robust training schemes. As shown in Tab. I, RST and AWP both mitigate the unevenness of AV to some extent, with a reduced top-10% average AV and a reduced number of high-vulnerability samples compared to vanilla AT. However, the reduction in unevenness produced by RST and AWP is less than that achieved by the purpose-built regularization (Eq. (9)). This suggests that these previous methods of improving AT contribute to enhanced adversarial robustness through a different mechanism from the proposed one, and that higher robustness can be expected if they are properly combined, as described next.
## 4 Instance Adaptive Smoothness Enhanced Adversarial Training
Inspired by the above insights, we propose a new general AT paradigm, Instance Adaptive Robustness Enhancement, that alleviates the uneven vulnerability issue to improve robust generalization. This approach enhances robustness for training samples with a strength adaptive to their vulnerability. In general, a higher strength of regularization is applied to samples with higher vulnerability with the aim of improving robustness for these samples. In contrast, low-vulnerability samples, which are already robustly classified, receive weaker regularization due to the desire to avoid the effects of over-regularization. Specifically, we combine AT with a robustness regularizer and adapt the regularization strength for each instance based on its AV. Algorithm 1 illustrates this general framework.
Our ultimate proposal for improving AT is a specific implementation of this general framework that uses AV weighted regularization to jointly smooth the input and weight loss landscapes. This specific realization of our proposal is called Instance-adaptive Smoothness Enhanced Adversarial Training (ISEAT), and is summarized in Algorithm 2 and described in detail below.
First, in order to integrate weight loss smoothing into our framework, as described in detail in the following section, we extend the metric defined in Eq. (3) to measure instance-wise vulnerability against adversarial input, \(\mathbf{\delta}\), and weight, \(\mathbf{v}\), perturbation:
\[\text{AV}_{i}=\mathcal{L}(\mathbf{x}_{i}+\mathbf{\delta}_{i};\mathbf{\theta}+\mathbf{v})- \mathcal{L}(\mathbf{x}_{i};\mathbf{\theta}) \tag{10}\]
\(\mathbf{v}\) is an adversarial perturbation within a pre-defined feasible region, \(\mathcal{V}\), applied to the model's parameters to maximize adversarial loss (adversarial input perturbation \(\mathbf{\delta}\) is assumed) [9]:
\[\arg\max_{\mathbf{v}\in\mathcal{V}}\ \mathbb{E}_{i\in M}[\mathcal{L}(\mathbf{x}_{i}+\mathbf{ \delta}_{i};\mathbf{\theta}+\mathbf{v})] \tag{11}\]
To optimize, \(\mathbf{v}\) is searched like \(\mathbf{\delta}\) by the projected gradient descent (PGD) algorithm [3] as:
\[\mathbf{v}\leftarrow\Pi_{\gamma}(\mathbf{v}+\rho\frac{\nabla_{\mathbf{v}}\mathbb{E}_{i \in M}[\mathcal{L}(\mathbf{x}_{i}+\mathbf{\delta}_{i};\mathbf{\theta}+\mathbf{v})]}{\|\nabla_ {\mathbf{v}}\mathbb{E}_{i\in M}[\mathcal{L}(\mathbf{x}_{i}+\mathbf{\delta}_{i};\mathbf{\theta} +\mathbf{v})]\|}\|\mathbf{\theta}\|) \tag{12}\]
where \(\rho\) is the step size, and \(\Pi_{\gamma}(\cdot)\) is a layer-wise projection operation, defined as follows:
\[\Pi_{\gamma}(\mathbf{v})=\begin{cases}\gamma\frac{\|\mathbf{\theta}_{n}\|}{\|\mathbf{v}_{ n}\|}\mathbf{v}_{n},&\text{if}\|\mathbf{v}_{n}\|>\|\gamma\mathbf{\theta}_{n}\|,\forall n \in N\\ \mathbf{v}_{n},&\text{if}\|\mathbf{v}_{n}\|\leq\|\gamma\mathbf{\theta}_{n}\|,\forall n\in N \end{cases} \tag{13}\]
where \(N\) denotes the collection of all layers in the network. \(\Pi_{\gamma}(\cdot)\) restricts the weight perturbation \(\mathbf{v}_{n}\) in the \(n\)-th layer to be relative to the corresponding weight \(\mathbf{\theta}_{n}\) such that
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Rob. (\%)} & \multicolumn{4}{c}{Adversarial vulnerability} \\ \cline{3-6} & & AV SD & Top 10\% & Bot. 10\% & \(\geq 1\) & \(\leq 0\) \\ \hline AT & 51.78 & 0.467 & 1.527 & -0.010 & 12.38 & 12.14 \\ AWP & 54.68 & 0.351 & 1.120 & -0.022 & 5.52 & 19.74 \\ RST & **57.68** & 0.443 & 1.378 & -0.011 & 9.96 & 20.87 \\ Eq. (9) & 55.95 & **0.196** & **0.633** & **-0.047** & **0.14** & **12.00** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Robustness and the statistics of adversarial vulnerability for various robust training schemes. ‘AV SD’ denotes the standard deviation of adversarial vulnerability. ‘Top 10%” (‘Bot. 10%’) denotes the the average AV of the 10% of training samples with the highest and lowest AV respectively. ‘\(\geq 1\)’ (‘\(\leq 0\)’) denotes the proportion of training data with adversarial vulnerability greater (less) than or equal to 1 (0).
Fig. 4: Accuracy, robustness and adversarial vulnerability for the models trained with different regularization strength. For training data, we measure AV SD to indicate how the unevenness of adversarial vulnerability varies. Regarding test performance, we evaluate accuracy, robustness and adversarial vulnerability (AV) on the test set.
\(\|\mathbf{v}_{n}\|\leq\|\gamma\mathbf{\theta}_{n}\|\). For more detail of this relative perturbation size, please refer to the original work [9].
In practice, [9] found that one step of search is enough for finding approximately optimal weight perturbation. \(\mathbf{v}\) can therefore be simplified (let \(\rho=\gamma\)) as:
\[\mathbf{v} =\Pi_{\gamma}(\gamma\frac{\nabla_{\mathbf{v}}\mathbb{E}_{i\in M}[ \mathcal{L}(\mathbf{x}_{i}+\mathbf{\delta}_{i};\mathbf{\theta}+\mathbf{v})]}{\|\nabla_{\mathbf{v}} \mathbb{E}_{i\in M}[\mathcal{L}(\mathbf{x}_{i}+\mathbf{\delta}_{i};\mathbf{\theta}+\mathbf{v}) ]\|}\|\mathbf{\theta}\|)\] \[=\gamma\frac{\nabla_{\mathbf{v}}\mathbb{E}_{i\in M}[\mathcal{L}(\mathbf{ x}_{i}+\mathbf{\delta}_{i};\mathbf{\theta}+\mathbf{v})]}{\|\nabla_{\mathbf{v}}\mathbb{E}_{i\in M }[\mathcal{L}(\mathbf{x}_{i}+\mathbf{\delta}_{i};\mathbf{\theta}+\mathbf{v})]\|}\|\mathbf{\theta}\| \tag{14}\]
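For illustration, a one-step adversarial weight perturbation in the spirit of Eqs. (12)-(14) could be sketched as below. The per-parameter normalization stands in for the layer-wise projection of Eq. (13), the perturbation is computed from the gradient at the unperturbed weights (one step from \(\mathbf{v}=0\)), and all names are our own rather than those of AWP's reference implementation.

```python
import torch
import torch.nn.functional as F


def compute_awp(model, x_adv, y, gamma):
    """One-step adversarial weight perturbation, scaled relative to the weight norm (cf. Eq. (14))."""
    loss = F.cross_entropy(model(x_adv), y)
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    # Ascend the adversarial loss, scaling each parameter tensor's perturbation by ||theta_n||.
    return [gamma * g / (g.norm() + 1e-12) * p.norm() for p, g in zip(params, grads)]


def apply_perturbation(model, perturbation, sign=1):
    """Add (sign=+1) or remove (sign=-1) the weight perturbation in place."""
    with torch.no_grad():
        for p, v in zip((q for q in model.parameters() if q.requires_grad), perturbation):
            p.add_(sign * v)
```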
Next, we consider how to weight instances according to their AV. We decided that regularization strength should depend on the relative order, instead of absolute value, of vulnerability so that the overall strength of regularization remains constant throughout training, even if the overall AV declines at the later stage of training. This is important for balancing the influence of AT and the additional robustifying methods. The regularization weight is generated linearly, based on the ranking of vulnerability within the batch, as follows:
\[w_{i}=1-\frac{r(\text{AV}_{i})}{m} \tag{15}\]
where \(r(\cdot)\) computes the ranking (indexed from 0 for the highest vulnerability) within the batch. Hence, the weights range from \(1\) (for the most vulnerable sample) to \(\frac{1}{m}\) (for the least vulnerable). This linear scheme is selected due to its simplicity and superiority over other options in performance (see Section 5.3 for an empirical comparison with some alternatives).
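A direct implementation of the linear weight scheme in Eq. (15) is straightforward; the sketch below (with our own naming) maps the most vulnerable sample in a batch to weight 1 and the least vulnerable to \(\frac{1}{m}\).

```python
import torch


def linear_rank_weights(av):
    """Weights w_i = 1 - r(AV_i)/m, where r ranks AV within the batch in descending order (Eq. (15))."""
    m = av.numel()
    order = torch.argsort(av, descending=True)        # position 0 = highest vulnerability
    rank = torch.empty_like(order)
    rank[order] = torch.arange(m, device=av.device)   # r(AV_i), 0-indexed
    return 1.0 - rank.float() / m
```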
Alternative robustifying methods that support instance-adaptive strength include adversarial perturbation customization [7, 8, 20], direct weight on loss [18] and loss smoothness regularization [14]. Typically, adversarial perturbation customization modifies the configuration of adversarial generation for each sample to reflect the required regularization strength, e.g., a large perturbation budget, \(\epsilon\), corresponding to a large strength. Applying this strategy naively in our framework is expected to double the computational overhead since adversarial examples would first be generated to measure AV and then re-generated using the modified configuration that is customized to the vulnerability of each sample. The increased computational burden can be very costly when the training adversary is multi-step. In contrast, the direct weight on loss method weights each instance directly via a separate coefficient on the overall loss. It adds virtually no extra computational cost, but was observed before by [20, 27] to induce the model to overfit the training adversary, resulting in a false sense of security. Therefore, direct weight on loss is computationally efficient but ineffective. Different from the above two methods, loss smoothness regularization has been widely verified to be effective in improving AT [9, 17] and can be computationally efficient if implemented properly [14]. Thus, we adopt loss smoothness regularization as the robustifying method to realize our framework.
### _Jointly Smoothing Input and Weight Loss Surfaces_
To robustify the model in addition to AT, we propose a new regularization method that jointly smooths both input and weight loss landscapes. The idea of joint smoothing is motivated by the observation in Section 3.4 that input and weight loss smoothing improve AT in a complementary way. The proposed regularizer enforces prediction Logit Stability against both adversarial Input and Weight perturbation (LSIW) so that the model's predicted logits remains, ideally, constant when the input and the weights are both adversarially perturbed. Specifically, we penalize the logit variation raised by input perturbation, \(\mathbf{\delta}\), and weight perturbation, \(\mathbf{v}\), as
\[\mathbb{E}_{i\in M}\|f(\mathbf{x}_{i}+\mathbf{\delta}_{i};\mathbf{\theta}+\mathbf{v})-f(\mathbf{x}_ {i};\mathbf{\theta})\|_{2}^{2} \tag{16}\]
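A minimal sketch of this regularization term, combined with the instance weights of Eq. (15), is given below; it assumes the two sets of logits have already been computed on the appropriate (weight-perturbed and unperturbed) models, and the function name is ours.

```python
def lsiw_penalty(logits_adv_perturbed, logits_clean, weights):
    """Instance-weighted logit-stability penalty (Eq. (16) with the weights of Eq. (15)).

    logits_adv_perturbed: f(x + delta; theta + v), from the weight-perturbed model.
    logits_clean:         f(x; theta), from the unperturbed model.
    """
    per_sample = ((logits_adv_perturbed - logits_clean) ** 2).sum(dim=1)  # squared l2 over classes
    return (weights * per_sample).mean()
```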
Loss smoothness can be regularized in principle through logits and gradients. Gradient regularization constrains the
loss gradient instead of the predicted logits. However, this requires double-backpropagation which is computationally expensive. In contrast, logit regularization adds only a marginal expense for computing the regularization loss. Therefore, logit regularization is much more computationally efficient than gradient regularization. Moreover, logit regularization empirically outperforms gradient regularization in terms of robustness improvement and the trade-off between accuracy and robustness [14]. Therefore, we adopt logit regularization to smooth loss.
We acknowledge that the idea of jointly smoothing input and weight loss was explored before in [9], but to our best knowledge, stabilizing predicted logits against adversarial weight perturbation is novel. The previous work combined adversarial weight perturbation with input loss smoothing using the method (named TRADE-AWP in the original work):
\[\text{KL}(f(\mathbf{x};\mathbf{\theta}+\mathbf{v}),f(\mathbf{x}+\mathbf{\delta};\mathbf{\theta}+\mathbf{v})) \tag{17}\]
Hence, in contrast to our approach, in this previous work both clean and adversarial examples were computed using the perturbed model, i.e., \(\mathbf{\theta}+\mathbf{v}\). We argue that adversarial weight perturbation is not fully utilized in this paradigm since the logit variation caused by weight perturbation is not explicitly constrained by the outer Kullback-Leibler (KL) divergence. Theoretically, a stronger regularization can be realized by forcing the predicted logits to be the same between clean examples on the unperturbed model and adversarial examples on the perturbed model, as in Eq. (16). The performance of these two approaches is compared in Section 5.3. The empirical results confirm the superiority of our approach over the previous one. Another difference between Eq. (16) and Eq. (17) is the metric used to measure the similarity or distance between two prediction logits. The squared \(\ell_{2}\)-norm is adopted in our solution due to its superior performance, as evaluated in Section 5.3.
### _Optimization_
Finally, we combine AWP-based AT with the proposed weight scheme and regularization method to get the overall training loss:
\[\mathbb{E}_{i\in M}[\mathcal{L}(\mathbf{x}_{i}+\mathbf{\delta}_{i};\mathbf{\theta}+\mathbf{v} )+\lambda w_{i}\|f(\mathbf{x}_{i}+\mathbf{\delta}_{i};\mathbf{\theta}+\mathbf{v})-f(\mathbf{x}_{i };\mathbf{\theta})\|_{2}^{2}] \tag{18}\]
There are two hyper-parameters, \(\lambda\) and \(\gamma\), in our method. \(\lambda\) controls the strength of the joint regularizer. \(\gamma\) in Eq. (14) directly controls the strength of the adversarial weight perturbation and also implicitly affects the strength of the joint regularizer.
In practice, modern machine learning frameworks [28] cannot directly compute the gradients of Eq. (18) w.r.t. \(\mathbf{\theta}\) in one backward pass on one model because the model used to compute \(f(\mathbf{x}_{i};\mathbf{\theta})\) will be altered by adversarial weight perturbation before backpropagation. To derive the update rule for gradient descent, we first rewrite Eq. (18) as a function of two models parameterized by \(\mathbf{\theta}^{\prime}=\mathbf{\theta}+\mathbf{v}\) and \(\mathbf{\theta}\) separately:
\[L(f(\mathbf{x}+\mathbf{\delta};\mathbf{\theta}^{\prime}),f(\mathbf{x};\mathbf{\theta})) \tag{19}\]
Next, we apply the chain rule to separate the gradient of Eq. (19) w.r.t. \(\mathbf{\theta}\) into the sum of two individual backward passes:
\[\frac{\partial L}{\partial\mathbf{\theta}}= \frac{\partial L}{\partial f(\mathbf{x}+\mathbf{\delta};\mathbf{\theta}^{ \prime})}\frac{\partial f(\mathbf{x}+\mathbf{\delta};\mathbf{\theta}^{\prime})}{\partial \mathbf{\theta}^{\prime}}\frac{\partial\mathbf{\theta}^{\prime}}{\partial\mathbf{\theta}}\] \[+\frac{\partial L}{\partial f(\mathbf{x};\mathbf{\theta})}\frac{\partial f (\mathbf{x};\mathbf{\theta})}{\partial\mathbf{\theta}} \tag{20}\]
After obtaining the gradients, we update the model's parameters following the method used for AWP [9] as:
\[\mathbf{\theta}\leftarrow(\mathbf{\theta}+\mathbf{v})-l\cdot\frac{\partial L}{\partial\mathbf{\theta}}\] where \(\frac{\partial L}{\partial\mathbf{\theta}}\) is the gradient given by Eq. (20) and \(l\) is the learning rate.
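To make the two-backward-pass computation concrete, a simplified PyTorch-style training step is sketched below. It reuses the illustrative helpers `compute_awp`, `apply_perturbation` and `linear_rank_weights` from the earlier sketches, treats the logits of the other branch as constants in each backward pass (exactly the split of Eq. (20)), and, for brevity, approximates the vulnerability used for weighting with the unperturbed model; it is a sketch of the idea rather than our released implementation.

```python
import torch
import torch.nn.functional as F


def iseat_step(model, optimizer, x, x_adv, y, lam, gamma):
    """One ISEAT update (Eqs. (18)-(20)) using two backward passes on the same model."""
    with torch.no_grad():                                     # clean reference f(x; theta)
        clean_ref = model(x)
        av = (F.cross_entropy(model(x_adv), y, reduction='none')
              - F.cross_entropy(clean_ref, y, reduction='none'))
        w = linear_rank_weights(av)                           # Eq. (15)

    v = compute_awp(model, x_adv, y, gamma)                   # Eq. (14)
    apply_perturbation(model, v, +1)                          # theta -> theta + v

    optimizer.zero_grad()
    logits_adv = model(x_adv)                                 # f(x + delta; theta + v)
    loss_adv = F.cross_entropy(logits_adv, y)
    reg_adv = (w * ((logits_adv - clean_ref) ** 2).sum(dim=1)).mean()
    (loss_adv + lam * reg_adv).backward()                     # first term of Eq. (20)

    adv_ref = logits_adv.detach()
    apply_perturbation(model, v, -1)                          # restore theta
    logits_clean = model(x)                                   # f(x; theta), fresh graph
    reg_clean = (w * ((adv_ref - logits_clean) ** 2).sum(dim=1)).mean()
    (lam * reg_clean).backward()                              # second term of Eq. (20)

    optimizer.step()                                          # update theta (Eq. (18))
```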
[18], FAT [6], MART [17] and LAS-AT [20] on CIFAR10. All their results were copied from the original works or other published sources such as RobustBench [36]. They used the same model architecture and the same, or very similar, training settings as we did. We additionally evaluated the performance of our method when combined with the data augmentation method IDBH (weak variant) [24] and with extra data like RST [10] to benchmark state-of-the-art robustness. We observed that our method, when combined with IDBH, akin to [37], benefited from training longer, so the total number of training epochs was increased to 400. Note that AT alone with IDBH (IDBH+AT) degenerated as the length of training was increased, so we report its performance with the default settings. For experiments with extra data, we used WideResNet28-10 instead of WideResNet34-10 to align with experimental protocols used in related works for a fair comparison. We adopted the same extra data as [10], i.e., 500K unlabeled images from the 80 Million TinyImages (80M-TI) dataset with pseudo-labels3. As in [10], extra data was included in the ratio 1:1 with the CIFAR10 data in each training mini-batch, so the effective batch size became 256.
Footnote 3: The extra data was downloaded from the official git repository of [10]: [https://github.com/yaircarmon/semisup-adv](https://github.com/yaircarmon/semisup-adv).
The hyper-parameters of our method were optimized using grid search. The optimal values found were: \(\lambda=0.1\) and \(\gamma=0.007\) for CIFAR10; \(\lambda=0.1\) and \(\gamma=0.005\) for CIFAR10 with IDBH; \(\lambda=0.01\) and \(\gamma=0.005\) for CIFAR10 with extra data; \(\lambda=0.1\) and \(\gamma=0.005\) for CIFAR100; \(\lambda=0.1\) and \(\gamma=0.009\) for SVHN. We observed that jointly smoothing input and weight loss with a large learning rate (0.1 in this case) degraded both accuracy and robustness due to over-regularization. Therefore, we adopted a warm-up strategy for \(\lambda\) on CIFAR10/100: \(\lambda\) was set to 0 during the initial epochs when the learning rate was large, and \(\lambda\) was set to the optimal value after first decay of the learning rate. Note that this strategy was not applied to the experiments with SVHN because the initial learning rate on SVHN was already small.
### _Benchmarking State-of-the-Art Robustness_
As can be seen from the results in Tab. II, our method significantly improves both accuracy and robustness over the baseline in all evaluated settings. Specifically, it boosts robustness by \(+3.12\%\) compared to AT in the default setting, by \(+5.16\%\) when IDBH data augmentation is used, and by \(+2.02\%\) compared to RST when extra real data from 80M-TI is used. More importantly, our method boosts accuracy as well, suggesting a better trade-off between accuracy and robustness. By combining with IDBH, our method achieves a robustness of 58.55% for WRN28-10, which is competitive with the baseline robustness of 59.53% achieved by RST using additional real data. This significantly closes the gap between the robust performance of training with and without extra data. Finally, we highlight that our method achieves, to the best of our knowledge, state-of-the-art robustness of \(61.55\%\)4 and \(59.32\%\) for the settings with and without extra data on the corresponding model architectures respectively.
Footnote 4: A higher record of robustness, 62.80%, was reported by [38]. However, it is unfair to directly compare our result with theirs since we used significantly different training settings. [38] replaced the ReLU activation function with SILU in WRN28-10. They also changed the batch size from 128 to 512, the labeled-to-unlabeled ratio from 1:1 to 3:7 and the learning rate schedule from piecewise to cosine. All these modifications were observed to have an important effect on the robust performance according to their experimental results. We expect our method to achieve higher robustness when trained using customized settings, but our computational resources are insufficient to search for the optimal training setup: [38] used 32 Google Cloud TPU v3 cores for training.
Our method outperforms all existing instance-adaptive AT methods in terms of robustness. We compare our method with related works using the default setup and CIFAR10 (Tab. II) since published results are available for this setup. Our method achieves the highest robustness, \(56.54\%\), among all competitive works, which considerably exceeds the previous best record of \(55.52\%\) achieved by LAS-AWP and the robustness of \(54.23\%\) achieved by the most similar previous work, MART-AWP. Particularly, our method dramatically outperforms FAT by \(+9.06\%\) in terms of robustness. FAT is one of the most recent contributions whose instance adaptation strategy contrasts ours, as described in Section 2. This supports our claim that this previous strategy for instance-adaptive AT is fundamentally defective. Furthermore, our method consistently achieves superior robustness compared to all available related works such as MART and GAIRAT in the condition with extra data.
Last, we find that the performance of our method can
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Model & Extra & Accuracy & Robustness \\ & & Data & (\%) & (\%) \\ \hline AT & & - & 85.90 \(\pm\) 0.57 & 53.42 \(\pm\) 0.59 \\ AT-AWP & & - & 85.57 \(\pm\) 0.40 & 54.04 \(\pm\) 0.40 \\ TRADE & & - & 85.72 & 53.40 \\ InfoAT & & - & 85.62 & 52.86 \\ GARAT & & - & 86.30 & 40.30 \\ FAT & & - & **87.97** & 47.48 \\ RWP & & - & 86.86 \(\pm\) 0.51 & 54.61 \(\pm\) 0.11 \\ MART & & - & 84.17 \(\pm\) 0.40 & 51.10 \(\pm\) 0.40 \\ MART-AWP & & - & 84.43 \(\pm\) 0.40 & 54.23 \(\pm\) 0.40 \\ LAS-AT & & - & 86.23 & 53.58 \\ LAS-AWP & & - & 87.74 & 55.52 \\ ISEAT (ours) & & - & 86.02 \(\pm\) 0.36 & 56.54 \(\pm\) 0.36 \\ ISEAT (ours)+SWA & & - & 85.95 \(\pm\) 0.09 & **57.09**\(\pm\) 0.13 \\ \hline IDBH+AT & & - & 87.03 \(\pm\) 1.58 & 54.16 \(\pm\) 0.70 \\ IDBH+ISEAT (ours) & & - & **88.50**\(\pm\) 0.11 & **59.32**\(\pm\) 0.08 \\ \hline \hline RST & & & 89.69 \(\pm\) 0.40 & 59.53 \(\pm\) 0.40 \\ RST+AGIRAT & & - & 87.50 & 56.29 \\ RST+AWP & & - & 89.36 & 59.64 \\ RST+AWP & & 88.25 \(\pm\) 0.40 & 60.05 \(\pm\) 0.40 \\ ISEAT (ours) & & - & 88.87 \(\pm\) 0.55 & 60.36 \(\pm\) 0.06 \\ ISEAT (ours) & & **90.59**\(\pm\) 0.19 & **61.55**\(\pm\) 0.10 \\ \hline IDBH+ISEAT (ours) & & - & 87.91 \(\pm\) 0.18 & 58.55 \(\pm\) 0.14 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Test accuracy and robustness for networks trained on CIFAR10 using our method and related methods. Results above the double line are for WRN34-10 without extra data and results below the double line are for WRN28-10 with extra data. The best result is highlighted for each metric in each block. The standard deviation is indicated by the value after the \(\pm\) sign if evaluated by us or reported in the original work, otherwise omitted from the table.
be further improved by \(+0.55\%\) in the default setup when another weight smoothing technique Stochastic Weight Averaging (SWA) is integrated. However, we did not observe a similar performance boost in the other setting for CIFAR10. This suggests that our method may exhaust the benefits of weight smoothing in some settings, but not all.
### _Generalization to Other Datasets_
Following common practice for testing generalization ability, we evaluate our method on the alternative datasets CIFAR100 and SVHN. As shown in Tab. III, our method significantly improves robustness over the baseline by \(+3.05\%\) on CIFAR100 and by \(+6.56\%\) on SVHN. It also slightly boosts accuracy on SVHN. Note that the magnitude of robustness improvement in a particular training setting generally depends on the degree of robust overfitting, which is connected to the unevenness of AV among training data. It is therefore reasonable for our method to perform differently on different datasets even using the same model architecture. Overall, the performance improvements across various datasets are consistent, which confirms that the proposed method is generally applicable.
### _Ablation Study_
We conducted ablation experiments to justify the design of our method and illuminate the mechanism behind its effectiveness. Experiments were performed using WRN34-10 on CIFAR10. To ensure a fair comparison, the approaches were applied to fine-tune the same model. This base model had been previously trained using AT with the default training setup, as described in Section 5. Fine-tuning was performed for 40 epochs. The initial learning rate was 0.01 and decayed to 0.001 after 20 epochs.
We first assess the contribution of different components in our method. It can be observed in Tab. IV that both the components of our method, (adaptively weighted) input loss smoothing and weight loss smoothing, can individually improve robustness over the baseline to a great extent, \(+1.08\%\) and \(+1.84\%\) respectively. This confirms that they both play a vital role in our method. Furthermore, combining them together (the proposed method) achieves a greater robustness boost, \(+3.04\%\), compared to either of them alone. This combined boost is greater than the arithmetic sum of the performance increases of the individual components (\(3.04\%>1.08\%+1.84\%\)) suggesting that these two components are complementary to each other.
Next, we examine our design of the input loss smoothness regularizer. We first verify the choice of distance metric used to measure the dissimilarity between two predicted logits. As shown in Tab. V, the squared \(\ell_{2}\)-norm (the adopted method) performs slightly better than KL-divergence (used by MART [17]) in terms of both accuracy and robustness. Moreover, we compare the performance of the linear weight scheme (the chosen method) with the top-10% weight scheme (used in the preliminary experiments reported in Section 3.3, see Eq. (9)) and the unweighted (or uniform) scheme. It can be observed in Tab. V that the weighted schemes, either linear or top-10%, considerably improve both accuracy and robustness over the unweighted scheme, and among the weighted schemes, the linear one outperforms the top-10% scheme regarding both accuracy and robustness. Overall, a linear weight scheme with the squared \(\ell_{2}\)-norm is empirically the best among all evaluated solutions.
Last, we examine the effectiveness of our approach to combining input and weight loss smoothing. We compare our proposal, LSIW, with Logit Stability regularization against Input perturbation only (LSI) and TRADE-AWP. The regularization loss of these methods is described in Tab. VI. For more technical detail, please refer to Section 4.1. We observe in Tab. VI that our approach achieves significantly higher accuracy and robustness than the others. This supports our hypothesis that stabilizing logits against both input and weight adversarial perturbation makes a better use of adversarial weight perturbation, and hence, results in a more effective smoothness regularization.
### _Computational Efficiency_
It can be seen from the results in Tab. IV that smoothing input loss landscape alone (i.e., weighted logit stability regularization) adds about 6% computational overhead, and smoothing weight loss landscape alone (i.e., AWP) adds around 9% computational overhead compared to AT. Jointly smoothing both input and weight loss landscapes using the proposed ISEAT method introduces an overhead of approximately 18% compared to AT. The extra cost of our method
\begin{table}
\begin{tabular}{l l l l} \hline \hline Method & Accuracy (\%) & Robustness (\%) & Time (s) \\ \hline AT & 85.90 \(\pm\) 0.57 & 53.42 \(\pm\) 0.59 & **0.253** \\ +input loss smoothing & 84.10 \(\pm\) 0.27 & 54.50 \(\pm\) 0.17 & 0.268 (+6\%) \\ +weight loss smoothing & **86.04**\(\pm\) 0.27 & 55.26 \(\pm\) 0.15 & 0.277 (+9\%) \\ +both (ISEAT) & 85.63 \(\pm\) 0.13 & **56.46**\(\pm\) 0.14 & 0.298 (+18\%) \\ \hline \hline \end{tabular}
\end{table} TABLE IV: The performance contribution of each component in the proposed ISEAT method. Time is an average measured for processing one mini-batch on a Nvidia RTX 3080Ti in seconds.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Dataset & Method & Accuracy (\%) & Robustness (\%) \\ \hline CIFAR100 & AT & **56.15**\(\pm\) 1.15 & 25.12 \(\pm\) 0.22 \\ CIFAR100 & ISEAT (ours) & 53.19 \(\pm\) 0.23 & **28.17**\(\pm\) 0.14 \\ \hline SVHN & AT & 90.55 \(\pm\) 0.60 & 47.48 \(\pm\) 0.59 \\ SVHN & ISEAT (ours) & **91.08**\(\pm\) 0.49 & **54.04**\(\pm\) 0.68 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Test accuracy and robustness of our method for PRN18 on CIFAR100 and SVHN.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Distance & Weight & Accuracy (\%) & Robustness (\%) \\ \hline AT & & **85.90**\(\pm\) 0.57 & 53.42 \(\pm\) 0.59 \\ KL-divergence & unweighted & 85.07 \(\pm\) 0.31 & 56.08 \(\pm\) 0.32 \\ Squared \(\ell_{2}\)-norm & unweighted & 85.15 \(\pm\) 0.70 & 56.20 \(\pm\) 0.19 \\ Squared \(\ell_{2}\)-norm & top-10\% & 85.53 \(\pm\) 0.03 & 56.36 \(\pm\) 0.02 \\ Squared \(\ell_{2}\)-norm & linear & 85.63 \(\pm\) 0.13 & **56.46**\(\pm\) 0.14 \\ \hline \hline \end{tabular}
\end{table} TABLE V: The performance of logit stability regularization with different distance metrics and weight schemes. “Distance” denotes the metric used to measure the discrepancy between two predicted logits. “Weight” denotes the weight scheme.
is greater than the sum of the extra cost of two separate smoothing components (\(18\%>6\%+9\%\)) because it requires additional forward and backward passes to compute the gradient of the proposed regularization. For more detail, please refer to Section 4.2 and Section 4.3.
## 6 Conclusion
This work investigated how adversarial vulnerability evolves during AT from an instance-wise perspective. We observed that the model was trained to be more robust on some samples and, meanwhile, more vulnerable on others, resulting in an increasingly uneven distribution of adversarial vulnerability among training data. We theoretically proposed an alternative optimization path for minimizing adversarial loss as an explanation for this phenomenon. Motivated by the above observations, we first proposed a new AT framework that enhances robustness at each sample with strength adapted to its adversarial vulnerability. We then realized it with a novel regularization method that jointly smooths the input and weight loss landscapes. Our proposed method is novel in a number of respects: 1) adapting regularization to instance-wise adversarial vulnerability is new and contrasts with the popular existing strategy; 2) stabilizing logits against adversarial input and weight perturbations simultaneously is novel and more effective than previous approaches. Experimental results show that our method outperforms all related works and significantly improves robustness over the AT baseline. Extensive ablation studies demonstrate the vital contribution of the proposed instance adaptation strategy and smoothness regularizer.
In addition to finding that AT results in an uneven distribution of adversarial vulnerability among training data, we also observed that for a considerable proportion of samples the model was excessively robust, such that even very large perturbations, making the sample unrecognizable to a human, failed to influence the prediction made by the network. One limitation of this work is that the proposed method, albeit effective in improving robustness, does not mitigate the issue of "disordered robustness". Future work might usefully explore this problem to further improve the performance of AT. A better trade-off between accuracy and robustness is anticipated if disordered robustness is alleviated.
## Acknowledgments
The authors acknowledge the use of the research computing facility at King's College London, King's Computational Research, Engineering and Technology Environment (CREATE), and the Joint Academic Data science Endeavour (JADE) facility. This research was funded by the King's - China Scholarship Council (K-CSC).
|
2302.03622 | Novel three-dimensional Fermi surface and electron-correlation-induced
charge density wave in FeGe | As the first magnetic kagome material to exhibit the charge density wave
(CDW) order, FeGe has attracted much attention in recent studies. Similar to
AV$_{3}$Sb$_{5}$ (A = K, Cs, Rb), FeGe exhibits the CDW pattern with an
in-plane 2$\times $2 structure and the existence of van Hove singularities
(vHSs) near the Fermi level. However, sharply different from AV$_{3}$Sb$_{5}$
which has phonon instability at $M$ point, all the theoretically calculated
phonon frequencies in FeGe remain positive. Here, we perform a comprehensive
study of the band structures, Fermi surfaces and nesting function of FeGe
through first-principles calculations. Surprisingly, we find that the maximum
of nesting function is at $K$ point instead of $M$ point. Two Fermi pockets
with Fe-$d_{xz}$ and Fe-$d_{x^{2}-y^{2}}$/$d_{xy}$ orbital characters have
large contribution to the Fermi nesting, which evolve significantly with
$k_{z}$, indicating the highly three-dimensional (3D) feature of FeGe in
contrast to AV$_{3}$Sb$_{5}$. Meanwhile, the vHSs are close to the Fermi
surface only in a small $k_{z}$ range, and does not play a leading role in
nesting function. Considering the effect of local Coulomb interaction, we
reveal that the Fermi level eigenstates nested by vector $K$ are mainly
distributed from unequal sublattice occupancy, thus the instability at $K$
point is significantly suppressed. Meanwhile, the wave functions nested by
vector $M$ have many ingredients located at the same Fe site, thus the
instability at $M$ point is enhanced. This indicates that the electron
correlation, rather than electron-phonon interaction, plays a key role in the
CDW transition at $M$ point. | Lin Wu, Yating Hu, Di Wang, Xiangang Wan | 2023-02-07T17:26:37Z | http://arxiv.org/abs/2302.03622v2 | # Novel three-dimensional Fermi surface and electron-correlation-induced charge density wave in FeGe
###### Abstract
As the first magnetic kagome material to exhibit the charge density wave (CDW) order, FeGe has attracted much attention in recent studies. Similar to AV\({}_{3}\)Sb\({}_{5}\) (A = K, Cs, Rb), FeGe exhibits the CDW pattern with an in-plane 2\(\times\)2 structure and the existence of van Hove singularities (vHSs) near the Fermi level. However, sharply different from AV\({}_{3}\)Sb\({}_{5}\) which has phonon instability at \(M\) point, all the theoretically calculated phonon frequencies in FeGe remain positive. Here, we perform a comprehensive study of the band structures, Fermi surfaces, nesting function and the mechanism of CDW transition through first-principles calculations. Surprisingly, we find that the maximum of nesting function is at \(K\) point instead of \(M\) point. Two Fermi pockets with Fe-\(d_{xz}\) and Fe-\(d_{x^{2}-y^{2}}\)/\(d_{xy}\) orbital characters have large contribution to the Fermi nesting, which evolve significantly with \(k_{z}\), indicating the highly three-dimensional (3D) feature of FeGe in contrast to AV\({}_{3}\)Sb\({}_{5}\). Meanwhile, the vHSs are close to the Fermi surface only in a small \(k_{z}\) range, and does not play a leading role in nesting function. Considering the effect of local Coulomb interaction, we reveal that the Fermi level eigenstates nested by vector \(K\) are mainly distributed from unequal sublattice occupancy, thus the instability at \(K\) point is significantly suppressed. Meanwhile, the wave functions nested by vector \(M\) have many ingredients located at the same Fe site, thus the instability at \(M\) point is enhanced. This indicates that the electron correlation, rather than electron-phonon interaction, plays a key role in the CDW transition at \(M\) point.
In this work, we systematically study the energy band structure and the Fermi surface for different \(k_{x}\)-\(k_{y}\) planes as \(k_{z}\) changes from 0 to 0.5, and find that the electronic structures of FeGe have a strong 3D character. The positions of the vHSs evidently shift as \(k_{z}\) varies, and only in a small \(k_{z}\) range are the vHSs in proximity to the Fermi level (\(\pm\)0.1 eV), distinct from the quasi-two-dimensional structural characteristics of AV\({}_{3}\)Sb\({}_{5}\)[31, 33]. Our numerical results show that the maximum of the nesting function is at the \(K\) point instead of the \(M\) point. We then find that two pockets contribute strongly to the nesting function, deriving respectively from the \(d_{xz}\) orbital and a combination of \(d_{x^{2}-y^{2}}\)/\(d_{xy}\) orbitals. To understand the conflict between the nesting function peaking at the \(K\) point and the CDW transition at the \(M\) point, we consider the effect of the local Coulomb interaction [75]. We find that the Fermi surface nesting at the \(K\) point is ineffective due to the different sublattice characters of the band structures, similar to the sublattice mechanism in superconductors [11, 76]. On the other hand, the CDW instability at the \(M\) point could be enhanced by the local Coulomb interaction, since the wave functions nested by vector \(M\) are mainly located on the same Fe site. This implies that the strong 3D Fermi surfaces and the local electron correlation play indispensable parts in the CDW transition in FeGe.
We carried out first-principles density functional theory (DFT) calculations by employing the full-potential all-electron code Wien2k [77]. The local spin density approximation (LSDA) [78] was used as the exchange-correlation functional in our calculations for the A-type AFM state. We chose a fine k-mesh of 200\(\times\)200\(\times\)100 in the irreducible BZ to ensure that the nesting function calculated from the eigenvalues is robust with respect to the number of \(k\) points. Since our conclusions are quite different from those for typical kagome materials such as AV\({}_{3}\)Sb\({}_{5}\)[30, 31, 33], we have also used the Vienna ab initio Simulation Package (VASP) [79, 80] with the projector augmented wave (PAW) [81, 82] method to confirm our electronic structure calculations. The results of the two methods are in good agreement, and in this paper we present the calculated results from Wien2k.
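To make the eigenvalue-based evaluation of the nesting function concrete, the sketch below computes a Gaussian-broadened nesting function from band energies on a regular k-mesh. It is only an illustration of the procedure: the mesh size, broadening, band count and the random eigenvalues are placeholder assumptions, and this is not the code used for the calculations reported here.

```python
import numpy as np

# Placeholder parameters (not the values used in the paper): a coarse 2D k-mesh,
# a Gaussian broadening replacing the delta functions, and random "bands".
nk, nbands, sigma, E_F = 48, 4, 0.02, 0.0
rng = np.random.default_rng(0)
eps = rng.uniform(-1.0, 1.0, size=(nbands, nk, nk))   # eps[n, i, j] = band n at k=(i, j)

def gaussian_delta(e, sigma):
    """Smeared delta(e), used to select states at the Fermi level."""
    return np.exp(-(e / sigma) ** 2) / (np.sqrt(np.pi) * sigma)

w = gaussian_delta(eps - E_F, sigma)                   # Fermi-level weight of each state

def nesting(qi, qj):
    """xi(q) ~ (1/Nk) sum_{k,n,m} delta(e_nk - E_F) * delta(e_m,k+q - E_F),
    with k+q realised as a periodic shift of the mesh."""
    w_shift = np.roll(w, shift=(qi, qj), axis=(1, 2))
    return np.einsum("nij,mij->", w, w_shift) / nk**2

xi = np.array([[nesting(qi, qj) for qj in range(nk)] for qi in range(nk)])
print("peak of the toy nesting function:", xi.max())
```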
We start by performing an LSDA calculation for FeGe based on the experimental crystal structure (see Supplemental Materials (SM)) and the A-type AFM ground state [74]. The band structure and the density of states (DOS) (see SM) are similar in many aspects to those of the isostructural compound FeSn [54, 55, 56, 57, 58]. Our calculations reproduce the results of previous studies [61] quite well. The magnetic moment of the Fe ions is estimated to be 1.55 \(\mu_{B}\), which is close to the previous experimental value of about 1.72 \(\mu_{B}\)[70, 72]. Besides, similar to AV\({}_{3}\)Sb\({}_{5}\)[31, 33], we find a pair of vHSs close to the Fermi level on both the \(k_{z}\)=0 and 0.5 high-symmetry planes, as shown in Fig. 6 of the SM.
In order to analyse the orbital components of the energy bands, it is important to take the proper local coordinate axis. Since the angle between the orientations of Fe\({}_{A}\), Fe\({}_{B}\), and Fe\({}_{C}\) (see SM) to the same nearest neighbour Ge\({}^{out}\) atom is 120\({}^{\circ}\) in the global coordinate, the x/y components of Fe\({}_{A/B/C}\)-\(d\) orbitals are not equivalent. The suitable local coordinate of Fe\({}_{A}\), Fe\({}_{B}\), and Fe\({}_{C}\) we have chosen are shown in Fig. 5(c) of SM, any two of which could be transformed into each other by the \(C_{3}\) symmetry operation. The local x-axis always points to the nearest-neighbor Ge\({}^{in}\) atom and the local z-axis direction is the same as the one in the global coordinate. In this local coordinate, the five Fe-\(3d\) orbitals are clearly divided into higher \(d_{yz}\) and \(d_{x^{2}-y^{2}}\) parts and lower \(d_{z^{2}}\), \(d_{xz}\) and \(d_{xy}\) parts, consistent with the \(e_{g}\)-\(t_{2g}\) relationship in the ortho-octahedral crystal field. The selection of such local coordinate is also helpful to the analysis of the Fermi pockets as shown in the following.
We present the orbital components of the energy bands in Fig. 1 using the above local coordinate. As mentioned above, the kagome lattice with nearest-neighbour hopping will present the typical three-band structure including flat band, vHS, and Dirac cone [5, 6, 8, 9, 10, 11]. Along the high symmetry directions \(\Gamma-M-K-\Gamma\) lying in \(k_{z}\)=0 plane, there are two different kagome structures near the Fermi level. One of the Kagome structures consists of Fe-\(d_{xz}\) orbitals, which are located between -2.0 and 0.5 eV, and contain the vHS1 located at -0.26 eV below \(E_{F}\), as marked in red in Fig. 1. The other kagome structure containing vHS2 consists of a combination of \(d_{x^{2}-y^{2}}\) and \(d_{xy}\) orbitals shown in purple and blue respectively. The vHS2 is located above the Fermi energy (0.07 eV). Three Fe-\(d_{yz}\) orbitals (labeled in orange in Fig. 1) also form a kagome three-band structure. However, the interaction between the Fe atoms and the \(p_{z}\) orbitals of the Ge\({}^{out}\) leads to two bands shifting downward away from \(E_{F}\). Along the \(A-L-H-A\) path in the \(k_{z}\)=0.5 plane there are three kagome structures. Due to the interaction with the \(p\) orbitals of Ge\({}^{out}\), the three-band kagome structure composed of Fe-\(d_{z^{2}}\) orbitals is located below and away from the Fermi energy level in the \(k_{z}\)=0 plane (from -2.5 to -1.0 eV), while in the \(k_{z}\)=0.5 plane the energy bands rise and cross the Fermi level. Two vHSs at the \(L\) point close to the Fermi energy level are contributed by Fe-\(d_{x^{2}-y^{2}}\)/\(d_{xy}\) and Fe-\(d_{z^{2}}\) orbitals, marked as vHS2 and vHS3 respectively in Fig. 1 (The analysis below confirms that the vHS2 in \(k_{z}\)=0 plane slowly turn into the vHS2 in \(k_{z}\)=0.5 plane with \(k_{z}\) shifts from 0 to 0.5). It is worth mentioning that vHSs near \(E_{F}\) have different orbital characters in \(k_{z}\)=0 and \(k_{z}\)=0.5 planes, indicating the important role of \(k_{z}\) in electronic structures.
Furthermore, using group theory [83, 84], we obtain the irreducible representations (irreps) of the little group for each
Figure 1: The orbital-projected electronic band structure of FeGe near Fermi level. The Fe-\(3d\) orbital projection is performed with the local coordinate. The orbital characters are labelled by different colors.
vHS based on the wave functions from DFT calculations. At the \(M\) point, our calculations identify the irrep of vHS1 as B\({}_{3g}\), while vHS2 corresponds to the irrep A\({}_{g}\). On the other hand, at the \(L\) point, the irreps of vHS2 and vHS3 are both A\({}_{u}\). The irreps of the vHSs closer to \(E_{F}\) are different in the \(k_{z}\)=0 and \(k_{z}\)=0.5 planes, also indicating the strong 3D character of FeGe, which will be discussed carefully in the following.
To understand the CDW instability, we calculate the Fermi surface nesting function \(\xi(\mathbf{q})\)[85]. A 2\(\times\)2\(\times\)2 supercell structure of the CDW phase, relative to the non-magnetic pristine phase, is suggested by experimental results [63; 64; 65; 66]. Since the A-type AFM structure already doubles the unit cell along the z-direction, we focus on the in-plane \(\mathbf{q}\) vector in the following, and obtain the nesting function \(\xi(q_{x},q_{y},0)\), as illustrated in Fig. 2. It can be seen that the maximum values of the nesting function are located at the \(K\) point instead of the \(M\) point, which is different from the results of the tight-binding model of the kagome lattice [8; 9; 10; 11] and of AV\({}_{3}\)Sb\({}_{5}\)[31; 33]. This motivated a careful analysis of the band structure, Fermi surface and nesting function of FeGe.
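For clarity, the nesting function can be written in the standard form of a double Fermi-surface average (the zero-frequency, constant-matrix-element limit of the imaginary part of the bare susceptibility); this textbook expression is quoted only as a reminder of the definition, and ref. [85] may use an equivalent but differently normalized form:

\[\xi(\mathbf{q})=\frac{1}{N_{k}}\sum_{\mathbf{k},n,m}\delta\left(\varepsilon_{n\mathbf{k}}-E_{F}\right)\delta\left(\varepsilon_{m\mathbf{k}+\mathbf{q}}-E_{F}\right),\]

where \(\varepsilon_{n\mathbf{k}}\) are the band eigenvalues, \(E_{F}\) is the Fermi level and \(N_{k}\) is the number of \(\mathbf{k}\) points; a large \(\xi(\mathbf{q})\) means that many pairs of Fermi-level states are connected by the wave vector \(\mathbf{q}\).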
As shown in Fig. 3(a), for \(k_{z}=0\) plane, the Fermi surface pocket formed by the \(d_{xz}\)-dominant energy band (labeled as \(\beta\)) presents a hexagon parallel to the BZ (hereafter called the h-hexagon), and the maximum of nesting function is located at \(K\) point (bottom panel of Fig. 3(a)). Differently, the Fermi surface structure in AV\({}_{3}\)Sb\({}_{5}\)[31; 33] is v-hexagon, i.e. a hexagon with a difference of 60\({}^{\circ}\) rotation from the BZ edge direction, where the peak of nesting function is located at \(M\) point. It is worth mentioning that in Fig. 2, besides the maximum of total nesting function located at the \(K\) point, there are also peaks along the \(\Gamma-M\) direction. Therefore we carefully analyse the \(k_{z}\) momentum-dependent evolution of band structure, Fermi surface and the nesting function, and show the results in Fig. 3(a-f). As \(k_{z}\) increases from 0 to 0.2, vHS1 gradually approaches the Fermi surface, and the Fermi surface pocket \(\beta\) gradually changes from h-hexagon to v-hexagon as shown in Fig. 3(a-c). The \(d_{x^{2}-y^{2}}\)/\(d_{xy}\) bands forming the pocket \(\alpha\) and containing vHS2 shift down as \(k_{z}\) increases, and the shape of the pocket \(\alpha\) changes from a v-hexagonal shape (see Fig. 3(c)) to a circle (see Fig. 3(d)), and then gradually to an h-hexagonal shape (see Figs. 3(e) and (f)) as \(k_{z}\) increases. For \(k_{z}\)=0.5 plane, the Fermi surface evolved to contain three different h-hexagonal-shaped pockets. As can be seen from Fig. 3, although the shape of the Fermi surface changes with \(k_{z}\), pockets \(\alpha\) and \(\beta\) both retain their hexagonal-like shape for a wide range of \(k_{z}\) values, suggesting a large contribution to the Fermi surface nesting.
For the \(k_{z}\)=0 and 0.1 planes, the maxima of the nesting function are located at or near the \(K\) point, as shown in Figs. 3(a) and (b). In the \(k_{z}\)=0.2 plane, the nesting function shows an enhancement along the \(\Gamma-M\) direction in momentum space, as shown in Fig. 3(c), which originates from the nesting of the v-hexagonal pockets \(\alpha\) and \(\beta\) in Fig. 3(c). As \(k_{z}\) continues to shift to around 0.4, the shape of the pockets \(\alpha\) and \(\beta\) changes again from a v-hexagon to an h-hexagon, with the maximum of the nesting function gradually moving away from the \(\Gamma-M\) direction towards the \(\Gamma-K\) direction. For the \(k_{z}\)=0.5 plane, the h-hexagonal pockets around \(\Gamma\), with an area similar to that of the BZ plane, lead to maxima of the nesting function near the \(\Gamma\) point along the \(\Gamma-K\) direction, as shown in Fig. 3(f).
The differences between the Fermi surface and nesting function of AV\({}_{3}\)Sb\({}_{5}\) and those of FeGe originate from their different crystal structures. In AV\({}_{3}\)Sb\({}_{5}\) (A = K, Cs, Rb), due to the presence of the \(A\) layers, the distance between the interlayer V atoms is at least 9.308 \(\AA\). In FeGe, on the other hand, the minimum distance between nearest-neighboring interlayer Fe atoms is 4.041 \(\AA\), since in FeGe only Fe-Ge\({}^{in}\) and Ge\({}^{out}\) layers are alternately arranged. Thus, the hopping parameters of the Fe-3\(d\) orbitals in the z-direction are significantly larger than those of the V-\(d\) orbitals, and the bands near the Fermi surface in FeGe, which are mainly contributed by Fe-3\(d\) orbitals, have strong 3D features.
In tight-binding models of the kagome lattice [8; 9; 10; 11] and of AV\({}_{3}\)Sb\({}_{5}\)[30; 31; 33], the vHSs near the Fermi level can induce a peak of the nesting function at the \(M\) point. Moreover, in AV\({}_{3}\)Sb\({}_{5}\), the phonon instability [30; 33] and the CDW transition [22; 23; 24; 25; 26; 27] at the \(M\) point driven by Fermi surface nesting have been suggested. However, in FeGe, where the Fermi surface has a strong 3D character, the vHSs are not always near the Fermi level. For example, vHS1 is close to the Fermi level (\(\pm\)0.1 eV) only in the small range of \(k_{z}\)=0.183-0.267, which is 16.8% of the BZ. Meanwhile, the fractions of \(k_{z}\) for which vHS2 and vHS3 are close to the Fermi level are 53.4% and 13.4%, respectively. Our numerical results show that the ratios of the nesting function contributed by vHS1, vHS2, and vHS3 around the Fermi level to the total nesting function are 11.02\(\%\), 20.44\(\%\) and 7.53\(\%\), respectively. This means that, unlike in AV\({}_{3}\)Sb\({}_{5}\)[30; 31; 33], the vHSs near the Fermi surface are not the most important factor in the nesting function of FeGe.
In FeGe, the maximum of nesting function is at the \(K\) point, which does not correspond to the observed CDW wave vector
Figure 2: The nesting function \(\xi(q_{x},q_{y},0)\) of FeGe. We neglected the peak at \(q\)=0 to better present the nesting function.
at \(M\) point, indicating that interactions play an important role in the CDW transition [75, 86]. Meanwhile, the theoretically calculated phonon spectrum remains positive [65, 61, 66]. The electron-phonon interaction is generally weakly momentum-dependent, except for the singular behaviour in non-traditional superconductivity [87]. Therefore, we believe that the conflict between the nesting function at the \(K\) point and the CDW transition at the \(M\) point does not originate from the electron-phonon interaction. Thus, we analyze the local electron-electron correlation interaction, which is of substantial importance in 3\(d\) electron systems [88].
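A schematic way to state the sublattice argument developed below is to keep the wave-function matrix elements in the bare susceptibility. In the generic form

\[\chi_{0}(\mathbf{q})=\frac{1}{N_{k}}\sum_{\mathbf{k},n,m}\frac{f(\varepsilon_{n\mathbf{k}})-f(\varepsilon_{m\mathbf{k}+\mathbf{q}})}{\varepsilon_{m\mathbf{k}+\mathbf{q}}-\varepsilon_{n\mathbf{k}}}\left|\langle u_{m\mathbf{k}+\mathbf{q}}|u_{n\mathbf{k}}\rangle\right|^{2},\]

where \(f\) is the Fermi function and \(|u_{n\mathbf{k}}\rangle\) is the Bloch state expanded in the Fe sublattice (atomic-orbital) basis, nested states that reside on different sublattices contribute only weakly because their overlap is small, while a local, site-diagonal Coulomb interaction can only further enhance the sublattice-coherent contributions. This expression is given as a generic illustration of the sublattice-interference mechanism [11, 76], not as the specific formalism of ref. [75].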
Based on DFT calculations, we obtain the wave functions \(\psi_{n,\mathbf{k+q}}\left(\mathbf{r}\right)\) and \(\psi_{n,\mathbf{k}}\left(\mathbf{r}\right)\), which are electronic Bloch states at the Fermi level connected by the vector \(\mathbf{q}\). We expand the distribution of \(\psi_{n,\mathbf{k+q}}\left(\mathbf{r}\right)\) and \(\psi_{n,\mathbf{k}}\left(\mathbf{r}\right)\) to the basis set of atomic orbitals in real space. We find that when \(\mathbf{q}=\mathbf{q}_{K}(1/3,1/3,0)\), the wave functions \(\psi_{n,\mathbf{k+q}}\left(\mathbf{r}\right)\) and \(\psi_{n,\mathbf{k}}\left(\mathbf{r}\right)\) are with unequal predominant sublattice occupancy. Meanwhile, when \(\mathbf{q}=\mathbf{q}_{M}(1/2,0,0)\), the wave functions \(\psi_{n,\mathbf{k+q}}\left(\mathbf{r}\right)\) and \(\psi_{n,\mathbf{k}}\left(\mathbf{r}\right)\) are mainly distributed from the same Fe site. We take a representative \(k_{z}\)=0.267 plane as an example to show the above mentioned results for the nesting vector \(\mathbf{q}_{K}\) in Fig. 4(a), with the characters of three sublattice Fe\({}_{A}\), Fe\({}_{B}\), and Fe\({}_{C}\) indicated in red, green and blue, respectively. The Fermi surface contours of \(\beta\) pockets coincide when shifted along the nesting vector \(\mathbf{q}_{K}(1/3,1/3,0)\), as shown in red arrows of Fig. 4(a), resulting in the peak of nesting function at \(K\) point. However, the nesting vector \(\mathbf{q}_{K}\) connects Fermi surface points with mainly different sublattice occupation, as shown in Fig. 4(a). It is worth mentioning that, due to the locality of Coulomb correlation, the electron-electron correlation interaction is diagonal in the index of atomic sites. It means that the susceptibility is suppressed regardless of the peak of nesting function at \(K\) point [75]. Meanwhile, we demonstrate the results of the nesting vector \(\mathbf{q}_{M}\) by \(k_{z}\)=0.433 plane in Fig. 4(b). There are nested Fermi surfaces along \(\mathbf{q}_{M}(1/2,0,0)\) connecting the opposite edges of Fermi pocket \(\alpha\), as shown in red arrow of Fig. 4(b). It can be seen that the wave functions \(\psi_{n,\mathbf{k+q}}\left(\mathbf{r}\right)\) and \(\psi_{n,\mathbf{k}}\left(\mathbf{r}\right)\) connected by the vector \(\mathbf{q}_{M}\) are dominated by same sublattice occupancy as mentioned above, leading to enhanced susceptibility at the \(M\) point. Therefore, similar with the sublattice mechanism in superconductors [11, 76], the CDW in
Figure 3: (a)-(f) The orbital-projected electronic band structures (top panel) and 2D Fermi surface (middle panel), and the nesting function (bottom panel) of FeGe along the \(DT(0,0,k_{z})-U(\frac{1}{2},0,k_{z})-P(\frac{1}{2},\frac{1}{3},k_{z})-DT(0,0,k _{z})\) path in the \(k_{z}\)=0, 0.1, 0.2, 0.3, 0.4, and 0.5 planes. For the orbital-projected band structures, the circle size shows the relative portion of each orbital. For the 2D Fermi surface, the pockets \(\alpha\), \(\beta\), \(\Gamma\), and \(\delta\) represent the pockets dominated by a combination of Fe-\(d_{xy}\) and \(d_{x^{2}-y^{2}}\), Fe-\(d_{xz}\), Fe-\(d_{yz}\), and Fe-\(d_{z^{2}}\) orbital characters, respectively.
Figure 4: The 2D Fermi surface in \(k_{x}-k_{y}\) plane with (a)\(k_{z}\)=0.267 and (b)\(k_{z}\)=0.433. The sublattice characters of the Fermi level states including three different Fe\({}_{A}\), Fe\({}_{B}\), and Fe\({}_{C}\) sites, are marked in red, green and blue, respectively. The red arrows indicate the nesting vectors \(q_{K}\)=(1/3,1/3,0) and \(q_{M}\)=(1/2,0,0).
stability at the \(M\) point is considered to be driven by the local electron correlation. Since the electronic instability can significantly affect phonons [75; 86], this may explain the experimentally observed phonon anomalies [61; 65].
In summary, based on DFT calculations, we comprehensively investigated the Fermi surface nesting and the microscopic origin of the CDW order in the kagome magnetic metal FeGe. Our results indicate that the energy bands and Fermi surfaces of FeGe vary significantly with \(k_{z}\), and the maximum of nesting function is at the \(K\) point instead of the CDW vector at \(M\) point. We find that the susceptibility at the \(K\) point is significantly suppressed due to the sublattice interference mechanism [11; 76], on the other hand the CDW instability at the \(M\) point is enhanced, which indicates that the electron correlation plays an indispensable part in the CDW transition.
## I Acknowledgement
This work was supported by the NSFC (No. 12188101, 11834006, 12004170, 11790311, 51721001), Natural Science Foundation of Jiangsu Province, China (Grant No. BK20200326), and the excellent programme in Nanjing University. Xiangang Wan also acknowledges the support from the Tencent Foundation through the XPLORER PRIZE.
## II Supplemental Materials
### Crystal Structure
Hexagonal FeGe is an intermetallic compound of the CoSn structure type and crystallizes in the \(P6/mmm\) (No. 191) space group [68]. As shown in Fig. 5(a), there are two distinct types of Ge atoms in a unit cell, labeled Ge\({}^{in}\) (site 1a) and Ge\({}^{out}\) (site 2d) respectively, depending on whether they are on the same layer as the Fe atoms. In the Fe-Ge\({}^{in}\) plane, three Fe atoms at different sites (denoted Fe\({}_{A}\), Fe\({}_{B}\), and Fe\({}_{C}\) in Fig. 5(a)) form kagome lattices, and Ge\({}^{in}\) atoms are located at the centers of the hexagons. Ge\({}^{out}\) atoms compose honeycomb structures above and below the Fe-Ge\({}^{in}\) plane.
The local structure of the FeGe\({}_{6}\) octahedron is shown in Fig. 5(b). It can be seen that each Fe atom is surrounded by six Ge atomic octahedrons, including two Ge\({}^{in}\) atoms and four Ge\({}^{out}\) atoms. In the \(O_{h}\) crystal field, ortho-octahedral structure leads to \(t_{2g}-e_{g}\) energy splitting. Here the octahedron is distorted and induces further splitting of the five Fe-3\(d\) orbitals.
### Band structure and density of states
We perform an LSDA calculation for FeGe based on the experimental A-type AFM ground state [74], and show the band structure and the DOS in Fig. 6. As shown in Fig. 6, it is clear that the Fe-3\(d\) orbitals dominate the DOS around \(E_{F}\) (-2 to 2 eV relative to \(E_{F}\)), while the Ge-4\(p\) orbitals are mainly located between -6.0 and -2.0 eV. In the AFM phase, there is an upshift of the spin minority bands, reflecting the exchange
Figure 5: (a)The crystal structure of FeGe. The light blue, dark blue, and orange spheres represent Ge\({}^{in}\), Ge\({}^{out}\), and Fe atoms, respectively. The Fe atoms located at three different sites are labeled as Fe\({}_{A}\), Fe\({}_{B}\), and Fe\({}_{C}\). (b) The local structure of the FeGe\({}_{6}\) octahedron. The octahedron includes two Ge\({}^{in}\) atoms and four Ge\({}^{out}\) atoms. (c) Our local coordinate for each Fe atom. The local x-axis always points to the nearest-neighbor Ge\({}^{in}\) atom, and the z-axis of each local coordinate is perpendicular to the paper surface.
splitting induced by the ordered moment. Two peaks located near 0.4 and -1.3 eV correspond to the flat bands of the spin minority and spin majority states, respectively. The contribution of the Fe-\(3d\) and Ge-\(4p\) orbitals is relatively close in the range from -6 to -4 eV, suggesting hybridization between the Fe-\(3d\) and Ge-\(4p\) orbitals.
|
2310.13620 | Bridging Information-Theoretic and Geometric Compression in Language
Models | For a language model (LM) to faithfully model human language, it must
compress vast, potentially infinite information into relatively few dimensions.
We propose analyzing compression in (pre-trained) LMs from two points of view:
geometric and information-theoretic. We demonstrate that the two views are
highly correlated, such that the intrinsic geometric dimension of linguistic
data predicts their coding length under the LM. We then show that, in turn,
high compression of a linguistic dataset predicts rapid adaptation to that
dataset, confirming that being able to compress linguistic information is an
important part of successful LM performance. As a practical byproduct of our
analysis, we evaluate a battery of intrinsic dimension estimators for the first
time on linguistic data, showing that only some encapsulate the relationship
between information-theoretic compression, geometric compression, and
ease-of-adaptation. | Emily Cheng, Corentin Kervadec, Marco Baroni | 2023-10-20T16:12:13Z | http://arxiv.org/abs/2310.13620v2 | # Bridging Information-Theoretic and Geometric Compression in Language Models
###### Abstract
For a language model (LM) to faithfully model human language, it must compress vast, potentially infinite information into relatively few dimensions. We propose analyzing compression in (pre-trained) LMs from two points of view: geometric and information-theoretic. We demonstrate that the two views are highly correlated, such that the intrinsic geometric dimension of linguistic data predicts their coding length under the LM. We then show that, in turn, high compression of a linguistic dataset predicts rapid adaptation to that dataset, confirming that being able to compress linguistic information is an important part of successful LM performance. As a practical byproduct of our analysis, we evaluate a battery of intrinsic dimension estimators for the first time on linguistic data, showing that only some encapsulate the relationship between information-theoretic compression, geometric compression, and ease-of-adaptation.
## 1 Introduction
To speak a language is not to memorize all possible utterances, but to instead extract the finite ruleset and lexicon that generates them (Chomsky, 1986). That is, language, though nominally high-dimensional, can be compressed to a comparatively small _intrinsic dimension_.
The recent success of _(large) language models_ demonstrates that artificial neural networks, too, can acquire linguistic knowledge. Current language models (LMs) are Transformer-based architectures at-scale (Vaswani et al., 2017) that are trained on the conditional distribution of natural language (OpenAI, 2023; Zhang et al., 2022; Touvron et al., 2023; Chowdhery et al., 2022) and that are inching closer to human-like linguistic robustness (Brown et al., 2020; Liang et al., 2022; Wang et al., 2019). For an LM to faithfully model language, it must encode linguistic training data into finitely many variables that allow generalization to infinitely many grammatical utterances. That is, it must perform a successful form of data compression.
Thus, following the line of research that aims to better understand LM behavior (Wei et al., 2022; Zhang et al., 2021; Rogers et al., 2020), in this work we provide initial insights on how they _compress linguistic knowledge_. We demonstrate an empirical link between two types of compression: geometric and information-theoretic. In particular, we ask: how, and by how much, do LMs compress linguistic data? Furthermore, what are linguistic correlates to compressibility? Is compression a good predictor of rapid adaptation? We show that (1) intrinsic dimension (ID) of linguistic data representations under an LM tracks information-theoretic coding length; (2) greater data compression predicts ease-of-adaptation in causal language modeling tasks; (3) interpretable linguistic properties such as vocabulary size and syntactic structure modulate ID; and (4) different model sizes recover similar ranges of ID. Finally, as a practical contribution, (5) we explore different ways to estimate ID of linguistic data, and find only some to capture the relation between ID, coding length, and ease-of-adaptation.
## 2 Related Work
**Causal Language Models.** State-of-the-art language models are based on the Transformer architecture (Vaswani et al., 2017), which consists of alternating feed-forward and self-attention modules (Bahdanau et al., 2015). They are trained using _self-supervised learning_ on sequences of _tokens_, where a token is defined as the atomic unit (e.g., a word or a sub-word) fed into the language model. Due to their current ubiquity, we focus on autoregressive models trained on a _causal language modeling_ objective, that is, next token prediction given a context of previous tokens (Brown et al., 2020; Radford and Narasimhan, 2018; Zhang et al., 2022).
Language models are typically measured against human performance on linguistic benchmarks.
Evaluation may be done post-finetuning or in a low-shot regime, where the model completes a linguistic task given few or zero examples. A variety of benchmarks, such as GLUE Wang et al. (2019), SuperGLUE Wang et al. (2019), and BigBENCH Srivastava et al. (2022) have been proposed, evaluating, for instance, model performance on textual entailment, question-answering, semantic equivalence, or sentiment analysis. Indeed, human baselines for GLUE and SuperGLUE have already been surpassed by LMs that contain billions of parameters (e.g., PaLM 540B Chowdhery et al. (2022)).
### Compression in LMs
There is a wide body of work analyzing compression in deep neural architectures. In statistical learning theory, compression has been empirically and theoretically linked to generalization Shwartz-Ziv and Tishby (2017); Arora et al. (2018). Moreover, deep learning models are thought to minimize description length Perez et al. (2021); Voita and Titov (2020); Blier and Ollivier (2018). In large LMs, implicit compression of neural network parameters has been linked to ease-of-finetuning and generalization Aghajanyan et al. (2021). Our work complements this line of research by focusing on compression of _data representations_ rather than network parameters. We consider compression from two different perspectives: information-theoretic and geometric (intrinsic dimension).
**Information-theoretic compression.** Compression in neural networks can be quantified from an information-theoretic point of view Shannon (1948). For instance, the _information plane_ (IP), a widely studied framework introduced by Shwartz-Ziv and Tishby (2017) and Tishby and Zaslavsky (2015), quantifies compression per-layer as the mutual information between representations and inputs. However, there is little consensus in the relevant literature on the appropriate estimator to measure this _internal_ compression of inputs Saxe et al. (2018); Goldfeld et al. (2019); Noshad et al. (2019); Chelombiev et al. (2019); Geiger (2022).
Instead, as probabilistic models, LMs are natural _black-box_ compressors: the negative log-likelihood of the next token given context is, by definition, its Shannon coding length in bits Shannon (1948). Concurrent work explores the equivalence between self-supervised prediction and lossless compression, demonstrating that LMs can be powerful general-purpose compressors Deletang et al. (2023). We similarly focus on _information-theoretic coding length_ of inputs under a pre-trained model, which is a simple measure of compression that, to our knowledge, has not been shown to be analytically equivalent to geometric compression. We develop this measure further in section 3.2.
**Intrinsic Dimension (ID).** _Geometric_ compression is commonly quantified using dimensionality reduction techniques. Often underlying these approaches is the manifold learning hypothesis Goodfellow et al. (2016), or the notion that real-life, high-dimensional data often lie on a low-dimensional manifold. Intrinsic dimension (ID), or the number of degrees of freedom in the data, is the dimension of this data manifold.
Perhaps the most prototypical ID estimators are linear projective methods like random projection Li et al. (2018) or Principal Component Analysis (PCA) Jolliffe (1986). While these project data to a _linear_ subspace, the underlying geometric object need not be linear; therefore, e.g., PCA poorly estimates ID for curved manifolds Campadelli et al. (2015). Nonlinear ID estimators include Correlation Dimension Grassberger and Procaccia (1983); Fisher Separability Albergante et al. (2019); and a host of "nearest-neighbor" (NN)-based methods, which use the fact that manifolds look locally Euclidean to fit the ID based on local neighbor distributions Facco et al. (2017); Levina and Bickel (2004); Haro et al. (2008); Amsaleg et al. (2018). Such methods outperform linear ones on ID estimation benchmarks Campadelli et al. (2015). In section 4, we will assess these methods in the context of linguistic data, and analyze how each of them relates to coding length and ease-of-adaptation.
In deep learning, there has been recent interest in using ID to characterize learning complexity. Intrinsic dimension has been quantified for neural network parameters Li et al. (2018), as well as for input data and their representations in visual and protein-sequence domains Cohen et al. (2020); Recanatesi et al. (2019); Ansuini et al. (2019); Valeriani et al. (2023); Pope et al. (2021). These studies show that deep neural architectures learn low-dimensional structures, encoding parameter weights and training data into orders-of-magnitude lower ID than their ambient dimension.
In the linguistic domain, low ID of LM _parameters_ has been shown to underlie efficient task adaptation Aghajanyan et al. (2021), where optimization occurs in low-dimensional, task-specific subspaces (Zhang et al., 2023). Moreover, parameter redundancy in pre-trained LMs can be exploited to design parameter-efficient finetuning methods such as LoRA (Hu et al., 2022). We are interested in ID of _data representations_ as opposed to _LM parameters_, as (1) we want to study how different linguistic properties affect their coding; (2) ID estimation of model parameters can be expensive: large LMs can have billions of parameters, while input representations are lower-dimensional, e.g., \(D=4096\) in OPT-6.7b (Zhang et al., 2022). In related work on LM _representation_ ID, contextual word embeddings have been found to lie in low-dimensional linear subspaces (Mamou et al., 2020; Hernandez and Andreas, 2021). Most similar to our work, Cai et al. (2021) show that Transformer embeddings of the Wikitext and Penn TreeBank datasets constitute nonlinear manifolds of ID \(\sim\mathcal{O}(10)\).
## 3 Methods
Our work attempts to bridge notions of geometric and information-theoretic compression of linguistic data under an LM, and subsequently relate these to ease-of-adaptation. We do so by quantifying the ID of data representations, information-theoretic coding length of linguistic inputs under an LM, and ease-of-finetuning in order to determine whether these three phenomena are correlated.
**Notation.** Let a linguistic dataset \(X=\{x^{(i)}\}_{i=1}^{N}\) consist of \(N\) sequences of tokens, where each sequence \(x^{(i)}\) has length \(l(x^{(i)})\). Let \(\mathcal{M}\) be a (pre-trained) causal language model described by \(p_{\mathcal{M}}(\cdot|x_{<j})\), the conditional probability distribution of the \(j^{\text{th}}\) token given its past context of tokens.
**Models & Datasets.** Experiments are performed for the product of models \(\mathcal{M}\in\) [OPT-350m, OPT-1.3b, OPT-6.7b] and datasets \(X\in\) table 1. We focus on the OPT suite of causal language models due to their accessibility (Zhang et al., 2022). For the datasets, we start from a list of corpora including the GLUE and SuperGLUE benchmarks, then pick those whose size is large enough for ID estimates to converge (\(N\geq 10000\)). Then, for computational efficiency, we randomly subsample each dataset to size \(N^{\prime}=\min(N,50000)\), where \(50000\) is chosen conservatively based on preliminary analyses of convergence of bootstrapped ID estimates (see appendix A.2).
In addition to external datasets, we create one baseline dataset per model, which we call OPTCorpus, of \(\sim 24\) million tokens by repeatedly randomly sampling from \(\mathcal{M}\) until [EOS] is reached. The conditional next-token distribution of OPTCorpus approximates that of \(\mathcal{M}\) so to serve as a reference datapoint.
In order to determine the effects of syntax and lexical semantics on compression, we define three transformations which we apply to a _dataset_: (1) _dataset-permuted:_ for each sequence in _dataset_, randomly permute its tokens. This ablates syntax to retain bag-of-tokens lexical information. (2) _dataset-swapped_: excluding special tokens, create a random permutation \(\sigma\) over the vocabulary. For each sequence in _dataset_, deterministically map each token by \(\sigma\). This ablates lexical patterns, retaining syntactic structure. (3) _dataset-random_: randomly replace each token in _dataset_ with another (excluding special tokens). This ablates both syntactic and lexical structure.
Notably, several shallow linguistic descriptors are preserved with these transformations: dataset size, sequence length and vocabulary size (1-3), vocabulary entropy (1,2), and token frequency (1). We apply the transformations to OPTCorpus and wikitext, producing six additional datasets.
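The three ablations can be implemented with a few lines of token-level code; the sketch below is an illustrative reconstruction from the descriptions above (function names and the handling of special tokens are our assumptions), not the authors' released implementation.

```python
import random

def permute_within_sequences(dataset, seed=0):
    """dataset-permuted: shuffle the tokens inside each sequence
    (ablates syntax, keeps bag-of-tokens lexical information)."""
    rng = random.Random(seed)
    out = []
    for seq in dataset:                       # each seq is a list of token ids
        seq = list(seq)
        rng.shuffle(seq)
        out.append(seq)
    return out

def swap_vocabulary(dataset, vocab_size, special_ids, seed=0):
    """dataset-swapped: apply one fixed random permutation of the (non-special)
    vocabulary to every sequence (ablates lexical patterns, keeps syntax)."""
    rng = random.Random(seed)
    regular = [t for t in range(vocab_size) if t not in special_ids]
    shuffled = regular[:]
    rng.shuffle(shuffled)
    sigma = dict(zip(regular, shuffled))      # deterministic token -> token map
    return [[sigma.get(t, t) for t in seq] for seq in dataset]

def randomize_tokens(dataset, vocab_size, special_ids, seed=0):
    """dataset-random: replace each token independently with a random non-special
    token (ablates both syntactic and lexical structure)."""
    rng = random.Random(seed)
    regular = [t for t in range(vocab_size) if t not in special_ids]
    return [[t if t in special_ids else rng.choice(regular) for t in seq]
            for seq in dataset]

# Toy usage over a 100-token vocabulary in which id 0 is a special token.
data = [[5, 7, 7, 42, 0], [13, 5, 99, 0]]
print(permute_within_sequences(data))
print(swap_vocabulary(data, vocab_size=100, special_ids={0}))
print(randomize_tokens(data, vocab_size=100, special_ids={0}))
```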
### Intrinsic Dimension
Given model \(\mathcal{M}\) and dataset \(X\), we estimate the representational ID of \(X\) under \(\mathcal{M}\) as follows (see also fig. 1):
\begin{table}
\begin{tabular}{l|l} Benchmark & Datasets \\ \hline GLUE & cola, mnli, mrpc, qnli, qqp, rte, \\ & sst2, stsb \\ \hline SuperGLUE & boolq, multirc, wic \\ \hline & IMDB, Penn Treebank, Bookcorpus, Wikitext fr, Tweets, Pile-10k, CNN \\ & Dailymail, Openwebtext-10k, CONCODE, OPTCorpus, OPTCorpus-permuted, OPTCorpus-swapped, OPTCorpus-random, Wikitext, Wikitext-swapped, Wikitext-random \\ \hline \end{tabular}
\end{table}
Table 1: List of datasets used in experiments. We use all datasets for ID and PPL estimation and the ones in the last block for finetuning (except for OPTCorpus, which already reflects the distribution of the model).
1. **Preprocess data.** Let the context window length of \(\mathcal{M}\) be \(l_{\mathcal{M}}\). We preprocess \(X\) by splitting all \(x^{(i)}\) with length \(l^{(i)}>l_{\mathcal{M}}\) into sequences with maximum length \(l_{\mathcal{M}}\).
2. **Gather representations.** Evaluate \(\mathcal{M}(X)\), gathering intermediate representations after contextualized embedding and after each attention+feed-forward block. In particular, representations are extracted after the residual connection and LayerNorm.
3. **Aggregate representations.** Because input sequences \(x^{(i)}\) are variable-length, we use the vector associated to the last token of each layer to represent it. Due to auto-regressive self-attention, the last token in the sequence is the only one to incorporate information from all other tokens in the sequence. Moreover, in the context of causal language modeling, the last token representation is the one used for next-token prediction in the top layer of the model, so it is the one where all information relevant to the prediction task should be concentrated. We leave testing alternative aggregation strategies, such as average pooling (cf. Valeriani et al., 2023) to future work. After the aggregation step, we have dataset representations \[\mathbf{R}:=\{R_{j}\}_{j=1}^{M};R_{j}\in\mathbb{R}^{N\times D},\] where \(D\), the hidden dimension of the model, is the ambient (extrinsic) dimension of data representations, and \(\mathcal{M}\) has \(M\) layers.
4. **Estimate ID.** Per layer \(j\), compute the ID \(d_{j}\) of \(R_{j}\) using ID estimator \(g:\mathbb{R}^{N\times D}\rightarrow\mathbb{Z}_{+};R_{j}\mapsto d_{j}\).
We test 12 different ID estimators \(g\), grouping them into categories based on technique: nine NN-based (Facco et al., 2017; Farahmand et al., 2007; Carter et al., 2010; Amsaleg et al., 2019, 2018; Haro et al., 2008; Johnsson et al., 2015; Rozza et al., 2012; Ceruti et al., 2014), one projective (PCA), one based on fine-grained clustering (Fisher Separability, Albergante et al., 2019), and one fractal-based (Correlation Dimension, Grassberger and Procaccia, 1983). Further details on estimators can be found in appendix A.1. We implement all estimators using the skdim Python package (Bac, 2020).
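As a concrete illustration of steps 1-4, the sketch below extracts last-token representations per layer from a small OPT model with HuggingFace transformers and passes them to an skdim estimator. The model choice, the toy input sentences and the use of TwoNN are assumptions made only to keep the example short and runnable; they do not reproduce the exact experimental setup (which uses up to 50,000 sequences per dataset).

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import skdim

name = "facebook/opt-350m"                               # smallest model in the OPT suite
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

# Toy inputs: real experiments use up to 50,000 sequences per dataset.
texts = [f"Example sentence number {i} used only to make this snippet run." for i in range(32)]

# Steps 1-2: tokenize (truncating to the context window) and keep all hidden states.
enc = tok(texts, return_tensors="pt", padding=True, truncation=True,
          max_length=model.config.max_position_embeddings)
with torch.no_grad():
    out = model(**enc, output_hidden_states=True)

# Step 3: represent each sequence by its last non-padding token, for every layer.
last = enc["attention_mask"].sum(dim=1) - 1              # index of the last real token
reps = [h[torch.arange(len(texts)), last].cpu().numpy()  # one (N x D) matrix per layer
        for h in out.hidden_states]

# Step 4: estimate the ID of each layer's representation matrix.
for j, R in enumerate(reps):
    est = skdim.id.TwoNN().fit(R)                        # or skdim.id.ESS(), etc.
    print(f"layer {j}: ID estimate = {est.dimension_:.2f}")
```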
### Information-Theoretic Compression
Information-theoretic compression is directly related to the training objective of the model, which minimizes the average negative log-likelihood loss of next-token prediction over the training set.
**Learning minimizes coding length.** The average negative log-likelihood (NLL) training objective of causal LMs is given by
\[\min_{\theta}\frac{1}{\sum_{i=1}^{N}l(x^{(i)})}\sum_{i=1}^{N} \sum_{j=1}^{l(x^{(i)})}-\log p_{\mathcal{M}}(x_{j}^{(i)}|x_{<j}^{(i)};\theta), \tag{1}\]
that is, to minimize the empirical negative log-likelihood of the next token given its context with respect to model parameters \(\theta\). This is analytically equivalent to minimizing the average number of bits to encode the \(j^{\text{th}}\) token under \(p_{\mathcal{M}}\).
We are interested in quantifying information-theoretic compression at the sequence level. We do so by using perplexity, a common metric in NLP.
**Perplexity.** The perplexity (PPL) of a sequence \(x^{(i)}\) is the exponentiated negative log-likelihood loss
\[PPL^{(i)}:=2^{\frac{1}{l(x^{(i)})}\sum_{j=1}^{l(x^{(i)})}-\log p_{\mathcal{M}}(x_{j}^{(i)}|x_{<j}^{(i)};\theta)}. \tag{2}\]
We compute the average PPL for each dataset \(X\) by performing forward passes through \(\mathcal{M}\). As PPL is monotonic in coding length, we use PPL as our measure of interest to proxy information-theoretic compression, later relating this quantity to the representational ID of \(X\).
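In practice, eq. (2) can be obtained from the per-token log-likelihoods returned by the model. The sketch below computes the perplexity of a single sequence with HuggingFace transformers; the model name is an illustrative choice, and the base conversion follows eq. (2) (the model loss is reported in nats).

```python
import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "facebook/opt-350m"                       # illustrative model choice
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

def sequence_ppl(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean next-token negative
        # log-likelihood over the sequence, in nats.
        nll_nats = model(ids, labels=ids).loss.item()
    nll_bits = nll_nats / math.log(2)            # convert to bits, as in eq. (2)
    return 2 ** nll_bits                         # equivalently math.exp(nll_nats)

print(sequence_ppl("Compression predicts ease of adaptation."))
# The dataset-level PPL is the average of sequence_ppl over all sequences in X.
```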
Figure 1: ID estimation. Data (bottom left) are fed into an LM with \(M\) blocks (left). Activations post-embedding/[feedforward, attention] blocks \(i=1\cdots M\) are extracted and aggregated across the sequence length to produce an \(N\times D\) dimensional matrix of representations \(R_{i}\). Then, the ID of each \(R_{i}\) is estimated to produce an ID “profile” (right plot).
### Ease-of-Adaptation
Low ID of pre-trained LM parameters has been shown to predict ease-of-finetuning Aghajanyan et al. (2021). We complement this finding by correlating the ID of _data representations_ to an LM's ease-of-adaptation to that dataset.
Ease-of-adaptation to a downstream task depends not only on the inputs \(X\) but also on task-specific outputs. For instance, binary classification can be less complex than causal language modeling given the same inputs \(X\). As it is not always clear what is the best way to encode the outputs in order to measure the quantities of our interest, we focus on adaptation under a causal language modeling objective, which is the same as the model's pre-training objective. This entails little loss of generality, as task adaptation is nowadays commonly framed as a language model adaptation problem.
**Adaptation procedure.** We perform finetuning for each of OPT-350m, 1.3b, and 6.7b on the datasets \(X\) in table 1 that are suited to causal language modeling, i.e., omitting [Super]GLUE.
Due to resource constraints, and as we compare between datasets and not models, we perform full finetuning for OPT-350m and finetune using LoRA Hu et al. (2022) for the larger sizes. We end finetuning at a maximum of 15 epochs or when validation loss converges. Loss is considered to have converged as soon as it fails to decrease for 3 evaluation steps, each 500 iterations apart. Detailed hyperparameter settings may be found in appendix C.
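For the two larger models, a LoRA setup with the peft library might be configured as follows; the rank, scaling, dropout and target modules shown are placeholder values (the actual hyperparameters are those listed in appendix C), and the snippet only illustrates how a LoRA-wrapped OPT model is constructed before standard causal-LM finetuning.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b")

lora_cfg = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                                  # low-rank dimension (placeholder value)
    lora_alpha=16,                        # scaling factor (placeholder value)
    lora_dropout=0.05,                    # placeholder value
    target_modules=["q_proj", "v_proj"],  # attention projections in OPT blocks
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()        # only the LoRA adapters are trainable
# `model` is then finetuned on the causal language modeling objective as usual
# (e.g. with the HuggingFace Trainer), stopping when validation loss converges.
```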
**Adaptation metrics.** We quantify ease-of-adaptation with the following, where \(T\) is defined as the number of iterations until convergence:
1. \(PPL_{T}\): final evaluation perplexity.
2. Sample complexity \(S=\frac{1}{T}\sum_{t=1}^{T}PPL_{t}\), where \(PPL_{t}\) is the evaluation PPL at evaluation step \(t\).
Finally, we compute Spearman correlations between these metrics, zero-shot perplexity (\(PPL_{0}\)), and ID to assess whether the two types of compression and ease-of-adaptation are linked.
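Given the evaluation perplexities logged during finetuning, the two adaptation metrics and their correlation with ID reduce to a few lines; the numbers in the sketch below are made up purely to show the computation.

```python
import numpy as np
from scipy.stats import spearmanr

# eval_ppl[d]: evaluation perplexities PPL_t recorded every 500 iterations for dataset d;
# id_max[d]: max-over-layers ID estimate for dataset d.  All values here are made up.
eval_ppl = {"wikitext": [32.1, 25.4, 22.0, 21.5],
            "imdb":     [60.3, 45.0, 40.1, 38.2],
            "tweets":   [120.0, 80.5, 70.2]}
id_max = {"wikitext": 18.0, "imdb": 24.5, "tweets": 31.0}

final_ppl = {d: ppls[-1] for d, ppls in eval_ppl.items()}                        # PPL_T
sample_complexity = {d: float(np.mean(ppls)) for d, ppls in eval_ppl.items()}    # S

datasets = sorted(id_max)
rho, p = spearmanr([id_max[d] for d in datasets],
                   [sample_complexity[d] for d in datasets])
print(f"Spearman rho(ID, S) = {rho:.2f} (p = {p:.2g})")
```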
## 4 Results
We find that, similar to protein models and visual networks Ansuini et al. (2019); Valeriani et al. (2023), the ID of linguistic data representations is significantly lower than their ambient dimension: in our case, by roughly 2 orders of magnitude. In particular, while the ambient dimension of representations in OPT-350m, OPT-1.3b, and OPT-6.7b are \(D=1024\), \(2048\), and \(4096\), respectively Zhang et al. (2022), dataset representational ID is \(d=\mathcal{O}(10)\), see fig. 5.
For simplicity, in the main article we present results on one representative NN-based estimator, the Expected Simplex Skewness (ESS) method of Johnsson et al. (2015), as it is the only one to significantly correlate with all other estimators (\(\alpha=0.1\)) (see fig. 6).1 We present results on other estimators in appendix E, and we comment on other (non-NN-based) ID estimators in section 4.4. Also for practicality, we report in the main article results obtained with one model, OPT-6.7b, only commenting on other models when relevant, and one aggregated ID measure across layers: the _max_ ID value, seen as a conservative upper bound on ID across layers. This choice is supported by the observation that, for most datasets, the ID profile over layers is quite flat (appendix D). Results with other aggregated measures are presented in appendix E.
Footnote 1: We also experimented with ensembling ID metrics, but found the various ensembling methods hard to justify as NN-based estimators are very over-represented in our panel.
Our primary result is that, in the pre-trained LMs tested, information-theoretic compression (PPL) predicts geometric compression (ID), and low ID predicts ease-of-adaptation. We then take a closer look at which linguistic attributes of a dataset predict its ID, finding that not only several shallow linguistic descriptors but also grammatical and lexical structure enable geometric compression. Finally, we find that, qualitatively, ID tends to be stable across model sizes and types, and comment on the differences between ID estimators.
### ID tracks information-theoretic compression
Information-theoretic and geometric description length of data under OPT are Spearman-correlated. As shown for OPT-6.7b in fig. 2a, data PPL predicts ID, \(\rho=0.51\) (\(p=0.01\)). The significant positive correlation between PPL and ID, moreover, holds for all model sizes tested: \(\rho=0.66\), \(p<0.01\) for OPT-350m and \(\rho=0.49\), \(p=0.01\) for OPT-1.3b, the correlation being most salient for the smallest model (see figs. E.3a and E.3d). We hypothesize that model optimization in higher dimensions permits discovery of better representations in \(d\)-dimensional intrinsic space, as evidenced by the correlation between performance and model size in LM scaling laws. Then, on a given dataset, larger model sizes may encounter an ID floor effect, thus weakening the correlation between PPL and ID compared to smaller sizes.
### Compression is linked to Ease-of-Adaptation
As evidenced by the positive trends in figs. 1(b) and 1(c), ID predicts sample complexity (\(\rho=0.72\), \(p<0.01\)) as well as final PPL after convergence (\(\rho=0.73\), \(p<0.01\)). These trends, moreover, are robust to model size: ID predicts sample complexity at \(\rho=0.61\), \(p=0.01\) for OPT-1.3b and \(\rho=0.81\), \(p=0.01\) for OPT-350m (figs. 1(b) and 1(e)); and ID predicts final PPL after fine-tuning at \(\rho=0.65\), \(p<0.01\) for OPT-1.3b and \(\rho=0.81\), \(p=0.01\) for OPT-350m (figs. 1(c) and 1(f)).
Results indicate that data which are more compressed zero-shot under the LM are easier to adapt to. Moreover, they corroborate findings in Aghajanyan et al. (2021), in which low _parameter_ ID in pre-trained LMs predicts rapid adaptation. We hypothesize that this is because intrinsic data rank bottlenecks intrinsic parameter rank and vice-versa (see Rozza et al., 2012 for discussion).
### Linguistic Correlates to Compression
In pre-trained OPT, geometric compression can be explained partially by data perplexity. But, taking a closer look, ID also correlates with interpretable linguistic descriptors such as syntactic structure or token entropy.
in the dataset), see Table 2. In contrast, average sequence length (in tokens) \(\tilde{L}\) and dataset size (in tokens) \(N_{tok}\) do not predict ID.2
Footnote 2: It is crucial that the dataset be big enough to prevent spurious correlations between size and ID due to scaling effects (see appendix A.2).
That descriptors such as vocabulary size and entropy predict ID is intuitive: with, e.g., a larger vocabulary, more information needs to be encoded. Furthermore, as expected, these descriptors are correlated to one another. However, the descriptors are _not positively correlated_ to PPL (fig. 4), suggesting that the relationship between information-theoretic and geometric compression is not explained by shallow dataset properties.
The relation between shallow linguistic descriptors and ID generally holds across model size and ID estimators; see appendix E for further discussion, extending to other layer-aggregate measures of ID.
### Intrinsic Dimension of Representations
We have presented ID as it relates to information-theoretic compression and ease-of-adaptation; now we address the question of geometric compression itself. First, we show that the various models compress data into similar ranges of ID _regardless of extrinsic hidden dimension_, lending weight to the manifold hypothesis for linguistic data. We further report on how ID estimation with different methods produces a complicated picture, and we comment on results evaluated on the full battery of 12 estimators.
**Different model size, similar ID.** We find that all tested models compress data to a similar range of ID, around \(\mathcal{O}(10)\). Although the model hidden dimension doubles from OPT-350m to 1.3b to 6.7b, the range of data representational ID does not sig
Figure 4: Spearman correlations between data descriptors: information-theoretic descriptor PPL (top/left), and shallow linguistic descriptors (bottom/right); only correlations significant at \(\alpha=0.1\) shown. Shallow metrics highly correlate to each other but not to PPL.
Figure 5: Distributions of max ID (aggregated over layers of the Transformer) across tasks, computed using the ESS estimator. Despite the model extrinsic dimension doubling (ED = \(1024\), \(2048\), \(4096\) from top to bottom), the ID remains fairly stable. Apart from a few outliers, the max ID over layers is less than \(100\), that is, all IDs computed over all layers are generally \(\mathcal{O}(10)\).
Figure 3: Example evolution of ID (ESS) over layers for three tasks (left to right): wikitext, sst2, and tweets. In general, ID profiles can be dissimilar for different datasets under the same model, and ID profiles for OPT-350m, 1.3b, and 6.7b appear correlated, with larger extrinsic dimension lending itself to slightly (not proportionately) larger intrinsic dimension.
nificantly change, see fig. 5.
Moreover, the _evolution_ of ID across the layers of different model sizes follows similar trajectories: qualitatively, for all tasks, the OPT models' ID profiles follow similar global patterns (e.g., similar number of peaks), with larger models having slightly larger intrinsic dimension, see examples in fig. 3. For extended discussion of ID evolution across layers, see appendix D.
Similar ID values and evolution across different models echo results in the visual domain (Ansuini et al., 2019) and evidence the manifold hypothesis for linguistic data. Together with past work, our results suggest that different-sized (but similar-performing) models which are trained on the same data and objective can independently recover the latent dimension of data and exhibit similar patterns of processing.
**ID estimators aren't equal; some are useful** Though ID estimators based on similar analytical methods are correlated (fig. 6), different ID estimators can produce different ID profiles for the same dataset (e.g., fig. 7). This can stem from a number of factors pertaining to assumptions and analytical method. For instance, we find the NN-based methods to be most predictive of data perplexity and ease-of-adaptation, and PCA and Fisher Separability to be least predictive. This may be because (1) PCA assumes that the underlying data manifold is linear, which may lead to poor ID estimation (consider a 1D line embedded in 2D space; PCA will estimate an ID of 2 as there are two principal directions of variance); (2) Fisher Separability systematically underestimates ID for non-uniformly distributed data (Albergante et al., 2019), and Transformer representations are highly anisotropic (Cai et al., 2021). Lastly, among the 12 estimators tested, we could not produce sensible results for three of
Figure 6: Spearman correlations between ID metrics: the bottom-right block of correlated metrics correspond to NN-based methods, while the top-left are PCA, Fisher Separability, and Correlation Dimension, respectively a linear projective, fine-grained clustering, and fractal method.
Figure 7: Different ID methods’ relative ID estimates (= ID / ED) over layers in OPT-6.7b for the Tweets dataset. While some groups of methods are correlated, they produce different ID profiles for the same data.
them (Rozza et al., 2012; Ceruti et al., 2014; Carter et al., 2010), see appendix A.3 for discussion.
While we cannot claim that any single estimator produces the "true" ID, it appears that, for purposes of ID estimation of linguistic datasets as encoded in LMs, NN-based methods are the most _useful_ ones, being reliable predictors of information-theoretic compression and ease-of-adaptation (appendix E). More generally, the differing results obtained with various ID estimators reveal a need to validate them against linguistic data, which may violate underlying assumptions of common estimators, such as the global isotropy assumption in Fisher Separability (Albergante et al., 2019). While there indeed exist ID estimation benchmarks for synthetic manifolds and image data, and while NN-based estimators outperform linear ones in these benchmarks (Campadelli et al., 2015), a benchmark has not yet been developed for linguistic data, to our knowledge.
## 5 Discussion
We have quantified geometric compression in neural language models using the ID of representations, where ID tracks Shannon information-theoretic coding length. This bridges two notions of description length in pre-trained neural LMs by showing they are significantly positively correlated. Our result has also practical implications, suggesting that ID and perplexity predict how easy it is to finetune a model to a task (similarly to what observed in the context of zero-shot prompting by Gonen et al., 2022). More speculatively, the relation between ID and task adaptation may inform future modeling work that actively encourages data compression at training time, to indirectly inject fast-adaptation capabilities into a model.
ID estimators are not equal: we focus on NN-based methods because they explain useful properties of the data and ease-of-finetuning. Our work is a first attempt to evaluate a wide range of ID estimators on natural language representations, and reveals the need for a further principled study of ID estimation of linguistic data.
That nonlinear ID estimators predict information-theoretic compression and ease-of-adaptation over linear ones highlights a need to go beyond PCA in analyzing compression of specific linguistic phenomena. Our experiments on wikitext and variants also demonstrate a need for further experiments on, e.g., idiomaticity, or specific linguistic constructions (cf. Hernandez and Andreas, 2021 for analysis using PCA).
While our work investigates the relationship between ease-of-adaptation and _zero-shot_ compression, a logical next step is to investigate how finetuning dynamically affects data compression under the model. We hypothesize that the result may depend on whether the dataset used for finetuning is memorized by the model during training.
Finally, while LMs are trained to reproduce the distribution of human language, it is yet unclear whether analyzing the linguistic representations of the former allows us to make statements about the dimensionality of the latter. Then, an open question remains: what is the "true" dimensionality of natural language, and to what extent do LMs recover it?
### Limitations
* While we confirmed that modern LMs do compress language data, and that this compression is correlated with ease-of-learning, we only provided a limited characterization of the relation between LM compression and linguistic properties of the input, such as lexical information and syntactic structure.
* Due to both access restriction and computational limitations, we cannot replicate our investigations on huge language models such as ChatGPT.
* A related question is to what extent the correlation we report between compression and ease of learning would hold (or even how it could be meaningfully formulated) in the context of prompt-based zero-shot task adaptation as afforded by huge LMs.
* More generally, it remains to be explored how a number of modeling choices, such as non-causal predictive objectives or instruction tuning, would affect our generalizations.
### Ethics Statement
This paper does not introduce new models or datasets, and it presents an abstract analysis of language model data compression that, we think, should not raise ethical concerns. We believe on the other hand that our focus on improving the understanding of how language models process information can be generally beneficial, in an AI landscape in which powerful language models are deployed with little understanding of the mechanics by which they work, and, consequently, little ability to control their behavior.
## Acknowledgements
We thank Alessandro Laio for generous guidance on intrinsic dimensionality estimation. Jacob Andreas provided very helpful feedback on the project. We also thank the members of the UPF COLT lab, especially Gemma Boleda, Roberto Dessi and Lucas Weber for early feedback, the members of the Barcelona Apple Machine Learning Research group and the participants in the EviL seminar for helpful feedback and suggestions. Our work was funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101019291). This paper reflects the authors' view only, and the ERC is not responsible for any use that may be made of the information it contains.
|
2304.11816 | Multiplierless In-filter Computing for tinyML Platforms | Wildlife conservation using continuous monitoring of environmental factors
and biomedical classification, which generate a vast amount of sensor data, is
a challenge due to limited bandwidth in the case of remote monitoring. It
becomes critical to have classification where data is generated, and only
classified data is used for monitoring. We present a novel multiplierless
framework for in-filter acoustic classification using Margin Propagation (MP)
approximation used in low-power edge devices deployable in remote areas with
limited connectivity. The entire design of this classification framework is
based on template-based kernel machine, which include feature extraction and
inference, and uses basic primitives like addition/subtraction, shift, and
comparator operations, for hardware implementation. Unlike full precision
training methods for traditional classification, we use MP-based approximation
for training, including backpropagation mitigating approximation errors. The
proposed framework is general enough for acoustic classification. However, we
demonstrate the hardware friendliness of this framework by implementing a
parallel Finite Impulse Response (FIR) filter bank in a kernel machine
classifier optimized for a Field Programmable Gate Array (FPGA). The FIR filter
acts as the feature extractor and non-linear kernel for the kernel machine
implemented using MP approximation and a downsampling method to reduce the
order of the filters. The FPGA implementation on Spartan 7 shows that the
MP-approximated in-filter kernel machine is more efficient than traditional
classification frameworks with just less than 1K slices. | Abhishek Ramdas Nair, Pallab Kumar Nath, Shantanu Chakrabartty, Chetan Singh Thakur | 2023-04-24T04:33:44Z | http://arxiv.org/abs/2304.11816v1 | # Multiplierless In-filter Computing for tinyML Platforms
###### Abstract
Wildlife conservation using continuous monitoring of environmental factors, and biomedical classification, both of which generate a vast amount of sensor data, are challenging due to limited bandwidth in the case of remote monitoring. It becomes critical to perform classification where the data is generated, so that only classified data is used for monitoring. We present a novel multiplierless framework for in-filter acoustic classification using Margin Propagation (MP) approximation, suitable for low-power edge devices deployable in remote areas with limited connectivity. The entire design of this classification framework is based on a template-based kernel machine, which includes feature extraction and inference, and uses basic primitives like addition/subtraction, shift, and comparator operations for hardware implementation. Unlike full-precision training methods for traditional classification, we use MP-based approximation for training, including backpropagation, which mitigates approximation errors. The proposed framework is general enough for acoustic classification. However, we demonstrate the hardware friendliness of this framework by implementing a parallel Finite Impulse Response (FIR) filter bank in a kernel machine classifier optimized for a Field Programmable Gate Array (FPGA). The FIR filter acts as the feature extractor and non-linear kernel for the kernel machine, implemented using MP approximation and a downsampling method to reduce the order of the filters. The FPGA implementation on Spartan 7 shows that the MP-approximated in-filter kernel machine is more efficient than traditional classification frameworks, using fewer than 1K slices.
IoT, FPGA, Filtering, Edge Computing.
## I Introduction
One of the biggest challenges in biomedical classification is capturing data from different biosensors and providing interpretable information to improve diagnosis [1][2]. On the other hand, in the case of wildlife conservation, identifying and localizing the threatened species and providing supportive or corrective measures to enable restoration is a challenge [3]. Emerging technologies in edge computing devices like low-power wireless sensor networks are currently being used in agriculture [4] and healthcare, [5] in combination with Machine Learning (ML) techniques, known as tinyML. Most of the edge-based sensor data are time-series, and it has been proven that such data can be efficiently used for tinyML classification [6]. This type of classification can be applied to healthcare with Electrocardiogram (ECG), Electroencephalogram (EEG), Electromyography (EMG), and other time-series biomedical sensor data [2].
Similar systems have been in place for ecological [7] and marine monitoring [8]. Acoustic-based classification using wireless biosensors has proven highly efficient in ecological applications [9][10], as shown in Fig.1. These sensors may generate a large amount of data, but the relevant training data will be sparse, as in the case of rare or near-extinct species detection [11]. Hence, classification at the sensor node becomes even more critical, as transmitting large amounts of data over the network would require higher bandwidth. Despite performing well on high volumes of data, Deep Neural Networks (DNNs) do not generalize well in IoT applications, as training data is scarce [12]. Moreover, training a DNN requires high-powered systems to generate appropriate learned parameters, while IoT-based edge devices typically have limited resources and power; hence, training on the edge is impractical. However, IoT systems can execute inference on-device using quantized and pruned versions of neural networks [13]. Hence, such wireless sensors must perform classification at the edge and be tunable while having the lowest possible area and power.
Machine learning techniques like Support Vector Machines (SVMs), K-Nearest Neighbour (KNN), and kernel machines have proven to be robust and interpretable for rare event classification [14][15][16]. However, these techniques have traditionally been computationally intensive for training and inference.
Fig. 1: Ecological Conservation and Corrective System.
As most of the computation is based on Matrix-Vector Multiplication (MVM) operations, replacing multipliers with more fundamental primitives like addition/subtraction enables the design of an energy-efficient classification framework [17]. Multipliers tend to produce a precision explosion. By replacing them, we can exploit the computational primitives and approximations inherent in digital units like counters, underflow/overflow, and addition/subtraction. In the literature, there have been ways to tackle the precision explosion caused by multiply-accumulate operations, such as quantization [18] or representation in a new number system [19].
Traditionally, IoT-based machine learning and neural networks train offline with full precision and deploy the inference at lower precision fixed point [20]. Even with quantization-aware training, the backpropagation in training is done in full precision, and only the forward pass is quantized [21]. This may be an efficient training technique, but it still is expensive for re-training on the IoT platform. There have been instances where the gradients in backpropagation have been quantized [22][23]. However, these systems fail to achieve convergence during backpropagation [18], or they work well only when training data is available in large numbers. Moreover, the front-end, like filters and feature extractors in most edge devices, is implemented at higher precision with only the classifier quantized.
This paper leverages the energy-efficient bird density detection tinyML system [24] which uses the in-filter computing [6] with template-based SVM architecture [25]. Here we apply the Margin Propagation (MP) principle [26] to this architecture to develop a multiplierless in-filter computing framework, which exploits the computing and nonlinear primitives in the feature extraction process. Our goal is to exploit the hardware efficiency due to MP approximation for designing multiplierless filtering and classification operations, including nonlinear transformations using the kernel functions, also used as a feature extractor for training and inference. The multiplierless MP-based kernel machine has been proven to provide energy-efficient classification [27]. In our design, we implement an FIR filter bank, used as a feature extractor and kernel function, arranged in a multi-rate frequency model [28] using a multiplierless approach based on MP. This model reduces the FIR filter order used in the filter bank and helps achieve a low computation footprint. We believe that our proposed framework has the following key advantages:
* End-to-end multiplierless framework for acoustic classification using only basic primitives like addition/subtraction, underflow/overflow, shift, and comparison operations.
* Feature extraction and kernel function are combined to form an efficient computational system.
* Scalable system with user-defined memory footprint based on IoT hardware constraints.
* Integrated training using MP-based approximation mitigates approximation errors introduced in filtering and classifier.
* Since our framework uses basic computational primitives (no multipliers), the implementation can run at a much higher clock frequency (166 MHz in this case).
We have implemented the inference framework on an FPGA as proof of concept IoT implementation. We have validated our architecture on the environmental sound dataset [29], which showcases the capabilities of potential deployment to identify wildlife sounds or even sounds that may indicate possible poaching or timber smuggling.
The rest of this paper is organized as follows. Section II provides a brief discussion of related work, followed by section III, where we present the in-filter computation using an MP kernel machine. Section IV provides the FPGA implementation details. Section V provides results with an audio-based dataset for detection and surveillance applications. Section VI concludes this paper, provides some useful applications and discusses possible future work using this framework.
## II Related Works
A framework to build automated animal recognition in the wild, aiming at an automated wildlife monitoring system, has been discussed in [30]. The authors propose to use camera traps to capture data which becomes increasingly difficult in low light conditions and requires regular maintenance with increased power consumption. Extensive studies have been done using animal sounds for the recognition and classification of species employing SVMs and Hidden Markov Models (HMM) [31]. However, hardware implementation of such systems is not energy efficient for wildlife deployment. An efficient hardware implementation of acoustic classification for biometric security access was realized in [32], where Mel-Frequency Cepstral Coefficients (MFCC) is used as a feature extractor and implemented on-chip along with an SVM classifier. The MFCC and the SVM kernel occupy a high amount of hardware. To improve the hardware efficiency, we can explore the method where we can combine the feature extraction and SVM kernel as a single function as described in [6]. This eliminates the use of separate feature extraction and improves the hardware efficiency in terms of power (8 mW) and area. The work in [24] shows efficient microcontroller implementation of the work described in [6] for classification in case of ecological applications.
We still have a scope to improve the system's efficiency by exploring approximate computing to replace traditional resource-heavy operations like multipliers with more basic operators like additions. In literature, we have seen many approximate techniques like Canonic Signed Digits (CSD) [33], logarithmic function using look-up tables [34] or powers of 2 approximation [35]. These systems do not offer a complete end-to-end multiplierless computation.
Traditionally, energy efficiency can be achieved using a quantization technique instead of approximate computation, like fixed-point precision [36]. However, such linear quantization techniques result in accuracy loss as the data is represented uniformly. To mitigate this error, adaptive quantization can be used as described in [37]. This technique uses a measurement model to estimate the correct quantization for all the parameters of the classification system. Implementation of such a system is not feasible in the case of edge devices as there is the overhead of estimating the quantization levels, compromising the device's energy efficiency. Representation of the entire framework in the bfloat number system can provide similar effects of adaptive quantization [38].
In [27], we see an end-to-end multiplierless system using the MP approximation technique for kernel machines. We extend the capabilities of this framework in our current work where we use the feature extraction used as kernel function, as used in [6] and [24], and implement this kernel in MP approximation using the MP principle in [27]. The resulting framework is a one-of-a-kind digital hardware implementation of a multiplierless acoustic classifier with a feature extractor used as a kernel.
## III In-Filter Computation using Margin Propagation Kernel Machine
In-filter computation, described in [6] and [24], combines the feature extraction and the non-linear SVM kernel into a single function [25], as opposed to a traditional SVM, as shown in Fig.2. We leverage this principle, use an FIR filter as the kernel function, and implement this framework using MP-based approximation. The MP-based kernel machine has proven to be an energy-efficient system for implementing a classification framework for edge devices [27].
### _Precision Explosion and MP as Adaptive Quantization_
Consider a fully connected network with a single hidden layer. With 8-bit quantized inputs and weights, each multiplication in the hidden layer produces a 16-bit output. For a 16-dimensional input vector, accumulating these products gives a 20-bit hidden-layer output, and the final output layer grows to a 44-bit output. If we quantize this output to 8 bits, we may lose a substantial amount of accuracy. In contrast, the MP operation, which involves only additions, would generate an 18-bit output, resulting in lower accuracy loss. Thus, MP avoids the numerical precision explosion caused by conventional MAC operations. Moreover, the quantization error can be mitigated by an online training system, as shown in [27].
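To make the bit-growth argument concrete, the short sketch below (our own illustration; the fan-in and operand widths are only those assumed in the example above, not a full specification of the network) computes worst-case output widths for a conventional multiply-accumulate layer and for a purely additive combination.

```python
import math

def mac_output_bits(in_bits: int, w_bits: int, fan_in: int) -> int:
    """Worst-case width of a sum of `fan_in` products of in_bits x w_bits operands."""
    product_bits = in_bits + w_bits                       # each multiplication doubles the width
    return product_bits + math.ceil(math.log2(fan_in))   # accumulation adds log2(fan_in) bits

def add_output_bits(in_bits: int, fan_in: int) -> int:
    """Worst-case width when the operands are only added (MP-style combination)."""
    return in_bits + math.ceil(math.log2(fan_in))

# 8-bit inputs/weights and a 16-dimensional input vector, as in the example above
print(mac_output_bits(8, 8, 16))   # 16-bit products grow to 20 bits after accumulation
print(add_output_bits(8, 16))      # additions alone grow to only 12 bits
```

The exact widths reported for the full network depend on the layer sizes, which are not fully specified here; the sketch only illustrates the growth rules.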
### _Multiplierless Kernel Machine using MP_
We develop a classification framework based on multiplierless kernel machine using the MP approximation [27]. Consider a vector \(\mathbf{x}\in\mathbb{R}^{d}\), the decision function for kernel machines [39] is given as,
\[f(\mathbf{x})=\mathbf{w}^{T}\mathbf{K}+\mathbf{b}. \tag{1}\]
Here, \(\mathbf{K}\) is a function of \(\mathbf{x}\). Following the derivations in [27], we can rewrite eq.(1) in MP domain as,
\[f_{MP}(\mathbf{x})=z^{+}-z^{-}. \tag{2}\]
where,
\[z^{+} =MP([\mathbf{w}^{+}+\mathbf{K}^{+},\mathbf{w}^{-}+\mathbf{K}^{-},\mathbf{b}^{+}],\gamma_{1}). \tag{3}\] \[z^{-} =MP([\mathbf{w}^{+}+\mathbf{K}^{-},\mathbf{w}^{-}+\mathbf{K}^{+ },\mathbf{b}^{-}],\gamma_{1}). \tag{4}\]
\(\gamma_{1}\) is a hyper-parameter that is learned using gamma annealing. Here \(\mathbf{K}^{+}=\mathbf{K}\) and \(\mathbf{K}^{-}=-\mathbf{K}\). \(\mathbf{K}\) is the kernel which we derive using in-filter computation described in Section III-C. We normalize the values for \(z^{+}\) and \(z^{-}\) for better stability of the system using MP,
\[z=MP([z^{+},z^{-}],\gamma_{n}). \tag{5}\]
Here, \(\gamma_{n}\) is the hyper-parameter used for normalization. In this case, \(\gamma_{n}=1\). The output of the system can be expressed in differential form,
\[p=p^{+}-p^{-}. \tag{6}\]
Here, \(p\in\mathbb{R}\), \(p^{+}+p^{-}=1\) and \(p^{+},p^{-}\geq 0\). As \(z\) is the normalizing factor for \(z^{+}\) and \(z^{-}\), we can estimate the output generated by the MP function for each class using the reverse water-filling algorithm [40]:
\[p^{+} =[z^{+}-z]_{+}.\] \[p^{-} =[z^{-}-z]_{+}. \tag{7}\]
As shown in Fig.3, the kernel function forms a (\(P\times 1\)) vector defined as \(\mathbf{K}\). Using the principle of template-based classification described in [25] and [6], we use the parallel FIR filter bank as both the kernel and the feature extractor.
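As a floating-point reference for eqs. (2)-(7), the following sketch implements the MP function via reverse water-filling together with the differential decision computation. It is our own illustration: the shapes of \(\mathbf{w}^{\pm}\) and \(\mathbf{K}^{\pm}\), the treatment of the bias as a scalar, and the choice of \(\gamma\) values are assumptions, not the fixed-point hardware mapping described later.

```python
import numpy as np

def mp(x, gamma):
    """Margin Propagation approximation: return z such that sum(max(0, x_i - z)) = gamma,
    computed by reverse water-filling (assumes gamma > 0)."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]   # sort in descending order
    csum = np.cumsum(x)
    for k in range(1, len(x) + 1):
        z = (csum[k - 1] - gamma) / k               # z if exactly the k largest entries are active
        if k == len(x) or x[k] <= z:                # active set is consistent -> done
            return z

def mp_decision(K, w_plus, w_minus, b_plus, b_minus, gamma1, gamma_n=1.0):
    """Differential MP kernel-machine output p = p+ - p-, following eqs. (2)-(7).
    K, w_plus, w_minus are 1-D arrays of equal length; treating the bias split
    (b_plus, b_minus) as scalars is an assumption."""
    K_plus, K_minus = K, -K
    z_plus  = mp(np.concatenate([w_plus + K_plus,  w_minus + K_minus, [b_plus]]),  gamma1)  # eq. (3)
    z_minus = mp(np.concatenate([w_plus + K_minus, w_minus + K_plus,  [b_minus]]), gamma1)  # eq. (4)
    z = mp([z_plus, z_minus], gamma_n)                                                      # eq. (5)
    p_plus, p_minus = max(z_plus - z, 0.0), max(z_minus - z, 0.0)                           # eq. (7)
    return p_plus - p_minus                                                                 # eq. (6)
```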
### _FIR Filter Bank as Kernel_
Filter banks are commonly used for feature extraction in acoustic classification [41][42]. We use an FIR filter bank due to its stability and ease of implementation, especially in approximate computing [43][44]. Each filter in the filter bank has resonators with center frequencies based on the Greenwood function [45]. Fig.3 shows the detailed block architecture of the filter bank. The input \(x(n)\in\mathbf{x}\) is an acoustic instance sampled at 16 kHz, i.e., \(N=16000\).
Fig. 2: Comparison of Traditional SVM and In-Filter Compute Template based SVM. The Template based SVM, using \(p\) filters, enables user-specified hardware constraints and uses feature extractor as the Kernel Function [6].
\(\mathbf{B_{p}}\) denotes the \(p^{th}\) bandpass FIR filter, where \(p\in\{1,\ldots,P\}\) and \(P\) is the total number of filters in the filter bank.
\[\mathbf{B_{p}}(n)=\sum_{k=0}^{M-1}h_{p}(k)x(n-k). \tag{8}\]
Here, \(h_{p}\) is the filter coefficient based on \(p^{th}\) filter cut off frequency. \(M\) is the order of the filter. The output of the band pass filter is half wave rectified using HWR block, which is then accumulated over \(N\) samples and then standardized (STD block) to get the kernel function \(\Phi_{p}\in\mathbb{R}\). The derivation for this kernel function is described in Appendix A.
The filter bank has been divided into multiple octaves with a bank size of 5 filters per octave. Octaves are defined based on the sampling frequencies in decreasing order. The cut-off frequencies are equally spaced within the octaves. The coefficients (\(h_{p}(n)\)) are precomputed and provided as inputs to each filter. We use the technique of downsampling the input sampling frequency and segregating the cut-off frequencies into separate octaves, as shown in Fig.3. The cut-off frequencies are arranged in descending order, which helps to reduce the input sampling frequency. This is a proven efficient way of implementing a filter bank, as shown in [28]. The downsampling employs a low pass filter (L) used for anti-aliasing at the input of each octave. Downsampling the input allows lower-order FIR filters to obtain the desired output. This is shown in Fig.4 using a comparison of the output of an FIR filter bank with and without downsampling for a bank of 30 filters. We use a chirp signal of increasing frequency as the input, with a sampling rate of 16 kHz. Without downsampling, a range of higher-order FIR filters is required to get the desired output from the filter bank.
Fig. 4: FIR Filter Bank Output (Gain Response) for chirp signal (a) Without Downsampling (Filter Order ranges from 15 to 200) (b) With Downsampling (Filter Order fixed at 15)
Fig. 3: FIR parallel filter bank framework for MP based classification for \(p\times 1\) Kernel. Here, the input \(x_{n}\) is provided to the parallel FIR filter bank to generate a \(p\times 1\) kernel. This kernel is used as input for a single layer classification network formed using MP modules. The parallel filter bank and the downsampling low pass filter blocks also use MP modules for computation.
The higher cut-off frequencies required a filter order of 15, and as the cut-off frequencies were reduced, the order of the filters increased, resulting in the lowest 5 frequencies requiring filters of order 200. In contrast, the same output can be obtained with FIR filters of order 15 by downsampling the input signal for each range of cut-off frequencies. In this case, the cut-off frequencies were segregated into 6 octaves, corresponding to the sampling frequencies in descending order. The additional low pass filter required the same order as the bandpass filter. Thus, this downsampling technique provides an efficient way of implementing an FIR filter bank for low-powered devices.
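The multi-rate arrangement can be sketched in a few lines of Python (our illustration only; the band edges, window choices and octave layout below are assumptions, whereas the actual design places center frequencies with the Greenwood function and uses MP-approximated arithmetic rather than scipy's floating-point filters):

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 16_000        # input sampling rate (Hz)
ORDER = 15         # fixed band-pass order made possible by the downsampling
BANK_SIZE = 5      # filters per octave

def octave_filterbank(x, n_octaves=6):
    """Multi-rate FIR filter bank: the top octave is filtered at the full rate, then
    the signal is low-pass filtered, decimated by 2, and the next (lower) octave is
    processed at half the rate."""
    outputs, fs = [], FS
    for _ in range(n_octaves):
        # five band-pass filters covering the upper half of the current band (assumed layout)
        edges = np.linspace(fs / 4, 0.475 * fs, BANK_SIZE + 1)
        for lo, hi in zip(edges[:-1], edges[1:]):
            h = firwin(ORDER + 1, [lo, hi], pass_zero=False, fs=fs)
            outputs.append(lfilter(h, [1.0], x))
        # anti-aliasing low-pass filter, then decimate by 2 for the next octave
        x = lfilter(firwin(ORDER + 1, fs / 4, fs=fs), [1.0], x)[::2]
        fs //= 2
    return outputs
```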
### _Filtering operation in MP domain_
We use two types of filtering operation in our filter bank, i.e., a low pass filter for downsampling and a bandpass filter. These filtering operations result in an inner product computation between the filter coefficients (\(h_{p}(n)\in\mathbf{h}\)) and input samples (\(x(n)\)) as per eq.(8). Following the derivations in [27], we can express this filtering operation in MP domain as below,
\[y=MP\left(\left[\mathbf{h}^{+}+\mathbf{x}^{+},\mathbf{h}^{-}+\mathbf{x}^{-}\right],\gamma_{f}\right)-MP\left(\left[\mathbf{h}^{+}+\mathbf{x}^{-},\mathbf{h}^{-}+\mathbf{x}^{+}\right],\gamma_{f}\right). \tag{9}\]
For this implementation, we have \(\mathbf{h}^{+}\) = \(\mathbf{h}\), \(\mathbf{h}^{-}\) = \(-\mathbf{h}\), \(\mathbf{x}^{+}\) = \(\mathbf{x}\) and \(\mathbf{x}^{-}\) = \(-\mathbf{x}\), \(\gamma_{f}\) is the MP parameter for the filtering operation. Fig.5 shows the MP implementation of the filtering operation for the low pass and the bandpass filters. Since the property of MP inherently exhibits low pass filtering, based on the reverse water filling algorithm described in [40], we require a lower-ordered low pass filter implementation in the case of the MP domain. We can see the frequency response of the filter bank in the MP domain in Fig.6.
We observe some distortion in the gain response of the chirp signal output. This is due to the MP approximation of the inner product in the filtering operation. The learning algorithm can mitigate this approximation error, since the weights are adjusted with the approximation in place: the MP-based training minimizes the resulting error rather than removing the approximation itself, improving the system's overall accuracy. This technique requires only basic primitives like comparators, shift operators, counters, and adders, making it hardware-friendly.
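A floating-point sketch of the MP-domain filtering of eq. (9) is given below; it is our own illustration, with \(\gamma_{f}\) and the signal scaling left as assumptions, and it redefines the same reverse-water-filling MP helper sketched in Section III-B so that the snippet stays self-contained.

```python
import numpy as np

def mp(x, gamma):
    """Reverse water-filling MP approximation (same helper as in Section III-B)."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]
    csum = np.cumsum(x)
    for k in range(1, len(x) + 1):
        z = (csum[k - 1] - gamma) / k
        if k == len(x) or x[k] <= z:
            return z

def mp_fir_step(h, window, gamma_f):
    """One output sample of eq. (9): the inner product of eq. (8) in differential MP form."""
    hp, hm = h, -h
    xp, xm = window, -window
    return (mp(np.concatenate([hp + xp, hm + xm]), gamma_f)
            - mp(np.concatenate([hp + xm, hm + xp]), gamma_f))

def mp_fir_filter(h, x, gamma_f):
    """Run the MP-approximated FIR filter over a whole signal (zero initial state)."""
    M = len(h)
    x_pad = np.concatenate([np.zeros(M - 1), x])
    return np.array([mp_fir_step(h, x_pad[n:n + M][::-1], gamma_f)   # window = x(n), x(n-1), ...
                     for n in range(len(x))])
```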
## IV FPGA Implementation
The high-level block diagram of the proposed design is shown in Fig 7. The target frequency of the proposed design is set to 50 MHz, and the input sampling rate is set to 16 kHz. The number of clock cycles available between two samples is 3125. The architecture is designed in such a way that processing of a new sample is completed within this time limit. Here, 3 MP modules (MP0-2) work simultaneously to meet the time constraints. MP0 is used to implement 4 low pass (LP) filters, and two MP modules (MP1 and MP2) are responsible for the Band Pass (BP) filtering operation. The internal architecture and working principle of an MP module are described in [27]. The window size of the LP filter is 6, and the samples are stored in a register bank of dimension 6. In the LP filter section, four register banks (LPRegBank) are used to store the inputs for the four LP filters. The selection of a particular register bank is done by the select lines \(sel0\) and \(sel1\). The ROM (ROM0) is used to store the coefficients for the LP filters. The precision of the data path is set to 10 bits for the proposed design. Initially, the input samples (\(x(n)\)) are stored in a register bank and fed to MP0 for implementing LP filter \(L_{1}\); the output of \(L_{1}\) is down-sampled by 2 and passed to the corresponding parallel BP filter bank (for generating octave 2), as discussed in Section III-C. Here, MP0 implements the 4 LP filters in a time-multiplexed fashion and generates the desired outputs for Octaves 1, 2, 3 and 4. The outputs are stored in the corresponding register banks (RegBank1-4) using select signal \(sel1\). The contents of the register banks (RegBank1-4) are used for the parallel BP filtering operation, generating kernel functions \(\Phi_{5}\) to \(\Phi_{30}\).
A single MP module (MP1) is used repeatedly to generate the outputs for octave 1. The window size of the BP filter is 16.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{**Device**} & \multicolumn{3}{c|}{Spartan 7} \\ & \multicolumn{2}{c|}{xc7\%cpg2196-2} \\ \hline \multicolumn{2}{|c|}{**Frequency**} & \multicolumn{2}{c|}{50 MHz} \\ \cline{2-3} \multicolumn{2}{|c|}{**Dynamic Power**} & \multicolumn{2}{c|}{17 mW} \\ \cline{2-3} \multicolumn{2}{|c|}{**Slices**} & \multicolumn{2}{c|}{903} \\ \hline
**FFs** & **LUTs** & **DSP** & **BRAM** \\ \hline
2376 & 1503 & 0 & 0 \\ \hline \end{tabular}
\end{table} TABLE I: FPGA Implementation Summary.
Fig. 5: Filtering operation using MP.
Fig. 6: MP Filter bank output (Gain Response) for chirp signal. This response shows distortion due to MP approximation errors.
The inputs and coefficients are stored in a register bank (RegBank0) and a ROM (ROM1), respectively. The register bank RegBank5 is used to store accumulated values of the filter outputs over all 16000 samples, i.e., when an output is generated by a filter, it is added to the previous value and stored in the corresponding register in RegBank5. MP2 is used for the BP filter outputs of octaves 2, 3, 4 and 5. Here also, a single MP module is used repeatedly to generate the desired outputs. The downsampling of the LP filter outputs provides a longer time span between two outputs, which are the inputs to the BP filters generating octaves 2, 3, 4 and 5. Hence a single MP module (MP2) is sufficient to produce \(\Phi_{5}\) to \(\Phi_{30}\) in a time-multiplexed fashion. Like RegBank5, RegBank6 is also used to store accumulated values of the filter outputs over all 16000 samples. The select lines \(sel4\) and \(sel5\) are used to access a particular register in the register banks RegBank5 and RegBank6, respectively. The contents of RegBank5 and RegBank6 are the kernel functions \(\Phi_{1}\) to \(\Phi_{30}\) of the proposed design. ROM2 is used to store the coefficients of the BP filters for octaves 2, 3, 4 and 5.
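The timing budget that makes this time-multiplexing possible follows directly from the clock and sampling rates; the small sketch below (our own illustration) reproduces the 3125-cycle figure and shows how each decimation by 2 doubles the budget available to the filters of the lower octaves.

```python
F_CLK = 50_000_000        # FPGA target clock (Hz)
F_S = 16_000              # input sampling rate (Hz)

cycles_per_sample = F_CLK // F_S
print(cycles_per_sample)  # 3125 cycles between two input samples at the full rate

# Each decimation by 2 doubles the time between new inputs for the next octave,
# which is why a single time-multiplexed MP module suffices for the lower octaves.
for octave in range(1, 7):
    print(octave, cycles_per_sample * 2 ** (octave - 1))
```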
The inference engine starts working after the completion of the kernel computation. The three MP blocks MP3, MP4 and MP5 are used in the inference engine to generate the output \(p\). The architecture and working principle of the inference engine are discussed in Section III-B. \(w^{+}\) and \(w^{-}\) are the weight matrices, stored in a ROM. The kernel function \(\Phi\) and the weight matrices \(w^{+}\) and \(w^{-}\) are the inputs to the inference engine. The kernel function is selected sequentially by select signal \(sel6\). The upper 10 bits of the kernel function (\(\Phi_{1}\) to \(\Phi_{30}\)) are used by the inference engine. The high-level block diagram of an MP module and the implementation details of the inference engine have been discussed in [27].
We have used a Spartan 7 series FPGA to implement our design, as this FPGA caters to edge applications.
\begin{table}
\begin{tabular}{|l||c|c|c|c|c|c|} \hline
**Related Work** & **Mahmoodi, et al.** & **Cutajar, et al.** & **Boujelben, et al.** & **Ramos-Lara et al.** & **Nair et al.** & **This work** \\ \cline{2-7} & **[46]** & **[47]** & **[48]** & **[32]** & **[6]** & **This work** \\ \hline \hline
**Year** & 2011 & 2013 & 2018 & 2009 & 2021 & 2022 \\ \hline
**FPGA** & Virtex4 & Virtex-II & Artix-? & Spartan 3 & Spartan 7 & Spartan 7 \\ & xc4vs5x & xc2v3000 & xc/x1007 & xc2v000 & xc/x6cgai96 & xc/x6cgai96 \\ \hline
**Operating Frequency** & 151.286 MHz & 42.012 MHz & 101.74 MHz & 50 MHz & 25 MHz & 50 MHz\({}^{2}\) \\ \hline
**Input Sampling Frequency** & NA\({}^{1}\) & 16 kHz & 6 kHz & 8 kHz & 16 kHz & 16 kHz \\ \hline
**Flip Flop** & 11589 & 1576 & 17074 & 5351 & 2864 & 2376 \\ \hline
**LUTs** & 9141 & 11943 & 16563 & 6785 & 1517 & 1503 \\ \hline
**RAM (18 Kb)** & 99 & NA\({}^{1}\) & 4 & NA\({}^{1}\) & 0 & 0 \\ \hline
**DSP** & 81 & 64 & 87 & 21 & 4 & 0 \\ \hline
**Power (mW/MHz)** & NA\({}^{1}\) & NA\({}^{1}\) & 1.12 & NA\({}^{1}\) & 0.32 & 0.34 \\ \hline
**Techniques** & SVM & DWT and SVM & FFT and SVM & CAR-IHC IIR & FIR and \\ \cline{2-7} & Persian & TIMIT & Respiratory & Speaker & ESC-10 and & ESC-10 and \\ \cline{2-7} & Digits [49] & Corpus [50] & Sound [48] & Verification [32] & FSDD [29] & FSDD [29] \\ \hline
**Average Accuracy (\%)** & 98 & 61 & 94 & 95 & 88 & 88 \\ \hline \end{tabular}
* These works did not report this entity for their designs. \({}^{2}\) Maximum operating frequency of the proposed design is 166 MHz.
\end{table} TABLE II: Comparison of architecture and resource utilization of related work.
Fig. 7: Architectural implementation of proposed Multiplierless In-Filter Acoustic Classification Framework.
Table I shows the resource utilization and power consumption of our design. We were able to implement our design using fewer than 1K FPGA slices, and the dynamic power consumption is limited to 17 mW for a 50 MHz operating frequency. Table II compares similar designs on varied edge datasets in terms of resource utilization and power consumption. Our design shows better resource utilization than these systems, with lower power consumption in mW/MHz. The proposed design consumes almost the same number of LUTs and 488 fewer FFs than the similar design presented in [6]. Due to the multiplierless design, the proposed architecture does not consume any DSPs, whereas the design reported in [6] consumes 4 DSPs. For a fair comparison, we computed the number of LUTs required to replace 4 DSPs. We implemented 4\(\times\)4 and 8\(\times\)8 signed multipliers (Baugh-Wooley) on the FPGA and found that they consume 19 and 72 (about 4\(\times\) more) LUTs, respectively. The design reported in [6] uses 4 signed multipliers with dimensions 20\(\times\)12, 20\(\times\)12, 12\(\times\)12 and 16\(\times\)8, respectively. This approximate calculation shows that all 4 multipliers consume at least 890 LUTs. Hence the proposed multiplierless design can save at least 25% of the hardware resources (LUTs + FFs) compared to the design presented in [6]. The power consumption (mW/MHz) is almost the same for the proposed design and the design presented in [6], as the design is small and only 4 multipliers are used. However, for bigger networks such as DNNs, our multiplierless approach would give a significant benefit. The proposed design can achieve a maximum operating frequency of 166 MHz, which can be used to support higher input sampling rates.
## V Results and Discussion
The framework's classification ability is showcased using the environmental sounds dataset. Identification of different environmental sounds shows the versatile nature of the framework that can be put to use in various acoustic applications. As wildlife conservation would result in rare data event detection, Environmental Sounds Classification (ESC-10) dataset [29] would be an ideal dataset to showcase the framework capabilities in case of ecological application. Also, we compared our system with [6] using ESC-10 and Free Speech Digit Dataset (FSDD), where we use speaker identification as the application.
The ESC-10 dataset consists of sound clips constructed from recordings publicly available through the Freesound project. It consists of 400 environmental recordings with 10 classes, i.e., 40 clips per class and 5 seconds per clip. Each class contains 40 wav format audio files. These clips had a lot of silence, so we trimmed the silent parts and further cut the remaining clips into 1-second versions of the same class, thus increasing the dataset's number of samples. Table III shows the results for this dataset, with classes such as dog bark, rain, sea waves, crying baby, clock ticking, person sneezing, helicopter, chainsaw, crowing rooster, and fire crackling. The classification uses a one-versus-all methodology to identify the classes, where the data is balanced and randomly arranged into train and test sets (shown in brackets). We use the built-in MATLAB library with default command lines for the traditional SVM.
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Classes**} & \multicolumn{3}{c|}{**Normal SVM**} & \multicolumn{3}{c|}{**CARIHC SVM**} & \multicolumn{3}{c|}{**MP Kernel**} \\ \cline{2-9} & \multicolumn{3}{c|}{**Floating Point**} & \multicolumn{3}{c|}{**Floating Point**} & \multicolumn{3}{c|}{**Floating Point**} & \multicolumn{3}{c|}{**Fixed Point**} \\ \cline{2-9} & **SVs** & **Train** & **Test** & **Train** & **Test** & **Train** & **Test** & **Train** & **Test** \\ \hline \hline
**Theo (761/254)** & 107 & 96 & 96 & 93 & 91 & 92 & 93 & 92 & 92 \\ \hline
**Nicolas(889/297)** & 15 & 100 & 100 & 98 & 97 & 99 & 99 & 98 & 98 \\ \hline \end{tabular}
\end{table} TABLE IV: FSDD classification accuracy results in percent. Number of filters for our work is fixed at 30. We used 8-bit fixed point for our design
Fig. 8: Impact of bit-width on dataset accuracy for Crying Baby class from ESC-10.
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Classes**} & \multicolumn{3}{c|}{**Normal SVM**} & \multicolumn{3}{c|}{**CARIHC SVM**} & \multicolumn{3}{c|}{**MP In-Filter Compute**} \\ \cline{2-9} & \multicolumn{3}{c|}{**Floating Point**} & \multicolumn{3}{c|}{**Floating Point**} & \multicolumn{3}{c|}{**Floating Point**} & \multicolumn{3}{c|}{**Fixed Point (8-bit)**} \\ \cline{2-9} & **SVs** & **Train** & **Test** & **Train** & **Test** & **Train** & **Test** & **Train** & **Test** \\ \hline \hline
**Dog (129/33)** & 42 & 90 & 94 & 89 & 90 & 91 & 94 & 91 & 94 \\ \hline
**Rain (119/40)** & 44 & 86 & 90 & 89 & 87 & 90 & 90 & 88 & 88 \\ \hline
**Sea\_Waves (200/50)** & 80 & 87 & 90 & 84 & 78 & 89 & 88 & 88 & 88 \\ \hline
**Crying Baby (144/49)** & 37 & 93 & 84 & 91 & 87 & 92 & 87 & 89 & 88 \\ \hline
**Clock Tick (114/50)** & 54 & 81 & 76 & 92 & 85 & 85 & 86 & 85 & 84 \\ \hline
**Person Sneeze (10/144)** & 49 & 85 & 75 & 87 & 80 & 86 & 80 & 85 & 80 \\ \hline
**Helicopter (197/50)** & 45 & 92 & 88 & 95 & 90 & 88 & 85 & 85 & 86 \\ \hline
**Chainsaw (99/34)** & 41 & 90 & 85 & 93 & 82 & 92 & 81 & 92 & 80 \\ \hline
**Rooster (12/54)** & 40 & 93 & 93 & 93 & 96 & 90 & 94 & 91 & 94 \\ \hline
**Fire Crackling (152/66)** & 46 & 93 & 83 & 89 & 87 & 89 & 92 & 90 & 88 \\ \hline \end{tabular}
\end{table} TABLE III: ESC-10 dataset classification accuracy results in percent. Number of filters for our work is fixed at 30. We used 8-bit fixed point for our design
Here, the CARIHC SVM employs a completely different approach from the standard SVM for arriving at the accuracy, which is detailed in [6, 25]. Since the dataset size is small, the accuracy values for some classes differ by a larger margin between the traditional SVM and the other two SVM implementations, as small variations in positive or negative classifications have a greater impact on the accuracy numbers. Similarly, we compare the identification of two speakers from the FSDD dataset. These results show that our framework takes advantage of the template-SVM methodology with a fixed number of templates and the MP approximation technique, delivering comparable results. We have also compared our system, which is area efficient, with similar systems, as shown in Table II.
We use an 8-bit fixed point for implementing the hardware. We performed an empirical analysis of the dataset (using the crying baby class) with different bit widths. As shown in Fig.8, the training and testing accuracy remains stable till 8-bit and decreases sharply for bit width lower than 8-bit. We use Keras [51] implementation for training our system, as this software framework is robust and highly optimized. The FIR filter banks are quantized at 8-bit, and a custom layer for the MP function is written for the Keras framework. The optimization of the model is done using Tensorflow [52] libraries for quantization.
## VI Conclusion
This paper presents a novel multiplierless framework for acoustic classification using an FIR filter bank as the feature extraction and kernel function stage simultaneously. This framework is entirely multiplierless since the FIR filter bank, along with the inference logic, is implemented using MP approximation. Furthermore, the framework is tunable to any time series data by tuning the filter parameters in the FIR filter bank. The framework's scalability is evident as the number of filters is user-defined and can be controlled to adhere to IoT system constraints. Being completely multiplierless makes the system highly efficient for deployment in battery-powered edge devices. Various time series data generated from different biosensors can be used in their raw format for classification using this framework, without additional preprocessing or hardware. A network of edge devices running our proposed classification framework can be used for continuous monitoring of wildlife species and detecting anomalies in cases of poaching or timber smuggling. This framework can be extended to other biomedical applications using edge devices capable of healthcare monitoring with raw ECG, EMG, and EEG signals. Wearable IMU sensors with this framework can be used to detect anomalies in posture. To make our framework more energy efficient, we can fabricate this system as an Application-Specific Integrated Circuit (ASIC).
### _FIR Filter Kernel Derivation_
The output of each band pass filter, as shown in Fig.3, forms the input to the HWR blocks in parallel.
\[HWR(q)=max(0,q). \tag{10}\]
The half wave rectified output is summed over \(N\) samples of a single input, and this forms the input for the standardization (\(STD\)) blocks in parallel. Let \(d_{p}(n)=HWR_{p}(B_{p}(n))\),
\[s_{p}=\sum\limits_{n=1}^{N}d_{p}(n). \tag{11}\]
Here, \(s_{p}\in\mathbb{R}\).
\[STD(S_{p,i})=\frac{S_{p,i}-\mu_{p}}{\sigma_{p}}. \tag{12}\]
where \(\{s_{p}\in S_{p,i}|1\leq i\leq M\}\) with \(M\) as the number of training inputs, \(\mu_{p}=mean(S_{p,1},S_{p,2},..,S_{p,M})\) and
\(\sigma_{p}\) =\(\sqrt{\frac{1}{M-1}\sum\limits_{i=1}^{M}(S_{p,i}-\mu_{p})^{2}}\)
\[\Phi_{p}=STD(S_{p,i}). \tag{13}\]
Here, \(\Phi_{p}\in\mathbb{R}\).
The summation over \(N\) samples of the HWR output is taken as per eq.(11) for each filter. Then a standardization technique, commonly used in neural network optimization [53], is applied across the \(M\) training input samples as per eq.(12). Note that \(\mu_{p}\) and \(\sigma_{p}\) are calculated only during training, and these vectors are passed as learned parameters to the inference engine. Therefore, an input signal vector \(X_{n}\) sampled at a sampling frequency \(f_{s}\) generates \(N\) samples, each denoted as \(x(n)\). It is then processed by the parallel FIR filter bank to estimate the kernel function \(\Phi_{p}\), with \(p\) as the filter stage out of \(P\) filters, as per (13). The output is a \(P\times 1\) kernel vector (K), as shown in Fig.3.
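A compact floating-point sketch of eqs. (10)-(13) is shown below; it is our own illustration, and the array shapes (one accumulated value per filter, standardisation statistics computed over the \(M\) training inputs) are assumptions about how the quantities are batched.

```python
import numpy as np

def kernel_features(band_outputs):
    """Eqs. (10)-(11): half-wave rectify each band-pass output and sum over the N samples."""
    return np.array([np.sum(np.maximum(band, 0.0)) for band in band_outputs])   # s_p, one per filter

def fit_standardizer(S):
    """Eq. (12): per-filter mean/std computed over the M training inputs only (S has shape (M, P))."""
    return S.mean(axis=0), S.std(axis=0, ddof=1)   # mu_p, sigma_p (ddof=1 matches 1/(M-1))

def kernel_vector(s, mu, sigma):
    """Eq. (13): the P x 1 kernel vector Phi passed to the classifier."""
    return (s - mu) / sigma

# Usage sketch: S stacks kernel_features(...) for the M training inputs;
# mu, sigma = fit_standardizer(S); Phi = kernel_vector(kernel_features(bands), mu, sigma)
```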
## Acknowledgment
This research was supported in part by (i) INSPIRE faculty fellowship (DST/INSPIRE/04/2016/000216), SPARC grant (SPARC/2018-2019/P606/SL) from Ministry of Human Resource Development and IMPRINT Grant IMP/2018/000550 from the Department of Science and Technology, India. The authors would like to acknowledge the joint Memorandum of Understanding (MoU) between Indian Institute of Science, Bangalore and Washington University in St. Louis for supporting this research activity.
|
2310.13146 | CLIFT: Analysing Natural Distribution Shift on Question Answering Models
in Clinical Domain | This paper introduces a new testbed CLIFT (Clinical Shift) for the clinical
domain Question-answering task. The testbed includes 7.5k high-quality question
answering samples to provide a diverse and reliable benchmark. We performed a
comprehensive experimental study and evaluated several QA deep-learning models
under the proposed testbed. Despite impressive results on the original test
set, the performance degrades when applied to new test sets, which shows the
distribution shift. Our findings emphasize the need for and the potential for
increasing the robustness of clinical domain models under distributional
shifts. The testbed offers one way to track progress in that direction. It also
highlights the necessity of adopting evaluation metrics that consider
robustness to natural distribution shifts. We plan to expand the corpus by
adding more samples and model results. The full paper and the updated benchmark
are available at github.com/openlifescience-ai/clift | Ankit Pal | 2023-10-19T20:43:11Z | http://arxiv.org/abs/2310.13146v1 | # CLIFT : Analysing Natural Distribution Shift on Question Answering Models in Clinical Domain
###### Abstract
This paper introduces a new testbed CLIFT (**C**linical **S**hift) for the clinical domain Question Answering task. The testbed includes 7.5k high-quality question-answering samples to provide a diverse and reliable benchmark. We performed a comprehensive experimental study and evaluated several QA deep-learning models under the proposed testbed. Despite impressive results on the original test set, the performance degrades when applied to new test sets, which shows the distribution shift. Our findings emphasize the need for and the potential for increasing the robustness of clinical domain models under distributional shifts. The testbed offers one way to track progress in that direction. It also highlights the necessity of adopting evaluation metrics that consider robustness to natural distribution shifts. We plan to expand the corpus by adding more samples and model results. The full paper and the updated benchmark are available at github.com/openlifescience-ai/clift
## 1 Introduction
Digital healthcare data has significantly expanded in recent years. Question Answering (QA) in the clinical domain involves extracting answers from clinical texts in Electronic Health Records (EHRs). Clinicians frequently consult EHR notes and documents to aid medical decision-making. [22] However, developing robust clinical QA systems is challenging due to distribution shifts in real-world data. Recent studies [20; 3; 4] have applied neural QA models in this domain, such as BERT and variants, with promising results. However, prior work has focused narrowly on achieving state-of-the-art accuracy on existing benchmarks rather than robust generalization. The key goal is not achieving the highest possible accuracy on a specific dataset, but developing models that can reliably transfer to new clinical data from diverse regions, hospitals, and organizations. This requires resilience to distributional shifts between populations and institutions. Therefore, progress in clinical QA should emphasize model robustness over optimization on benchmarks. Advancing the field requires rigorously evaluating model generalization under real-world distribution shifts. Our work introduces CLIFT as a benchmark designed specifically for this purpose.
In statistical learning theory, it is often assumed that test data points originate from the same distribution as the training data of a machine learning model. However, in reality, data may be sampled from various distributions with considerable domain changes [7; 21; 9], so this assumption is only sometimes valid. Despite the impressive results of QA models trained in the clinical domain, when the data distribution unexpectedly changes, models that appear to do well in test set accuracy fail when deployed in real time. The research in [27] indicates that for paraphrased and completely new questions, the F1 score drops by around 40% to 60%. We evaluate models trained on emrQA to investigate natural distribution shifts across different test sets, for example, moving from the existing test set to the cancer, heart, and other test sets. Our study focuses on one question:
#### Are Clinical Models fine-tuned on emrQA robust to natural distribution shifts?
We perform a comprehensive experimental study on the emrQA dataset [14] in this paper. We introduce a new testbed derived from the MIMIC-III database. Our results show that emrQA models exhibit robustness to some natural distribution shifts. Despite impressive results on the training dataset, when applied to new test sets, the performance degrades, and models in our testbed experience a drop in the F1 metric. Our experiments underline the need and the potential for improving the robustness of clinical domain models under distributional shifts. The results also highlight the necessity of adopting evaluation metrics that consider robustness to natural distribution shifts.
In brief, the contributions of this study are as follows. (i) We propose five new test datasets. The testbed covers different clinical domain diseases and multiple sub-topics. The questions are drawn from MIMIC-III [8], and human experts validated all the question, answer, and evidence triples. (ii) These datasets offer several advantages over existing ones. They evaluate the diverse reasoning abilities of clinical models over a vast array of disease topics and subtopics. The testbed provides a comprehensive evaluation of clinical domain data. (iii) Detailed statistics and a fine-grained evaluation of natural distribution shift are provided in this study. We examine and quantify the shortcomings of clinical deep learning models trained on emrQA when evaluated with new test data. Moreover, we also analyzed the dataset using difficulty metrics such as answer diversity and syntactic divergence. (iv) We conduct our experiments with the help of the HuggingFace Transformers library [26]. Furthermore, we also open-source the code and model checkpoints to reproduce the results and test new models in the future. (v) We provide a new QA tool to validate the Question Answering dataset. We hope this will accelerate the tagging process and robust ML research in this domain.
## 2 Background
### Natural Distribution shift
For a prediction algorithm \(M:\,\mathcal{X}\rightarrow\mathcal{Y}\), we assume a clinical-domain dataset from a hospital \(A\), where the input space is \(\mathcal{X}\) and the label space is \(\mathcal{Y}\). To evaluate \(M\)'s performance on a test dataset \(Q=\{(x_{i},y_{i})\,|\,i=1,\ldots,n\}\) drawn from a data distribution \(\mathcal{P}\), we will assume that the algorithm \(M\) is fixed. The first steps are choosing a loss function \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\) and calculating the test loss \(\ell_{Q}(M)=\frac{1}{n}\sum_{i=1}^{n}\ell\left(M\left(x_{i}\right),y_{i}\right)\) and the population loss \(\ell_{\mathcal{P}}(M)=\mathbb{E}_{\mathcal{P}}[\ell(M(x),y)]\) under the test distribution. While the input and label spaces are fixed in this case, there may be many distinct joint distributions over them. Besides the dataset from hospital \(A\), we collect another clinical-domain dataset from a hospital \(B\), with a test set \(Q^{\prime}\) drawn from a new distribution \(\mathcal{P}^{\prime}\).
In addition to the test loss \(l_{Q}(M)\), we are interested in the loss \(l_{Q^{\prime}}(M)\) under the shifted distribution encountered in real deployments, i.e., under distribution shift. Observing how the model's expected loss changes across distributions helps to improve the model's robustness, eliminate biases and understand the dataset better. This difference may be broken down into three parts [13]:
\[l_{Q}(M)-l_{Q^{\prime}}(M)=\underbrace{(l_{Q}(M)-l_{\mathcal{P}}(M))}_{\text {Adaptivity gap}}+\underbrace{(l_{\mathcal{P}}(M)-l_{\mathcal{P}^{\prime}}(M))}_{ \text{Distribution gap}}+\underbrace{(l_{\mathcal{P}^{\prime}}(M)-l_{Q^{\prime}}(M) )}_{\text{Generalization gap}} \tag{1}\]
The adaptivity gap quantifies how adapting the model to the test set \(Q\) affects the estimate of the population loss. The distribution gap indicates how much the change in the underlying distribution impacts the model's performance, while the difference between the sample and population losses due to \(Q^{\prime}\)'s random sampling is captured by the generalization gap.
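As a toy illustration of this decomposition (entirely our own construction on synthetic data, not the clinical setting), the sketch below estimates each term by drawing large samples standing in for \(\mathcal{P}\) and \(\mathcal{P}^{\prime}\) alongside finite test sets \(Q\) and \(Q^{\prime}\):

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(model, X, y):
    """Average 0-1 loss of a fixed predictor on a labelled sample."""
    return np.mean(model(X) != y)

def sample(shift, n):
    """Toy covariate shift: P and P' differ only in the input distribution."""
    X = rng.normal(loc=shift, scale=1.0, size=n)
    y = (X > 0.5).astype(int)          # same labelling rule in both environments
    return X, y

model = lambda X: (X > 0.0).astype(int)    # a fixed, pre-trained decision rule

Xq, yq   = sample(0.0, 500)                # test set Q  ~ P
Xq2, yq2 = sample(1.0, 500)                # test set Q' ~ P'
Xp, yp   = sample(0.0, 200_000)            # large sample standing in for P
Xp2, yp2 = sample(1.0, 200_000)            # large sample standing in for P'

adaptivity_gap     = loss(model, Xq, yq)   - loss(model, Xp, yp)
distribution_gap   = loss(model, Xp, yp)   - loss(model, Xp2, yp2)
generalization_gap = loss(model, Xp2, yp2) - loss(model, Xq2, yq2)
print(adaptivity_gap + distribution_gap + generalization_gap,
      loss(model, Xq, yq) - loss(model, Xq2, yq2))   # the two sides of eq. (1) agree
```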
Figure 1: Comparison of models’ F1 scores on emrQA and our proposed test datasets. The result demonstrates that despite impressive results on the training dataset when applied to new test sets, the model’s performance degrades, which shows the distribution shift
## 3 emrQA Dataset
emrQA [14] is a large-scale extractive question-answering dataset generated semi-automatically from question templates, where human experts annotated the templates. The proposed task is to extract a continuous text span from the clinical note, given a clinical note and a question as input to the QA model. The corpus contains almost 400,000 triplets of a question, logical form, and evidence/answer.
We used the Medication and Relation datasets for fine-tuning the models. These two subsets comprise 80% of the overall emrQA dataset, and their structure is congruent with the span prediction task, which is more relevant for clinical decision support. Table 1 describes the statistics of the emrQA dataset.
## 4 Proposed Testbed
MIMIC-III [8] is the most popular dataset for Electronic Health Record (EHR) related tasks. In the dataset, a clinical record is maintained for each patient, containing information such as ECGs, physician notes, discharge summaries, nurse progress reports and more. Due to its comprehensive collection of unstructured clinical notes, MIMIC-III has been extensively used in clinical NLP tasks [15].
In order to evaluate the ability of healthcare domain QA systems to generalize to new clinical data, we created five new test sets from MIMIC-III. First, we randomly sampled patient records from the MIMIC-III dataset and then filtered the sampled data based on keywords for the Heart, Medication, Obesity, Smoking, and Cancer subsets. Finally, we utilized the question templates provided by emrQA, along with a few more templates annotated by experts, to create the question-answer pairs.
**Heart Disease Test set** The Heart disease test subset contains 1.5k annotated questions and answers related to heart disease and issues such as heart attack, heart failure, coronary artery disease Ischemia, and more.
**Medication Test set** The medication subset contains around 1.5k samples. It includes questions and answers related to medical prescriptions, drug names, drug dosages, and the history and status of medications; it also contains questions related to the number of doses, side effects of medications, reasons for medications and more.
**Obesity Test set** Similar to the above datasets, we extracted 1k questions and answers related to obesity, such as comorbidities associated with the current issue, hypertension, Sleep Apnea, Diabetes, and more. The test set's diversity helps to quantify the degree to which current systems overfit the original test set.
**Smoking Test set** For the smoking subset, 2k questions were selected related to the current status & history of smoking, quantity & type of smoking, issues and more.
**Cancer Test set** The subset includes 1.5k questions and answers related to different cancers, such as Breast Cancer, Colon cancer, Rectal Cancer, Thyroid Cancer, and more. Moreover, the set also contains samples mentioning the Cancer cause, side effects, medications for current cancer, stage of cancer and more.
### Preprocessing & Quality Checks
The following procedures were used to sanitize the raw data and to ensure that all questions could be answered using the textual input. We lowercase the text in the subset test sets, remove punctuation marks and tokenize the text using whitespace. We set the minimum length to 100 and split long documents into sub-documents. We also converted a few abbreviations to their complete forms, such as y.o. and y/o to "year old", and yr, yr's and yrs to "years". Additional data cleaning procedures were performed to ensure that the information gathered from the question, answer, and evidence triples met the data quality objectives. We generated 25k question-answer pairs from the selected medical records; 15k samples were discarded due to low data quality, noise and wrong answers. After re-validation by the data team, 7.5k final samples were retained for the testbed at the time of writing. However, we plan
to expand the corpus by adding more samples and model results. The full paper and the updated benchmark are available at github.com/openlifescience-ai/clift
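A minimal sketch of these sanitisation steps is given below (our own illustration; the chunk size for splitting long documents and the exact abbreviation list are assumptions beyond what is stated above).

```python
import re
import string

# Longest abbreviations first so that e.g. "yr's" is handled before "yr".
ABBREVIATIONS = {"y.o.": "year old", "y/o": "year old",
                 "yr's": "years", "yrs": "years", "yr": "years"}
MIN_TOKENS, CHUNK_TOKENS = 100, 384   # CHUNK_TOKENS is an assumption; only the minimum is stated above

def preprocess(note: str):
    """Lower-case, expand abbreviations, strip punctuation, whitespace-tokenize,
    drop short notes and split long notes into sub-documents."""
    text = note.lower()
    for abbr, full in ABBREVIATIONS.items():
        text = re.sub(r"(?<!\w)" + re.escape(abbr) + r"(?!\w)", full, text)
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = text.split()
    if len(tokens) < MIN_TOKENS:      # our reading of the minimum-length filter
        return []
    return [" ".join(tokens[i:i + CHUNK_TOKENS])
            for i in range(0, len(tokens), CHUNK_TOKENS)]
```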
### Dataset Statistics & Analysis
This dataset covers different diseases and medical conditions based on patient information.
Table 2 in the Appendix summarises the general statistics of the preprocessed data. Moreover, Figure 7 in the Appendix depicts an additional useful statistic: the number of unique tokens in the dataset. The performance of the models is influenced by the size of the vocabulary, which is a good indicator of the linguistic and domain complexity associated with a text corpus. The Obesity and Heart subsets were observed to include longer questions and a broader vocabulary than the others. As a result, the questions in these subsets are more complex than the rest.
In addition, We use the two difficulty metrics provided by [19] to compare the original emrQA test set to our five new test sets. Figure 2 shows the Syntactic divergence histograms for the emrQA & proposed datasets.
## 5 Experimental Setup
We followed the random seed of [27] and split the combined Medication and Relation datasets based on the clinical texts, obtaining train, dev, and test sets with the ratio 7:1:2. Table 7 shows the descriptive statistics of the emrQA dataset. Similar to [19], we used the F1 score (F1) as our evaluation metric. The hyperparameters are chosen based on [4]. We fine-tune for four epochs with a batch size of 8, a learning rate of 2e-5, and a maximum answer length of 30, and we optimize the model's parameters using the AdamW optimizer.
We implemented these models using Pytorch [16] and obtained pre-trained checkpoints from the HuggingFace library [25]. All models are fine-tuned on 4 GPUs with NVIDIA Quadro RTX 8000 and 256 GB of RAM.
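A minimal sketch of this fine-tuning setup with the HuggingFace Trainer is shown below; the checkpoint name is one of the public baseline models and is only illustrative, and building the tokenised SQuAD-style features from emrQA (as well as the max-answer-length post-processing) is assumed to happen elsewhere.

```python
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

def finetune(train_dataset, eval_dataset,
             model_name="dmis-lab/biobert-base-cased-v1.1"):
    """Fine-tune an extractive-QA checkpoint with the hyperparameters reported above.
    `train_dataset` / `eval_dataset` are tokenised SQuAD-style datasets
    (question + clinical note, with start/end positions), prepared upstream."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)   # needed for pre/post-processing
    model = AutoModelForQuestionAnswering.from_pretrained(model_name)
    args = TrainingArguments(output_dir="emrqa-finetuned",
                             num_train_epochs=4,
                             per_device_train_batch_size=8,
                             learning_rate=2e-5)
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_dataset, eval_dataset=eval_dataset)
    trainer.train()
    return model, tokenizer
```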
### Baseline Models
In our baseline studies, we consider 12 existing models that have shown strong performance in biomedical NLP tasks. These are built on several pre-trained language models employing the Transformer architecture [24]. Notably, BioBERT [10] and its variants BioClinicalBert and BioBert-squad-L are pre-trained on large-scale biomedical corpora. DistilBert is a distilled version of the BERT base model; we used two different versions of it, DistilBert-squad-B and BERT-ClinicalQA, fine-tuned on SQuAD and ClinicalQA, respectively.
To better utilize the knowledge of MIMIC and PubMed documents for QA tasks, we fine-tuned the BlueBert Models, including BlueBert-MIMIC and BlueBert-PubMed. Moreover, We also used the Roberta architecture models, including Biomed-Roberta, and Roberta-squad-B. In addition, we fine-tuned the PubMedBERT, Bert-Large, and Electra-Squad-L on the emrQA dataset. Section D in the Appendix describes the models in detail.
We finetuned the models on the training dataset of emrQA and evaluated on the proposed test sets.
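For reference, the token-level F1 used for evaluation can be computed as in the simplified sketch below (our own illustration; the official SQuAD-style metric additionally strips articles and punctuation before comparing tokens).

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """SQuAD-style token-level F1 between a predicted and a gold answer span."""
    pred_tokens, gold_tokens = prediction.lower().split(), gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("40 mg of lisinopril daily", "lisinopril 40 mg"))  # partial overlap example
```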
Figure 2: Syntactic divergence histograms for the emrQA & proposed datasets’ question and response phrases.
## 6 Results & Discussion
The objective of the proposed testbed is to test the performance of a clinical model when evaluated on data relevant to the same Question-Answering task but from multiple data sources/hospitals and environments. We ran a battery of experiments and evaluated twelve models on the proposed datasets. In addition to investigating state-of-the-art clinical-domain models for Question Answering, we examine the robustness of the models to natural distribution shift relative to the emrQA dataset. More information about the baseline models is provided in Section 5.1 and Appendix D.
### Comparison of Transformer Based clinical Models on emrQA & Cliff
Under the emrQA test set, PubMedBERT performed better than other models. Moreover, Biomedical-Roberta gave the second-best results on the same. The Roberta-squad-Base model outperformed other models on the Medication, Obesity, and Heart test set. However, Bert-Large BlueBert-MIMIC gave the best accuracy on Cancer and Smoke datasets.
To examine the QA model's result closely, We also performed interpretability experiments and provided the results in Appendix B
### Are Clinical Models fine-tuned on Emrqa robust to natural distribution shifts?
To test the hypothesis that performance on the original test set is better than on the new test sets, we compare the F1 scores of all models trained on emrQA with their F1 scores on each of our additional test sets. Figure 1 shows that, when applied to the proposed test datasets, all models suffer a performance loss ranging from 43.82% to 73.07%. Table 3 in the Appendix shows the performance of different models on the medication test set. BERT-ClinicalQA drops around 70% F1 when evaluated on the medication dataset, compared to drops of about 61%, 55%, 75%, and 53% F1 when the model is evaluated on the heart, cancer, obesity, and smoke datasets, respectively.
PubMedBERT fine-tuned on emrQA shows the worst performance on the Heart test set, with a gap of 70.37% F1. Moreover, Biomed-Roberta shows a large gap of 64.22% on the Cancer dataset, and DistilBert-squad-B and BioBert-B fail to generalize well on the Obesity and Smoke datasets, dropping by 73.07 and 54.98 F1 points, respectively. Roberta-squad-B outperformed all other models on 3 out of 5 datasets, namely Medication, Heart, and Obesity, with F1 scores of 24.99%, 29.90%, and 19.88%, respectively.
These results suggest that the models are sensitive to distribution shifts. Real-world deployments of health machine learning systems may encounter these failure situations. Therefore, depending on the entity of interest, performance for downstream applications would vary greatly. Tables 5, 6, 4, and 7 in the Appendix show the results of different models on the cancer, obesity, heart, and smoke datasets, respectively.
These results point to the need for further study of handling distribution shifts in clinical datasets. In addition, future research should look at the specific model training parameters and the size, quality, and quantity of the datasets that contribute to these variations.
## 7 Conclusion
Robust machine learning aims to create techniques that consistently work in different environments. To this end, we propose the CLIFT testbed to evaluate the performance of clinical deep learning models under the distributional shift of different test sets. We evaluate several algorithms trained on the emrQA dataset on this testbed and find that their performance degrades when applied to the new test sets, despite impressive results on the original test set. Moreover, in this study, we performed an empirical analysis on emrQA and analyzed the proposed test sets with difficulty metrics such as answer diversity and syntactic divergence. Our extensive testbed will help indicate progress towards trustworthy machine learning in healthcare.
## Acknowledgments
We thank Samsudeen Mugaideen for building essential Front-end tools for data tagging and validation. We greatly appreciate the support of our lovely team members with medical backgrounds Srishti Awasthi, Amanpreet Kaur, Dipali Mishra, Chitvan Gautam and Nidhi, who played a crucial role in validating the tagged data, supported by Geetha Eswaran and Prashant Shukla. We also want to thank anonymous reviewers & Malaikannan Sankarasubbu for their valuable comments & feedback.
|
2310.03355 | A Note on the LogRank Conjecture in Communication Complexity | The LogRank conjecture of Lov\'asz and Saks from 1988 is the most famous open
problem in the communication complexity theory. The statement is as follows:
Suppose that two players intend to compute a Boolean function $f(x,y)$ when $x$
is known for the first and $y$ for the second player, and they may send and
receive messages encoded with bits, then they can compute $f(x,y)$ with
exchanging $(\log \rank (M_f))^c $ bits, where $M_f$ is a Boolean matrix,
determined by function $f$. The problem is widely open and very popular, and it
has resisted numerous attacks in the last 35 years. The best upper bound is
still exponential in the bound of the conjecture. Unfortunately, we cannot
prove the conjecture, but we present a communication protocol with $(\log \rank
(M_f))^c $ bits, which computes a -- somewhat -- related quantity to $f(x,y)$.
The relation is characterized by a representation of low-degree, multi-linear
polynomials modulo composite numbers. This result of ours may help to settle
this long-time open conjecture. | Vince Grolmusz | 2023-10-05T07:20:21Z | http://arxiv.org/abs/2310.03355v2 | # A Note on the LogRank Conjecture in Communication Complexity
###### Abstract
The LogRank conjecture of Lovasz and Saks from 1988 is the most famous open problem in the communication complexity theory. The statement is as follows: Suppose that two players intend to compute a Boolean function \(f(x,y)\) when \(x\) is known for the first and \(y\) for the second player, and they may send and receive messages encoded with bits, then they can compute \(f(x,y)\) with exchanging \((\log\operatorname{rank}(M_{f}))^{c}\) bits, where \(M_{f}\) is a Boolean matrix, determined by function \(f\). The problem is widely open and very popular, and it has resisted numerous attacks in the last 35 years. The best upper bound is still exponential in the bound of the conjecture. Unfortunately, we cannot prove the conjecture, but we present a communication protocol with \((\log\operatorname{rank}(M_{f}))^{c}\) bits, which computes a - somewhat - related quantity to \(f(x,y)\). The relation is characterized by a representation of low-degree, multi-linear polynomials modulo composite numbers. This result of ours may help to settle this long-time open conjecture.
## 1 Introduction
### Communication Games
Two-player communication games were first defined by Yao in 1979 [1]: There are two players, Alice and Bob, and a Boolean function
\[f:\{0,1\}^{N}\times\{0,1\}^{N}\to\{0,1\},\]
and two Boolean sequences, \(x,y\in\{0,1\}^{N}\). Alice knows the value of \(x\), Bob the value of \(y\), and they want to compute collaboratively the value of \(f(x,y)\). The local computational power of the players is unlimited, and the cost of the collaborative computation is the number of bits exchanged between the parties. Function \(f\) is computed by the players if one of them knows \(f(x,y)\), and the other knows that the first player knows it [2].
Clearly, any \(f\) can be computed by \(N\) bits of communication: Alice tells \(x\) to Bob at cost \(N\), and Bob computes \(f(x,y)\) for free. This is the trivial protocol for computing \(f\).
We say that the players follow a _communication protocol_ for computing \(f\); the protocol prescribes, in each step, what message a player should send to the other, given the message history and the player's input. The cost of a protocol computing \(f(x,y)\) is the maximum number of bits communicated, taken over all inputs \(x\) and \(y\).
The communication complexity of a Boolean function \(f\) is the minimum cost of a protocol that computes \(f\). The communication complexity of \(f\) is denoted by \(\kappa(f)\) [2]. For a more formal introduction and examples, we refer to [2; 3].
Communication games and communication complexity have become a central field of theoretical computer science; hundreds of publications (e.g., [4; 5; 6; 7; 8]) and numerous books [3; 9; 10] have appeared on the topic. For example, difficult-to-handle areas, such as Boolean circuit complexity, have applied communication games to obtain upper and lower bounds [11; 12; 13; 14; 15; 16; 17; 10; 18].
One of the main questions is finding general upper- and lower bounds for the communication complexity of \(f\). To describe the bounds, we need to define the communication matrix of the function \(f\):
**Definition 1**.: _The communication matrix of \(f:\{0,1\}^{N}\times\{0,1\}^{N}\to\{0,1\}\) is a \(2^{N}\times 2^{N}\) 0-1 matrix \(M_{f}\), whose rows correspond to the different values \(x\in\{0,1\}^{N}\), whose columns correspond to the different values \(y\in\{0,1\}^{N}\), and whose entry in row \(x\) and column \(y\) is the value \(f(x,y)\in\{0,1\}\). Let \(\operatorname{rank}(M_{f})\) denote the matrix rank over the rational field. Let \(\log\) denote the logarithm base 2, and \(\ln\) the natural logarithm._
A general lower bound, which implies that for most natural communication problems (e.g., the identity function or set disjointness) the trivial protocol is optimal, was proved by Mehlhorn and Schmidt in 1982:
**Theorem 1** ([19]).: _If \(f\) is not the identically \(0\) function, then_
\[\kappa(f)\geq\log\operatorname{rank}(M_{f}).\]
The most famous open problem in communication complexity is the LogRank conjecture of Lovasz and Saks from 1988:
**Conjecture 2** ([20]).: _There exists a polynomial \(P\) such that for all \(f\not\equiv 0\),_
\[\kappa(f)\leq P(\log\operatorname{rank}(M_{f})).\]
The 35-year-old conjecture is widely open today, inspiring numerous theorems and approaches even in the last few years [21; 22; 23; 24; 25; 26; 27].
The best published upper bound of Lovett [8] (for a large enough \(\operatorname{rank}(M_{f})\)) is still exponentially far from the conjectured upper bound:
\[\kappa(f)=O\left(\sqrt{\operatorname{rank}(M_{f})}\log\operatorname{rank}(M_{ f})\right).\]
We note that the LogRank conjecture can be formulated with the terms of graph theory as a relation between the rank of the adjacency matrix and the chromatic number of a graph [20; 8], without even mentioning communication games.
### Polynomial representations modulo composite numbers
We will need some definitions and theorems from [28; 29]:
Let \(g:\{0,1\}^{n}\to\{0,1\}\) be a Boolean function and let \(m\) be a positive integer. _Barrington_, _Beigel_ and _Rudich_[30] gave the following definition:
**Definition 2**.: _The polynomial \(P\) with integer coefficients weakly represents Boolean function \(g\) modulo \(m\) if there exists an \(S\subset\{0,1,2,...,m-1\}\) such that for all \(x\in\{0,1\}^{n}\),_
\[g(x)=0\Longleftrightarrow\big{(}P(x)\bmod m\big{)}\in S.\]
_Here \((a\bmod m)\) denotes the smallest non-negative \(b\equiv a\bmod m\)._
We are interested in the smallest degree polynomials representing \(g\). Since \(g\) is Boolean, we may assume that \(P\) is multilinear (since \(x_{i}^{2}=x_{i}\) over \(\{0,1\}^{n}\)).
Let \(\operatorname{OR}_{n}:\{0,1\}^{n}\to\{0,1\}\) denote the \(n\)-variable OR-function:
\[\operatorname{OR}_{n}(x_{1},x_{2},\ldots,x_{n})=\begin{cases}0,&\text{ if }x_{1}=x_{2}=\cdots=x_{n}=0\\ 1,&\text{ otherwise.}\end{cases}\]
If polynomial \(P\) weakly represents \(\operatorname{OR}_{n}\) modulo a prime \(p\) then we may assume that for \(x\in\{0,1\}^{n}\),
\[P(x)=0\bmod p\iff x=(0,0,...,0).\]
Then
\[1-P^{p-1}(1-x_{1},1-x_{2},...,1-x_{n})\]
is clearly the \(n\)-variable AND function with a unique multi-linear form
\[x_{1}x_{2}x_{3}...x_{n}.\]
Therefore, the degree of \(P\) is at least \(n/(p-1)\).
_Tardos_ and _Barrington_[31] showed that the same conclusion holds if \(p\) is a prime power.
However, _Barrington, Beigel_ and _Rudich_[30] proved that the conclusion fails for composite moduli with at least two prime divisors:
**Theorem 3** ([30]).: _There exists an explicitly constructible polynomial \(P\) of degree \(O(n^{1/r})\) which weakly represents \(OR_{n}\) modulo \(m=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}...p_{r}^{\alpha_{r}}\), where the \(p_{i}\)'s are distinct primes._
An explicit example of such a non-trivial polynomial mod 6 is given in the Appendix.
We have applied the polynomial of Theorem 3 in [32] for falsifying a long-standing conjecture for the size of set systems with restricted intersections and also for giving new explicit Ramsey-graph constructions, among other applications for set systems and codes described in [33, 34, 35, 36, 37, 38].
In [32] we have reproduced a short proof of Theorem 3 of [30], and we have proved the following
**Corollary 4**.: _Let \(m=p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}}...p_{r}^{\alpha_{r}}\). Then there exists an explicitly constructible polynomial \(P^{\prime}\) with \(n\) variables and of degree \(O(n^{1/r})\) which is equal to 0 at \(x=(0,0,\ldots,0)\in\{0,1\}^{n}\), is nonzero mod \(m\) for all other \(x\in\{0,1\}^{n}\), and satisfies, for all \(x\in\{0,1\}^{n}\) and all \(i\in\{1,\ldots,r\}\), that \(P^{\prime}(x)\equiv 0\pmod{p_{i}^{\alpha_{i}}}\) or \(P^{\prime}(x)\equiv 1\pmod{p_{i}^{\alpha_{i}}}\)._
Using the results of [32], we have found a remarkable application for elementary symmetric polynomials in [28].
We will need the following definition from [29] with small changes to describe the result:
**Definition 3**.: _Let \(m\) be a composite number with prime-factorization \(m=p_{1}^{e_{1}}p_{2}^{e_{2}}\cdots p_{\ell}^{e_{\ell}}\). Let \(\mathbb{Z}_{m}\) denote the ring of integers modulo \(m\) and \(\mathbb{Z}\) the integers. Let \(f\) be a multi-linear polynomial of \(n\) variables over the integers:_
\[f(x_{1},x_{2},\ldots,x_{n})=\sum_{\alpha\in\{0,1\}^{n}}a_{\alpha}x^{\alpha},\]
_where \(a_{\alpha}\in\mathbb{Z}\), \(x^{\alpha}=\prod_{i=1}^{n}x_{i}^{\alpha_{i}}\). Then we say that_
\[g(x_{1},x_{2},\ldots,x_{n})=\sum_{\alpha\in\{0,1\}^{n}}b_{\alpha}x^{\alpha},\]
_is an_
* alternative representation _of_ \(f\) _modulo_ \(m\)_, if_ \[\forall\alpha\in\{0,1\}^{n}\ \ \exists j\in\{1,2,\ldots,\ell\}:\] \[a_{\alpha}\equiv b_{\alpha}\pmod{p_{j}^{e_{j}}};\]
* 0-a-strong representation _of_ \(f\) _modulo_ \(m\)_, if it is an alternative representation, and furthermore, if for some_ \(i\)_,_ \(a_{\alpha}\not\equiv b_{\alpha}\pmod{p_{i}^{e_{i}}},\) _then_ \(b_{\alpha}\equiv 0\pmod{p_{i}^{e_{i}}};\)__
* 1-a-strong representation _of_ \(f\) _modulo_ \(m\)_, if it is an alternative representation, and furthermore, if for some_ \(i\)_,_ \(a_{\alpha}\not\equiv b_{\alpha}\pmod{p_{i}^{e_{i}}},\) _then_ \(a_{\alpha}\equiv 0\pmod{m};\)__
That is, for modulus 6: in the alternative representation, each coefficient is correct either modulo 2 or modulo 3, but not necessarily both.
In the 0-a-strong representation, the 0 coefficients are always correct both modulo 2 and 3; the non-zeroes are allowed to be correct either modulo 2 or 3, and if they are not correct modulo one of them, say 2, then they should be 0 mod 2. Consequently, the coefficient 1 can be represented by 1, 3, or 4, but nothing else.
In the 1-a-strong representation, the non-zero coefficients of \(f\) are correct for both moduli in \(g\), but the zero coefficients of \(f\) can be non-zero either modulo 2 or modulo 3 in \(g\), but not both.
**Remark 5** ([29]).: _The 1-a-strong representations of polynomial \(f\) can be written in the form modulo \(m\):_
\[f+p_{1}^{e_{1}}g_{1}+p_{2}^{e_{2}}g_{2}+\dots+p_{\ell}^{e_{\ell}}g_{\ell},\]
_where the \(g_{i}\) have no monomials in common with each other, nor with \(f\)._
**Example 6** ([29]).: _Let \(m=6\), and let \(f(x_{1},x_{2},x_{3})=x_{1}x_{2}+x_{2}x_{3}+x_{1}x_{3}\), then \(g(x_{1},x_{2},x_{3})=3x_{1}x_{2}+4x_{2}x_{3}+x_{1}x_{3}\) is a 0-a-strong representation of \(f\) modulo 6; \(g(x_{1},x_{2},x_{3})=x_{1}x_{2}+x_{2}x_{3}+x_{1}x_{3}+3x_{1}^{2}+4x_{2}\) is a 1-a-strong representation of \(f\) modulo 6; \(g(x_{1},x_{2},x_{3})=3x_{1}x_{2}+4x_{2}x_{3}+x_{1}x_{3}+3x_{1}^{2}+4x_{2}\) is an alternative representation modulo 6._
**Remark 7**.: _Clearly, the 1-a-strong representation of \(f\) is not unique. Suppose that_
* _all coefficients of_ \(f\) _and_ \(f^{\prime}\) _are either 1 or -1 mod_ \(m\)_, and_
* \(g\) _is a 1-a-strong representation of_ \(f\) _and also of_ \(f^{\prime}\)_, where_ \(f,f^{\prime}\) _and_ \(g\) _are multilinear, homogeneous degree-_\(d\) _polynomials, that is, every monomials of_ \(f,f^{\prime}\) _and_ \(g\) _are degree_ \(d\)_,_
_then \(f=f^{\prime}\) modulo \(m\). Clearly, one can set the monomials of \(f\) or \(f^{\prime}\) to 1 one by one, and the \(p_{i}\)-multiplied monomials need to be 0 in \(g\) because of homogeneity._
We have proved in [28] and stated in this form in [29] the following
**Theorem 8** ([28]).: _Let the prime factorization of positive integer \(m\) be \(m=p_{1}^{e_{1}}p_{2}^{e_{2}}\cdots p_{\ell}^{e_{\ell}}\), where \(\ell>1\). Then a degree-2 0-a-strong representation of the second elementary symmetric polynomial_
\[S_{n}^{2}(x,y)=\sum_{\stackrel{{ i,j\in\{1,2,\dots,n\}}}{{i\neq j }}}x_{i}y_{j}, \tag{1}\]
_modulo \(m\):_
\[\sum_{\stackrel{{ i,j\in\{1,2,\dots,n\}}}{{i\neq j}}}a_{ij}x_{i}y _{j} \tag{2}\]
_can be computed as the following product with coefficients from \(\mathbb{Z}_{m}\):_
\[\sum_{j=1}^{t}\left(\sum_{i=1}^{n}b_{ij}^{\prime}x_{i}\right)\left(\sum_{i=1} ^{n}c_{ij}^{\prime}y_{i}\right)\]
_where \(t=\exp(O(\sqrt[\ell]{\log n(\log\log n)^{\ell-1}})).\) Moreover, this representation satisfies that \(\forall i\neq j:a_{ij}=a_{ji}.\)_
Now we need the main theorem from [29]:
**Theorem 9** ([29], Theorem 6).: _Let \(m=p_{1}^{e_{1}}p_{2}^{e_{2}}\cdots p_{\ell}^{e_{\ell}}\), where \(\ell>1\), and \(p_{1},p_{2},\dots,p_{\ell}\) are primes. Then a degree-2, 1-a-strong representation of the dot-product \(f(x_{1},x_{2},\dots,x_{n},y_{1},y_{2},\dots,y_{n})=\sum_{i=1}^{n}x_{i}y_{i}\) can be computed with \(t+1=\exp(O(\sqrt[\ell]{\log n(\log\log n)^{\ell-1}}))\) multiplications of the form_
\[\sum_{j=1}^{t+1}\left(\sum_{i=1}^{n}b_{ij}x_{i}\right)\left(\sum_{i=1}^{n}c_{ ij}y_{i}\right), \tag{3}\]
_where all coefficients are integers._
The proof is immediate by subtracting the 0-a-strong representation of \(S_{n}^{2}(x,y)\) from \((x_{1}+x_{2}+\dots+x_{n})(y_{1}+y_{2}+\dots+y_{n})\)[29].
An explicit example for the \(b_{ij}\) and \(c_{ij}\) coefficients, with \(m=6,n=16,t+1=13\) can be found in the Appendix.
## 2 The LogRank Protocol
Here, we describe our protocol. Let \(m=p_{1}p_{2}\ldots p_{\ell}\) be the product of the first \(\ell\) primes. Then \(m=e^{(1+o(1))\ell\ln\ell}\), by the estimation of the first Chebyshev number [39]. Let \(\ell=\lfloor\log\log n\rfloor\). Then \(m\) can be given with less than \((\log\log n)^{c}\) bits, with a \(c>0\).
The value \(t\) from Theorem 9 satisfies
\[t\leq\exp(O((\log n)^{1/\ell}\log\log n))\leq\exp(c_{1}\log\log n)\leq(\log n)^ {c_{1}},\]
with a positive \(c_{1}\).
Now suppose that we have a Boolean function \(F:\{0,1\}^{N}\times\{0,1\}^{N}\to\{0,1\}\), with a 0-1 \(2^{N}\times 2^{N}\) communication matrix \(M_{F}\) with \(\operatorname{rank}(M_{F})=n\) over the rationals, \(n\leq 2^{N}\). By the definition of the rank, one may choose \(n\) linearly independent 0-1 columns of \(M_{F}\), let the \(2^{N}\times n\) 0-1 matrix \(X\) contain these columns. Then there exists an \(n\times 2^{N}\) rational matrix \(Y\), such that \(M_{F}=XY\).
Note that each entry of \(M_{F}\) is a dot product of a length-\(n\) row of \(X\) and a length-\(n\) column of \(Y\). Now, by Theorem 9, the 1-a-strong representation of this dot product can be computed modulo \(m\) as a sum (3):
\[\sum_{j=1}^{t+1}\left(\sum_{i=1}^{n}b_{ij}x_{i}\right)\left(\sum_{i=1}^{n}c_{ ij}y_{i}\right),\]
where all coefficients are integers modulo \(m\). Now, Alice can compute privately for \(j=1,2,\ldots t+1\) the mod \(m\) values of sums
\[\left(\sum_{i=1}^{n}b_{ij}x_{i}\right)\]
and can communicate each with \((\log\log n)^{c}\) bits. Since \(t<(\log n)^{c_{1}}\), the total communication of Alice is polylogarithmic in the rank \(n\).
Now, Bob, knowing the (rational) values of \(y_{1},y_{2},\ldots,y_{n}\) can privately compute the sum (3), without further communication.
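The communication pattern can be sketched in a few lines, assuming the integer coefficient matrices \((b_{ij})\) and \((c_{ij})\) of Theorem 9 are given as inputs (their explicit construction is not reproduced here), that \(x\) is Alice's 0-1 row of \(X\), and that \(y\) is Bob's column of the integer matrix \(kY\) (cf. Remark 10 below). This is only an illustration of the message flow, not an optimized implementation.

```python
def alice_messages(x, b, m):
    """Alice privately evaluates her t+1 linear forms modulo m and sends the residues."""
    n, t1 = len(x), len(b[0])
    return [sum(b[i][j] * x[i] for i in range(n)) % m for j in range(t1)]

def bob_evaluates(messages, y, c, m):
    """Bob combines Alice's residues with his own linear forms; no reply is needed.
    The result is the 1-a-strong representation of the dot product, evaluated mod m."""
    n, t1 = len(y), len(c[0])
    total = 0
    for j in range(t1):
        total += messages[j] * sum(c[i][j] * y[i] for i in range(n))
    return total % m

# Each residue takes O(log m) = (log log n)^{O(1)} bits and there are
# t + 1 = (log n)^{O(1)} of them, so Alice's total communication is
# polylogarithmic in the rank n.
```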
**Remark 10**.: _For a given \(M_{F}\), the players can agree that they will compute \(kF(x,y)\) instead of \(F(x,y)\); then the matrix \(Y\) can be replaced by the integer matrix \(kY\). Note also that this convention will not increase the communication, since Bob does not communicate linear combinations of his variables._
## 3 Remarks and Conclusion
In the previous section we presented a protocol which, with \((\log\operatorname{rank}(M_{F}))^{c_{2}}\) bits of communication for a positive constant \(c_{2}\), computes the value obtained by substituting the players' inputs into the 1-a-strong representation \(g\) modulo \(m=p_{1}p_{2}\ldots p_{\ell}\) of the dot-product polynomial \(f=\sum_{i=1}^{n}x_{i}y_{i}\):
\[g(x,y)=\sum_{i=1}^{n}x_{i}y_{i}+p_{1}g_{1}(x,y)+p_{2}g_{2}(x,y)+\ldots+p_{\ell }g_{\ell}(x,y). \tag{4}\]
Note that the "surplus" terms in (4) are zero modulo at least one of the prime divisors of \(m\). Note also that the dot-product and the \(g_{i}\) polynomials have no monomial (i.e., \(x_{i}y_{j}\)) in common.
**Remark 11**.: _We do not know how to eliminate the surplus terms with the \(g_{i}\) polynomials from (4) with log-rank-bounded additional communication. One can imagine numerous possible approaches, for example, substituting a prime \(p_{i},1\leq i\leq\ell\) for all or just a subset of the 1's in the \(x_{i}\)'s, and repeating the protocol; or repeating the protocol above for 0-1 rows instead of 0-1 columns, or repeating the protocol above for overlapping subsets of indices \(i=1,2,\ldots,n\) several times._
**Remark 12**.: _As we described in [29], Theorem 9 has numerous applications in representing the matrix product with very few multiplications or the hyperdense coding of numbers, vectors or matrices. We also note that our definition and use of the term "hyperdense coding" [40] precede the quantum-computational use of an unrelated but identically named term of [41; 42] by more than 9 years._
## Funding
VG was partially funded by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the ELTE TKP 2021-NKTA-62 funding scheme.
## Competing interests
The author declares no competing interests.
|
2303.14711 | Unsupervised detection of small hyperreflective features in ultrahigh
resolution optical coherence tomography | Recent advances in optical coherence tomography such as the development of
high speed ultrahigh resolution scanners and corresponding signal processing
techniques may reveal new potential biomarkers in retinal diseases. Newly
visible features are, for example, small hyperreflective specks in age-related
macular degeneration. Identifying these new markers is crucial to investigate
potential association with disease progression and treatment outcomes.
Therefore, it is necessary to reliably detect these features in 3D volumetric
scans. Because manual labeling of entire volumes is infeasible a need for
automatic detection arises. Labeled datasets are often not publicly available
and there are usually large variations in scan protocols and scanner types.
Thus, this work focuses on an unsupervised approach that is based on local
peak-detection and random walker segmentation to detect small features on each
B-scan of the volume. | Marcel Reimann, Jungeun Won, Hiroyuki Takahashi, Antonio Yaghy, Yunchan Hwang, Stefan Ploner, Junhong Lin, Jessica Girgis, Kenneth Lam, Siyu Chen, Nadia K. Waheed, Andreas Maier, James G. Fujimoto | 2023-03-26T12:45:42Z | http://arxiv.org/abs/2303.14711v1 | Unsupervised detection of small hyperreflective features in ultrahigh resolution optical coherence tomography
###### Abstract
Recent advances in optical coherence tomography such as the development of high speed ultrahigh resolution scanners and corresponding signal processing techniques may reveal new potential biomarkers in retinal diseases. Newly visible features are, for example, small hyperreflective specks in age-related macular degeneration. Identifying these new markers is crucial to investigate potential association with disease progression and treatment outcomes. Therefore, it is necessary to reliably detect these features in 3D volumetric scans. Because manual labeling of entire volumes is infeasible a need for automatic detection arises. Labeled datasets are often not publicly available and there are usually large variations in scan protocols and scanner types. Thus, this work focuses on an unsupervised approach that is based on local peak-detection and random walker segmentation to detect small features on each B-scan of the volume.
## 1 Introduction
Age-related macular degeneration (AMD) ranks fourth on the leading causes of vision loss worldwide [1]. To develop new potential treatments, trial endpoints need to be defined based on early markers of disease progression. New systems such as the ultrahigh resolution spectral domain OCT (UHR SD-OCT) enable the detection of smaller hyperreflective features in the human retina, such as specks (HRS), which have been suggested as indicators for cellular activities associated with visual dysfunction in AMD [1].
For the detection of conventional, but larger, hyperreflective foci (HRF) multiple algorithms already exist [2, 3, 4, 5]. They can be divided into algorithms that work on volumes, B-scans or on enface projections of specific retinal layers. Most approaches focus on intensity based thresholds applied to 8-bit images as they are available in commercial OCT instruments. More recent methods utilize optical attenuation coefficients or deep learning. However, most of these algorithms do not utilize the full dynamic range of the OCT signal, suffer from limited resolution scanners, or do not consider the limited amount of available labeled data. [2, 3, 4, 5]
To overcome these issues, we propose an unsupervised algorithm that is able to detect features in anisotropic high-definition volumetric OCT datasets by applying a
combination of out-of-the-box image analysis tools. In addition, we show that the algorithm is applicable to different scan patterns, including anisotropic high-definition scans and isotropic motion corrected data from multiple merged scans.
## 2 Materials and Methods
### Data acquisition and preprocessing
Studies were performed under an IRB approved protocol at MIT and the NEEC. Written informed consent was obtained. Data was collected by a UHR SD-OCT prototype with an axial resolution of \(\sim\)2.7 \(\upmu\)m. The anisotropic high-definition volumetric scan was acquired over a \(9\times 6\) mm field with an A-scan spacing of 5 \(\upmu\)m and a B-scan spacing of 25 \(\upmu\)m. No B-scan or volumetric averaging was performed and the data was stored using 32 bit floating point values. After resampling, the pixel size is 0.89 \(\upmu\)m in axial direction and the B-scan dimensions are \(1600\times 1800\) pixels.
For evaluation of the algorithm, a total of 49 B-scans from 10 different eyes diagnosed with early (3) and intermediate (7) AMD were selected based on potential appearances of HRS/HRF. Three expert readers independently labeled HRF and HRS, and their results were combined using STAPLE [6] with a threshold of \(\frac{2}{3}\) and used as ground truth. Of all annotated feature instances, 32.2% were labeled by all readers. After STAPLE was applied, 49.24% of all previously labeled feature instances remained as ground truth.
To test the applicability of the algorithm on different imaging protocols, on the same device, 4 isotropic scans with 500\(\times\)500 A-scans were acquired over a 6\(\times\)6 mm field with alternating horizontal and vertical B-scan directions. The transverse and (resampled) axial pixel spacings were 12 \(\upmu\)m and 1.78 \(\upmu\)m, respectively. The scans were motion corrected and merged using the approach of Ploner et al. [7]
### Algorithm description
The algorithm is separately applied to each B-scan of the volumetric scan. An essential requirement is the availability of retinal layer segmentation for the posterior of the outer plexiform layer (OPL), the external limiting membrane (ELM) and the inner segment/outer segment border (IS/OS), as detecting HRF and HRS in the outer nuclear layer and external to the ELM was our priority. For simplification, we refer to this combined region as ONL. As a first step, the layer boundaries are smoothed and the ONL mask eroded to account for possible inaccuracies in the provided layer segmentation. Next, varying illumination (OCT signal) within the ONL was equalized by dividing by a bias field. The field was computed using the axial mean projection of the ONL without considering zero values, and then smoothed with a Gaussian kernel with a standard deviation of 30 pixels, following the implementation by Kraus et al. [8] To prevent false positives on the highly reflective ELM, each pixel value in the area of 5 pixels above and below was adjusted so that their axial mean is just below the axial mean of the ONL. For the detection of high intensity features, we utilize local maxima detection. Very isolated maxima are deleted using a hit-or-miss transform with a single pixel structuring element to prevent false positives. After defining the detected local maxima as foreground pixels, the grayscale values of the B-scan are re-scaled to a range of [-1,1], based on the minimum and maximum values of the B-scan, and background pixels are defined to be below a threshold of -0.9. Random walker segmentation is then applied to expand the regions around the local maxima [9].
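The following sketch outlines these per-B-scan steps with off-the-shelf tools (NumPy, SciPy, scikit-image). It is a simplified illustration rather than the exact implementation: the layer segmentation and the ELM handling are reduced to a precomputed ONL mask, and seed-detection parameters such as `min_distance` are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.feature import peak_local_max
from skimage.segmentation import random_walker

def detect_candidates(bscan: np.ndarray, onl_mask: np.ndarray) -> np.ndarray:
    """bscan: 2D float B-scan (axial x transverse); onl_mask: boolean ONL mask."""
    # 1) Illumination correction: divide by a smoothed axial mean of the ONL
    #    (zero values outside the mask are excluded from the mean).
    masked = np.where(onl_mask, bscan, 0.0)
    counts = np.maximum(onl_mask.sum(axis=0), 1)
    axial_mean = masked.sum(axis=0) / counts
    bias = gaussian_filter1d(axial_mean, sigma=30)        # 30-pixel Gaussian smoothing
    corrected = bscan / np.maximum(bias, 1e-6)

    # 2) Local maxima inside the ONL serve as foreground seeds.
    peaks = peak_local_max(np.where(onl_mask, corrected, 0.0), min_distance=2)
    if peaks.size == 0:
        return np.zeros_like(onl_mask, dtype=bool)

    # 3) Rescale to [-1, 1]; pixels below -0.9 become background seeds.
    lo, hi = corrected.min(), corrected.max()
    scaled = 2.0 * (corrected - lo) / max(hi - lo, 1e-6) - 1.0
    markers = np.zeros(bscan.shape, dtype=int)
    markers[scaled < -0.9] = 1           # background label
    markers[tuple(peaks.T)] = 2          # foreground label

    # 4) Random walker segmentation grows regions around the seeds; the size,
    #    intensity and position thresholds described below are applied afterwards.
    labels = random_walker(scaled, markers)
    return labels == 2
```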
In the end, multiple thresholds are applied to eliminate false positives by constraining the results to optical properties of the system and clinically relevant features. The minimum size of the detected features is set to be at least 4 pixels in axial dimension and 3 pixels transverse, corresponding to laser beam spot size and optical resolution. The pixel intensities of the feature are compared to a threshold derived from all ONL values covering the same transverse span. The threshold requires the maximum intensity to be larger than the mean plus two times the standard deviation of the ONL values. In addition, the axial mean position of the feature should be within a relative position of 0.25 to 0.8 between the IS/OS (0.0) and OPL (1.0). If the positional requirement is not fulfilled, but the maximum intensity of the detected feature is twice as high as the mentioned threshold, the feature is still considered as detected in the evaluation. Another hard requirement is that the features must be detached from the IS/OS and OPL to be considered as detection, as defined by a previous study by Echols et al. [1] All of these constraints were also applied to the ground truth after labeling.
## 3 Results
Figure 1: a) Annotated B-scan from \(9\times 6\) mm volume including layer segmentation and algorithm results b)-c) Zoomed in version of a reference without annotations and including annotations by algorithm and expert reader d)-e) Zoomed in version of a corresponding frame in the merged \(6\times 6\) mm volume without and with algorithm annotations.
The evaluation is based on binary classification metrics, in which labeled hyperreflective features were considered positives. Because only very few pixels per B-scan were labeled as positives, we chose metrics that take high class imbalances into account. Precision and recall were calculated based on accumulated confusion matrix values over all B-scans. Results are shown in Table 1 and Figure 1. The per-instance results are calculated based on 66 true positives out of 93 labeled features and 44 false positives. A detection was considered a true positive if there was at least one pixel of overlap with an area in the ground truth mask. Also shown in Table 1 are results reported by Schlegl et al. [2] for the detection of large conventional HRF. Their deep learning method was trained on 1051 B-scans acquired on Cirrus HD-OCT and Spectralis OCT and evaluated for Cirrus and Spectralis images separately. They include cases with AMD, diabetic macular edema and retinal vein occlusion. They harmonized the different scan protocols by re-sampling the images, achieving unified pixel dimensions of \(\sim\)6 \(\upmu\)m axial and \(\sim\)11.5 \(\upmu\)m transverse.
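A minimal sketch of the per-instance matching described above could look as follows, assuming binary prediction and ground-truth masks for one B-scan; connected components stand in for feature instances.

```python
import numpy as np
from skimage.measure import label

def instance_counts(pred_mask: np.ndarray, gt_mask: np.ndarray):
    """Count instance-level true and false positives for one B-scan."""
    gt_cc = label(gt_mask)        # connected components = ground-truth feature instances
    pred_cc = label(pred_mask)    # connected components = detected feature instances
    tp = sum(1 for i in range(1, gt_cc.max() + 1)
             if np.any(pred_mask[gt_cc == i]))      # gt instance touched by any detection
    fp = sum(1 for j in range(1, pred_cc.max() + 1)
             if not np.any(gt_mask[pred_cc == j]))  # detection with no gt overlap
    return tp, fp
```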
In our case, the average dimensions of the labeled features are 5.0 pixels in transverse dimension and 9.8 pixels axial corresponding to 25 \(\upmu\)m transverse and \(\sim\)8.7 \(\upmu\)m axial based on the pixel spacing. The features cover an average area of 36 pixels or \(\sim\)160.2 \(\upmu\)m\({}^{2}\). Unfortunately, Schlegl et al. do not provide average sizes of their features. However, on their figures, they appear to be much larger, corresponding to conventional HRF. In addition, they also include features in different retinal layers. Due to the difference in sizes and scale, the performance of their algorithm is not directly comparable to our results, yet their results show the performance of a different algorithm on a similar task. A more detailed comparison of our algorithm and the ground truth yields the following observations (Fig. 2). Labeled features with slightly higher intensity values than the background are difficult to detect and the reader agreement on these is not high.
Figure 2: a)-d) Example images of well segmented hyperreflective features. e) False positive on ELM with high intensity. f) Ground truth annotations in contact with the OPL and algorithm annotations in contact with the IS/OS are, because of the defined constraints, deleted and not shown. Inaccurate layer segmentation is also visible. g) False negative feature appears clearly in neighboring scan, but has low intensity in the current B-scan. h) Ground truth in contact with IS/OS and, therefore, not considered or shown.
This also applies to boundary regions of the features because of the smooth transition to background intensities and a missing consensus on where to delineate feature boundaries. Another source of disagreement is regions of the ELM with much higher intensity values than the rest of the ELM. These are usually detected by the algorithm, but usually not annotated by the human experts. In addition, for features that are close to the IS/OS and OPL, it is sometimes unclear whether they are detached from these layers. In such cases, either only the algorithm or only the readers include them, which has a high impact on the scores as the regions are comparably large. Also, features that clearly appear in neighboring B-scans, but are not clearly visible in the current frame, are occasionally annotated by human readers and not by the algorithm.
In Figure 1 we show that the algorithm is also applicable to data acquired with a different scan protocol and to averaged scans. The algorithm only requires the adjustment of hyperparameters, which can easily be converted based on the pixel sizes. These include the illumination correction and the size thresholds.
## 4 Discussion
Compared to previous work, this algorithm has the advantage that it is inherently explainable and does not require labeled data. For the deployment on new datasets only certain parameters have to be converted to match the pixel spacing of the new input. We demonstrated the generalization to data that was not used during the development and hyperparameter selection. Although the study data can, at this point, not be made publicly available, the method can readily be re-implemented and used out of the box. Another advantage of the algorithm is that it utilizes depth information, which is not possible for algorithms that only work on enface projections of the volume. By this, we exploit one of the big strengths of our prototype, namely the high resolution.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Evaluation per pixel & Evaluation per instance & Schlegl et al. per pixel \\ \hline Precision & 49.00 \% & 60.00 \% & 66.55 \% / 55.98 \% \\ Recall & 63.25 \% & 70.97 \% & 64.01 \% / 73.32 \% \\ F1 score & 55.22 \% & 65.03 \% & 65.26 \% / 63.49 \% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Per pixel and per instance evaluation for the detection of small hyperreflective specks and foci in comparison to pixel-wise results for larger conventional hyperreflective foci reported by Schlegl et al. [2] for data acquired on a Cirrus/Spectralis OCT scanner.
We choose thresholds based on the optical properties of the OCT system and include the smallest features that are still possible to detect. As a result, this has a great negative influence on the evaluation scores, because the features are relatively small and single-pixel misclassifications have a large impact on pixel-wise metrics. However, it is not proven that the exact size of the feature is of clinical relevance. In contrast, studies suggest that the number of detected HRF and HRS might be of clinical relevance and can be associated with delayed rod-mediated dark adaptation, which is linked to AMD progression [1]. Therefore, we propose to rather look at detected feature instances instead of semantic segmentation, which is a measure used by Echols et al. [1]. Comparing the algorithm to other state-of-the-art methods is very difficult, and we only report the results of Schlegl et al. [2] to give an idea about the performance of the most recent methods on similar tasks. However, most of the datasets are collected with different devices and scan protocols and differ in resolution, dynamic range and image sizes. For example, Schlegl et al. cannot report on features as small as ours because of a lower resolution. In addition, there is not yet a clinical consensus on what HRS exactly are and how to label them. HRS sizes are usually below or close to the resolution of commercial systems and have not been investigated on a larger scale. Our work will provide a tool for larger clinical studies on HRF and HRS distributions close to optical resolution and can potentially be used on datasets collected at different sites with different devices. The only requirement is the availability of layer segmentation for the IS/OS, OPL and ELM. However, the ELM scaling is an optional step in the processing and can be omitted. In the future, the algorithm could be adapted to other retinal layers. For now, the focus lies on the ONL, as we expect more HRF and HRS in this layer during the early stages of AMD. In addition, an extension for isotropic volumes should be considered that operates in 3D to, for example, overcome the issue of detached or non-detached features from the ONL mask boundaries. At the time of submission only limited data for isotropic volumes was available. More clinical studies with larger patient cohorts will follow.
#### 4.0.1 Acknowledgement.
We acknowledge funding by the National Institutes of Health, project 5-R01-EY011289-36, and the German Research Foundation, project 508075009.
|
2308.16274 | Learning Diverse Features in Vision Transformers for Improved
Generalization | Deep learning models often rely only on a small set of features even when
there is a rich set of predictive signals in the training data. This makes
models brittle and sensitive to distribution shifts. In this work, we first
examine vision transformers (ViTs) and find that they tend to extract robust
and spurious features with distinct attention heads. As a result of this
modularity, their performance under distribution shifts can be significantly
improved at test time by pruning heads corresponding to spurious features,
which we demonstrate using an "oracle selection" on validation data. Second, we
propose a method to further enhance the diversity and complementarity of the
learned features by encouraging orthogonality of the attention heads' input
gradients. We observe improved out-of-distribution performance on diagnostic
benchmarks (MNIST-CIFAR, Waterbirds) as a consequence of the enhanced diversity
of features and the pruning of undesirable heads. | Armand Mihai Nicolicioiu, Andrei Liviu Nicolicioiu, Bogdan Alexe, Damien Teney | 2023-08-30T19:04:34Z | http://arxiv.org/abs/2308.16274v1 | # Learning Diverse Features in Vision Transformers
###### Abstract
Deep learning models often rely only on a small set of features even when there is a rich set of predictive signals in the training data. This makes models brittle and sensitive to distribution shifts.
In this work, we first examine vision transformers (ViTs) and find that they tend to extract robust and spurious features with distinct attention heads. As a result of this modularity, their performance under distribution shifts can be significantly improved at test time by pruning heads corresponding to spurious features, which we demonstrate using an "oracle selection" on validation data.
Second, we propose a method to further enhance the diversity and complementarity of the learned features by encouraging orthogonality of the attention heads' input gradients. We observe improved out-of-distribution performance on diagnostic benchmarks (MNIST-CIFAR, Waterbirds) as a consequence of the enhanced diversity of features and the pruning of undesirable heads.
## 1 Introduction
State-of-the-art models in machine learning show their limits when faced with distribution shifts. Even though they can perform remarkably well when training and test data are drawn from the same distribution, their predictive performance can degrade dramatically otherwise. The reason is that features that are predictive in the training data may be spurious and misleading at test time. Out-of-distribution (OOD) generalization is the capability of a model to maintain its predictive performance in the face of such shifts.
Prior work has shown that deep learning models often rely only on a small set of predictive features (Geirhos et al., 2020). If any of these features are spurious and affected by a distribution shift, chances are high that a model's performance will be affected. A recent line of work seeks to increase the diversity of the learned features, either as an objective when training a predictive model (Ross et al., 2020; Teney et al., 2022; Lee et al., 2022) or as a step prior to the application of invariance-learning methods (Chen et al., 2023; Zhang et al., 2022).
This paper focuses on computer vision tasks and vision transformers (ViT) (Dosovitskiy et al., 2020). We apply a regularizer based on input gradients (Ross et al., 2020; Teney et al., 2022) to a ViTs' attention heads to diversify the features learned across these heads. This encourages different parts of the model to rely on different aspects of the data and to discover additional predictive patterns. In contrast to methods that diversify functional behaviour in prediction space (Lee et al., 2022; Chen et al., 2023), our approach operates in feature space and does not require any OOD data (even unlabeled) during training.
This paper presents early experiments on standard OOD benchmarks (MNIST-CIFAR, Waterbirds). First, we find that **ViTs already have an inherent property for modularity**: their attention heads each rely on different features, such that they can be pruned selectively to discard spurious ones and improve generalization. Second, we show that **the proposed regularizer can further increase the diversity and complementarity of the learned features**. Our method, _DiverseVIT_ (see Figure 1), leads to improvements in a standard OOD evaluation setting, and even more so when we allow pruning attention heads at test time, selecting by the highest accuracy obtained on a labeled validation set from the target distribution.1
Footnote 1: This setting provides an upper bound on the performance achievable with ideal model selection heuristics.
**Summary of contributions.**
* We evaluate off-the-shelf ViTs on diagnostic OOD benchmarks and find inherent modularity in their representations, such that OOD generalization can be improved by
pruning the attention heads that rely on spurious features.
* We propose a simple regularizer to increase the diversity of features learned by ViTs.
* We evaluate models trained with our method (DiverseViT) and observe increased diversity and complementarity of the learned features. They show better OOD performance in standard evaluations, and yet much higher performance when pruning specific attention heads at test time.
## 2 Related Work
**Diversity of solutions in machine learning.** A range of methods have been proposed to train models that are diverse in properties such as OOD generalization (Lee et al., 2022; Teney et al., 2022), interpretability (Chen et al., 2023; Semenova et al., 2022), or fairness. These diversification methods are relevant in cases of underspecification (D'Amour et al., 2020) when the standard ERM objective (empirical risk minimization) does not constrain the solution space to a unique one.
**Increasing diversity.** Most diversification methods train a set of multiple _entire_ models. By contrast, our approach seeks to increase diversity within a single transformer. Existing methods train multiple models in parallel or sequentially. They encourage diversity in **feature space** (Heljakka et al., 2022; Yashima et al., 2022), **prediction space** (Pagliardini et al., 2022; Lee et al., 2022), or **gradient space** (Ross et al., 2018, 2020; Teney et al., 2022;b). Our method applies a gradient-based approach similar to that of Ross et al. (2020); Teney et al. (2022) to the attention heads of ViTs.
**Vision transformers (ViTs).** We focus on ViTs because they achieve state-of-the-art performance for multiple tasks in computer vision (Khan et al., 2022). Multiple works have sought to understand the features learned by these models (Bhojanapalli et al., 2021; Naseer et al., 2021; Zhou et al., 2022). Our findings are complementary. We also seek to nudge the (generally beneficial) inductive biases of ViTs. In particular, we seek to overcome the general "simplicity bias" of deep learning models (Shah et al., 2020).
## 3 Proposed method
We describe a simple method to diversify the features learned by ViTs trained for image classification. We propose a regularizer that encourages orthogonality of the input gradients corresponding to their attention heads. We show empirically that this provides a better inherent robustness to distribution shifts. Moreover, this allows further improvements in OOD performance with test-time pruning of the attention heads that correspond to spurious features, while retaining those necessary for robust classification.
### Background: ViTs and attention heads
The input image to a ViT is partitioned into a grid of small patches. A sequence of tokens is formed by combining the patches with positional embeddings. The main operation to aggregate a sequence of tokens is the multi-head self-attention, defined as:
\[h_{i}=\text{softmax}\Bigg{(}\frac{xW_{q}(xW_{k})^{T}}{\sqrt{D}}\Bigg{)}xW_{v} \tag{1}\]
where \(x\in\mathbb{R}^{N\times D}\) represents the set of \(N\) input tokens of the self-attention layer and \(W_{q},W_{k},W_{v}\in\mathbb{R}^{D\times D}\) are learnable parameters of the layer.
The output of all heads are concatenated and the result is projected with a linear mapping:
\[y=[h_{1},...,h_{H}]W_{o} \tag{2}\]
with \(W_{o}\in\mathbb{R}^{D\times H\ast D}\) learnable parameters.
### Encouraging feature diversity with input gradients
Our diversification method is a regularizer term added to the optimization objective when training a ViT for a supervised task with standard ERM. This approach increases the diversity of features within a single transformer, which contrasts with many diversification methods that train multiple entire models. The motivation for our approach is (1) its lower computational cost and (2) leveraging the existing inductive biases of vision transformers to avoid the type of "adversarial" solutions to the gradient-based regularizer that were described in Teney et al. (2022).
**Input gradients.** To determine how much each dimension of a feature vector contributes to the prediction of the model, we look at the gradient of the prediction with respect to this vector. Concretely, we compute the gradient of the top predicted score \(p^{*}\) with respect to the input \(x\):
\[\nabla_{x}=\frac{\partial p^{*}}{\partial x}\in\mathbb{R}^{N\times D}. \tag{3}\]
**Influence of each head.** The outputs of all attention heads are concatenated and projected (Equation 2), so \(\nabla_{x}\) considers all attention heads' effects simultaneously. As we are interested in diversifying the effect of each individual head, we want to capture their individual contributions. For this, we backpropagate the gradient of the top prediction (Equation 3) \(H\) times, each time through a single element \(h_{i}\) of Equation 2 while ignoring the rest. We obtain a set of \(H\) input gradients:
\[\Big{\{}\nabla_{x_{i}}\in\mathbb{R}^{N\times D}\mid i\in\{1\cdots H\}\Big{\}}. \tag{4}\]
Each element in this set represents the importance of the input features to the top prediction for a specific head, with \(\sum_{i=1}^{H}\nabla_{x_{i}}=\nabla_{x}\).
**Diversity regularizer.** To promote diversity across the heads, we define an orthogonality regularizer over the input gradients. We first compute the cosine similarity between the \(n\)th token in heads \(i\) and \(j\) by normalizing over the channel dimension \(D\) and taking the dot product between all tokens:
\[c_{n,i,j}=\nabla_{x_{i,n}}^{T}\nabla_{x_{j,n}}\in\mathbb{R}. \tag{5}\]
The orthogonality regularizer is then defined as the average squared similarity across all tokens and pairs of heads:
\[\mathcal{L}_{Div}=\frac{1}{N}\sum_{i\neq j}\sum_{n=1}^{N}c_{n,i,j}^{2}. \tag{6}\]
The regularizer, weighted by a hyperparameter \(\lambda\), is added to the standard cross-entropy classification loss to form the complete training objective:
\[\mathcal{L}\ =\ \mathcal{L}_{ERM}+\lambda\,\mathcal{L}_{Div}. \tag{7}\]
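A minimal PyTorch sketch of the regularizer in Equations (5) and (6) is shown below. It assumes that the per-head input gradients of Equation (4) have already been stacked into a tensor of shape \((H,N,D)\), e.g. by backpropagating the top logit through one head at a time; the gradient-collection code itself is omitted and the variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def diversity_loss(grads: torch.Tensor) -> torch.Tensor:
    """grads: per-head input gradients, shape (H, N, D)."""
    H, N, D = grads.shape
    g = F.normalize(grads, dim=-1)               # normalise over the channel dimension D
    sim = torch.einsum('ind,jnd->ijn', g, g)     # c_{n,i,j} for all tokens and head pairs
    off_diag = ~torch.eye(H, dtype=torch.bool, device=grads.device)
    # Eq. (6): squared similarities summed over pairs i != j, averaged over tokens
    return (sim[off_diag] ** 2).sum() / N

# total objective, Eq. (7):
# loss = ce_loss + lambda_div * diversity_loss(grads)
```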
### Pruning attention heads
Different attention heads of a transformer can attend to different features which can be more or less robust (i.e. useful across distribution shifts) or spurious. Therefore, using only a subset of heads is a simple technique for ignoring undesirable features and improving OOD performance.
To prune a subset of the heads, we multiply their output \(h_{i}\) by zero in Equation 2. To compensate for the missing heads, we scale the remaining ones to remain in the same range after the projection. This can be done at test time with no need for further adaptation of the model.
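A possible sketch of this test-time pruning step is given below; the rescaling factor \(H/|\mathrm{keep}|\) is one simple choice for keeping the projected output in the same range and is an assumption here, not necessarily the exact scaling used.

```python
import torch

def prune_heads(head_outputs: torch.Tensor, keep: list) -> torch.Tensor:
    """head_outputs: stacked h_i of shape (H, N, D); keep: indices of retained heads."""
    H = head_outputs.shape[0]
    mask = torch.zeros(H, 1, 1, device=head_outputs.device)
    mask[keep] = H / len(keep)   # zero out pruned heads, rescale the remaining ones
    return head_outputs * mask   # concatenation and projection (Eq. 2) follow as usual
```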
## 4 Experiments
We present experiments on two popular diagnostic datasets: MNIST-CIFAR and Waterbirds. See Appendix A for details. Our experiments answer the following questions:
1. Does diversifying the learned features of the self-attention heads lead to better OOD performance compared with ERM training? **Yes.**
2. Can we improve generalization by pruning heads associated with spurious features with a standard ERM-trained ViT (i.e. without our diversification method)? **Yes.**
3. Does our diversification method amplify the distinction between spurious and robust features, improving post-pruning performance even more? **Yes.**
The **baseline** experiment (ViT+ERM) trains a ViT with ERM. In both datasets used, a spurious feature is strongly correlated with the label during training. This correlation is reversed at test time. A model that relies on this spurious feature will therefore perform poorly at test time.
In the **diversification** experiment (ViT+Div), we add our diversity regularizer to the training objective. Without any changes in the architecture, we obtain better generalization, possibly as an ensemble effect of a more diverse set of features captured collectively by all attention heads.
For the **head selection** experiments (Sel), we perform inference at test time using a single attention head. We select the head with the highest accuracy on the OOD validation data. This therefore requires access to labeled OOD examples. The pruning is a test-time procedure and requires no further training of the model. In all experiments, we select hyperparameters for highest OOD validation accuracy.
**Main results.** In Tables 1-2 we report accuracy on both ID (in-domain) and OOD (out-of-distribution) test sets. We observe that the ERM baseline already exhibits a degree of separation between spurious and robust features among the self-attention heads, thus allowing the head selection (_ViT+ERM+Sel_) to improve the OOD accuracy. The diversification method (_ViT+Div_) is effective on its own (i.e. even without head selection). This indicates a benefit from learning diverse features. The combination of diverse features and proper head selections (_ViT+Div+Sel_) is especially powerful, leading to our best result.
Figure 1: **Overview of the proposed DiverseViT method.** We depict a single-layer multi-head attention model.
**Understanding the learned features.** To gain insight into the learned features, we evaluate the models on a balanced split of MNIST-CIFAR with no correlation between the robust and the spurious attributes. We measure the correlation between the model's predictions and either the robust or the spurious attribute. How well each attribute can be predicted with each head indicates how much of each attribute is captured by each head. Figure 2 shows that diversification leads to a higher level of specialization of the heads. This is advantageous, allowing us to keep only the heads containing information relevant to robust predictions.
## 5 Conclusions and future work
We have shown that ViTs have an inherent tendency to capture distinct features in their attention heads, and that this property can be improved with a simple regularizer. This directly improves the robustness of ViTs on several diagnostic benchmarks for out-of-distribution generalization, without changing the architecture. These improvements come "for free" as an ensembling effect from the increased diversity of the learned features.
We also empirically showed that these diverse features have little overlap and are complementary. Therefore, pruning selected attention heads at test time is an effective technique to improve OOD performance, using some information that can identify which heads correspond to spurious features (e.g. an OOD validation set).
**Future work.** Our methods should be evaluated on larger-scale real-world datasets in vision, but also language and reinforcement learning. The head pruning procedure was evaluated using labeled OOD data, which is only meant to provide an upper bound on the performance achievable in an ideal setting. The approach should be evaluated with the recent heuristics proposed for OOD model selection (Baek et al., 2022; Deng et al., 2023; Garg et al., 2022; Liu et al., 2023; Lu et al., 2022). Other options include various forms of human feedback and unsupervised objectives to enable test-time adaptation.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & ID Accuracy & OOD Accuracy \\ \hline ViT+ERM & 88.80 \(\pm\) 0.12 & 56.87 \(\pm\) 4.30 \\ ViT+Div & 88.40 \(\pm\) 0.10 & 62.26 \(\pm\) 1.80 \\ ViT+ERM+Sel & **90.33**\(\pm\) 0.19 & 64.40 \(\pm\) 2.80 \\ ViT+Div+Sel & 89.86 \(\pm\) 1.10 & **70.08**\(\pm\) 3.15 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on MNIST-CIFAR.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & ID Accuracy & OOD Accuracy \\ \hline ViT+ERM & 96.55 \(\pm\) 0.22 & 83.37 \(\pm\) 0.44 \\ ViT+Div & 96.99 \(\pm\) 0.11 & 83.87 \(\pm\) 0.79 \\ ViT+ERM+Sel & 96.50 \(\pm\) 0.58 & 85.70 \(\pm\) 1.64 \\ ViT+Div+Sel & **96.99**\(\pm\) 0.12 & **87.96**\(\pm\) 0.14 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on Waterbirds.
Figure 2: **Per-head performance comparison between standard ERM and our Diversification method on MNIST-CIFAR. We observe an inherent modularity of ViT’s attention heads, such that the heads predicting well on the robust attribute are predicting poorly on the spurious one, and vice-versa. With standard ERM, the gap between the two attributes predictions is not as clear for most heads, indicating a higher overlap in the information captured by each head. With the proposed diversification method, we observe that most heads predict either, but not both of the robust and spurious features. This shows a high level of specialization, which is desirable since this allows pruning undesirable heads without losing information relevant to robust predictions.**
## Acknowledgements
Bogdan Alexe was funded by UEFISCDI under Project EEA-RO-2018-0496. Damien Teney was partially supported by an Amazon Research Award (ARA).
|
2306.16900 | Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research | Many recent improvements in NLP stem from the development and use of large
pre-trained language models (PLMs) with billions of parameters. Large model
sizes makes computational cost one of the main limiting factors for training
and evaluating such models; and has raised severe concerns about the
sustainability, reproducibility, and inclusiveness for researching PLMs. These
concerns are often based on personal experiences and observations. However,
there had not been any large-scale surveys that investigate them. In this work,
we provide a first attempt to quantify these concerns regarding three topics,
namely, environmental impact, equity, and impact on peer reviewing. By
conducting a survey with 312 participants from the NLP community, we capture
existing (dis)parities between different and within groups with respect to
seniority, academia, and industry; and their impact on the peer reviewing
process. For each topic, we provide an analysis and devise recommendations to
mitigate found disparities, some of which already successfully implemented.
Finally, we discuss additional concerns raised by many participants in
free-text responses. | Ji-Ung Lee, Haritz Puerto, Betty van Aken, Yuki Arase, Jessica Zosa Forde, Leon Derczynski, Andreas Rücklé, Iryna Gurevych, Roy Schwartz, Emma Strubell, Jesse Dodge | 2023-06-29T12:44:53Z | http://arxiv.org/abs/2306.16900v2 | # Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research
###### Abstract
Many recent improvements in NLP stem from the development and use of large pre-trained language models (PLMs) with billions of parameters. Large model sizes make computational cost one of the main limiting factors for training and evaluating such models, and have raised severe concerns about the sustainability, reproducibility, and inclusiveness of research on PLMs. These concerns are often based on personal experiences and observations. However, there have not been any large-scale surveys that investigate them. In this work, we provide a first attempt to quantify these concerns regarding three topics, namely, _environmental impact_, _equity_, and _impact on peer reviewing_. By conducting a survey with 312 participants from the NLP community, we capture existing (dis)parities between and within groups with respect to seniority, academia, and industry, and their impact on the peer reviewing process. For each topic, we provide an analysis and devise recommendations to mitigate the disparities we found, some of which have already been successfully implemented. Finally, we discuss additional concerns raised by many participants in free-text responses.
## 1 Introduction
Recent advances in hardware and algorithms have transformed the field of NLP. Whereas NLP practitioners and researchers used to be able to develop and use cutting-edge NLP technology on relatively affordable hardware such as a laptop or a commodity server, modern state-of-the-art approaches have evolved to require more substantial computational power, typically achieved by specialized tensor processing hardware such as a GPU or TPU. This shift has raised at least two concerns among members of the NLP community Strubell et al. (2019); Schwartz et al. (2020); Arase et al. (2021) and the AI community Patterson et al. (2022); Wu et al. (2022): (1) Understanding and mitigating the environmental cost of NLP research and use, in terms of greenhouse gas (GHG) emissions, and (2) equity of access: the extent to which increasing computational requirements restricts who has access to develop and use modern NLP.
In response to these concerns, we formed a working group within ACL with the goal of better understanding the challenges surrounding efficient NLP and establishing policies to address them. In order to quantify views and impacts in the NLP community related to these concerns, we conducted a survey of the ACL community in July 2021, the results of which we report here. Besides concerns about the (1) environmental impact and (2) equity, we further solicit answers about their (3) impact on the whole peer reviewing process, as this is an important matter for inclusiveness. Overall, we elicited 312 responses from a distributed range of junior and senior researchers hailing from industry and academia. Some of our key findings include:
Figure 1: Distribution of available GPUs across our participants (in %). As can be seen, 87.8% of our survey participants have access to less than 10% of the total number of GPUs.
* More than 50% of the survey participants are moderately or very concerned about the environmental footprint of NLP research; mostly with respect to _training_ and _model selection_.
* Overall, \(\sim\)62% of our respondents have access to less than eight GPUs and moreover, over 90% have access to less than 10% of the total GPU power (Fig. 1). As a frame of reference, recent work Izsak et al. (2021) showed that a clever set of techniques can be used to train BERT in 24 hours on 8 GPUs, and it takes about 7 minutes to fine-tune a RoBERTaLARGE model on the MNLI natural language inference dataset (about 400k training sentences) on one GPU (GTX 2080 Ti) to an accuracy of 85% Zhou et al. (2021).
* A majority (76%) of our respondents believe that it would be beneficial to have smaller versions of pre-trained models released together with larger ones. In fact, 33% of our free-text respondents emphasised the importance of sharing artifacts (such as code, models, training logs, etc.).
* The group that suffers most from a lack of resources is students, who struggle to reproduce previous results compared to researchers from large industry.
* While we find disparities between different groups--especially regarding the job sector--our analysis shows that most of them are not _statistically significant_. Instead, we find outliers across all groups showing that there exist disparities within. We find no evidence in our survey responses that "industry" has access to significantly more compute power than "academia". Instead, this mostly seems to be the case for very few extreme outliers (6%).
With this survey, we hope to provide a more solid foundation to back up the ongoing discussion in the community and to devise concrete actions to make research more inclusive.
## 2 Survey Description
The survey was open over a period of 17 days, from Monday, July 12, 2021 to Thursday, July 29, 2021. It was conducted via Microsoft Forms and distributed across the *CL community by mass mailing to ACL membership, and shared on Twitter. During that time, we collected 312 responses. The creation of the survey indicated that, "input will remain anonymous and the responses will also be summarized in aggregate form".1 Therefore the data will be made available on request with a statement of intended purpose, due to privacy and ethical restrictions.
Footnote 1: [https://www.aclweb.org/portal/content/efficient-nlp-survey](https://www.aclweb.org/portal/content/efficient-nlp-survey)
### Questionnaire
The questions were divided into four categories. First, we collected some general information about our participants, like their current position and seniority (§2.2). Second, we asked our participants about their concerns regarding the environmental impact of NLP experiments (§3) and their access to computational resources (§4). Finally, we asked about the impact of compute-intensive experiments on the reviewing process as well as about specific measures to alleviate them (§5).
To keep the effort low for our participants, we crafted most of the (\(Q\))uestions as simple yes/no/unsure questions. For subjects that require a more fine-grained analysis (e.g., environmental concerns) we used five point scales (either numeric or text-based). Overall, we asked a total of 19 questions, of which 15 were multiple-choice questions (13 with a single answer possibility and two with multiple possible answers). \(Q4\) (available compute resources) and \(Q11\) (number of times reviewers asked for expensive experiments) required a numeric answer. Finally, \(Q9\), \(Q18\), and \(Q19\) allowed free text answers. Participants were asked to provide answers to 13 questions, while six questions (\(Q4\), \(Q9\), \(Q11\), \(Q12\), \(Q14\), \(Q19\)) were optional.
### Demographic Overview
In our first three questions, we asked the participants about their seniority, job sector, and geographic location (Fig. 2).
Seniority. We asked our participants about the number of active years in the *CL community as an author, reviewer, or in a related role (\(Q1\)). Overall, a little over half (53.5%) indicated they were junior members of the community, while the remainder were fairly evenly split across mid- and late-career.
Job Sector. We further asked our participants about the current position they are holding (\(Q2\)).
\begin{table}
\begin{tabular}{l} \hline \hline
**Demographics** \\ \hline
**Q1. Years active.** How many years have you been active in the ACL community (as an author/reviewer/area chair/etc.)? **Answer**: [1-5], [6-10], [11-15], [16+]. \\
**Q2. Current Role. Answer**: Student, Academic Postdoc, Academic PI, Researcher in large industry, Research in small industry, other. \\
**Q3. Geographic Location. Answer**: Americas, Europe/Middle East, Africa, Asia/Oceania. \\ \hline
**Equity** \\ \hline
**Q4. Available compute resources**. Please provide a rough estimate of the average number of GPUs or equivalent accelerators that are available to you (for students / researchers) or to each researcher in your lab/group (for PIs / managers). If you cannot quantify the amount of compute resources, leave this field empty. **Answer:** Numeric response (optional). \\
**Q5. Unable to run experiments**. In the last year, have you been unable to run experiments important for one of your projects due to lack of computational resources? **Answer**: Yes, No, Unsure. \\
**Q6. More resources would make your work more valuable**. How often do you feel like your work would have been valued more by the community (e.g., accepted instead of rejected to some venue) if you had access to more computational resources? **Answer**: Five point scale. \\ \hline
**Environmental Concern** \\ \hline
**Q7. Concern about environmental footprint.** How concerned are you by the environmental footprint of the field of NLP? **Answer**: Five point scale. \\
**Q8. Most pressing factor.** Which of the following do you feel is the most pressing factor with respect to the environmental impact of NLP? **Answer**: Choose all that apply: Training, Inference, Model selection, None, Other). \\
**Q9. Why?** Optionally explain the reasons for your choices above. **Answer**: Free text (optional). \\ \hline
**Reviewing Process** \\ \hline
**Q10. Did reviewers ask for too expensive experiments?** In the past 3 years, have you received feedback from reviewers who requested experiments that were too expensive for your budget for a particular paper? **Answer:** Yes, No, Not sure. \\
**Q11. If yes, how many times?** \\
**Q12. Was the critique justified?** If yes, do you feel the critique was justified? I.e., that the main scientific claims in your paper (e.g., that your approach was better than some baseline) were not sufficiently supported by the original, smaller-budget experiments? **Answer:** Yes, No, Not sure. \\
**Q13. Lack of resources prevents reproduction of previous results.** How often do you find yourself unsuccessful in reproducing a previous result due to lack of computational resources? **Answer**: Never, Rarely, Sometimes, Often, Always. \\
**Q14. Efficiency Track.** If you have work on efficient methods and/or enhanced reporting, would you consider submitting it to a dedicated track? **Answer**: Yes, No, N/A. \\
**Q15. Justify allocation of budget for experiments**. As a reader, would you prefer authors to be requested to justify the way they allocate their budget to run experiments which adequately support their scientific claims? **Answer**: Yes, No, Not sure. \\
**Q16. Reviewers should justify the petition for additional experiments**. As an author, would you prefer it if reviewers took up space in their review to justify their suggestions for additional experiments in terms of the evidence that those additional experiments would provide? I.e., what is currently missing in terms of lack of evidence to support the main claims of the paper, and how the additional experiments would provide evidence for the paper’s research questions? **Answer**: Yes, No, Not sure. \\
**Q17. Releasing small versions of pretrained models**. Would your work benefit from smaller versions of pretrained models released alongside larger ones? **Answer**: Yes, No, Not sure. \\
**Q18. How to encourage the release of models**. Which of these solutions would you endorse for encouraging the release of trained models? **Answer**: Choose all that apply: Best artifact award, Instruct reviewers to reward papers who share/promise to share models, Visible branding of the paper in conference proceedings, None of the above, Other) \\ \hline
**Q19. Any other thoughts or suggestions? Answer**: Free text (optional). \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of questions in the survey. Summaries of the questions in bold. Only full questions were shown to the participants.
Possible responses were student, academic post-doc (Aca. PD), researcher from small (s) and large (l) industries (Ind.), and academic PI (Aca. PI). The largest group of participants were students (38.5%), followed by academic postdocs and PIs (34.3%), and industry researchers (24.7%). Eight participants (2.5%) responded with "other" from which seven were affiliated with academia (e.g., lecturers) and one with industry (consultant). For the fine-grained analysis, we merge each response of "other" into the most fitting group in the survey (one student, five academic PIs, one academic post-doc, and one small industry researcher). For our analysis, we do not merge the academic and industry subgroups, as this may obfuscate existing disparities; e.g., between small and large industry.
Geographic location. We further asked respondents to share their geographic location (\(Q3\)). Overall, 45.8% of responses came from the Americas (AM), 40.4% from Europe (EU) and the Middle East (ME), and 13.8% from Asia (AS) and Australia (Aus). We received no responses from researchers in Africa (AF). The heavy geographic skew of the responses limits the expressiveness of this factor, which will therefore not be considered in our analysis.
### Methodology
In the following sections, we analyse and discuss the participants' responses with respect to the remaining three categories (_environmental concerns_, _equity_, and _impact on the reviewing process_). For each section, we first provide an overview of the distribution in the responses and then provide a fine-grained analysis with respect to the _seniority_ and _job sector_. The goal of the fine-grained analysis is to investigate if we can observe any statistically significant differences across different groups.
Statistical tests. Due to the explorative nature of our survey, the collected data violates the necessary conditions on homoscedasticity Levene (1960) and normality Shapiro and Wilk (1965) that are required to conduct an analysis of variance (ANOVA; Fisher, 1921). Instead, we perform a Kruskal-Wallis test Kruskal and Wallis (1952) as an indicator for any statistically significant differences2 and, if so, perform pairwise Welch's t-tests Welch (1951) against a Bonferroni-corrected \(\alpha=\frac{0.05}{m}\), where \(m\) is the number of pairwise comparisons; i.e., for \(n\) groups, \(m=\frac{n(n-1)}{2}\) Bonferroni (1936). This results in a corrected \(\alpha=0.008\overline{3}\) for seniority with \(n=4\) and \(\alpha=0.005\) for the job sector with \(n=5\) (not merging academia and industry sectors). For the numerical questions (\(Q4\) and \(Q11\)), we further analyze if there exist disparities within each group, using interquartile ranges with \(k=1.5\) to detect outliers Tukey (1977).
Footnote 2: This is the case when \(H>H_{0}^{*}\) with \(H_{0}^{*}\sim\chi_{n-1}^{2}\) for group size \(n\). For \(\alpha=0.05\), we get \(H_{0}^{*}=9.488\) (job sector) and \(H_{0}^{*}=7.815\) (seniority) Abramowitz (1988).
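For concreteness, the testing pipeline described above can be sketched as follows (an illustrative re-implementation, not the authors' analysis code; group names and thresholds are placeholders):

```python
import numpy as np
from scipy import stats

def analyse_groups(groups, alpha=0.05):
    """groups: dict mapping a group name (e.g. 'Student', 'Aca. PI') to a 1-D
    array of numeric responses (e.g. number of available GPUs for Q4)."""
    names = list(groups)
    samples = [np.asarray(groups[n], dtype=float) for n in names]

    # Omnibus Kruskal-Wallis test; pairwise comparisons only if it is significant
    h_stat, p_omnibus = stats.kruskal(*samples)

    n = len(names)
    m = n * (n - 1) // 2                 # number of pairwise comparisons
    alpha_corr = alpha / m               # Bonferroni-corrected threshold

    pairwise = {}
    if p_omnibus < alpha:
        for i in range(n):
            for j in range(i + 1, n):
                # Welch's t-test (no equal-variance assumption)
                t, p = stats.ttest_ind(samples[i], samples[j], equal_var=False)
                pairwise[(names[i], names[j])] = (p, p < alpha_corr)
    return h_stat, p_omnibus, alpha_corr, pairwise

def iqr_outliers(x, k=1.5):
    """Tukey's rule: values outside [Q1 - k*IQR, Q3 + k*IQR] are flagged."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return x[(x < q1 - k * iqr) | (x > q3 + k * iqr)]
```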
## 3 Environmental Footprint
We quantified existing concerns about the environmental impact of NLP experiments using a five point Likert scale (\(Q7\)) and asked our participants to select the most pressing issue in the typical life cycle of an NLP model (\(Q8\)) between (Train)ing, model (Select)ion, and (Infer)ence. Participants were allowed to select _all_ applicable answers and could select (None) or provide (Other) pressing issues. They could also provide a textual justification of their answer(s) (\(Q9\)).
### Analysis
Figure 2(a) shows that more than 50% of our participants were moderately (28.2%) or very (27.9%) concerned about the environmental footprint of NLP, while around 33% of them were slightly (14.7%) or somewhat (18.6%) concerned. 10.6% of participants were not concerned at all. Our participants further agreed that _training_ (75.3%) and _model selection_ (59.9%) are the most pressing issues (Fig. 2(b)).3_Inference_ took third place with 20.2%, while 6.1% of our participants selected _none_. The smallest number responses was given for _other_ with _hyperparameter tuning_ and _travelling_ (6 mentions each) being the most frequent ones. Also mentioned were _storage consumption_, _hardware_, _expectations about large data experiments_, and _scale_. Interestingly, many respondents considered inference less pressing than training and model selection.
Footnote 3: Note that 39.7% of our participants selected exactly these two as the only pressing factors.
Job Sector.Although we do not find significant differences by seniority, we see larger (although not statistically significant) differences when looking at the responses grouped by researchers from different job sectors (Fig. 3(a)). We find that respondents from the large industry sector were mostly _somewhat_ concerned, while the median for all other groups lies at _often_ concerned. Similarly, we also
see larger differences in the most pressing issues between different groups. For instance, small and large industries were substantially more concerned with respect to inference and much less concerned with respect to model selection than academia.
### Discussion and Recommendations
An analysis of the 81 (26%) free-text responses (\(Q9\)) reveals diverse opinions about the environmental impact of NLP and the reasons behind the most pressing factors. For instance, among respondents that stated not to be concerned about NLP's environmental footprint at all, the majority considered the impact of NLP research on climate change to be negligible compared to other factors. Factors mentioned as being more relevant to climate change include air travel (also mentioned twice in the general responses \(Q19\)), cars, and more cost intensive computations from other areas (of science). Another argument brought up multiple times in this group of respondents is that the ACL is not the right institution to tackle challenges of climate change. Some responses alternatively suggested to push for regulatory changes, since big tech companies might not be affected by decisions made by the ACL.
Regarding the most pressing factors, one of the main arguments provided for inference was that industry spends most time on inference, hence it is the most expensive one. However, participants also argued that there exist various methods for efficient inference (see, e.g., Treviso et al., 2022). Prominent arguments with respect to training and model selection were that the pressure to achieve state-of-the-art performance leads to extensive hyperparameter tuning and that a large variety of models are being trained during research and development (even if just for debugging) without being ever deployed.
## 4 Equity
The (in)equity of the available compute resources across groups (e.g., academia and industry) has become an increasingly discussed topic. While the general gist seems to be that many researchers feel excluded by not having access to substantially large compute power (e.g., thousands of GPUs), it often remains unclear whether this is really the case. One of the main objectives of this survey was therefore to quantify such potential disparities.
### Analysis
For \(Q4\), 229 participants responded (73.4%) with the number of GPUs they have access to. Fig. 1 shows the distribution of the total number of GPUs across our participants (in %). Overall, we observe a high disparity across our participants in terms of access to GPUs. For instance, 62% of the participants had access to less than eight GPUs (Fig. 5(a)), the number used for training academic BERT (Izsak et al., 2021), and 87.8% of the participants had access to only 9.7% of the total number of GPUs. 15 participants (6.6%) had access to more than 100 GPUs, up to 3000 GPUs, representing 85.6% of the total GPU count (\(\sim\)11.2k). An outlier analysis shows that 13.1% of the respondents had access to a substantially higher number of GPUs (more than 22 GPUs) than the rest. We further find that 57.4% of our participants were unable to run experiments due to the lack of computational resources, and 36.2% had no lack of resources (\(Q5\)). Finally, Fig. 5 shows that 31.4% of our respondents _never_ or _rarely_ thought that more resources could make their work valuable, while 34.3% of respondents answered _sometimes_, and 34.3% answered _often_ or _always_ (\(Q6\)).
Figure 2: Demographic statistics: (a) describes the seniority, (b) the job sector, and (c) the geographic location of our participants (in %).
Job Sector. As in §3, our analysis shows no significant differences w.r.t. the seniority, and we find larger disparities by job sector. As we can observe in Fig. 6c, respondents in industry (large) had access to a higher number of GPUs than industry (small) and academia. This is one of the few cases where we have to resort to pairwise testing, as the Kruskal-Wallis test indicates that the null hypothesis can be rejected with \(H^{5}=16.976>H_{0}^{5}=9.488\). While we do not find significant differences in our pairwise comparisons, there are still substantial differences between Ind. (l) and Aca. PI (p-value = 0.0827), as well as Ind. (s) and Ind. (l) (p-value = 0.0850). Even though students reported the lowest number of available GPUs, the differences seem less substantial compared to researchers at small industry (p-value = 0.110). Additionally, we find that large industry has the highest percentage of outliers and the largest in-group disparity. Interestingly, researchers from small industry seem to have the least issues when running experiments; a stark contrast considering they are among those who reported the fewest GPU resources (\(Q5\)). Regarding \(Q6\), researchers from large industry responded slightly less often than other groups that their research could be more valuable if they had access to more compute power.
Figure 4: Concerns and pressing issues, grouped by positions.
Figure 5: Lack of resources for more valuable work.
Figure 3: Environmental concerns and pressing issues (in % of participant answers).
### Discussion and Recommendations
While our survey highlights existing disparities, particularly between small industry or academic researchers and large industry, we also find that there exist substantial disparities _within_ each group. Most surprising might be the general disparity we find across the field, as 87.8% of our participants had access to less than 10% of the total number of GPUs, and 62% had access to less than 8 GPUs. Only a very small fraction of researchers (2.2% of our respondents) had access to GPU compute (1000 or more) to train models with several hundreds of billion parameters for several days or weeks. Hence, many researchers could only fine-tune models--which requires far fewer resources than pre-training (Zhou et al., 2021)--and this is only possible when pre-trained model weights are available. Unfortunately, many recent models are being kept private, which has intensified the discussion about equity in the field (Togelius and Yannakakis, 2023).
## 5 Impact on Reviewing
Finally, we quantified how the concerns about the environmental impact and differences in terms of available compute resources affect peer reviewing (\(Q10\)-\(Q13\)). We further asked our participants four questions (\(Q14\)-\(Q17\)) which relate to concrete ideas that would change the reviewing process and encourage model release (\(Q18\)).
### Analysis
Figure 6(a) shows that 30.1% of the participants experienced being asked (during peer-review) to conduct additional experiments that were too expensive for them (\(Q10\)) with 77 respondents having experienced this more than once (\(Q11\)) and 19.2% having a substantially higher number (five or more times) according to our outlier analysis. Most participants (65.9%) further thought that this criticism was unjustified (\(Q12\)). Figure 6(b) shows that 34.3% of the respondents _often_ or _always_ lacked resources to reproduce previous experiments and 41.4% _sometimes_. Only 7.7% _never_ or 16.7% _rarely_ faced a lack of resources to reproduce experiments.
With respect to the concrete reviewing actions, Fig. 8(a) shows that a large majority (89.8%) of our participants would consider submitting their work to a dedicated track on efficient methods (\(Q14\)). Following up on the results from the survey, such an efficiency track was implemented at EMNLP 2022. 35.9% of our participants were unsure about requesting authors to justify the allocation of budget for experiments (\(Q15\)), with 41% voting for _yes_. Also, even though 52.6% of the participants had not been asked for experiments that were too expensive for them, a clear majority of the participants (83.7%) would like to require reviewers to justify their petitions for more experiments (\(Q16\)). Lastly, we also see a large majority (75.6%) that believed that their work could benefit from the release of small versions of pretrained models alongside large ones (\(Q17\)). To promote this, a majority of our respondents thought that venues should have a visible _branding_ of papers to release a model (59.3%) and that reviewers should be instructed to reward model release (50.6%). 42.6% of respondents thought that the venues should grant a best artifact _award_. 11.5% of respondents supported _none_ of the options. A first step towards increasing the reproducibility and ensuring the submission of
Figure 6: Distribution of GPUs among participants: (a) overall, (b) by seniority, (c) by job sector.
experimental code was implemented at NAACL 2022 by introducing a badge system at the reproducibility track.4 Upon acceptance, the authors could follow specific procedures to earn three types of badges: 1) open-source code, 2) trained model, and 3) reproducible results.
Footnote 4: [https://2022.naacl.org/blog/reproducibility-track/](https://2022.naacl.org/blog/reproducibility-track/)
Seniority.We find no significant differences w.r.t. the seniority of our participants regarding \(Q10\)-\(Q18\). However, junior researchers (1-5 years) showed a substantially higher tendency towards requesting authors to justify their compute budget (\(Q15\)) against all other age groups (p-values \(<0.035\)). We also observe in Figure 8(c) diverging preferences between junior and senior groups in terms of ideas to improve the reviewing process (\(Q18\)). Junior researchers (1-5 years) seemed to be more inclined towards a visual branding as well as instructing reviewers than senior researchers (11-15 years with a p-value of 0.089 and 16+ years with a p-value of 0.085).
Job Sector.In terms of the job sector, we again find no significant differences with respect to reviewers asking for too expensive experiments (\(Q10\)) or critique being justified (\(Q12\)). Interestingly, respondents from small industry received fewer such requests (\(Q11\)) compared to post-docs (p-value = 0.024), PIs (p-value = 0.061), and larger industry (p-value = 0.087). The most concerning trend can be observed when comparing the different groups with respect to their lack of compute resources to reproduce experiments (\(Q13\), Fig. 8); where we find significant differences and conduct pairwise analyses.5 In general, students suffered most, with a significant difference compared to the large industry sector with a p-value of 0.002 \(<0.005=\alpha\) (Bonferroni-corrected). We further find substantial differences between students and academic PIs (p-value = 0.026) and between academic post-docs and large industry labs (p-value = 0.088).
Footnote 5: Kruskal-Wallis test: \(H^{5}\) = 12.486 \(>H_{0}^{5}\) = 9.488.
We find no substantial differences when it comes to actionable items for the *CL community (\(Q14\)-\(Q17\)), indicating that implementing popular ideas would be welcomed by all groups. However, we find some differences when it comes to encouraging the release of models (\(Q18\)). For instance, Figure 8(d) shows that academic post-docs had a higher preference towards reviewers rewarding papers that promise to release models than academic PIs. Also, participants from small industry would prefer visual branding over awards in contrast to large industry.
### Discussion and Recommendations
Our analysis shows that the two most pressing issues among our respondents are the lack of resources to reproduce results and reviewers requesting too expensive experiments without proper justification. This is reflected in the large support for both respective countermeasures; namely,
Figure 7: Analysis of how a lack of resources can affect research. In (a), we show what percentage of participants had been asked by reviewers for too expensive experiments (\(Q10\)) and if so, if they felt the critique was justified (\(Q12\)). In (b), we show how often our participants could not reproduce previous results due to a lack of computational resources (\(Q13\)).
asking reviewers to provide justification and the release of smaller models that would allow researchers to at least reproduce some experiments. Considering the success of badges at NAACL 2022 with 175 code, 98 model and 20 reproducibility badges, introducing an explicit badge for small model release could boost inclusiveness and reproducibility.6 To improve peer reviewing, one immediate action would be to adapt the ARR reviewing guidelines and instruct reviewers to consider the compute budget when asking for more experiments.7 For guidance, such an instruction should also be accompanied by an example that considers differences between the available compute power reported in a paper.
Footnote 6: [https://naacl2022-reproducibility-track-github.io/results/](https://naacl2022-reproducibility-track-github.io/results/)
Footnote 7: [https://aclrollingreview.org/reviewertutorial](https://aclrollingreview.org/reviewertutorial)
Among the 22 additional suggestions for \(Q18\), we find a high emphasis (68.2%) towards the release of artifacts--both because this facilitates future research and helps reproducibility. Moreover, 22 of the 67 general suggestion (\(Q19\)) also touched upon issues about model release and reviewing, highlighting the importance of both topics. The responses mentioned a remarkably wide variety of artifacts: code; trained models; system outputs (to facilitate comparative evaluations without rerunning the code); training checkpoints (to study the training dynamics); and proper documentation of training data (including crowdsourcing questions). In addition to simply releasing trained models, several respondents also wished for a sufficiently high quality of the released models complemented by code and documentation. One particular concern was how the release of artifacts should be integrated into the reviewing process. On the one hand, it seems useful to submit artifacts together with the paper before reviewing, so that reviewers can access them and to prevent breaking promises of future code release. On the other hand, this needs to happen within the constraints of double-blind reviewing. Finally, 12 of the free-text responses of \(Q18\) and \(Q19\) suggested that artifact release should be mandatory for acceptance.
## 6 Further Considerations
Finally, we discuss suggestions (\(Q19\)) that do not fit into any of the previously discussed topics. From the 67 free-text responses (21.5%), the two most prominent topics were evaluation (11 respondents) and a higher focus on research over engineering (7 respondents).
Evaluation. 16.4% of the free-text respondents touched upon the issue of evaluation and model comparability, as current benchmarks often focus on improving a single metric. One measure to counter this trend would be to report performance based on Pareto frontiers and to consider the compute budget along with the model performance. To promote such curves, it would also be important to release metadata, including preprocessing and hyperparameter choices, that allows future research to draw proper comparisons, as well as to provide concrete guidelines for reviewers.
Research vs. engineering. 10.4% of the free-text respondents further noted that the field seemed to have drifted more towards engineering by primarily chasing high performance, straying away from producing meaningful scientific insights. The respondents bring forward various suggestions to combat this; for instance, that authors clearly state their scientific hypothesis and then report research that tests this hypothesis using the lowest appropriate amount of resources. Other suggestions are to actively promote more theoretical or non-deep-learning work.
Other suggestions. Another suggestion worth mentioning was the creation of a separate track (four respondents), either specifically for small models or for industry that cannot publish their models. Finally, one respondent suggested a shared task with limited resources, which could be an interesting idea to approach further; e.g., by continuing the SustaiNLP 2020 shared task (Moosavi et al., 2020).
Figure 8: \(Q13\): Lack of resources by job sector.
## 7 Conclusion
We presented a first attempt to capture and quantify existing concerns about the environmental impact and equity within the *CL community. We further investigated the resulting implications on peer reviewing considering the increasing computational demand. A majority of our respondents were concerned regarding the environmental footprint of NLP experiments with model training and model selection being the most pressing issues. We also found a high disparity among our respondents with students and small industry researchers suffering most from a lack of resources. There was a large support for measures to improve equity and accessibility across all respondents. The most prominent are the inclusion of an efficiency track, asking reviewers to justify the petition for additional experiments, and the release of small versions of pretrained models.
## Limitations
We note that this survey is by no means a representative study within the whole *CL community. Instead, we advertised it throughout various
Figure 9: Analysis of responses on how to improve the reviewing process. In (a), we show the distribution of our participants’ responses for \(Q14\)–\(Q17\) (in %). A majority of our participants would submit to an efficiency track (\(Q14\)) and would prefer reviewers to justify a request for more experiments (\(Q16\)). They further would benefit from a release of smaller models (\(Q16\)). In contrast, the responses are more mixed about the authors justifying the compute budget (\(Q15\)). In (b–d), we show our participants’ responses on how to encourage the release of models (in %): (b) overall, (c) by seniority, (d) by job sector. Multiple responses were allowed for \(Q18\).
channels to receive a large number of responses. Part of this is reflected in the evaluation of the geographic locations, which were too coarse (e.g., no distinction between South and North America) to capture a more precise picture of existing geographic inequalities. Nonetheless, the fact that we did not receive any responses from bodies located in Africa indicates that there may still exist high disparities in terms of geographic location.
## Acknowledgements
This work was initiated at and benefited substantially from the Dagstuhl Seminar 22232: _Efficient and Equitable Natural Language Processing in the Age of Deep Learning_. We further thank Niranjan Balasubramanian, Jonathan Frankle, Michael Hassid, Kenneth Heafield, Sarah Hooker, Alexander Koller, Alexandra Sasha Luccioni, Alexander Loser, Andre F. T. Martins, Colin Raffel, Nils Reimers, Leonardo Riberio, Anna Rogers, Edwin Simpson, Noam Slonim, Noah A. Smith, and Thomas Wolf for a fruitful discussion and helpful feedback at the seminar.
|
2303.02173 | Flat-band optical phonons in twisted bilayer graphene | Twisting bilayer sheets of graphene have been proven to be an efficient way
to manipulate the electronic Dirac-like properties, resulting in flat bands at
magic angles. Inspired by the electronic model, we develop a continuum model
for the lattice dynamics of twisted bilayer graphene and we show that a
remarkable band flattening applies to almost all the high-frequency in-plane
lattice vibration modes, including the valley Dirac phonon, valley optical
phonon, and zone-center optical phonon bands. Utilizing an approximate
approach, we estimate small but finite magic angles at which a vanishing phonon
bandwidth is expected. In contrast to the electronic case, the existence of a
restoring potential prohibits the emergence of a magic angle in a more accurate
modeling. The predicted phonon band-flattening is highly tunable by the twist
angle and this strong dependence is directly accessible by spectroscopic tools. | Emmanuele Cappelluti, Jose Angel Silva-Guillén, Habib Rostami, Francisco Guinea | 2023-03-03T19:00:02Z | http://arxiv.org/abs/2303.02173v1 | # Flat-band optical phonons in twisted bilayer graphene
###### Abstract
Twisting bilayer sheets of graphene has been proven to be an efficient way to manipulate the electronic Dirac-like properties, resulting in flat bands at magic angles. Inspired by the electronic model, we develop a continuum model for the lattice dynamics of twisted bilayer graphene and we show that a remarkable band flattening applies to almost all the high-frequency in-plane lattice vibration modes, including the valley Dirac phonon, valley optical phonon, and zone-center optical phonon bands. Utilizing an approximate approach, we estimate small but finite magic angles at which a vanishing phonon bandwidth is expected. In contrast to the electronic case, the existence of a restoring potential prohibits the emergence of a magic angle in a more accurate modeling. The predicted phonon band-flattening is highly tunable by the twist angle and this strong dependence is directly accessible by spectroscopic tools.
_Introduction._ The exotic electronic, optical, and lattice properties of graphene have been enriched in the past few years by the additional possibility of manipulating two graphene layers with a finite twist angle. In twisted bilayer graphene (TBG), a complex phase diagram, including superconductivity, a Mott insulating phase, and a novel topology of the electronic bands, has been revealed [1; 2]. A key ingredient in this scenario is the existence of a non-trivial electronic structure with very narrow bandwidth, also known as flat bands, at the so-called magic angle [3; 4; 5], which has been analyzed using schemes based on either tight-binding models [3; 4] or continuum models [6; 7].
Nevertheless, along with the investigation focused on the electronic properties, a large interest has also recently arisen concerning the effects of twist on the lattice dynamics. The phonon spectrum in TBG has been studied theoretically [8], and experimentally [9]. Optical [10] and acoustical [11; 12] phonons have been investigated as possible origins of the observed superconductivity. A particular high-energy optical mode at the K and K\({}^{\prime}\) points has been extensively studied in TBG [13; 14; 15], as it gives rise to flat moire bands, and it couples strongly to electrons. These modes are also currently thought to be responsible for the remarkable D and 2D features in Raman spectroscopy of single-layer and multilayer graphene [16; 17; 18; 19; 20]. Concerning the possibility of a strong twist-driven renormalization of the phonon dispersion, calculations based on models of elastic systems have also been carried out. In Ref. [21] the emergence of a flat-band associated with out-of-plane flexural modes was shown. Similar results for the out-of-plane lattice modes were predicted for twisted "artificial" graphene systems [22]. In-plane lattice modes at the K and K\({}^{\prime}\) points, also characterized by Dirac physics, appear however as well, and are even more interesting. On the one hand, these modes were initially associated with the onset of the D and 2D Raman features [23; 24]. On the other hand, the same modes, in the presence of a symmetry breaking of the sublattices as in h-BN or in transition-metal dichalcogenides (TMDs), can host chiral content that enforces fundamental selection rules [25; 26; 27; 28; 29]. In this scenario, it is worth mentioning that flat-bands have been also predicted in moire structures of twisted two-dimensional TMDs [30; 31].
In this Letter, we investigate the effect of twist on the main high-energy (optical) modes at the high-symmetry points \(\Gamma\) and K of the phonon spectrum of TBG, with a special focus on the Dirac-like in-plane lattice modes at K. Using a force-constant (FC) model and a proper generalization of the continuum approach for the lattice phonon modes, we show that: (\(i\)) in-plane Dirac-phonons undergo upon twist a strong renormalization of the effective dispersion giving rise to flat-bands, in a similar way as Dirac-like electrons do; (\(ii\)) a "magic" angle, where the dispersion of these modes approaches zero, can be analytically predicted, and numerically observed, at twist angles remarkably larger than the ones required for the existence of flat bands in the electronic spectrum. Furthermore, we show that the appearance of flat bands is also predicted (at smaller twist angles) for the high-frequency transverse optical (TO) phonon at K and for the longitudinal-optical/transverse-optical (LO/TO) modes at the \(\Gamma\) point, rationalizing thus the numerical results of Ref. [13].
_The model._ A suitable continuum model for the lattice dynamics of TBG is derived from a FC model. Following the well-known scheme, we first construct the proper Hamiltonian for the single-layer, and for the representative limit cases of AA and AB bilayer stacking. The lattice dynamics for the twisted system is further obtained by including the appropriate tunneling between a \(\mathbf{q}\) vector in one layer with a \(\mathbf{q}+\mathbf{Q}_{\nu}\) vector in the other layer, where \(\mathbf{Q}_{\nu}\) are the characteristic tunnelling momenta, just as for the electronic case. In order to focus on the physics of the Dirac phonons, we restrict our model to in-plane lattice displacements responsible for the Dirac modes, defining an 8-fold Hilbert basis, \(u_{\alpha,i}(\mathbf{q})\), corresponding to the lattice displacements of the 4 atoms in the \(x\)-\(y\) space. Here \(i=x,y\) are the Cartesian indices and \(\alpha=\) A\({}_{1}\), B\({}_{1}\), A\({}_{2}\), B\({}_{2}\) labels the atoms in the sublattice A, B in layer 1, 2.
The phonon band structure is thus obtained by the solution of the secular equation:
\[M\,\hat{\omega}^{2}(\mathbf{q})\cdot\mathbf{u}(\mathbf{q})=\mathbf{\hat{K}}( \mathbf{q})\cdot\mathbf{u}(\mathbf{q}), \tag{1}\]
where \(M\) is the carbon mass, \(\hat{\omega}^{2}(\mathbf{q})\) the diagonal matrix of the square frequencies, and \(\mathbf{\hat{K}}(\mathbf{q})\) the dynamical matrix that takes into account the elastic couplings between different carbon atoms. In order to provide the clearest analytical insight on the manipulation of the Dirac lattice modes, we include the minimum set of FC parameters preserving the relevant physics. More explicitly, in single-layer we include elastic coupling only between in-plane nearest neighbor atoms, described by two parameters, \(f_{\parallel}\), and \(f_{\perp}\), ruling the relative radial and in-plane tangential lattice displacements between neighbor atoms at interatomic distance \(a\) (see Fig. 1a). The coupling between different layers in the AA and AB structure is thus described by two more kinds of elastic forces (see Figs. 1b,c) : \(f^{\prime}_{\perp}\) connecting vertically two atoms atop each other at the distance \(c\); and \(f^{\prime\prime}\), connecting atoms in different layers at distance \(R=\sqrt{a^{2}+c^{2}}\), with the relevant components \(f^{\prime\prime}_{\parallel}\) and \(f^{\prime\prime}_{\perp}\), governing respectively the relative in-plane longitudinal and transverse displacement of two atoms with respect to their joining vector. The resulting dynamical matrix can thus be written as:
\[\mathbf{\hat{K}}(\mathbf{q})=\mathbf{\hat{K}}^{f}(\mathbf{q})+\mathbf{\hat{K}} ^{f^{\prime}}(\mathbf{q})+\mathbf{\hat{K}}^{f^{\prime\prime}}(\mathbf{q}). \tag{2}\]
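As an illustration of how Eqs. (1)-(2) are used in practice (a minimal numerical sketch of ours, not the actual code behind this work), for each \(\mathbf{q}\) one assembles the \(8\times 8\) dynamical matrix and diagonalizes \(\mathbf{\hat{K}}(\mathbf{q})/M\); the square roots of its eigenvalues give the phonon frequencies:

```python
import numpy as np

def phonon_frequencies(K_q, M):
    """Solve M w^2 u = K(q) u for a Hermitian dynamical matrix K_q
    (8x8 here: in-plane x,y displacements of the four atoms A1, B1, A2, B2,
    all with the same carbon mass M). Frequencies come out in units of sqrt([K]/[M])."""
    w2 = np.linalg.eigvalsh(np.asarray(K_q) / M)   # eigenvalues of K(q)/M
    return np.sqrt(np.clip(w2, 0.0, None))         # clip tiny negative numerical noise
```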
_Dirac phonons at K._ The Dirac phonons at the K point are more conveniently described by introducing a chiral basis \(\tilde{u}_{\alpha,\nu}(\mathbf{q})\), where \(\nu=\) R, L and \(\tilde{u}_{\alpha,\text{R}/\text{L}}=(u_{\alpha,x}\pm iu_{\alpha,y})/\sqrt{2}\). The dynamical matrices for the AA and AB structures in this basis read:
\[\mathbf{\hat{K}}^{f}(\mathbf{q}) = \sum_{\nu}\hat{f}_{\nu}\hat{\sigma}_{0}\left[\hat{\tau}_{0}-\tau^ {\prime}_{\nu}(\mathbf{q})\hat{\tau}_{x}+\pi^{\prime\prime}_{\nu}(\mathbf{q}) \hat{\tau}_{y}\right], \tag{3}\] \[\mathbf{\hat{K}}^{f^{\prime}_{\nu}}_{\text{AA}}(\mathbf{q}) = \hat{f}^{\prime}_{\perp}\left[\hat{\sigma}_{0}+\hat{\sigma}_{x} \right]\hat{\tau}_{0},\] (4) \[\mathbf{\hat{K}}^{f^{\prime}_{\mu}}_{\text{AB}}(\mathbf{q}) = \hat{f}^{\prime}_{\perp}\left[\hat{\sigma}_{0}\hat{\tau}_{0}-\hat{ \sigma}_{z}\hat{\tau}_{z}+\hat{\sigma}_{x}\hat{\tau}_{x}+\hat{\sigma}_{y}\hat {\tau}_{y}\right]/2,\] (5) \[\mathbf{\hat{K}}^{f^{\prime\prime}}_{\text{AA}}(\mathbf{q}) = \sum_{\nu}\hat{f}^{\prime\prime}_{\nu}[\hat{\sigma}_{0}\hat{\tau }_{0}-\tau^{\prime}_{\nu}(\mathbf{q})\hat{\sigma}_{x}\hat{\tau}_{x}+i\pi^{ \prime\prime}_{\nu}(\mathbf{q})\hat{\sigma}_{y}\hat{\tau}_{y}]\,,\] (6) \[\mathbf{\hat{K}}^{f^{\prime\prime}}_{\text{AB}}(\mathbf{q}) = \sum_{\nu}\hat{f}^{\prime\prime}_{\nu}\{[3\hat{\sigma}_{0}\hat{ \tau}_{0}-\hat{\sigma}_{z}\hat{\tau}_{z}]/2\] \[-\sum_{\nu}\hat{f}^{\prime\prime}_{\nu}\pi^{\prime}_{\nu}(\mathbf{ q})[\hat{\sigma}_{x}\hat{\tau}_{0}-(\hat{\sigma}_{x}\hat{\tau}_{x}-\hat{\sigma}_{x} \hat{\tau}_{x})/2]\] \[+\sum_{\nu}\hat{f}^{\prime\prime}_{\nu}\pi^{\prime\prime}_{\nu}( \mathbf{q})[i\hat{\sigma}_{y}\hat{\tau}_{0}-(\hat{\sigma}_{x}\hat{\tau}_{y}+ \hat{\sigma}_{y}\hat{\tau}_{x})/2]\},\]
where \(\hat{\tau}_{i}\) are Pauli matrices acting in the (A, B) sublattice space, \(\hat{\sigma}_{i}\) are Pauli matrices acting in the layer space, and \(\hat{f}_{\nu}\), \(\hat{f}^{\prime}_{\nu}\), \(\hat{f}^{\prime\prime}_{\nu}\), \(\hat{f}^{\prime\prime}_{\nu}\) are \(2\times 2\) matrices defined in the (R,L) chiral space, whose explicit expressions are reported in the Supplementary Material (SM) [32]. The index \(\nu\) runs over the three vectors of the in-plane nearest neighbor B atoms with respect to an atom A, determining also the effective phonon dispersion by the non-local factors \(\pi^{\prime}_{\nu}(\mathbf{q})=\text{Re}\{\exp[i\mathbf{q}\cdot\mathbf{\delta}_{ \nu}]\}\), \(\pi^{\prime\prime}_{\nu}(\mathbf{q})=\text{Im}\{\exp[i\mathbf{q}\cdot\mathbf{\delta}_ {\nu}]\}\), \(\mathbf{\delta}_{1}=(1,0)\), \(\mathbf{\delta}_{2}=(-1/2,\sqrt{3}/2)\), \(\mathbf{\delta}_{3}=(-1/2,-\sqrt{3}/2)\). Note that the term \(\mathbf{\hat{K}}^{f}(\mathbf{q})\) in the dynamical matrix does not depend on the specific AA or AB (or twisted) structure since it is purely related to intra-layer physics.
Without interlayer coupling, the phonon dispersion exhibits two degenerate Dirac cones at the K point, emerging from the longitudinal acoustic (LA) and longitudinal optical (LO) branches for each layer.
Figure 1: Force-constant model for the untwisted cases. (a) Single-layer graphene (top) and its phonon dispersion calculated using the model. Only the elastic coupling \(f\) (solid black lines) between nearest neighbor atoms is retained. The colored arrows denote the lattice displacements coupled with the two elastic components \(f_{\parallel}\) (red) and \(f_{\perp}\) (green). (b),(c) AB and AA bilayer graphene, respectively. The two further interlayer elastic couplings are shown, \(f^{\prime}\) (dotted blue lines) and \(f^{\prime\prime}\) (dashed purple lines).
In an AA structure, these cones split into two, while only one survives in AB stacking and the other one is gapped due to the interlayer coupling. To determine the intralayer FC parameters \(f_{\parallel}\) and \(f_{\perp}\) we fix the energy of the single-layer Dirac point \(\omega_{0}=\sqrt{3(f_{\parallel}+f_{\perp})/2M}\), and their Dirac velocity \(v=\omega_{0}a(f_{\parallel}-f_{\perp})/4(f_{\parallel}+f_{\perp})\). The other three interlayer elastic parameters \(f_{\perp}^{\prime}\), \(f_{\parallel}^{\prime\prime}\), \(f_{\perp}^{\prime\prime}\) can be determined by fixing the energies of the two Dirac cones in the AA structure, \(\omega_{\rm AA,\pm}\)[33], and by the splitting energy of the single-degenerate levels in the AB stacking, \(\omega_{\rm AB,\pm}\)[32]. Using first-principles calculations [34] (see SM [32]), we obtain \(f_{\parallel}=23.882\) meV/Å\({}^{2}\), \(f_{\perp}=19.973\) meV/Å\({}^{2}\), \(f_{\perp}^{\prime}=-0.143\) meV/Å\({}^{2}\), \(f_{\parallel}^{\prime\prime}=0.090\) meV/Å\({}^{2}\), \(f_{\perp}^{\prime\prime}=0.059\) meV/Å\({}^{2}\).
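A sketch of this fitting step (ours, with unit bookkeeping left implicit: consistent units for \(\omega_{0}\), \(v\), \(M\) and \(a\) are assumed) simply inverts the two relations above for the intralayer constants:

```python
def intralayer_force_constants(omega0, v, M, a):
    """Invert omega0 = sqrt(3(f_par + f_perp)/(2M)) and
    v = omega0 * a * (f_par - f_perp) / (4 (f_par + f_perp))
    for the two intralayer force constants f_par, f_perp."""
    s = 2.0 * M * omega0**2 / 3.0        # f_par + f_perp
    d = 4.0 * v * s / (omega0 * a)       # f_par - f_perp
    return (s + d) / 2.0, (s - d) / 2.0  # f_par, f_perp
```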
Utilizing the dynamical matrix of two uncoupled layers and that of the AA and AB structures, we construct a continuum model in the twisted case. Here we investigate the effects of twist on the properties of a few selected in-plane lattice modes, namely the Dirac phonons at the K point emerging from the LA and LO branches, and the non-degenerate high-frequency TO mode at the K point. Furthermore, we study the degenerate LO and TO modes at the \(\Gamma\) point. For Dirac phonons, we restrict the analysis to the relevant 4-fold Hilbert sub-space containing the left-hand chiral displacements for the A1/A2 atoms, and the right-hand chiral displacements for the B1/B2 atoms. The \(4\times 4\) dynamical matrices so obtained read:
\[\hat{\mathcal{K}}_{\rm AA}(\tilde{\mathbf{q}}) =v\hat{\sigma}_{0}[\tilde{q}_{x}\tilde{\tau}_{x}+\tilde{q}_{y} \tilde{\tau}_{y}]+V_{\rm AA}\hat{\sigma}_{0}\tilde{\tau}_{0}-f_{\perp}^{\prime }\hat{\sigma}_{x}\tilde{\tau}_{0}, \tag{8}\] \[\hat{\mathcal{K}}_{\rm AB}(\tilde{\mathbf{q}}) =v\hat{\sigma}_{0}[\tilde{q}_{x}\tilde{\tau}_{x}+\tilde{q}_{y} \tilde{\tau}_{y}]+V_{\rm AB,0}\hat{\sigma}_{0}\tilde{\tau}_{0}+V_{\rm AB,z} \hat{\sigma}_{z}\hat{\tau}_{z}\] (9) \[-3(f_{\parallel}^{\prime\prime}-f_{\perp}^{\prime\prime})/4[\hat {\sigma}_{x}\hat{\tau}_{x}+\hat{\sigma}_{y}\hat{\tau}_{y}],\]
where \(\tilde{q}_{i}\) are wave-vectors measured with respect to the K point and where the parameters \(V_{\rm AA}\), \(V_{\rm AB,0}\), \(V_{\rm AB,z}\) are ruled by the interlayer force constants (for an explicit expression see the SM [32]).
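To make the layer \(\otimes\) sublattice structure of Eq. (8) explicit, a small numerical sketch (ours; the values of \(v\), \(V_{\rm AA}\) and \(f^{\prime}_{\perp}\) below are placeholders to be taken from the force-constant fit and the SM) can be written as:

```python
import numpy as np

# Pauli matrices; sigma acts in layer space, tau in sublattice space
s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def K_AA(qx, qy, v, V_AA, f_perp_prime):
    """4x4 continuum dynamical matrix of Eq. (8) in the layer (x) sublattice basis."""
    return (v * (qx * np.kron(s0, sx) + qy * np.kron(s0, sy))
            + V_AA * np.kron(s0, s0)
            - f_perp_prime * np.kron(sx, s0))

# Example with placeholder numbers: at the K point (qx = qy = 0) the eigenvalues
# are V_AA -/+ f'_perp, each doubly degenerate, i.e. the two degenerate Dirac
# cones of the uncoupled layers split in AA stacking.
print(np.linalg.eigvalsh(K_AA(0.0, 0.0, v=1.0, V_AA=0.5, f_perp_prime=0.1)))
# -> [0.4, 0.4, 0.6, 0.6]
```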
Eqs. (8)-(9) provide the basis for assessing the evolution of the Dirac phonons in TBG within a continuum model. Following a similar approach as for electrons, we describe the dynamical matrix for a twisted bilayer by interpolating the off-diagonal blocks of the AA and AB matrices in Eqs. (8)-(9) [6]. We find that the equivalent of an AA and AB interlayer tunneling are ruled by the terms:
\[t_{\rm AA} =-f_{\perp}^{\prime}/3, \tag{10}\] \[t_{\rm AB} =-(f_{\parallel}^{\prime\prime}-f_{\perp}^{\prime\prime})/2. \tag{11}\]
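Plugging the force constants quoted above into Eqs. (10)-(11) gives the two interlayer tunnelling amplitudes; a back-of-the-envelope check (in the same units as the force constants) reads:

```python
f_perp_prime = -0.143   # f'_perp      (same units as quoted above)
f_par_pp     =  0.090   # f''_parallel
f_perp_pp    =  0.059   # f''_perp

t_AA = -f_perp_prime / 3                # Eq. (10):  ~ 0.048
t_AB = -(f_par_pp - f_perp_pp) / 2      # Eq. (11):  ~ -0.016
```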
Figure 2: Evolution with the twist angle of the phonon dispersion in the moiré Brillouin zone for the LO/LA modes at K (top panels), the TO mode at K (middle panels), and the TO modes at \(\Gamma\) (bottom panels). Left panels are for twist angle \(\theta=4^{\circ}\), central panels for \(\theta=1.05^{\circ}\). In panel (b) the relevant Dirac modes in the twisted cases are highlighted in red color. In the right panels we show the band renormalization factor \(R\) for each mode as a function of twist angle.
Furthermore, we notice that the diagonal elements of Eqs. (8)-(9) give rise to effective local potentials which are different for different local stackings, and hence in different regions in real space corresponding to an AA, AB or BA stacking [32]. These potentials can be expanded in reciprocal lattice vectors similarly to the way electrostatic potentials are incorporated into the continuum model for electronic bands of TBG [35]. Including in this scheme these local potentials, we show in Fig. 2a,b the evolution with the twist angle of the phonon dispersion close to the in-plane Dirac energies in the moire Brillouin zone. We can notice an overall upward energy shift of all the phonon frequencies, stemming from the presence of such local potentials. Moreover, the phonon dispersion still shows a dispersive Dirac behavior close to K for \(\theta=4^{\circ}\), with a linear dispersion velocity comparable with that of the single-layer. Interestingly, such dispersion appears much flatter at \(\theta=1.05^{\circ}\), signaling a remarkable band renormalization. In order to get a qualitative estimate of a possible "phonon magic angle", we can employ the standard approach of truncating the interlayer tunneling only to the first set of three momenta \(\mathbf{Q}_{\nu}\)[5] and we get \(\bar{\theta}_{\mathrm{LO/LA}}\approx 2.1^{\circ}\) (see SM [32]). Such a picture is supported by a quantitative analysis based on multi-interlayer scattering, including local potentials. Within this framework, following Ref. [5], the flattening of the LO/LA phonon bands can be parameterized in terms of the renormalization factor \(R=V^{*}/V\) of the Dirac phonon velocity \(V^{*}\) in the twisted case with respect to the one in the single-layer, \(V\). The twist-angle dependence of \(R\) for the full multi-scattering continuum model is plotted in Fig. 2c, showing a marked depletion for twist angles \(\lesssim 2^{\circ}\). Nevertheless, although such depletion, for these as for other lattice modes, never reaches a perfect flattening because of the role of the local potentials, the qualitative estimate of the phonon magic angle can properly capture the correct range of twist angles where a strong phonon band renormalization occurs.
_TO phonon at K_. The analysis done for Dirac phonons at K can be also extended to the TO phonon at K, which induces intervalley scattering for the electrons [13]. Following the usual scheme, the phonon wavefunction is expanded in plane waves in the two layers, where the plane wave in one layer is transferred as a superposition of three plane waves in the neighboring layer (see SM [32] for more details). The main differences with respect to the LO/LA modes are: \(i)\) The monolayer TO phonon is not degenerate at K, so that the spinor (sublattice) degree of freedom disappears; \(ii)\) The dispersion in the monolayer is quadratic in \(\mathbf{\tilde{q}}\). The coupling between layers includes a diagonal restoring term, which changes from the AA to the AB and BA regions, and a single interlayer coupling, which also depends on the position within the unit cell. This coupling is finite in the AA region, and it vanishes in the AB and BA regions [32]. Hence, the model includes four parameters, which can be readily obtained from the FCs discussed above. The model used here resembles the ones used for the conduction band edge of MoS\({}_{2}\) (located at the K and K\({}^{\prime}\) points)[36; 37]. The representative plots of the TO phonon dispersion in the moire Brillouin zone are shown in Fig. 2d,e, and the angle dependence of the appropriate band renormalization for the TO modes is depicted in Fig. 2f. Using the standard approximate model restricted to the first star of Bloch waves and neglecting the diagonal restoring forces, we can obtain also for these modes an estimate for the magic angle at which the prefactor of the quadratic dispersion at K vanishes [32]. We obtain \(\bar{\theta}_{TO}\approx 1.0^{\circ}\), which is qualitatively consistent with the results shown in Fig. 2f.
_Optical phonons at \(\Gamma\)_. The continuum model for the optical phonons at \(\Gamma\) in TBG is particularly simplified by the fact that each plane-wave in one layer just tunnels into a single plane-wave in the other layer. As detailed in the SM [32], the interlayer forces thus couple separately the LO and the TO modes. One can further divide modes with even and odd symmetry with respect to the vertical axis. The LO and TO modes of the single-layer evolve thus in TBG into four independent bands with a quadratic dispersion which is ruled by different combinations of the FC parameters, and hence with four different behaviors for the band renormalization [32]. The model resembles electronic models used for the valence band edge of MoS\({}_{2}\) (located at the \(\Gamma\) point)[38; 39; 40]. The plots of the phonon dispersion of the TO modes with even and odd symmetry for different twist angles are shown in Fig. 2g,h, and the angle dependence of the effective band renormalization in Fig. 2i. Similar results (not shown) are obtained for the LO modes.
_Discussion_. We have analyzed the optical phonons of TBG, by introducing proper continuum models originally devised for the electronic structure. For all the three cases studied, LO/LA modes at K, TO modes at K and LO/TO modes at \(\Gamma\), we find a remarkable flattening of the superlattice phonon bands at low twist angles, starting at higher values than the "magic angles" where electronic flat bands appear. The onset of such flat phonon bands is expected to tune the optical properties of TBG in the infrared frequency range, providing a possible tool for twist characterization. LO/TO modes are directly probed by one-phonon Raman and infrared spectroscopy in bilayer graphene [41], with intensities and selection rules that depend crucially on the bilayer stacking order and on the \(z\)-axis symmetry [42; 43; 44; 45; 46], and hence on twisting [47; 48; 49; 50]. TO modes, and their dispersions close to the K point, are also commonly observed by means of double-resonance processes D and 2D [16; 17; 18; 19; 20]. Finally, although a direct contribution of the LO/LA modes at K to the Raman phonon spectroscopy is not well-assessed [16; 18; 23; 24], these modes bear a promising relevance for quantum devices since, obeying a similar Dirac quantum structure, they are expected to show a similar rich complexity as the electronic degree of freedom.
It is also worth mentioning that the same modes in the presence of mass disproportion (e.g. in h-BN) host chiral phonon states supporting a finite lattice angular momentum [25; 26; 27; 28], with possible applications towards suitable (lattice-based) quantum two-level systems [51; 52].
_Acknowledgments_ All the authors thank T. Cea for useful discussions. IMDEA Nanociencia acknowledges support from the "Severo Ochoa" Programme for Centres of Excellence in R&D (CEX2020-001039-S / AEI / 10.13039/501100011033). F.G. acknowledges funding from the European Commission, within the Graphene Flagship, Core 3, grant number 881603 and from grants NMAT2D (Comunidad de Madrid, Spain), SprQuMat (Ministerio de Ciencia e Innovacion, Spain) and financial support through the (MAD2D-CM)-MRR MATERIALS AVANZADOS-IMDEA-NC. E.C. acknowledges financial support from PNRR MUR project PE0000023-NQSTI. H.R. acknowledges the support from the Swedish Research Council (VR Starting Grant No. 2018-04252).
**Supplemental Material for:**
**Flat-band optical phonons in twisted bilayer graphene**
## I Force-constant model
In this Section, we provide details about the force-constant model employed in the present paper to describe the lattice dynamics in monolayer, bilayer and twisted bilayer graphene. We focus here only on in-plane lattice displacements that, due to their mixing of \(x\) and \(y\) components, show the most interesting physics. A similar model can be employed for out-of-plane lattice displacements.
The building blocks of such a model are forces between nearest-neighbor pairs of atoms connected by a vector \(\mathbf{r}_{ij}\). Two main components can thus be identified: a parallel one with respect to \(\mathbf{r}_{ij}\) (central forces), and one perpendicular to this vector (transverse forces) and lying in the graphene plane. A third component, orthogonal to the other two, can also be included, but it mainly rules the out-of-plane lattice dynamics and does not play any relevant role in the present context.
### Single-layer graphene
For an isolated graphene layer the dynamical matrix is defined in terms of the atomic lattice displacements, \(\{x_{A},y_{A},x_{B},y_{B}\}\), where \(A\), \(B\) are the sublattice labels. Following the above notation, we introduce the central and transverse forces governed by the parameters \(f_{\parallel}\) and \(f_{\perp}\), respectively. The resulting \(4\times 4\) matrix thus reads:
\[H(\vec{k})=\left(\begin{array}{cc}H_{AA}(\vec{k})&H_{AB}(\vec{k})\\ H_{AB}^{\dagger}(\vec{k})&H_{BB}(\vec{k})\end{array}\right),\] (S1)
where
\[H_{AA}(\vec{k})=H_{BB}(\vec{k})=\left(\begin{array}{cc}3\left[f_{\parallel}+ f_{\perp}\right]/2&0\\ 0&3\left[f_{\parallel}+f_{\perp}\right]/2\end{array}\right),\] (S2)
and
\[H_{AB}(\vec{k})=\left(\begin{array}{cc}3f_{\parallel}\left[e_{\vec{k},a}+e _{\vec{k},b}\right]/4+f_{\perp}\left[1+(e_{\vec{k},a}+e_{\vec{k},b})/4\right]& -\sqrt{3}\left[f_{\parallel}-f_{\perp}\right]\left(e_{\vec{k},a}-e_{\vec{k},b }\right)/4\\ \sqrt{3}\left[f_{\parallel}-f_{\perp}\right]\left(e_{\vec{k},a}-e_{\vec{k},b} \right)/4&f_{\parallel}\left[1+(e_{\vec{k},a}+e_{\vec{k},b})/4\right]+3f_{ \perp}\left(e_{\vec{k},a}+e_{\vec{k},b}\right)/4\end{array}\right).\] (S3)
Here \(e_{\vec{k},i}=\exp[i\vec{k}\cdot\vec{R}_{i}]\) (\(i=a,b\)), where \(\vec{R}_{a}=a(1/2,-\sqrt{3}/2)\), \(\vec{R}_{b}=a(-1/2,\sqrt{3}/2)\), and \(a\approx 2.46\) A is the lattice constant.
The model in Eqs. (S1)-(S3) has simple solutions at high symmetry points:
\[M\omega_{\alpha}^{2}(\Gamma) =\left\{0,0,3\left(f_{\parallel}+f_{\perp}\right),3\left(f_{ \parallel}+f_{\perp}\right)\right\},\] \[M\omega_{\alpha}^{2}(K) =\left\{3f_{\parallel},3f_{\perp},\frac{3\left(f_{\parallel}+f_{ \perp}\right)}{2},\frac{3\left(f_{\parallel}+f_{\perp}\right)}{2}\right\},\] \[M\omega_{\alpha}^{2}(M) =\left\{2f_{\parallel},2f_{\perp},3f_{\parallel}+f_{\perp},f_{ \parallel}+3f_{\perp}\right\},\] (S4)
where the index \(\alpha\) labels the phonon branch and the parameter \(M\) is the mass of the carbon atom.
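As an illustrative cross-check (not part of the original SM), Eq. (S4) can be evaluated numerically. The sketch below assumes the force-constant values quoted later in Table S3, a carbon mass of 12.011 u, and standard unit conversions (these numerical inputs are assumptions of the illustration); it converts the eigenvalues of the dynamical matrix to frequencies in cm\({}^{-1}\):

```python
import numpy as np

# Force constants (eV/Angstrom^2), values quoted in Table S3 below
f_par, f_perp = 23.882, 19.973
M = 12.011 * 1.66054e-27            # carbon mass in kg (assumed here)
eVA2_to_Nm = 1.602177e-19 / 1e-20   # eV/Angstrom^2 -> N/m

def to_cm1(M_omega2):
    """Convert an eigenvalue M*omega^2 of the dynamical matrix (eV/A^2) to cm^-1."""
    omega = np.sqrt(M_omega2 * eVA2_to_Nm / M)      # rad/s
    return omega / (2 * np.pi * 2.99792458e10)       # cm^-1

# Eq. (S4): closed-form eigenvalues at the high-symmetry points
modes = {
    "Gamma": [0, 0, 3*(f_par + f_perp), 3*(f_par + f_perp)],
    "K":     [3*f_par, 3*f_perp, 1.5*(f_par + f_perp), 1.5*(f_par + f_perp)],
    "M":     [2*f_par, 2*f_perp, 3*f_par + f_perp, f_par + 3*f_perp],
}
for point, vals in modes.items():
    print(point, [round(to_cm1(v), 1) for v in vals])
```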
We focus on characteristic modes at the high-symmetry points \(\Gamma\), K, relevant for the twisted system.
The first ones are the double-degenerate high-frequency states at the \(\Gamma\) point, which represent the longitudinal optical (LO) and the transverse optical (TO) modes. These modes determine the full phonon bandwidth, given by the energy
\[M\omega_{LO/TO}^{2}(\Gamma)=3\left(f_{\parallel}+f_{\perp}\right).\] (S5)
The corresponding eigenvectors for these modes are:
\[\mathbf{\epsilon}_{LO/TO,x}(\Gamma)=\frac{1}{\sqrt{2}}\left(\begin{array}{c}1 \\ 0\\ -1\\ 0\end{array}\right),\hskip 28.452756pt\mathbf{\epsilon}_{LO/TO,y}(\Gamma)=\frac{1}{ \sqrt{2}}\left(\begin{array}{c}0\\ 1\\ 0\\ -1\end{array}\right).\] (S6)
Quite relevant is also the evolution of the TO branch at the K point, which leads to the high-frequency mode at the K point, with energy
\[M\omega_{TO}^{2}(K)=3f_{\parallel},\] (S7)
and a typical eigenvector for transverse-optical displacements:
\[\boldsymbol{\epsilon}_{TO}(K)=\frac{1}{2}\left(\begin{array}{c}1\\ -i\\ -1\\ -i\end{array}\right).\] (S8)
The spectrum at the K point is further characterized by the doublet with energy
\[M\omega_{LO/LA}^{2}(K)=3\left(f_{\parallel}+f_{\perp}\right)/2.\] (S9)
The eigenstates for these modes at the K point can be written as:
\[\boldsymbol{\epsilon}_{LO/LA,+}(K)=\frac{1}{\sqrt{2}}\left(\begin{array}{c} 1\\ i\\ 0\\ 0\end{array}\right),\hskip 28.452756pt\boldsymbol{\epsilon}_{LO/LA,-}(K)=\frac{1}{ \sqrt{2}}\left(\begin{array}{c}0\\ 0\\ 1\\ -i\end{array}\right)\] (S10)
The eigenstates at the K\({}^{\prime}\) point can be obtained by reversing the sign in front of the imaginary terms. Using such eigenstates as a reduced Hilbert space, and a \(\mathbf{k}\cdot\mathbf{p}\) expansion, the dynamical matrix \(\tilde{\mathcal{H}}\) restricted to the vicinity of the \(K\) and \(K^{\prime}\) points can thus be approximated as:
\[\mathcal{H}_{LO/LA}(\vec{k})=\left(\begin{array}{cc}3\left(f_{\parallel}+f _{\perp}\right)/2&Va\left(\xi k_{x}+ik_{y}\right)\\ Va\left(\xi k_{x}-ik_{y}\right)&3\left(f_{\parallel}+f_{\perp}\right)/2\end{array}\right)\] (S11)
where \(\xi=\pm 1\) for the K, K\({}^{\prime}\) respectively. The phonon dispersion for these modes close to the K point can be written thus as:
\[E_{LO/LA}^{\pm}(\vec{k})=M\omega_{LO/LA}^{2}(K)\pm Va|\vec{k}|=\frac{3}{2}\left(f_{\parallel}+f_{\perp}\right)\pm Va|\vec{k}|.\] (S12)
The term \(V=\sqrt{3}|f_{\parallel}-f_{\perp}|/4\) rules here the linear Dirac dependence of the dynamical matrix close to the K/K\({}^{\prime}\) point, not to be confused with the linear slope of the phonon dispersion.
### Graphene bilayers.
The force-constant model described above for single-layer graphene can be extended in a compelling way to graphene bilayers. In the following we analyze in detail the AA and AB stacking structures, whereas the model for the BA stacking can be obtained from the AB one by switching the sublattice indices. We include interlayer force constants only up to second-nearest-neighbor carbon pairs. In both cases interlayer nearest neighbors are represented by carbon atoms lying directly on top of each other, at the interlayer distance \(d\approx 3.4\) Å. The only elastic term between these atoms is the transverse one, ruled by \(f_{\perp}^{\prime}\). Second interlayer nearest neighbors are represented by atoms at the interatomic distance \(\sqrt{d^{2}+a^{2}/3}\approx 3.5\) Å. In this case both the parallel and the transverse components need to be taken into account, parametrized by the terms \(f_{\parallel}^{\prime\prime},f_{\perp}^{\prime\prime}\).
Using the 8-fold spinor represented by the lattice displacements for both layers, for a generic bilayer graphene system, the dynamical matrix can be written as:
\[H^{2L}(\vec{k})=H^{1L+1L}(\vec{k})+\delta H(\vec{k}),\] (S13)
where \(H^{1L+1L}(\vec{k})\) is the dynamical matrix for the two decoupled layers and \(\delta H(\vec{k})\) takes into account the interlayer forces. It is clear that the \(8\times 8\) matrix \(H^{1L+1L}(\vec{k})\) can be written as a block-diagonal matrix,
\[H^{1L+1L}(\vec{k})=\left(\begin{array}{cc}H(\vec{k})&0\\ 0&H(\vec{k})\end{array}\right),\] (S14)
whereas
\[\delta H(\vec{k})=\left(\begin{array}{cc}\delta H_{11}(\vec{k})&\delta H_{12}( \vec{k})\\ \delta H_{12}^{\dagger}(\vec{k})&\delta H_{22}(\vec{k})\end{array}\right),\] (S15)
which contains both block on-diagonal terms, resulting from the quadratic elastic contributions associated with single-atom lattice displacements, and block off-diagonal terms which describe the effective interlayer elastic forces.
More explicitly, for the AA and AB structures we have, respectively:
\[\delta H_{11}^{AA}(\vec{k})=\delta H_{22}^{AA}(\vec{k})=\left( \begin{array}{cccc}f_{\perp}^{\prime}+\dfrac{3\left(f_{\parallel}^{\prime \prime}+f_{\perp}^{\prime\prime}\right)}{2}&0&0&0\\ 0&f_{\perp}^{\prime}+\dfrac{3\left(f_{\parallel}^{\prime\prime}+f_{\perp}^{ \prime\prime}\right)}{2}&0&0\\ 0&0&f_{\perp}^{\prime}+\dfrac{3\left(f_{\parallel}^{\prime\prime}+f_{\perp}^ {\prime\prime}\right)}{2}&0\\ 0&0&0&f_{\perp}^{\prime}+\dfrac{3\left(f_{\parallel}^{\prime\prime}+f_{\perp }^{\prime\prime}\right)}{2}\end{array}\right),\] (S16)
and
\[\delta H_{11}^{AB}(\vec{k})=\left(\begin{array}{cccc}f_{\perp}^ {\prime}+\dfrac{3\left(f_{\parallel}^{\prime\prime}+f_{\perp}^{\prime\prime} \right)}{2}&0&0&0\\ 0&f_{\perp}^{\prime}+\dfrac{3\left(f_{\parallel}^{\prime\prime}+f_{\perp}^{ \prime\prime}\right)}{2}&0&0\\ 0&0&3\left(f_{\parallel}^{\prime\prime}+f_{\perp}^{\prime\prime}\right)&0\\ 0&0&0&3\left(f_{\parallel}^{\prime\prime}+f_{\perp}^{\prime\prime}\right) \end{array}\right)\] \[\delta H_{\vec{k}}^{AB,22}=\left(\begin{array}{cccc}3\left(f_{ \parallel}^{\prime\prime}+f_{\perp}^{\prime\prime}\right)&0&0&0\\ 0&3\left(f_{\parallel}^{\prime\prime}+f_{\perp}^{\prime\prime}\right)&0&0\\ 0&0&f_{\perp}^{\prime}+\dfrac{3\left(f_{\parallel}^{\prime\prime}+f_{\perp}^ {\prime\prime}\right)}{2}&0\\ 0&0&0&f_{\perp}^{\prime}+\dfrac{3\left(f_{\parallel}^{\prime\prime}+f_{\perp}^ {\prime\prime}\right)}{2}\end{array}\right)\] (S17)
In a similar way, the interlayer forces for a generic bilayer structure read:
\[\delta H_{12}^{\alpha}(\vec{k})=\left(\begin{array}{cc}\delta H_{12,AA}^{ \alpha}(\vec{k})&\delta H_{12,AB}^{\alpha}(\vec{k})\\ \delta H_{12,BA}^{\alpha}(\vec{k})&\delta H_{12,BB}^{\alpha}(\vec{k})\end{array} \right).\] (S18)
Note that here the upper label \(\alpha=AA,AB\) denotes the stacking order, whereas the indices AA, AB in the subscript represent the sublattice label for each layer. Using these notations, we obtain:
\[\delta H_{12,AA}^{AA}(\vec{k})=\delta H_{12,BB}^{AA}(\vec{k})= \left(\begin{array}{cc}-f_{\perp}^{\prime}&0\\ 0&-f_{\perp}^{\prime}\end{array}\right),\] (S19)
\[\delta H_{12,AB}^{AA}(\vec{k}) = \delta H_{12,AB}^{AA}(\vec{k})\] (S20) \[= \left(\begin{array}{cccc}-\frac{3}{4}f_{\parallel}^{\prime \prime}\left(e_{\vec{k},a}+e_{\vec{k},b}\right)+\frac{f_{\perp}^{\prime \prime}}{4}\left[1+(e_{\vec{k},a}+e_{\vec{k},b})\right]&\frac{\sqrt{3}}{4} \left[f_{\parallel}^{\prime\prime}-f_{\perp}^{\prime\prime}\right]\left(e_{ \vec{k},a}-e_{\vec{k},b}\right)\\ \frac{\sqrt{3}}{4}\left[f_{\parallel}^{\prime\prime}-f_{\perp}^{\prime\prime} \right]\left(e_{\vec{k},a}-e_{\vec{k},b}\right)&\frac{3}{4}f_{\parallel}^{ \prime\prime}\left(e_{\vec{k},a}^{*}+e_{\vec{k},b}\right)+\frac{f_{\perp}^{ \prime}}{4}\left[1+(e_{\vec{k},a}+e_{\vec{k},b})\right]\end{array}\right),\] \[\delta H_{12,AA}^{AB}(\vec{k}) = \delta H_{12,BB}^{AB}(\vec{k})\] (S21) \[= \left(\begin{array}{cccc}-\frac{3}{4}f_{\parallel}^{\prime \prime}\left(e_{\vec{k},a}^{*}+e_{\vec{k},b}^{*}\right)-\frac{f_{\perp}^{ \prime\prime}}{4}\left[1+(e_{\vec{k},a}^{*}+e_{\vec{k},b}^{*}\right)\right]& \frac{\sqrt{3}}{4}\left[f_{\parallel}^{\prime\prime}-f_{\perp}^{\prime\prime }\right]\left(-e_{\vec{k},a}^{*}+e_{\vec{k},b}^{*}\right)\\ \frac{\sqrt{3}}{4}\left[f_{\parallel}^{\prime\prime}-f_{\perp}^{\prime\prime} \right]\left(-e_{\vec{k},a}^{*}+e_{\vec{k},b}^{*}\right)&-\frac{3}{4}f_{ \parallel}^{\prime\prime}\left(e_{\vec{k},a}^{*}+e_{\vec{k},b}^{*}\right)- \frac{f_{\perp}^{\prime\prime}}{4}\left[1+(e_{\vec{k},a}^{*}+e_{\vec{k},b}^{*} )\right]\end{array}\right),\]
\[\delta H^{AB}_{12,AB}(\vec{k})=\left(\begin{array}{cc}-f^{\prime}_{\perp}&0\\ 0&-f^{\prime}_{\perp}\end{array}\right),\] (S22)
\[\delta H^{AB}_{12,BA}(\vec{k})=\left(\begin{array}{cc}-\frac{3}{4}f^{\prime \prime}_{\parallel}(e^{*}_{\vec{k},a}+e^{*}_{\vec{k},b})-\frac{f^{\prime\prime }_{\perp}}{4}\left[e^{*}_{\vec{k},a}e^{*}_{\vec{k},b}+(e^{*}_{\vec{k},a}+e^{* }_{\vec{k},b})\right]&-\frac{\sqrt{3}}{4}[f^{\prime\prime}_{\parallel}-f^{ \prime\prime}_{\perp}](e^{*}_{\vec{k},a}-e^{*}_{\vec{k},b})\\ -\frac{\sqrt{3}}{4}[f^{\prime\prime}_{\parallel}-f^{\prime\prime}_{\perp}](e^ {*}_{\vec{k},a}-e^{*}_{\vec{k},b})&-\frac{3}{4}f^{\prime\prime}_{\parallel}(e^ {*}_{\vec{k},a}+e^{*}_{\vec{k},b})-\frac{f^{\prime\prime}_{\perp}}{4}\left[e^ {*}_{\vec{k},a}e^{*}_{\vec{k},b}+(e^{*}_{\vec{k},a}+e^{*}_{\vec{k},b})\right] \end{array}\right).\] (S23)
Based on the knowledge of the full dynamical matrix for a generic branch and generic momentum, we can now build up the effective model for selected modes in the bilayer structures, in the spirit of a \(\mathbf{k}\cdot\mathbf{p}\) expansion.
#### Dirac LO/LA modes at K point
We first address the LO/LA modes at the K point, which give rise to a linear Dirac-like phonon dispersion.
Using the eigenstates (S10) in each layer, we can write the effective dynamical matrix for a bilayer with generic structure \(\alpha=AA,AB\) in a reduced \(4\times 4\) Hilbert space:
\[\mathcal{H}^{\alpha}_{LO/LA}(\vec{k})=\left(\begin{array}{cc}\mathcal{H}_{ LO/LA}(\vec{k})+\mathcal{V}^{\alpha}_{LO/LA,1}&\mathcal{H}^{\alpha}_{LO/LA,12} \\ \mathcal{H}^{\alpha,\dagger}_{LO/LA,12}&\mathcal{H}_{LO/LA}(\vec{k})+\mathcal{ V}^{\alpha}_{LO/LA,2}\end{array}\right),\] (S24)
where \(\mathcal{H}_{LO/LA}(\vec{k})\) is the \(\mathbf{k}\cdot\mathbf{p}\) expansion for the LO/LA modes at the K point for a single-layer, as reported in Eq. (S11), \(\mathcal{H}^{\alpha}_{LO/LA,12}\) is the \(2\times 2\) interlayer coupling matrix which is different for different stacking orders, namely:
\[\mathcal{H}^{AA}_{LO/LA,12}=\left(\begin{array}{cc}-f^{\prime}_{\perp}&0\\ 0&-f^{\prime}_{\perp}\end{array}\right),\] (S25)
\[\mathcal{H}^{AB}_{LO/LA,12}=\left(\begin{array}{cc}0&0\\ -\frac{3}{2}(f^{\prime\prime}_{\parallel}-f^{\prime\prime}_{\perp})&0\end{array}\right),\] (S26)
and where \(\mathcal{V}^{\alpha}_{LO/LA,i}\) are \(2\times 2\) diagonal matrices that represent the stacking-dependent onsite intra-atomic potential on each layer \(i=1,2\) due to interlayer elastic coupling, explicitly:
\[\mathcal{V}^{AA}_{LO/LA,1}=\mathcal{V}^{AA}_{LO/LA,2}=\left(\begin{array}{ cc}f^{\prime}_{\perp}+\frac{3}{2}(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_{ \perp})&0\\ 0&f^{\prime}_{\perp}+\frac{3}{2}(f^{\prime\prime}_{\parallel}+f^{\prime\prime }_{\perp})\end{array}\right),\] (S27)
\[\mathcal{V}^{AB}_{LO/LA,1}=\left(\begin{array}{cc}\frac{3}{2}(f^{\prime \prime}_{\parallel}+f^{\prime\prime}_{\perp})+f^{\prime}_{\perp}&0\\ 0&3(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_{\perp})\end{array}\right),\] (S28)
\[\mathcal{V}^{AB}_{LO/LA,2}=\left(\begin{array}{cc}3(f^{\prime\prime}_{ \parallel}+f^{\prime\prime}_{\perp})&0\\ 0&\frac{3}{2}(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_{\perp})+f^{\prime }_{\perp}\end{array}\right).\] (S29)
The corresponding frequencies at the K point for each stacking structure read thus:
\[M\omega^{2}_{LO/LA,AA}(K)=\left\{\begin{array}{cc}\frac{3}{2}(f_{\parallel}+ f_{\perp})+2f^{\prime}_{\perp}+\frac{3}{2}(f^{\prime\prime}_{\parallel}+f^{ \prime\prime}_{\perp})&\text{(double degenerate)},\\ \frac{3}{2}(f_{\parallel}+f_{\perp})+\frac{3}{2}(f^{\prime\prime}_{\parallel}+f ^{\prime\prime}_{\perp})&\text{(double degenerate)},\end{array}\right.\] (S30)
\[M\omega^{2}_{LO/LA,AB}(K)=\left\{\begin{array}{cc}\frac{3}{2}(f_{\parallel}+ f_{\perp})+f^{\prime}_{\perp}+\frac{3}{2}(f^{\prime\prime}_{\parallel}+f^{ \prime\prime}_{\perp})&\text{(double degenerate)},\\ \frac{3}{2}(f_{\parallel}+f_{\perp})+\frac{3}{2}(3f^{\prime\prime}_{\parallel}+ f^{\prime\prime}_{\perp}),\end{array}\right.\] (S31)
Note that, in the AA stacking, the first state corresponds to out-of-phase lattice displacements in the two layers, whereas the second frequency describes in-phase in-plane lattice vibrations.
In a similar way as for the electronic structure, the interlayer coupling thus gives rise to different effects according to the different stackings. In particular, in strict similarity with the electronic dispersion, the Dirac-like LO/LA modes at the K point of the single-layer give rise in the AA bilayer to two Dirac phonon cones split by the interlayer coupling, while in the AB structure just one Dirac phonon cone survives whereas the other one is effectively gapped.
#### High-energy TO modes at K point
An effective \(\mathbf{k}\cdot\mathbf{p}\) model can be built also for the high-energy TO mode at the K point. The starting point in this case will be the eigenstate \(\mathbf{\epsilon}_{TO}(K)\) as expressed in Eq. (S8). Using such a state in each layer, we can build up a \(2\times 2\) \(\mathbf{k}\cdot\mathbf{p}\) model close to the K point for these modes in the bilayer systems as:
\[\mathcal{H}^{\alpha}_{TO,K}(\vec{k})=\left(\begin{array}{cc}\mathcal{H}_{TO, K}(\vec{k})+\mathcal{V}^{\alpha}_{TO,K}&\mathcal{H}^{\alpha}_{TO,K}\\ \mathcal{H}^{\alpha}_{TO,K}&\mathcal{H}_{TO,K}(\vec{k})+\mathcal{V}^{\alpha}_ {TO,K}\end{array}\right),\] (S32)
where \(\mathcal{H}_{TO,K}(\vec{k})\) is the dynamical matrix at the quadratic order of this mode in the single-layer,
\[\mathcal{H}_{TO,K}(\vec{k})=3f_{\parallel}+\frac{f_{\parallel}f_{\perp}}{2(f_ {\parallel}-f_{\perp})}(k_{x}^{2}+k_{y}^{2})a^{2},\] (S33)
\(\mathcal{H}^{\alpha}_{TO,K}\) is the inter-layer coupling,
\[\mathcal{H}^{AA}_{TO,K} =-f^{\prime}_{\perp}+\frac{3(f^{\prime\prime}_{\parallel}-f^{ \prime\prime}_{\perp})}{2},\] (S34) \[\mathcal{H}^{AB}_{TO,K} =0,\] (S35)
and \(\mathcal{V}^{\alpha}_{TO,K}\) represents the _onsite_ intra-atomic potential:
\[\mathcal{V}^{AA}_{TO,K} =f^{\prime}_{\perp}+\frac{3(f^{\prime\prime}_{\parallel}+f^{ \prime\prime}_{\perp})}{2},\] (S36) \[\mathcal{V}^{AB}_{TO,K} =\frac{f^{\prime}_{\perp}}{2}+\frac{9(f^{\prime\prime}_{\parallel} +f^{\prime\prime}_{\perp})}{4}.\] (S37)
Such analysis shows that the high-frequency TO modes at the K point in the AB structure remain degenerate with a resulting frequency
\[M\omega^{2}_{TO,AB}(K)=3f_{\parallel}+\frac{1}{2}f^{\prime}_{\perp}+\frac{9}{ 4}(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_{\perp})\ \ \ \ \ \text{( double degenerate)},\] (S38)
whereas the interlayer coupling leads to a lifting of the degeneracy in the AA stacking, resulting in the frequencies
\[M\omega^{2}_{TO,AA}(K)=\left\{\begin{array}{cc}3f_{\parallel}+3f^{\prime\prime}_{\parallel},\\ 3f_{\parallel}+2f^{\prime}_{\perp}+3f^{\prime\prime}_{\perp}.\end{array}\right.\] (S39)
#### High-energy LO/TO modes at the \(\Gamma\) point
Finally, an effective \(\mathbf{k}\cdot\mathbf{p}\) model can be built also for the high-energy LO/TO modes at the \(\Gamma\) point. Since the single-layer shows two degenerate modes at the \(\Gamma\) point, also in this case the effective model will be described by a \(4\times 4\) dynamical matrix resulting from the interlayer coupling of \(2\times 2\) blocks in each layer. We can thus write:
\[\mathcal{H}^{\alpha}_{LO/TO,\Gamma}(\vec{k})=\left(\begin{array}{cc}\mathcal{ H}_{LO/TO,\Gamma}(\vec{k})+\mathcal{V}^{\alpha}_{LO/TO,\Gamma}&\mathcal{H}^{ \alpha}_{LO/TO,\Gamma}\\ \mathcal{H}^{\alpha,\dagger}_{LO/TO,\Gamma}&\mathcal{H}_{LO/TO,\Gamma}(\vec{ k})+\mathcal{V}^{\alpha}_{LO/TO,\Gamma}\end{array}\right),\] (S40)
where \(\mathcal{H}_{LO/TO,\Gamma}(\vec{k})\) is the dynamical matrix at the quadratic order for the single-layer:
\[\mathcal{H}_{LO/TO,\Gamma}(\vec{k})=\left(\begin{array}{cc}3(f_{\parallel}+ f_{\perp})-\frac{f_{\perp}(3f_{\parallel}+f_{\perp})}{8(f_{\parallel}+f_{ \perp})}|\vec{k}|^{2}&0\\ 0&3(f_{\parallel}+f_{\perp})-\frac{f_{\parallel}(f_{\parallel}+3f_{\perp})}{8 (f_{\parallel}+f_{\perp})}|\vec{k}|^{2}\end{array}\right),\] (S41)
\(\mathcal{H}^{\alpha}_{LO/TO,\Gamma}\), as usual, is the \(2\times 2\) interlayer coupling for each stacking order:
\[\mathcal{H}^{AA}_{LO/TO,\Gamma} =\left(\begin{array}{cc}-f^{\prime}_{\perp}+\frac{3(f^{\prime \prime}_{\parallel}+f^{\prime\prime}_{\perp})}{2}&0\\ 0&-f^{\prime}_{\perp}+\frac{3(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_ {\perp})}{2}\end{array}\right),\] (S42) \[\mathcal{H}^{AB}_{LO/TO,\Gamma} =\left(\begin{array}{cc}\frac{f^{\prime}_{\perp}}{2}-\frac{3(f^ {\prime\prime}_{\parallel}+f^{\prime\prime}_{\perp})}{4}&0\\ 0&\frac{f^{\prime}_{\perp}}{2}-\frac{3(f^{\prime\prime}_{\parallel}+f^{\prime \prime}_{\perp})}{4}\end{array}\right),\] (S43)
and \(\mathcal{V}^{\alpha}_{LO/TO,\Gamma}\) describe the onsite intra-atomic potential:
\[\mathcal{V}^{AA}_{LO/TO,\Gamma} =\left(\begin{array}{cc}f^{\prime}_{\perp}+\frac{3(f^{\prime \prime}_{\parallel}+f^{\prime\prime}_{\perp})}{2}&0\\ 0&f^{\prime}_{\perp}+\frac{3(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_ {\perp})}{2}\end{array}\right),\] (S44) \[\mathcal{V}^{AB}_{LO/TO,\Gamma} =\left(\begin{array}{cc}\frac{f^{\prime}_{\perp}}{2}+\frac{9(f^ {\prime\prime}_{\parallel}+f^{\prime\prime}_{\perp})}{4}&0\\ 0&\frac{f^{\prime}_{\perp}}{2}+\frac{9(f^{\prime\prime}_{\parallel}+f^{\prime \prime}_{\perp})}{4}\end{array}\right).\] (S45)
In both stackings, the LO/TO modes at \(\Gamma\) in bilayer systems present two couples of degenerate modes, with frequencies:
\[M\omega^{2}_{LO/TO,AA}(\Gamma) =\left\{\begin{array}{cc}3(f_{\parallel}+f_{\perp})+3(f^{\prime \prime}_{\parallel}+f^{\prime\prime}_{\perp})&\text{(double degenerate)},\\ 3(f_{\parallel}+f_{\perp})+2f^{\prime}_{\perp}&\text{(double degenerate)}, \end{array}\right.\] (S46) \[M\omega^{2}_{LO/TO,AB}(\Gamma) =\left\{\begin{array}{cc}3(f_{\parallel}+f_{\perp})+f^{\prime}_ {\perp}+\frac{3}{2}(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_{\perp})& \text{(double degenerate)},\\ 3(f_{\parallel}+f_{\perp})+3(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_ {\perp})&\text{(double degenerate)}.\end{array}\right.\] (S47)
## II Dynamical matrix in twisted bilayers
In the above section, we have summarized the effective \(\mathbf{k}\cdot\mathbf{p}\) model identifying, for each phonon mode under investigation, the inter-layer elastic force terms and the onsite intra-atomic potentials. A compact view of such local potential is summarized in Table S1 for the relevant degenerate modes LO/LA at the K point, LO/TO modes at the \(\Gamma\) point, and TO at the K point.
Equipped with the knowledge of the role of the interlayer coupling for the characteristic bilayer structures AA and AB, we can now estimate the dynamical matrix for twisted bilayer systems within the framework of a continuum model.
The analysis follows slightly different procedures for each mode, accounting for the different characteristic vectors (K vs. \(\Gamma\)), and for the different size of the Hilbert space [doublet modes for LO/LA(K) and LO/TO(\(\Gamma\)) vs. a single non-degenerate mode for TO(K)].
#### Dirac LO/LA modes at the K point
The continuum model for the phonon Dirac spinor associated with the LO/LA modes at the K points follows a standard approach as employed for the electronic dispersion. Within this framework, we consider first two decoupled single-layer systems in the AA stacking, upon which we apply a twist with angle \(\theta\). Such geometric configuration defines three characteristic momenta \(\mathbf{Q}_{1}=k_{\theta}(0,1)\), \(\mathbf{Q}_{2}=k_{\theta}(\sqrt{3}/2,1/2)\), \(\mathbf{Q}_{3}=k_{\theta}(-\sqrt{3}/2,1/2)\), where \(k_{\theta}=2k_{\mathrm{BZ}}\sin(\theta/2)\), \(k_{\mathrm{BZ}}\) being the absolute value of the momentum of the Brillouin zone edge. The \(\mathbf{Q}_{\nu}\) momenta rule the relevant tunneling processes between layers \(\alpha\) and \(\beta\) by means of the interlayer couplings:
\[T^{\alpha\beta}(\mathbf{r})=\bar{t}\sum_{\nu}T^{\alpha\beta}_{\nu}\mathrm{e}^{ i\mathbf{Q}_{\nu}\cdot\mathbf{r}},\] (S48)
where as usual (assuming translational invariance with respect to the relative shift of the two layers)
\[\hat{T}_{1}=\left(\begin{array}{cc}1&1\\ 1&1\end{array}\right),\hskip 14.226378pt\hat{T}_{2}=\left(\begin{array}{cc} \mathrm{e}^{-2\pi i/3}&1\\ \mathrm{e}^{2\pi i/3}&\mathrm{e}^{-2\pi i/3}\end{array}\right),\hskip 14.226378pt\hat{T}_ {3}=\left(\begin{array}{cc}\mathrm{e}^{2\pi i/3}&1\\ \mathrm{e}^{-2\pi i/3}&\mathrm{e}^{2\pi i/3}\end{array}\right).\] (S49)
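For orientation, the moiré momentum scale \(k_{\theta}\) and the corresponding moiré period entering the expansion above can be evaluated numerically (an added illustrative sketch; the lattice constant \(a\approx 2.46\) Å and the identification \(k_{\mathrm{BZ}}=4\pi/3a\) are assumptions of this illustration):

```python
import numpy as np

a = 2.46                          # graphene lattice constant (Angstrom), assumed
k_BZ = 4 * np.pi / (3 * a)        # Dirac-point momentum, taken here as k_BZ (1/Angstrom)
for theta_deg in (4.0, 1.05):
    half = np.radians(theta_deg) / 2
    k_theta = 2 * k_BZ * np.sin(half)    # moire momentum scale
    L_moire = a / (2 * np.sin(half))     # moire period
    print(f"theta = {theta_deg:>4} deg: k_theta = {k_theta:.4f} 1/A, "
          f"moire period = {L_moire:.0f} A")
```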
Following the procedure in Ref. [6], the parameter
\[\bar{t}=\sqrt{t_{AA,i}^{2}+t_{AB,i}^{2}},\] (S50)
is here an effective energy scale obtained by interpolating the AA and the AB/BA interlayer coupling matrices \(\mathcal{H}_{LO/LA}^{AA}\) and \(\mathcal{H}_{LO/LA}^{AB}\).
In particular, following a standard procedure based on a perturbation analysis, using Eqs. (S25)-(S26), we get:
\[t_{AA,LO/LA}(K) =-\frac{f_{\perp}^{\prime}}{3},\] (S51) \[t_{AB,LO/LA}(K) =-\frac{1}{2}(f_{\parallel}^{\prime\prime}-f_{\perp}^{\prime \prime}).\] (S52)
Besides the interlayer tunnelling processes, the interlayer elastic coupling between the twisted layers gives rise for each mode to local atomic potentials which are described by the diagonal matrices \(\mathcal{V}_{\alpha}^{AA}\) and \(\mathcal{V}_{\alpha}^{AB}\), \(\mathcal{V}_{\alpha}^{BA}\), as they are summarized in Table S1 for the AA and AB structures. In twisted bilayer systems, these potentials can be expanded in reciprocal lattice vectors, in the same way as electrostatic potentials are incorporated into the continuum model of the electron band structure of twisted bilayer graphene[35].
In particular, for each mode \(\alpha\) we can define an average potential \(\bar{V}_{\alpha}^{AB}\) and a potential difference \(\Delta V_{\alpha}^{AB}\) for the AB and BA structures,
\[\bar{V}_{\alpha}^{AB} =\frac{V_{\alpha}^{AB}+V_{\alpha}^{BA}}{2},\] (S53) \[\Delta V_{\alpha}^{AB} =\frac{V_{\alpha}^{AB}-V_{\alpha}^{BA}}{2}.\] (S54)
Furthermore we can define a potential difference between the average potential in the AA and AB regions:
\[\Delta V_{\alpha}=V_{\alpha}^{AA}-\bar{V}_{\alpha}^{AB}.\] (S55)
Such difference between \(AA\) and \(AB\) regions can be described in terms of moire harmonics. Expanding into the first star of moire reciprocal lattice vectors, we obtain:
\[V_{\alpha}(\vec{r})=\frac{\Delta V_{\alpha}}{9}\sum_{i=1,2,3}\cos(\vec{G}_{i} \vec{r}).\] (S56)
In similar way, the layer dependent modulation of the potentials at the \(AB\) and \(BA\) regions can be written as:
\[\Delta V_{\alpha}(\vec{r})=\pm\frac{2\Delta V_{\alpha}^{AB}}{3\sqrt{3}}\sum_{ i=1,2,3}\sin(\vec{G}_{i}\vec{r})\] (S57)
The potentials in Eqs. (S56)-(S57) can be incorporated into a continuum model in the same way as the electrostatic potential are added to the electronic continuum model of twisted bilayer graphene.[35] We keep only the first star of reciprocal lattice vectors, and include these potentials into the continuum model in a similar way to the inclusion of the (scalar but sublattice independent) Hartree potential in Ref. [35].
The full phonon dispersion of the LO/LA modes in the reduced moiré Brillouin zone can thus be computed, as shown for instance in Fig. 2a,b of the main text. A comparison between the LO/LA dispersion in the twisted case with \(\theta=4^{\circ}\) and the reference case of two decoupled layers is shown in Fig. S1. The Dirac bands (marked in red color) are easily identified. One can notice two main features: an overall upwards energy shift \(\Delta\omega_{LO/LA}\) of the main dispersion, of about \(\sim 1.8\) cm\({}^{-1}\); and a renormalization of the linear Dirac dispersion, just like in the electronic case. For a given angle, we evaluate the coefficient \(v^{*}\) of the linear dispersion \(\hbar\omega_{LO/LA}(\vec{k})=\hbar\omega_{LO/LA}(K)\pm v^{*}|\vec{p}|\) of the Dirac mode in the twisted case in comparison with the linear coefficient \(v\) of the uncoupled single-layer. The parameter \(R=v^{*}/v\) thus provides the "renormalization" band factor of the twisted Dirac phonon dispersion, in a similar way as shown in the inset of Fig. 4 in Ref. [5]. The angle dependence of the renormalization band factor is shown in Fig. 2c of the main text, showing a remarkable trend toward flat Dirac bands for \(\theta\lesssim 2^{\circ}\). Also interesting is the analysis of the phonon band-shift \(\Delta\omega_{LO/LA}\), reported in Fig. S2, which shows a negligible dependence on \(\theta\) for large twist angles, but a sizable drop at low angles, in the region where the band renormalization is also more marked. Such angle dependence of the phonon band-shift \(\Delta\omega_{LO/LA}\) is expected to be reflected in an observable angle-dependent mode softening.
If we neglect for the moment the role of the onsite potentials, we can use a perturbative analysis of the interlayer tunneling terms (see for instance Ref. [5]) in order to obtain an estimate of the first "magic" angle at which the slope of the Dirac phonon dispersion of the twisted bilayer vanishes.
For the case of the Dirac-like phonons LO/LA at the K point, such condition occurs when
\[2\sin\left(\frac{\bar{\theta}_{LO/LA}}{2}\right)=\frac{3}{4\pi}\frac{\sqrt{3 \bar{t}^{2}}}{V}=\frac{\sqrt{3}}{\pi(f_{\parallel}-f_{\perp})}\sqrt{3\left[( f^{\prime})_{\perp}^{2}+\frac{9\left(f^{\prime\prime}_{\parallel}-f^{\prime\prime}_{ \perp}\right)^{2}}{4}\right]}.\] (S58)
Using the force-constant parameters extracted from the comparison with ab-initio calculations (see next Section), this equation gives \(\bar{\theta}_{LO/LA}(K)\approx 2.1^{\circ}\).
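For reference, the estimate above can be reproduced by a direct numerical evaluation of the right-hand side of Eq. (S58) with the parameters of Table S3 (a small added sketch, not part of the original derivation):

```python
import numpy as np

# Table S3 force constants (eV/Angstrom^2)
f_par, f_perp = 23.882, 19.973
fp_perp, fpp_par, fpp_perp = -0.143, 0.090, 0.059

rhs = (np.sqrt(3) / (np.pi * (f_par - f_perp))
       * np.sqrt(3 * (fp_perp**2 + 9 * (fpp_par - fpp_perp)**2 / 4)))
theta = 2 * np.degrees(np.arcsin(rhs / 2))
print(f"estimated LO/LA phonon magic angle: {theta:.1f} deg")   # ~ 2.1 deg
```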
#### High-energy TO modes at the K point
The TO phonons at the K point in twisted systems can also be described as a variation of the continuum model discussed in Ref. [6], where we expand the dynamical matrix in Bloch wavefunctions at each layer. Just as in the previous case, the interlayer scattering is still governed by the three characteristic momenta \(\mathbf{Q}_{1}=k_{\theta}(0,1)\), \(\mathbf{Q}_{2}=k_{\theta}(\sqrt{3}/2,1/2)\), \(\mathbf{Q}_{3}=k_{\theta}(-\sqrt{3}/2,1/2)\). With respect to the LO/LA modes, there are however two main differences. On the one hand, the phonon dispersion in the single-layer does not follow a linear behavior but a quadratic one, as shown in Eq. (S33). On the other hand, as this mode in the single-layer is not degenerate, the \(2\times 2\) spinor structure of Eq. (S49) is lifted, and for each momentum only one wave function per layer is needed. The terms which define the interlayer tunneling are thus reduced to \(1\times 1\) numbers, as in Eqs. (S34)-(S35). A Bloch state
in one layer is coupled by these terms to three Bloch states in the second layer, and the three terms just acquire a phase modulation, \(e^{i\phi_{j}},j=1,2,3\) (see Ref. [36] for a related approach to the electronic states in twisted homo-bilayer dichalcogenides). Finally, we include the intra-layer sublattice potentials \(V_{1},V_{2}\) as shown in Table S1. Just as in the case of the Dirac LO/LA modes, these local potentials are expanded using the first star of reciprocal lattice vectors.
The phonon dispersion so computed is shown in panels d-e of Fig. 2 of the main text. Also in this case, a renormalization band factor can be evaluated as the ratio of the quadratic dispersion \(\hbar\omega_{TO}(\vec{k})=\hbar\omega_{TO}(K)+\alpha^{*}|\vec{p}|^{2}\) in the twisted bilayer and in the single-layer cases, \(R=\alpha^{*}/\alpha\). The angle dependence of such renormalization band factor is also shown in panel f of Fig. 2 of the main text, and the angle dependence of the band-shift \(\Delta\omega_{TO}\) for this mode in Fig. S2.
Also for the TO modes at the K point, using the same perturbative approach for the interlayer tunneling, we can provide a qualitative estimate of the magic angle where the positive quadratic dispersion at K is renormalized to zero. Using the appropriate expression for a quadratic dispersion we find:
\[2\sin\left(\frac{\bar{\theta}_{TO}}{2}\right)=\frac{3}{4\pi}\times\sqrt{\frac{2\sqrt{3}\left|f_{\perp}^{\prime}+\frac{3\left(f_{\parallel}^{\prime\prime}-f_{\perp}^{\prime\prime}\right)}{2}\right|\left(f_{\parallel}-f_{\perp}\right)}{f_{\parallel}f_{\perp}}}.\] (S59)
Note that this result depends only on the prefactor of the quadratic dispersion of the TO mode of a monolayer, and on the set of interlayer force-constant parameters which describes the force between layers in the AA structure, as
defined in Eq.(S34).
Using the force-constant parameters extracted from the comparison with ab-initio calculations, we get \(\bar{\theta}_{TO}(K)\approx 1.0^{\circ}\).
#### High-energy LO/TO modes at the \(\Gamma\) point
The analysis of the twisting on the LO/TO modes at the \(\Gamma\) point is somewhat simpler than the previous cases discussed for phonons at the K point. The phonons at the \(\Gamma\) point in the individual layers of a twisted bilayer are indeed mapped onto the \(\Gamma\) point of the twisted bilayer Brillouin zone, unlike the modes at the \(K\) point, which are instead mapped onto the K or K\({}^{\prime}\) point of the twisted bilayer Brillouin zone, depending on the layer index. Moreover, as shown in Eqs. (S42), (S43), (S44), (S45), and in Table S1, all the interlayer coupling terms are just multiples of the unit matrix in the \(2\times 2\) space, as specified in Eqs. (S42)-(S43). Hence, the LO and TO modes can be treated independently. The continuum model for the phonons at the \(\Gamma\) point can be written in terms of Bloch waves defined on the reciprocal lattices for each layer, where the two lattices lie on top of each other, unlike the case of the modes at \(K\), where the two reciprocal lattices are displaced with respect to each other.[5] A similar model has been employed for electrons near the \(\Gamma\) point of the valence band in transition metal dichalcogenides.[38; 40] Furthermore, the interlayer coupling terms allow for the separation of the two LO and the two TO modes into even and odd combinations which also are decoupled, so that the continuum model for the dynamical matrix of the phonons at \(\Gamma\) can be split into four independent blocks.
Using such procedure, the computed phonon dispersion and the angle dependence of the renormalization band factor for the TO modes close to the \(\Gamma\) point are shown in panels g-i of Fig. 2 of the main text. We also show in Fig. S2 the angle dependence of the band-shifts \(\Delta\omega_{LO/TO}\) of these modes for both even and odd symmetries. Note that, since LO and TO are degenerate at the \(\Gamma\) point, a similar band-shift applies for both LO and TO phonon dispersions.
Owing to the above simplifications, the estimate of the magic angles at which the dispersion of the TO/LO modes at the \(\Gamma\) point is renormalized to zero (\(R\to 0\)) is also significantly simplified.
Since, as discussed above, the continuum model for the phonons at \(\Gamma\) can be split into four independent Hamiltonian blocks, corresponding to even and odd layer combinations, we obtain four magic angles, one for each LO vs. TO and even vs. odd combination. Using the interlayer coupling reported in Eqs. (S42)-(S43), the quadratic dispersion of Eq. (S41), and using non-local coupling between Bloch waves separated by a reciprocal lattice vector in the first star,
the perturbation theory results in the following relations:
\[2\sin\left(\frac{\bar{\theta}}{2}\right) =\frac{\sqrt{3}}{4\pi}\sqrt{\frac{|\tilde{f}_{1}\pm\tilde{f}^{\prime }_{1}|}{v_{2}^{LO}}},\] \[2\sin\left(\frac{\bar{\theta}}{2}\right) =\frac{\sqrt{3}}{4\pi}\sqrt{\frac{|\tilde{f}_{1}\pm\tilde{f}^{ \prime}_{1}|}{v_{2}^{TO}}},\] (S60)
where:
\[\tilde{f}_{1} =\frac{f^{\prime}_{\perp}-\frac{3}{2}\left(f^{\prime\prime}_{ \parallel}+f^{\prime\prime}_{\perp}\right)}{18},\] \[\tilde{f}^{\prime}_{1} =\frac{-3f^{\prime}_{\perp}+\frac{3}{2}\left(f^{\prime\prime}_{ \parallel}+f^{\prime\prime}_{\perp}\right)}{18},\] \[v_{2}^{LO} =\frac{f_{\perp}(3f_{\parallel}+f_{\perp})}{8(f_{\parallel}+f_{ \perp})},\] \[v_{2}^{TO} =\frac{f_{\parallel}(f_{\parallel}+3f_{\perp})}{8(f_{\parallel}+f _{\perp})}.\] (S61)
Using the force-constant parameters extracted from the comparison with ab-initio calculations, we find \(\bar{\theta}_{LO}(\Gamma,+)\approx 0.44^{\circ}\), \(\bar{\theta}_{TO}(\Gamma,+)\approx 0.42^{\circ}\), \(\bar{\theta}_{LO}(\Gamma,-)\approx 0.83^{\circ}\), \(\bar{\theta}_{TO}(\Gamma,-)\approx 0.79^{\circ}\).
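These four values can be checked by evaluating Eqs. (S60)-(S61) directly with the Table S3 parameters (an added numerical sketch, not part of the original derivation):

```python
import numpy as np

f_par, f_perp = 23.882, 19.973                     # Table S3 (eV/Angstrom^2)
fp_perp, fpp_par, fpp_perp = -0.143, 0.090, 0.059

# Eq. (S61)
f1  = (fp_perp - 1.5 * (fpp_par + fpp_perp)) / 18
f1p = (-3 * fp_perp + 1.5 * (fpp_par + fpp_perp)) / 18
v2 = {"LO": f_perp * (3 * f_par + f_perp) / (8 * (f_par + f_perp)),
      "TO": f_par * (f_par + 3 * f_perp) / (8 * (f_par + f_perp))}

# Eq. (S60), for the even (+) and odd (-) layer combinations
for mode, v in v2.items():
    for sign, label in ((+1, "+"), (-1, "-")):
        rhs = np.sqrt(3) / (4 * np.pi) * np.sqrt(abs(f1 + sign * f1p) / v)
        theta = 2 * np.degrees(np.arcsin(rhs / 2))
        print(f"theta_{mode}(Gamma,{label}) ~ {theta:.2f} deg")
```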
## III Mapping ab-initio calculations onto force-constant model
In order to achieve a realistic modelling of the lattice dynamics in twisted bilayer graphene, we use ab-initio calculations to extract the appropriate parameters for the force-constant model.
Density functional theory calculations (DFT) were performed using Quantum Espresso (QE)[53; 54; 34]. For the electronic calculations, we use the Generalized Gradient Approximation (GGA), specifically, the functional of Perdew, Burke and Ernzerhof [55]. We set the energy cutoff for the wavefunctions to 240 Ry and the cutoff for the density to 1400 Ry. In order to obtain the correct value for the interlayer spacing in the case of bilayer graphene, we use the Grimme approximation[56]. The Brillouin zone was sampled using the Monkhorst-Pack scheme [57] with a grid of \(32\times 32\times 1\) k-points. We have optimized the lattice vectors and relaxed the atomic positions to forces lower than 1 eV/Å. The phonon band structure was calculated using Density Functional Perturbation Theory (DFPT)[58] as implemented in QE.
The force-constant (FC) model here employed for the phonon dispersion in bilayer graphene depends on five independent elastic parameters, i.e. \(f_{\parallel}\), \(f_{\perp}\), \(f^{\prime}_{\perp}\), \(f^{\prime\prime}_{\parallel}\), and \(f^{\prime\prime}_{\perp}\). Given the pivotal role in our discussion of the Dirac-like LO/LA modes at the K point of single-layer and bilayer structures, we calibrate our FC parameters in order to reproduce in the best way these Dirac-like features.
A first crucial feature is, in the single-layer, the Dirac-like linear dispersion of the LO/LA modes at the K point,
\[\hbar\omega_{LO/LA}(\vec{k})=\hbar\omega_{LO/LA}(K)\pm v|\vec{p}|,\] (S62)
where \(\vec{p}=\vec{k}-K\). Our DFT calculations find
\[v=7.25\times 10^{4}\,\text{cm/s}=4.77\ \text{meV}\ \text{\AA}.\] (S63)
Further relevant ab-initio inputs are the LO/LA frequencies in the single-layer as well as AA and AB stackings. Their values are reported in Table S2
These first-principles inputs can now be employed to estimate proper force-constant parameters. In more detail, from the relation:
\[M\omega_{LO/LA}^{2}(K)=\frac{3}{2}(f_{\parallel}+f_{\perp}),\] (S64)
we get the value of the linear combination \(f_{\parallel}+f_{\perp}\):
\[f_{\parallel}+f_{\perp}=43.85\ \text{eV/A}^{2}.\] (S65)
The first-principles value of the Dirac velocity \(v\) of these modes close to K provides further analytical constraints. From Eq. (S12), which refers to the dynamical matrix, we can obtain an analytical expression for \(v\):
\[v=\sqrt{\frac{3(f_{\parallel}+f_{\perp})}{2M}}\frac{f_{\parallel}-f_{\perp}}{f_{\parallel}+f_{\perp}}\frac{\hbar a}{4\sqrt{3}},\] (S66)
Using these inputs we can thus determine the values of the in-plane force-constant parameters \(f_{\parallel}\) and \(f_{\perp}\).
The interlayer force-constant parameters \(f^{\prime}_{\perp}\), \(f^{\prime\prime}_{\parallel}\), and \(f^{\prime\prime}_{\perp}\) can be estimated from the spectrum of the LO/LA modes at the K point in the AA and AB structures. Using Eqs. (S30), from the splitting of the computed frequencies of the Dirac LO/LA modes in the AA structure, we can extract the value of \(2f^{\prime}_{\perp}\). In a similar way, using Eqs. (S31), we can extract the linear combination \(f^{\prime\prime}_{\parallel}-f^{\prime\prime}_{\perp}\) from the splitting of the single-degenerate LO/LA levels at K in the AB structure. In order to have a complete set of force-constant parameters, we have to further extract from the ab-initio calculations the linear combination \(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_{\perp}\). This can be obtained, using Eqs. (S9) and (S30), by comparing the frequency shift of the symmetric LO/LA modes in the AA bilayer with respect to the reference frequency of the LO/LA modes in the single-layer case. The force-constant parameters so extracted from the ab-initio input are listed in Table S3.
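As a consistency check of this extraction (an added illustrative sketch; the carbon mass, the lattice constant, and the unit conversions used below are assumptions of the illustration), the tabulated parameters can be plugged back into Eqs. (S64)-(S66) and compared with the DFT inputs quoted above:

```python
import numpy as np

f_par, f_perp = 23.882, 19.973        # Table S3 (eV/Angstrom^2)
a = 2.46e-10                          # lattice constant (m), assumed
M = 12.011 * 1.66054e-27              # carbon mass (kg), assumed
hbar = 1.054572e-34                   # J s
eVA2_to_Nm = 1.602177e-19 / 1e-20     # eV/Angstrom^2 -> N/m

print(f"f_par + f_perp = {f_par + f_perp:.2f} eV/A^2  (compare Eq. S65)")

# Dirac slope hbar*d(omega)/dk of the LO/LA modes at K, cf. Eq. (S66)
omega_K = np.sqrt(1.5 * (f_par + f_perp) * eVA2_to_Nm / M)   # rad/s
V = np.sqrt(3) * (f_par - f_perp) * eVA2_to_Nm / 4           # N/m
slope_meVA = hbar * V * a / (2 * M * omega_K) / 1.602177e-19 * 1e10 * 1e3
print(f"hbar*v = {slope_meVA:.2f} meV A  (compare Eq. S63)")
```

Up to the rounding of the tabulated parameters, this reproduces the first-principles inputs used for the calibration.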
It is worth mentioning that, while the parameters \(f_{\parallel}\), \(f_{\perp}\), \(f^{\prime}_{\perp}\), and the linear combination \(f^{\prime\prime}_{\parallel}-f^{\prime\prime}_{\perp}\) are extracted in a compelling way from the analysis of the LO/LA modes at the K point in single-layer and bilayer structures, the determination of the last condition, namely \(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_{\perp}\), is less univocal. As an alternative procedure, we could estimate the linear combination \(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_{\perp}\), using Eq. (S31), from the analysis of the relative frequency shift of the LO/LA modes in the AB bilayer with respect to the reference frequency of the LO/LA modes in the single-layer case. Along this derivation, one would extract slightly different values of \(f^{\prime\prime}_{\parallel}\), \(f^{\prime\prime}_{\perp}\), namely \(f^{\prime\prime}_{\parallel}=0.070\) eV/Å\({}^{2}\), \(f^{\prime\prime}_{\perp}=0.039\) eV/Å\({}^{2}\). Note however that such slight uncertainty on the quantity \(f^{\prime\prime}_{\parallel}+f^{\prime\prime}_{\perp}\) does not sensibly affect the twisted phonon dispersion since the relevant interlayer tunneling processes, for the LO/LA modes at the K point, as well as for the TO at K and TO/LO at \(\Gamma\), are essentially ruled [see Eqs. (S51), (S52), (S34), (S35), (S42), and (S43)] only by the parameters \(f^{\prime}_{\perp}\) and \((f^{\prime\prime}_{\parallel}-f^{\prime\prime}_{\perp})\), which can be determined without ambiguity from the first-principles calculations. In Fig. S3 we show the phonon dispersions for \(\theta=1.05^{\circ},4^{\circ}\) and the angle dependence of the band renormalization factor \(R\) for the force-constant parameter set with \(f^{\prime\prime}_{\parallel}=0.070\) eV/Å\({}^{2}\), \(f^{\prime\prime}_{\perp}=0.039\) eV/Å\({}^{2}\). Both features appear essentially identical to the results shown in the main text with the parameters listed in Table S3.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline \(f_{\parallel}\) & \(f_{\perp}\) & \(f^{\prime}_{\perp}\) & \(f^{\prime\prime}_{\parallel}\) & \(f^{\prime\prime}_{\perp}\) \\
23.882 & 19.973 & \(-0.143\) & 0.090 & 0.059 \\ \hline \hline \end{tabular}
\end{table}
Table S3: Force-constant parameters extracted from ab-initio calculations, in units of eV/Å\({}^{2}\).
|
2304.09081 | Shift invariant subspaces in growth spaces and sets of finite entropy | We investigate certain classes of shift invariant subspaces in growth spaces
on the unit disc of the complex plane determined by a majorant $w$, which
include the classical Korenblum growth spaces. Our main result provides a
complete description of shift invariant subspaces generated by Nevanlinna class
functions in growth spaces, where we show that they are of Beurling-type. In
particular, our result generalizes the celebrated Korenblum-Roberts Theorem. It
turns out that singular inner functions play the decisive role in our
description, phrased in terms of certain $w$-entropy conditions on the carrier
sets of the associated singular measures, which arise in connection to boundary
zero sets for analytic functions in the unit disc having modulus of continuity
not exceeding $w$ on the unit circle. Furthermore, this enables us to establish
an intimate link between shift invariant subspace generated by inner functions
and the containment of the above mentioned analytic function spaces in the
corresponding model spaces. | Adem Limani | 2023-04-18T15:53:16Z | http://arxiv.org/abs/2304.09081v2 | # \(M_{z}\)-invariant subspaces in growth spaces, boundary zero sets and model spaces
###### Abstract
We investigate certain classes of \(M_{z}\)-invariant subspaces for a wide range of growth spaces on the unit disc \(\mathbb{D}\) determined by a majorant \(w\), which include the classical Korenblum growth spaces. Our main result generalizes the classical Korenblum-Roberts Theorem on the description of \(M_{z}\)-invariant subspaces generated by bounded analytic functions, in terms of the corresponding Nevanlinna measure. It turns out that sets of finite \(w\)-entropy, which are boundary zero sets for analytic functions in \(\mathbb{D}\) having modulus of continuity not exceeding \(w\) on \(\overline{\mathbb{D}}\), play the decisive role in this setting. This further enables us to establish an intimate link between \(M_{z}\)-invariant subspace generated by inner functions \(\Theta\) and the containment of the above mentioned analytic function spaces in the corresponding model spaces \(K_{\Theta}\).
## 1 Introduction
Let \(w:[0,1]\to[0,1]\) be a continuous non-decreasing function with \(w(0)=0\). Denote by \(\mathcal{G}_{w}\) the growth space of analytic functions \(g\) on the unit disc \(\mathbb{D}\) satisfying
\[\lim_{|z|\to 1-}w(1-|z|)g(z)=0.\]
Alternatively, \(\mathcal{G}_{w}\) is obtained by taking the closure of analytic polynomials in the norm
\[\|g\|_{\mathcal{G}_{w}}:=\sup_{z\in\mathbb{D}}w(1-|z|)|g(z)|<\infty.\]
Equipped with the above norm \(\mathcal{G}_{w}\) becomes a separable Banach space for which analytic polynomials form a dense subset. Typical examples of growth spaces are provided by the classical Korenblum growth spaces, corresponding to weights of the form \(w(t)=t^{\alpha}\) with \(\alpha>0\), for instance see [14]. We denote by \(M_{z}\) the multiplication operator \(M_{z}(f)(z)=zf(z)\), which is easily seen to act as a contraction on \(\mathcal{G}_{w}\). Our principal goal is to investigate an important class of singly generated \(M_{z}\)-invariant subspaces on \(\mathcal{G}_{w}\). To this end, we shall
denote by \(\left[g\right]_{\mathcal{G}_{w}}\) the smallest closed \(M_{z}\)-invariant subspace containing \(g\in\mathcal{G}_{w}\), which we often shall refer to as the \(M_{z}\)-invariant subspace generated by \(g\). A function \(g\) is said to be cyclic in \(\mathcal{G}_{w}\) if \(\left[g\right]_{\mathcal{G}_{w}}=\mathcal{G}_{w}\). In other words, \(g\) is cyclic in \(\mathcal{G}_{w}\) if there exists a sequence of analytic polynomials \(\{Q_{n}\}_{n}\) such that
\[\lim_{n}\lVert gQ_{n}-1\rVert_{\mathcal{G}_{w}}=0.\]
Since the analytic polynomials are dense in \(\mathcal{G}_{w}\), once we can approximate the constant function \(1\) by polynomial multiples of \(g\), any function in \(\mathcal{G}_{w}\) can be reached. In fact, it is not difficult to see that the sequence \(\{Q_{n}\}_{n}\) can be replaced by a sequence of bounded analytic functions. Given a positive finite Borel singular measure \(\mu\) on the unit circle \(\partial\mathbb{D}\), we denote by \(S_{\mu}\) the associated singular inner function
\[S_{\mu}(z):=\exp\left(-\int_{\partial\mathbb{D}}\frac{\zeta+z}{\zeta-z}d\mu( \zeta)\right)\qquad z\in\mathbb{D}.\]
The Korenblum-Roberts Theorem provides a complete characterization of the singular inner functions which are cyclic in the classical Korenblum growth spaces, see [24], [18]. More precisely, it asserts that \(S_{\mu}\) is cyclic in some/any Korenblum growth space if and only if \(\mu\) does not charge any Beurling-Carleson set \(E\), that is, a closed set \(E\subset\partial\mathbb{D}\) of Lebesgue measure zero satisfying
\[\int_{\partial\mathbb{D}}\log\operatorname{dist}\left(\zeta,E\right)dm(\zeta )>-\infty,\]
where \(\operatorname{dist}(\zeta,E)\) denotes the unit-normalized distance from \(\zeta\) to \(E\) along \(\partial\mathbb{D}\) and \(m\) denotes the unit-normalized Lebesgue measure on \(\partial\mathbb{D}\). In this note, we shall investigate a Korenblum-Roberts Theorem in the context of growth spaces \(\mathcal{G}_{w}\) associated to weights \(w\) which typically decay slower than \(t^{\alpha}\) with \(\alpha>0\). The existence of cyclic singular inner functions in any growth space \(\mathcal{G}_{w}\) has been known for a long time (see [25], and also [22] for a more recent work in this direction). Our objective is to take these investigations further and classify all cyclic singular inner functions in \(\mathcal{G}_{w}\). We declare a continuous non-decreasing function \(w:[0,1]\to[0,1]\) with \(w(0)=0\) to be a _majorant_ if
1. \(w\) is sub-additive: for any \(0\leq s,t\leq 1\) with \(s+t\leq 1\), one has \[w(t+s)\leq w(t)+w(s)\]
2. \(w(t)/t\) almost non-increasing: for any \(0<s\leq t<1\) one has \[\frac{w(s)}{s}\leq 2\frac{w(t)}{t}.\]
Given a positive finite Borel measure \(\nu\) on \(\partial\mathbb{D}\), the modulus of continuity of \(\nu\) is defined as
\[\omega_{\nu}(\delta):=\sup\left\{\nu(I):m(I)\leq\delta,\ \text{arcs}\ I\subseteq \partial\mathbb{D}\right\}.\]
It is not difficult to verify that the modulus of continuity of a continuous measure \(\nu\) (no atoms) is a majorant, and conversely, any majorant arises as the modulus of continuity of a continuous measure (for instance, see [4]). We shall now introduce a notion which generalizes the classical Beurling-Carleson sets. Given a majorant \(w\), we declare a closed set \(E\subset\partial\mathbb{D}\) of Lebesgue measure zero to have finite \(w\)-entropy, or simply to be a \(w\)-set if
\[\int_{\partial\mathbb{D}}\log w\left(\operatorname{dist}(\zeta,E)\right)dm( \zeta)>-\infty. \tag{1}\]
It is important to note that any closed subset of a \(w\)-set is again a \(w\)-set, and that \(w\)-sets are closed under the formation of finite unions. Indeed, the latter can be easily deduced from the identity
\[\operatorname{dist}(\zeta,\cup_{k=1}^{n}E_{k})=\min_{1\leq k\leq n} \operatorname{dist}(\zeta,E_{k}).\]
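Let us also record a simple observation (added here for illustration; it is an immediate consequence of the definitions and not part of the original argument). By sub-additivity, \(w(1)\leq nw(1/n)\) for every \(n\), and together with monotonicity this yields \(w(t)\geq w(1)t/2\) for all \(0<t\leq 1\). Therefore, for any closed set \(E\subset\partial\mathbb{D}\) of Lebesgue measure zero,
\[\int_{\partial\mathbb{D}}\log w\left(\operatorname{dist}(\zeta,E)\right)dm(\zeta)\geq\log\frac{w(1)}{2}+\int_{\partial\mathbb{D}}\log\operatorname{dist}(\zeta,E)\,dm(\zeta),\]
so that, provided \(w\) is not identically zero, every classical Beurling-Carleson set automatically has finite \(w\)-entropy.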
In a similar way as the classical Beurling-Carleson sets are boundary zero sets of analytic functions in \(\mathbb{D}\) with smooth extensions to \(\partial\mathbb{D}\) (see [7] and [28]), it turns out that \(w\)-sets correspond to boundary zero sets of bounded analytic functions in \(\mathbb{D}\) with continuous extension to \(\partial\mathbb{D}\), having modulus of continuity not exceeding \(w\) on \(\partial\mathbb{D}\), see [26] and the definition of \(\mathcal{A}_{w}\) in (3) below. We are now ready to phrase our first main result on cyclicity in any growth space \(\mathcal{G}_{w}\).
**Theorem 1.1**.: _Let \(w\) be a majorant and \(\mu\) be a positive finite singular Borel measure on \(\partial\mathbb{D}\). Then \(S_{\mu}\) is cyclic in \(\mathcal{G}_{w}\), if \(\mu\) does not charge any \(w\)-set._
We remark that a simple embedding shows that Theorem 1.1 also holds for spaces of analytic functions \(f\) in \(\mathbb{D}\) equipped with an integral metric of the form
\[\int_{\mathbb{D}}\lvert f(z)\rvert^{p}w(1-\lvert z\rvert)dA(z)\]
for any \(0<p<\infty\). In fact, since \(E\) is a \(w\)-set if and only if \(E\) is a \(w^{p}\)-set for any \(p>0\), our results may also be phrased in the setting of
\[\mathcal{G}_{w}^{\infty}:=\bigcup_{p>0}\mathcal{G}_{w^{p}}\]
equipped with the locally convex inductive-limit topology, that is, the largest locally convex topology for which each growth space \(\mathcal{G}_{w^{p}}\), \(p>0\) is continuously embedded into \(\mathcal{G}_{w}^{\infty}\). We mention that a version of the theorem above has been known (see [13]) under a couple of additional regularity assumptions on \(w\), using different methods which originate back to the work of Korenblum in [17], which in fact enables the author to describe cyclic vectors beyond inner functions. The novelty here is that no additional assumptions on the majorants \(w\) are needed. See also [10] for a recent result on cyclic vectors in growth spaces with weights \(w\) growing faster than polynomials, again requiring no regularity assumptions on \(w\).
Moving forward, we shall now make the following assumption on the majorants \(w\). There exist constants \(\alpha,\beta\in(0,1)\) and \(C>0\) such that
\[w(t)\leq Cw(t^{1+\alpha})^{\beta},\qquad 0\leq t\leq 1.\]
Condition \((A_{1})\) ensures that \(w\) does not decay to zero too fast, and examples of eligible majorants are \(t^{\min(c,1)}\), \(\log^{-c}(e/t)\), \((\log\log(e^{e}/t))^{-c}\), \((\log\log\log(e^{e^{e}}/t))^{-c},\dots\), \(c>0\) and so on. Although the above assumption on \(w\) is fairly mild and barely puts any serious decay restriction on \(w\), we shall in subsection 3.5 illustrate that it in fact can be further weakened. However, in order to avoid undesirable technicalities, we shall phrase our result under the assumption of \((A_{1})\) and defer the technical improvements to subsection 3.5 under the assumption \((A_{1}^{\prime})\) therein. Fix a majorant \(w\) and let \(\mu\) be a positive finite singular Borel measure on \(\partial\mathbb{D}\). Consider the quantity
\[\sup\left\{\mu(E):E\subset\partial\mathbb{D}\ w\text{-set}\right\},\]
and pick a sequence of \(w\)-sets \(\{E_{N}\}_{N}\), which we may assume to be increasing, such that \(\mu(E_{N})\) converges to the supremum above. It is straightforward to verify that \(\mu\) uniquely (up to \(\mu\)-null sets) decomposes according to
\[\mu=\mu_{P}+\mu_{C} \tag{2}\]
where \(\mu_{P}\) is concentrated on \(\cup_{N}E_{N}\), a countable union of \(w\)-sets, and \(\mu_{C}\) does not charge any \(w\)-set (indeed, if \(\mu_{C}\) charged some \(w\)-set \(F\), then \(\mu(E_{N}\cup F)\geq\mu(E_{N})+\mu_{C}(F)\) would eventually exceed the supremum above). With this decomposition at hand, we may now phrase the following generalization of the Korenblum-Roberts theorem for the growth spaces \(\mathcal{G}_{w}\).
**Theorem 1.2**.: _Let \(w\) be a majorant satisfying condition \((A_{1})\) and \(\mu\) be a positive finite Borel singular measure on \(\partial\mathbb{D}\), decomposed according to (2). Then for any bounded analytic function \(f\) factorized as \(f=FS_{\mu}B\), where \(F\) is outer and \(B\) is a Blaschke product, the following holds:_
**(i)**: \(FS_{\mu_{C}}\) _is cyclic in_ \(\mathcal{G}_{w}\)_._
**(ii)**: \(BS_{\mu_{P}}\) _generates a proper_ \(M_{z}\)_-invariant subspace of_ \(\mathcal{G}_{w}\) _satisfying the following permanence property:_
\[\left[BS_{\mu_{P}}\right]_{\mathcal{G}_{w}}\cap H^{\infty}\subseteq BS_{\mu_{ P}}H^{\infty}.\]
_Moreover, the \(M_{z}\)-invariant subspace generated by \(f\) equals \(\left[BS_{\mu_{P}}\right]_{\mathcal{G}_{w}}\)._
Indeed, for Holderian majorants \(w(t)=t^{\alpha}\), one recovers the Korenblum-Roberts Theorem (see Theorem 2 in [24]). We remark that \((ii)\) is essentially an immediate corollary of Theorem 1.1 and actually holds for any majorant \(w\); however, Theorem 1.2 allows us to upgrade Theorem 1.1 to an if and only if statement. An important consequence of Theorem 1.2 is that a function \(f\in H^{\infty}\) is cyclic in \(\mathcal{G}_{w}\) if and only if \(f\) has no zeros in \(\mathbb{D}\) and its associated singular inner factor does not charge any \(w\)-set. Inspired by the work in [5] and in [3], we may also describe the \(M_{z}\)-invariant subspaces of \(\mathcal{G}_{w}\) generated by functions in the Nevanlinna class \(\mathcal{N}\).
**Corollary 1.3**.: _Let \(w\) be a majorant satisfying condition \((A_{1})\) and let \(f\in\mathcal{G}_{w}\cap\mathcal{N}\) with Nevanlinna factorization \(f=BS_{\mu}F/S_{\nu}H\), where \(B\) is a Blaschke product, \(F,H\in H^{\infty}\) are outer, and \(S_{\mu},S_{\nu}\) singular inner with \(\mu\) and \(\nu\) mutually singular. Then the following holds:_
1. \(\nu\) _charges no_ \(w\)_-set, that is,_ \(S_{\nu}=S_{\nu_{C}}\) _is cyclic in_ \(\mathcal{G}_{w}\)_,_
2. \(f\) _is cyclic in_ \(\mathcal{G}_{w}\) _if and only if it is of the form_ \(f=S_{\mu_{C}}F/S_{\nu_{C}}H\)_._
_Moreover, the \(M_{z}\)-invariant subspace of \(\mathcal{G}_{w}\) generated by \(f\) equals \(\left[BS_{\mu_{P}}\right]_{\mathcal{G}_{w}}\), provided that \(S_{\mu_{C}}F/S_{\nu_{C}}H\in\mathcal{G}_{w}\)._
Observe that \((i)\) actually asserts that the (minimal) singular inner factor \(S_{\nu}\) appearing in the denominator of an element in \(\mathcal{G}_{w}\cap\mathcal{N}\) must necessarily be cyclic in \(\mathcal{G}_{w}\); that is, \(\nu\) cannot charge any \(w\)-set.
In our next result, we shall apply the descriptions on \(M_{z}\)-invariant subspaces generated by inner functions \(\Theta\) to demonstrate an intimate connection to the membership of functions in the associated model spaces \(K_{\Theta}\) enjoying certain \(w\)-smoothness properties on \(\partial\mathbb{D}\). Given an inner function \(\Theta\), we recall that the model space \(K_{\Theta}\) is defined as the orthogonal complement of \(\Theta H^{2}=\{\Theta h:h\in H^{2}\}\) in \(H^{2}\), that is
\[K_{\Theta}:=H^{2}\ominus\Theta H^{2}.\]
As a consequence of Beurling's Theorem, the model spaces are the only closed invariant subspaces for the adjoint shift \(M_{z}^{*}\) on \(H^{2}\), and they play a fundamental role in complex function theory and operator theory; see [8]. Given a majorant \(w\), we denote by \(\mathcal{A}_{w}\) the Banach space of bounded analytic functions \(f\) on \(\mathbb{D}\) with continuous extension to \(\partial\mathbb{D}\) and whose modulus of continuity is controlled by \(w\) on \(\partial\mathbb{D}\), equipped with the norm
\[\left\|f\right\|_{\mathcal{A}_{w}}:=\left\|f\right\|_{H^{\infty}}+\sup_{\xi, \zeta\in\partial\mathbb{D}}\frac{\left|f(\xi)-f(\zeta)\right|}{w(\left|\xi- \zeta\right|)}. \tag{3}\]
In other words, \(\mathcal{A}_{w}:=H^{\infty}\cap C_{w}(\partial\mathbb{D})\), where \(C_{w}(\partial\mathbb{D})\) denotes the space of continuous functions on \(\partial\mathbb{D}\) having modulus of continuity no worse than \(w\). It follows from the work of [27] that functions in \(\mathcal{A}_{w}\) actually have modulus of continuity not exceeding \(w\) in all of \(\overline{\mathbb{D}}\) (see also [6] for a short proof). It was recently established (see Theorem 1.3 in [22]) that for any majorant \(w\), there exists a singular inner function \(\Theta\) such that \(\mathcal{A}_{w}\cap K_{\Theta}=\{0\}\). The mentioned result should be put in contrast to the classical density theorem of A.B. Aleksandrov in [1] (see also [19] for a slight improvement), which asserts that functions which extend continuously to \(\partial\mathbb{D}\) in any model space \(K_{\Theta}\) are always dense in \(K_{\Theta}\). In the case when \(w(t)=t^{\alpha}\) is Holderian, K. Dyakonov and D. Khavinson proved that the intersection \(\mathcal{A}_{w}\cap K_{\Theta}\) is non-trivial if and only if either \(\Theta\) has a zero in \(\mathbb{D}\) or the associated singular measure charges a Beurling-Carleson set on \(\partial\mathbb{D}\), see [9]. More recently, the author in collaboration with B. Malman showed that \(\mathcal{A}_{w}\cap K_{\Theta}\) is dense in \(K_{\Theta}\) if and only if the singular measure associated to \(\Theta\) is concentrated on a countable union of Beurling-Carleson sets, see [21].
We shall now make yet another assumption on the majorant \(w\): there exists a constant \(0<\gamma<1\), such that
\((A_{2})\)
\[\int_{0}^{1}w^{\gamma}(t)\frac{dt}{t}<\infty.\]
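For instance, for the logarithmic majorants \(w(t)=\log^{-c}(e/t)\) with \(c>1\) (one of the typical examples mentioned below), the substitution \(s=\log(e/t)\) yields

\[\int_{0}^{1}w^{\gamma}(t)\frac{dt}{t}=\int_{1}^{\infty}s^{-c\gamma}\,ds,\]

which is finite precisely when \(c\gamma>1\), so any choice of \(\gamma\in(1/c,1)\) works.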
In other words, we ask for the majorant \(w\) to be just slightly better than _Dini-continuous_. Besides the class of Holderian majorants, this also includes majorants of logarithmic decay with typical examples provided by \(w(t)=\log^{-c}(e/t)\), for any \(c>1\). Under this additional assumption we retrieve the following complete structure theorem for model spaces.
**Theorem 1.4**.: _Let \(w\) be a majorant satisfying the conditions \((A_{1})\) and \((A_{2})\), and let \(\Theta=BS_{\mu}\) be an inner function where \(B\) denotes its Blaschke factor. Then the following statements hold:_
_Existence:_: \(\mathcal{A}_{w}\cap K_{\Theta}=\{0\}\) _if and only if_ \(\Theta=S_{\mu}\) _and_ \(\mu\) _assigns no mass to any_ \(w\)_-set._
_Density:_: \(\mathcal{A}_{w}\cap K_{\Theta}\) _is dense in_ \(K_{\Theta}\) _if and only if_ \(\Theta=BS_{\mu}\) _where_ \(\mu\) _is concentrated on a countable union of_ \(w\)_-sets._
_Moreover, decomposing \(\mu\) as in (2), the closure of \(\mathcal{A}_{w}\cap K_{\Theta}\) in \(K_{\Theta}\) equals \(K_{\Theta_{P}}\), where \(\Theta_{P}=BS_{\mu_{P}}\), and \(\mathcal{A}_{w}\cap K_{S_{\mu_{C}}}=\{0\}\)._
The above result in conjunction with Theorem 1.2 confirms an intimate connection between \(M_{z}\)-invariant subspaces generated by inner functions \(\Theta\) in \(\mathcal{G}_{w}\), the containment of \(\mathcal{A}_{w}\)-functions in the model spaces \(K_{\Theta}\), and boundary zero sets of \(\mathcal{A}_{w}\)-functions described by \(w\)-entropy. We remark that using Theorem 1.1 in [19], one can also deduce \(H^{p}\)-versions of Theorem 1.4.
This paper is organized as follows. Section 2 is devoted to proving Theorem 1.1, which chiefly revolves around adapting the techniques developed by J. Roberts in [24] to our setting, using a decomposition for singular measures and applying the Corona Theorem. In Section 3, we develop a machinery deeply influenced by the work in [3], allowing us to describe \(M_{z}\)-invariant subspaces in \(\mathcal{G}_{w}\) via embeddings into Hardy spaces on certain Privalov star-domains. The proofs of Theorem 1.2 and Corollary 1.3 are contained in Section 3, and the last subsection therein is devoted to strengthening Theorem 1.2 by weakening the assumption \((A_{1})\). Section 4 is devoted to the proof of Theorem 1.4, where we identify Cauchy dual elements of the growth spaces \(\mathcal{G}_{w}\), utilize results on how the Cauchy projection distorts the modulus of continuity, and modify a technical construction of N.A. Shirokov in [26].
## 2 Cyclic singular inner functions
In this section, we shall adapt the methods developed by J. Roberts in [24] to our setting.
### Describing \(w\)-sets via complementary arcs
**Lemma 2.1**.: _Let \(w\) be a majorant. Then a closed set \(E\subset\partial\mathbb{D}\) of Lebesgue measure zero is a \(w\)-set if and only if_
\[\sum_{k}m(J_{k})\log w(m(J_{k}))>-\infty,\]
_where \(\{J_{k}\}_{k}\) are the connected components of \(\partial\mathbb{D}\setminus E\)._
This lemma appears in [10] under slightly stronger assumptions on \(w\), but for the reader's convenience we include a short proof-sketch which holds in our setting.
Proof.: Observe that
\[\int_{\partial\mathbb{D}}\log w\left(\operatorname{dist}(\zeta,E)\right)dm( \zeta)=\sum_{k}\int_{0}^{m(J_{k})/2}\log w(t)dt\]
Now since \(w\) is non-decreasing, we have
\[\int_{0}^{m(J_{k})/2}\log w(t)dt\leq\frac{m(J_{k})}{2}\log w(m(J_{k})).\]
On the other hand, using the fact that \(w(t)/t\) is almost non-increasing, it is not difficult to show that
\[\int_{0}^{m(J_{k})/2}\log w(t)dt\geq\frac{m(J_{k})}{2}\log w(m(J_{k}))-cm(J_{k }),\]
for some numerical constant \(c>0\). This is enough to establish the claim.
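For orientation, we note that in the classical case \(w(t)=t\), the condition in Lemma 2.1 reads

\[\sum_{k}m(J_{k})\log\frac{1}{m(J_{k})}<\infty,\]

which is the familiar entropy description of the Beurling-Carleson sets in terms of their complementary arcs.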
### Dyadic grids adapted to \(w\)
In this section, we shall consider dyadic grids which are suitably adapted to a specific majorant \(w\). Let \(\mathcal{D}_{n}\) denote the collection of \(2^{n}\) dyadic arcs \(I\) with \(m(I)=2^{-n}\). Inspired by the recent work in [16], we declare a collection of dyadic arcs \(\mathcal{D}_{w}=\cup_{k\geq 0}\mathcal{D}_{n_{k}}\) to be a dyadic \(w\)-grid if the sequence of positive integers \(\{n_{k}\}_{k\geq 0}\) satisfies the following condition: there exists a positive number \(L>0\) such that
\[\sup_{k\geq 0}\frac{w^{L}(2^{-n_{k}})}{w(2^{-n_{k+1}})}<\infty. \tag{4}\]
Our main result in this subsection is a lemma which allows us to exhibit, for any majorant \(w\), a dyadic \(w\)-grid with a certain special property that is important for our further developments.
**Lemma 2.2**.: _For any majorant \(w\) and any positive integer \(n_{0}>0\), there exists a dyadic \(w\)-grid \(\mathcal{D}_{w}=\cup_{k\geq 0}\mathcal{D}_{n_{k}}\) with the additional property_
\[w(2^{-n_{k+1}})\leq\prod_{j=0}^{k}w(2^{-n_{j}}),\qquad k\geq 0. \tag{5}\]
Proof.: For the sake of abbreviation, we set \(\lambda(t)=\log 1/w(t)\). We shall construct the sequence \(\{n_{k}\}_{k\geq 0}\) recursively by means of induction. To this end, fix an arbitrary \(n_{0}>0\) with \(w(2^{-n_{0}})<1/4\) and observe that by monotonicity and sub-additivity of \(w\), one has
\[\frac{1}{2}\lambda(t/2)\leq\lambda(t)\leq\lambda(t/2) \tag{6}\]
whenever \(t\leq 2^{-n_{0}}\). Fix a constant \(C>1\) to be specified later. Since \(\lambda\) is non-increasing and continuous, there exists \(0<\delta_{0}<2^{-n_{0}}\) such that
\[C\leq\frac{\lambda(\delta_{0})}{\lambda(2^{-n_{0}})}<5C.\]
Pick \(n_{1}>n_{0}\) to be the smallest positive integer with \(2^{-n_{1}}\leq\delta_{0}<2^{-(n_{1}-1)}\). It follows by the property of \(\lambda\) in (6) that
\[C\leq\frac{\lambda(2^{-n_{1}})}{\lambda(2^{-n_{0}})}<10C.\]
Now by means of induction, suppose that \(n_{k}>0\) has been chosen, and let \(0<\delta_{k}<2^{-n_{k}}\) with
\[C\leq\frac{\lambda(\delta_{k})}{\lambda(2^{-n_{k}})}<5C.\]
We then choose \(n_{k+1}>n_{k}\) to be the smallest positive integer with \(2^{-n_{k+1}}\leq\delta_{k}<2^{-(n_{k+1}-1)}\). Again, using (6) it follows that
\[C\leq\frac{\lambda(2^{-n_{k+1}})}{\lambda(2^{-n_{k}})}<10C.\]
Consequently, we obtain an increasing sequence of positive integers \(\{n_{k}\}_{k\geq 0}\), which by construction satisfies (4), and thus we conclude that \(\mathcal{D}_{w}=\cup_{k\geq 0}\mathcal{D}_{n_{k}}\) is a dyadic \(w\)-grid. It remains to verify (5), which is equivalent to
\[\sum_{j=0}^{k}\lambda(2^{-n_{j}})\leq\lambda(2^{-n_{k+1}})\qquad k\geq 0.\]
Observe that a straightforward iteration shows that
\[\lambda(2^{-n_{j}})\leq C^{-1}\lambda(2^{-n_{j+1}})\leq\left(C^{-1}\right)^{k -j+1}\lambda(2^{-n_{k+1}}),\qquad 0\leq j\leq k.\]
Using these estimates term-wise and summing up the geometric sum, we arrive at
\[\sum_{j=0}^{k}\lambda(2^{-n_{j}})\leq\lambda(2^{-n_{k+1}})\sum_{j=0}^{k}\left( C^{-1}\right)^{k+1-j}\leq\frac{1}{C-1}\lambda(2^{-n_{k+1}}).\]
Choosing \(C>2\) establishes the desired claim.
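To get a feeling for the construction, consider two model cases (the computations are rough, up to absolute constants): for Hölderian majorants \(w(t)=t^{c}\) one has \(\lambda(2^{-n})\asymp n\), so the requirement \(\lambda(2^{-n_{k+1}})\asymp C\lambda(2^{-n_{k}})\) produces an essentially geometric grid, \(n_{k+1}\asymp Cn_{k}\); for the logarithmic majorants \(w(t)=\log^{-c}(e/t)\) one has \(\lambda(2^{-n})\asymp\log n\), and the grid instead grows roughly like \(n_{k+1}\asymp n_{k}^{C}\).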
### A structure theorem for singular measures
Our main objective is to establish a Roberts-type decomposition of singular measures with respect to general \(w\)-sets. To this end, we shall need to introduce the notion of modulus of continuity and list some of its basic properties. Denote by
\[\omega_{\nu}(\delta):=\sup\left\{\nu(I):m(I)\leq\delta,\ I\subset\partial\mathbb{D }\ \mathrm{arc}\right\}\]
the modulus of continuity of a positive finite Borel measure \(\nu\) on \(\partial\mathbb{D}\). It is straightforward to verify that for any positive finite Borel measure \(\nu\) on \(\partial\mathbb{D}\) (not necessarily continuous) the associated modulus of continuity \(\omega_{\nu}:[0,1]\to[0,\|\nu\|]\) is non-decreasing with \(\omega_{\nu}(0)=0\) and sub-additive:
\[\omega_{\nu}(\delta_{1}+\delta_{2})\leq\omega_{\nu}(\delta_{1})+\omega_{\nu}( \delta_{2}),\]
whenever \(\delta_{1},\delta_{2}>0\) and \(\delta_{1}+\delta_{2}\leq 1\). Moreover, the function \(\delta\mapsto\frac{\omega_{\nu}(\delta)}{\delta}\) is almost non-increasing on \((0,1]\) in the sense that whenever \(0<\delta_{1}\leq\delta_{2}\leq 1\), one has
\[\frac{\omega_{\nu}(\delta_{2})}{\delta_{2}}\leq 2\frac{\omega_{\nu}(\delta_{1})}{\delta_{1}}. \tag{7}\]
To see this, pick a positive integer \(m\geq 1\) with the property that \(2^{m-1}\leq\delta_{2}/\delta_{1}<2^{m}\) and note that by monotonicity and sub-additivity of \(\omega_{\nu}\), we get
\[\frac{\omega_{\nu}(\delta_{2})}{\delta_{2}}\leq\frac{\omega_{\nu}(2^{m}\delta _{1})}{\delta_{2}}\leq 2^{m}\frac{\omega_{\nu}(\delta_{1})}{\delta_{2}}\leq 2 \frac{\omega_{\nu}(\delta_{1})}{\delta_{1}}.\]
We are now ready to phrase the main result in this subsection.
**Proposition 2.3** (Roberts decomposition).: _Let \(\mu\) be a positive finite Borel singular measure on \(\partial\mathbb{D}\). For any integer \(n_{0}>0\), for any constant \(c>0\) and for any dyadic \(w\)-grid \(\mathcal{D}_{w}=\cup_{k\geq 0}\mathcal{D}_{n_{k}}\), there exist positive measures \(\{\mu_{k}\}_{k\geq 0},\nu_{0}\) on \(\partial\mathbb{D}\) such that \(\mu\) decomposes as_
\[\mu=\sum_{k\geq 0}\mu_{k}+\nu_{0},\]
_where each \(\mu_{k}\) satisfies_
\[\mu_{k}(I)\leq cm(I)\log\frac{1}{w(m(I))},\qquad I\in\mathcal{D}_{n_{k}} \tag{8}\]
_and \(\nu_{0}\) is supported on a \(w\)-set. Moreover, if \(\mu\) does not charge any \(w\)-set then \(\nu_{0}\equiv 0\) for any choice of parameters \(n_{0}\), \(c\) and dyadic \(w\)-grid \(\mathcal{D}_{w}\) as above._
Note that each decomposition provided by Proposition 2.3 depends on the parameters \(n_{0},c\) and the dyadic \(w\)-grid \(\mathcal{D}_{w}\). Proposition 2.3 provides a different perspective on the abstract decomposition given in (2), by describing the part of \(\mu\) which does not charge any \(w\)-set.
Proof of Proposition 2.3.: The proof is very similar to that of the decomposition provided in [24], hence we shall mainly provide an overall sketch and add details to those steps which deviate from the mentioned work. Fix \(c>0\) and let \(\mathcal{D}_{w}=\cup_{k\geq 0}\mathcal{D}_{n_{k}}\) be a dyadic \(w\)-grid. A dyadic arc \(I\in\mathcal{D}_{n_{0}}\) is declared to be _light_ for \(\mu\) if
\[\mu(I)\leq cm(I)\log\frac{1}{w(m(I))}=c\,2^{-n_{0}}\log\frac{1}{w(2^{-n_{0}})},\]
and _heavy_ otherwise. Define the singular measure \(\mu_{0}\) for Borel subsets \(F\subseteq I\in\mathcal{D}_{n_{0}}\) by
\[\mu_{0}(F)=\begin{cases}\mu(F)\qquad\text{if $I$ is light},\\ \frac{\mu(F)}{\mu(I)}c\,2^{-n_{0}}\log\frac{1}{w(2^{-n_{0}})}\qquad\text{if $I$ is heavy}.\end{cases}\]
The measure \(\mu_{0}\) constructed in this way is said to be the \(\mathcal{D}_{n_{0}}\)-grating of \(\mu\). Observe that by construction \(\mu-\mu_{0}\) is a non-negative singular measure, whose support lies in \(H_{0}\): the union of the heavy intervals in \(\mathcal{D}_{n_{0}}\). Moreover, one has
\[\mu_{0}(I)=c\,2^{-n_{0}}\log\frac{1}{w(2^{-n_{0}})},\qquad\text{if $I$ is heavy},\]
and
\[\omega_{\mu_{0}}(2^{-n_{0}})\leq c\,2^{-n_{0}}\log\frac{1}{w(2^{-n_{0}})}.\]
To proceed, the idea is now to define the measure \(\mu_{1}\) as the \(\mathcal{D}_{n_{1}}\)-grating of \(\mu-\mu_{0}\). Iterating this procedure indefinitely, we obtain a sequence of singular measures \(\{\mu_{k}\}_{k\geq 0}\), where each \(\mu_{k}\) is the \(\mathcal{D}_{n_{k}}\)-grating of \(\mu-\sum_{j=0}^{k-1}\mu_{j}\); the successive remainders \(\mu-\sum_{j=0}^{k}\mu_{j}\) are supported inside \(H_{k}\), the union of all heavy arcs in \(\mathcal{D}_{n_{k}}\), while each \(\mu_{k}\) satisfies
\[\mu_{k}(I)=c2^{-n_{k}}\log\frac{1}{w(2^{-n_{k}})},\qquad\text{if $I$ is heavy},\]
and
\[\omega_{\mu_{k}}(2^{-n_{k}})\leq c\,2^{-n_{k}}\log\frac{1}{w(2^{-n_{k}})}.\]
Now set \(\nu_{0}:=\mu-\sum_{k\geq 0}\mu_{k}\) and note that by construction \(H_{k}\supseteq H_{k+1}\) for all \(k\geq 0\) and the difference \(\mu-\sum_{j=0}^{k}\mu_{j}\) has its support inside \(H_{k}\); thus \(\nu_{0}\) has its support in the closed set \(F:=\cap_{k}H_{k}\). To see that \(F\) has Lebesgue measure \(0\), observe that by the identity
\[\mu_{k}(I)=c\,m(I)\log\frac{1}{w(m(I))}\]
for heavy arcs \(I\) in \(\mathcal{D}_{n_{k}}\), we get
\[cm(H_{k})\log\frac{1}{w(2^{-n_{k}})}=\mu_{k}(H_{k})\leq\mu_{k}(\partial\mathbb{ D})\leq\mu(\partial\mathbb{D}). \tag{9}\]
Hence \(m(F)=\lim_{k}m(H_{k})=0\). Fix \(k\geq 1\) and let \(\mathcal{L}_{k}\) denote the set of interiors of the light arcs in \(\mathcal{D}_{n_{k}}\), which are contained in \(H_{k-1}\). Set \(F_{0}:=\partial\mathbb{D}\setminus\cup_{k}\cup_{J\in\mathcal{L}_{k}}J\). By definition \(F_{0}\) is a closed set containing \(F\). In fact, \(F_{0}\setminus F\) is a countable set which solely consists of endpoints of adjacent light arcs. Hence in order to show that \(F\) is a \(w\)-set, it suffices to verify that \(F_{0}\) is a \(w\)-set, which on account of Lemma 2.1 is equivalent to
\[\sum_{k\geq 1}\sum_{J\in\mathcal{L}_{k}}m(J)\log\left(\frac{1}{w(m(J))}\right) <\infty.\]
Set \(L_{k}:=\cup_{J\in\mathcal{L}_{k}}J\) and note that
\[\sum_{k\geq 1}\sum_{J\in\mathcal{L}_{k}}m(J)\log\left(\frac{1}{w(m(J))}\right)=\sum_{k\geq 1}m(L_{k})\log\left(\frac{1}{w(2^{-n_{k}})}\right)\leq\sum_{k\geq 1}m(H_{k-1})\log\left(\frac{1}{w(2^{-n_{k}})}\right).\]
Now using the assumption that \(\mathcal{D}_{w}=\cup_{k\geq 0}\mathcal{D}_{n_{k}}\) is a dyadic \(w\)-grid, there exists a constant \(L>0\), independent of \(k\), such that
\[\log\left(\frac{1}{w(2^{-n_{k}})}\right)\leq L\log\left(\frac{1}{w(2^{-n_{k-1} })}\right),\qquad k\geq 1.\]
Using this in conjunction with (9), we obtain
\[\sum_{k\geq 1}\sum_{J\in\mathcal{L}_{k}}m(J)\log\left(\frac{1}{w(m(J))}\right)\leq\frac{L}{c}\sum_{k\geq 0}\mu_{k}(\partial\mathbb{D})\leq\frac{L}{c}\mu(\partial\mathbb{D}).\]
This proves that \(\nu_{0}\) is supported on a \(w\)-set, hence finishes the proof.
We remark that the description in Proposition 2.3 in fact characterizes the singular measures \(\mu\) which do not charge any \(w\)-set. This observation will not be relevant for our further developments, and we refer the reader to [16] for a result in a similar direction, in a different context.
### Cyclicity via the Corona Theorem
The following quantitative version of the classical Corona Theorem appearing in [24] will be crucial for our further developments.
**Theorem 2.4** (Corona Theorem).: _There exists an absolute constant \(K>0\), such that whenever \(f_{1},f_{2}\in H^{\infty}\) with \(\left\|f_{j}\right\|_{H^{\infty}}\leq 1\) for \(j=1,2\), and_
\[\inf_{z\in\mathbb{D}}\left(\left|f_{1}(z)\right|+\left|f_{2}(z)\right|\right) \geq\delta,\]
_for some \(0<\delta<1/2\), then there exist \(h_{1},h_{2}\in H^{\infty}\) with \(\left\|h_{j}\right\|_{H^{\infty}}\leq\delta^{-K}\) for \(j=1,2\), such that the Bezout equation_
\[f_{1}h_{1}+f_{2}h_{2}=1\]
_holds on \(\mathbb{D}\)._
The initial step towards establishing Theorem 1.1 is to consider the singular inner functions \(S_{\mu_{k}}\), where the \(\mu_{k}\) are the singular measures appearing in the Roberts decomposition of Proposition 2.3, and to find suitable Corona mates \(f_{k}\) in the unit ball of \(H^{\infty}\), in such a way that the corresponding solutions \(h_{k},g_{k}\) enjoy the property that \(1-S_{\mu_{k}}h_{k}=f_{k}g_{k}\) has very small \(\mathcal{G}_{w}\)-norm in terms of \(n_{k}\). It turns out that certain monomials will do as Corona mates \(f_{k}\) for \(S_{\mu_{k}}\), hence we shall need a lemma which quantifies their \(\mathcal{G}_{w}\)-norm.
**Lemma 2.5**.: _For any majorant \(w\), the \(\mathcal{G}_{w}\)-norm of the monomials enjoy the bound_
\[\|z^{n}\|_{\mathcal{G}_{w}}\leq 3w(1/n)\]
_for any large positive integer \(n\)._
Proof.: Note that since \(w\) is non-decreasing, we trivially have the estimate
\[\sup_{1-|z|<1/n}w(1-|z|)|z|^{n}\leq w(1/n).\]
Now using the fact that \(w(t)/t\) is almost non-increasing, we have for any \(1-|z|\geq 1/n\)
\[w(1-|z|)|z|^{n}\leq 2\frac{w(1/n)}{1/n}(1-|z|)|z|^{n}.\]
Since \((1-t)t^{n}\) reaches its maximum at \(t_{n}=1-1/(n+1)\) on the unit-interval \([0,1]\), we obtain
\[\sup_{1-|z|>1/n}w(1-|z|)|z|^{n}\leq 2w(1/n)\left(1-\frac{1}{n+1}\right)^{n+1} \leq 2w(1/n).\]
This proves the desired claim.
Before turning to the proof of Theorem 1.1, we shall need yet another lemma on convergence in \(\mathcal{G}_{w}\), which essentially asserts that \(H^{\infty}\) equipped with the weak-star topology inherited from \(L^{\infty}(\partial\mathbb{D})\) is continuously embedded in \(\mathcal{G}_{w}\).
**Lemma 2.6**.: _Let \(\{g_{k}\}_{k}\subset H^{\infty}\) be a sequence which converges uniformly on compacts to a bounded analytic function \(g\), with \(\sup_{k}\lVert g_{k}\rVert_{H^{\infty}}<\infty\). Then \(g_{k}\) converges to \(g\) in the norm of \(\mathcal{G}_{w}\)._
Proof.: Fix an arbitrary \(\varepsilon>0\) and set \(C:=\sup_{k}\lVert g_{k}-g\rVert_{H^{\infty}}\), which is guaranteed to be finite by assumption. Observe that since \(w\) is non-decreasing, we have
\[\sup_{1-|z|<\varepsilon}w(1-|z|)|g_{k}(z)-g(z)|\leq Cw(\varepsilon).\]
Now since \(g_{k}\) converges uniformly to \(g\) on compacts, we get
\[\lim_{k\to\infty}\sup_{1-|z|\geq\varepsilon}w(1-|z|)|g_{k}(z)-g(z)|=0.\]
Combining both these estimates, we arrive at
\[\limsup_{k\to\infty}\lVert g_{k}-g\rVert_{\mathcal{G}_{w}}\leq Cw(\varepsilon).\]
The claim now follows by letting \(\varepsilon\to 0+\) and using the fact that \(w(\varepsilon)\to 0\) as \(\varepsilon\to 0+\).
### The Theorem on cyclicity in \(\mathcal{G}_{w}\)
With the preparations in the previous subsections at hand, we now turn to the proof of Theorem 1.1.
Proof of Theorem 1.1.: Fix the parameter \(c>0\) in such a way that \(48cK<1\), where \(K>0\) denotes the Corona constant appearing in Theorem 2.4, and let \(n_{0}>0\) be an arbitrary positive integer with \(w(2^{-n_{0}})^{12c}<\min(1/4,3^{-1/K})\). Suppose \(\mu\) is a positive finite singular Borel measure on \(\partial\mathbb{D}\) which does not charge any \(w\)-set. Applying Lemma 2.2 and Proposition 2.3 with parameters \(c\), \(n_{0}\) and \(\{n_{k}\}_{k\geq 0}\) as above, we obtain the decomposition
\[\mu=\sum_{k\geq 0}\mu_{k}\]
where each \(\mu_{k}\) satisfies (8). The following simple observation is crucial for our further development: Any positive finite Borel measure \(\nu\) on \(\partial\mathbb{D}\) satisfies the following estimate
\[|S_{\nu}(z)|\geq\exp\left(-6\,\frac{\omega_{\nu}(1-|z|)}{1-|z|}\right)\qquad z \in\mathbb{D}. \tag{10}\]
For instance, see Theorem 2 in [2] for a simple proof of this fact.
_Step 1: Initial estimate using the Corona Theorem:_
Fix an arbitrary \(k\geq 0\) and note that when \(1-|z|\geq 2^{-n_{k}}\), the property (7) for modulus of continuity gives
\[\frac{\omega_{\mu_{k}}(1-|z|)}{1-|z|}\leq 2\frac{\omega_{\mu_{k}}(2^{-n_{k}})}{2 ^{-n_{k}}}\leq 2c\log\frac{1}{w(2^{-n_{k}})}.\]
Using this in conjunction with (10) applied to \(\nu=\mu_{k}\), we obtain the following bound from below:
\[|S_{\mu_{k}}(z)|\geq w(2^{-n_{k}})^{12c},\qquad\qquad|z|\leq 1-2^{-n_{k}}. \tag{11}\]
Now choosing the Corona-mate of \(S_{\mu_{k}}\) to be the monomial \(z^{2^{n_{k}}}\) it is easy to check that when \(|z|>1-2^{-n_{k}}\) (here we utilize the assumption that \(w(2^{-n_{0}})^{12c}<1/4\)), one has
\[|z|^{2^{n_{k}}}\geq(1-2^{-n_{k}})^{2^{n_{k}}}\geq 1/4\geq w(2^{-n_{k}})^{12c}.\]
This implies that
\[\inf_{z\in\mathbb{D}}\left(|S_{\mu_{k}}(z)|+|z|^{2^{n_{k}}}\right)\geq w(2^{- n_{k}})^{12c},\]
hence applying the Corona Theorem, we can find functions \(g_{k},h_{k}\in H^{\infty}\) enjoying the estimates \(\left\|g_{k}\right\|_{H^{\infty}},\left\|h_{k}\right\|_{H^{\infty}}\leq w(2^{-n_ {k}})^{-12cK}\), which solve the equation \(S_{\mu_{k}}h_{k}+z^{2^{n_{k}}}g_{k}=1\) on \(\mathbb{D}\). Using this in conjunction with Lemma 2.5, we obtain
\[\left\|S_{\mu_{k}}h_{k}-1\right\|_{\mathcal{G}_{w}}\leq\left\|g_{k}\right\|_{H ^{\infty}}\left\|z^{2^{n_{k}}}\right\|_{\mathcal{G}_{w}}\leq 3w(2^{-n_{k}})^{ 1-12cK}.\]
In fact, utilizing the assumption on \(c\) and \(n_{0}\) once again, we actually get
\[\left\|S_{\mu_{k}}h_{k}-1\right\|_{\mathcal{G}_{w}}\leq 3w(2^{-n_{k}})^{1-12cK }\leq w(2^{-n_{k}})^{1-24cK}\leq w(2^{-n_{k}})^{24cK}. \tag{12}\]
_Step 2: The iterative procedure:_
The idea is now to combine the \(S_{\mu_{k}}\)'s all together and iterate the estimate in (12). To this end, fix a very large positive integer \(N>0\) and set \(\nu_{N}:=\sum_{k=0}^{N}\mu_{k}\). Denote by \(h_{k}\) the Corona solutions associated to \(S_{\mu_{k}}\) as in (12) and set \(H_{N}=h_{0}\cdots h_{N}\). With this at hand, we may combine the terms in the following way:
\[1-S_{\nu_{N}}H_{N}=(1-S_{\mu_{0}}h_{0})+S_{\mu_{0}}h_{0}(1-S_{\mu_{1}}h_{1})+ \cdots+\left(\prod_{k=0}^{N-1}S_{\mu_{k}}h_{k}\right)(1-S_{\mu_{N}}h_{N}).\]
Using this, and applying (12) in conjunction with the estimates \(\left\|h_{k}\right\|_{H^{\infty}}\leq w(2^{-n_{k}})^{-12cK}\) for each \(k\geq 0\), we obtain
\[\left\|S_{\nu_{N}}H_{N}-1\right\|_{\mathcal{G}_{w}}\leq w(2^{-n_{0}})^{24cK}+\sum_{k=1}^{N}\left(\frac{w(2^{-n_{k}})^{2}}{w(2^{-n_{0}})\cdots w(2^{-n_{k-1}})}\right)^{12cK}\leq\sum_{k=0}^{N}w(2^{-n_{k}})^{12cK}\leq\sum_{k=0}^{\infty}w(2^{-n_{0}})^{(k+1)12cK}\leq Cw(2^{-n_{0}})^{12cK}.\]
Here we used Lemma 2.2 and the assumption on \(n_{0}\), which ensures the existence of \(C>0\) above, independent of \(n_{0},c,N\).
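For the reader's convenience, we point out that the second inequality above rests on (5), which gives

\[\frac{w(2^{-n_{k}})^{2}}{w(2^{-n_{0}})\cdots w(2^{-n_{k-1}})}\leq w(2^{-n_{k}}),\qquad k\geq 1,\]

while the third inequality uses the bound \(w(2^{-n_{k}})\leq w(2^{-n_{0}})^{k+1}\); the latter follows from the estimate \(\lambda(2^{-n_{k+1}})\geq C\lambda(2^{-n_{k}})\) with \(C>2\) from the proof of Lemma 2.2, since then \(\lambda(2^{-n_{k}})\geq 2^{k}\lambda(2^{-n_{0}})\geq(k+1)\lambda(2^{-n_{0}})\).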
_Step 3: The final limiting procedure:_
Now the observation that the above estimate holds for any large \(N>0\) will be crucial in order to upgrade this estimate to hold for \(\mu\) instead of each \(\nu_{N}\). To this end, note that for any large \(N\), using the identity \(S_{\mu}H_{N}-1=(S_{\nu_{N}}H_{N}-1)S_{\mu-\nu_{N}}+(S_{\mu-\nu_{N}}-1)\) together with \(\lVert S_{\mu-\nu_{N}}\rVert_{H^{\infty}}\leq 1\), we have the following estimate:
\[\left\|S_{\mu}H_{N}-1\right\|_{\mathcal{G}_{w}}\leq\left\|S_{\nu_{N}}H_{N}-1 \right\|_{\mathcal{G}_{w}}+\left\|S_{\mu-\nu_{N}}-1\right\|_{\mathcal{G}_{w}} \leq Cw(2^{-n_{0}})^{12cK}+\left\|S_{\mu-\nu_{N}}-1\right\|_{\mathcal{G}_{w}}.\]
Now since the measures \(\nu_{N}\) increase up to \(\mu\), it is not difficult to prove that \(S_{\mu-\nu_{N}}\) converges uniformly on compacts to \(1\) and obviously has uniformly bounded \(H^{\infty}\)-norm; hence, on account of Lemma 2.6, \(S_{\mu-\nu_{N}}\) converges to \(1\) in the norm of \(\mathcal{G}_{w}\). Letting \(N\to\infty\) in the estimate above, we thus conclude that
\[\inf_{h\in H^{\infty}}\left\|S_{\mu}h-1\right\|_{\mathcal{G}_{w}}\leq Cw(2^{-n _{0}})^{12cK}.\]
Since \(n_{0}>0\) can be chosen arbitrarily large in the decomposition of Proposition 2.3 and in Lemma 2.2, we finally conclude that \(S_{\mu}\) is cyclic in \(\mathcal{G}_{w}\), which finishes the proof of Theorem 1.1.
## 3 Proper \(M_{z}\)-invariant subspaces
This section is chiefly devoted to the proof of the permanence principle phrased in \((ii)\) of Theorem 1.2 and to proving Corollary 1.3. To this end, we shall closely follow the techniques developed in [3], with certain adaptations and deviations suited to our general setting.
### Identifying the singular part
Let \(\Omega\subseteq\mathbb{D}\) be a Jordan domain containing the origin, and let \(\phi:\mathbb{D}\to\Omega\) be a conformal mapping with \(\phi(0)=0\). For \(0<p\leq\infty\) we recall that the Hardy space \(H^{p}(\Omega)\) consists of analytic functions \(f\) on \(\Omega\) with
\[\left\|f\right\|_{H^{p}(\Omega)}:=\sup_{0<r<1}\left(\int_{\partial\mathbb{D}} \lvert(f\circ\phi)(r\zeta)\rvert^{p}dm(\zeta)\right)^{\min(1,1/p)}<\infty.\]
In other words, \(f\) belongs to \(H^{p}(\Omega)\) if and only if \(f\circ\phi\) belongs to the classical Hardy space \(H^{p}\) on \(\mathbb{D}\). In this setting, we always have the continuous embedding \(H^{p}\subseteq H^{p}(\Omega)\), hence it is natural to ask what the Nevanlinna factorization of an \(H^{p}\)-function corresponds to in \(H^{p}(\Omega)\). In this section, our main objective is to describe the singular inner factors in \(H^{p}(\Omega)\) of functions in \(H^{p}\); in other words, we are interested in the singular inner factor of \(f\circ\phi\) for a function \(f\in H^{p}\). To this end, let \(f\in H^{p}\) with Nevanlinna factorization \(f=FS_{\mu}B\), where \(F\) is outer, \(S_{\mu}\) singular inner and \(B\) denotes the Blaschke factor. The following was observed in [3]. Since \(F\) is outer in \(H^{p}\), Beurling's theorem on \(H^{p}\) implies that \(F\) is cyclic in \(H^{p}\); on account of the continuous inclusion, it is therefore also cyclic in \(H^{p}(\Omega)\). However, an application of Beurling's theorem for simply connected domains implies that \(F\) must also be outer in \(H^{p}(\Omega)\), and thus we conclude that \(F\) cannot give rise to any singular inner factor of \(f\) in \(H^{p}(\Omega)\). Since conformal maps never contain singular inner factors and any Blaschke product \(B\) on \(\mathbb{D}\) is a product of normalized conformal self-maps of the unit disc, \(B\circ\phi\) cannot give rise to a singular inner factor either. From this discussion, we conclude that the singular inner factor in \(H^{p}(\Omega)\) of \(f\) is located in the factor \(S_{\mu}\circ\phi\). Now let \(\zeta_{0}\in\partial\mathbb{D}\) and denote by \(\sigma_{\zeta_{0}}\) the corresponding Aleksandrov-Clark measure of the conformal map \(\phi:\mathbb{D}\to\Omega\) with \(\phi(0)=0\), that is
\[\frac{1-\left|\phi(z)\right|^{2}}{\left|\zeta_{0}-\phi(z)\right|^{2}}=\int_{\partial\mathbb{D}}\frac{1-\lvert z\rvert^{2}}{\left|\zeta-z\right|^{2}}d\sigma_{\zeta_{0}}(\zeta),\qquad z\in\mathbb{D}.\]
Now if \(\zeta_{0}\notin\phi(\partial\mathbb{D})\), then it is not difficult to see that \(d\sigma_{\zeta_{0}}\) is absolutely continuous with respect to the Lebesgue arc-length measure \(dm\) on \(\partial\mathbb{D}\) (for instance, see Lemma 2.1 in [3]). In the other case, we have \(\zeta_{0}=\phi(\eta_{0})\) for a unique \(\eta_{0}\in\partial\mathbb{D}\), and one can then show that
\[\frac{1-\left|\phi(z)\right|^{2}}{\left|\zeta_{0}-\phi(z)\right|^{2}}=\frac{1-\lvert z\rvert^{2}}{\left|\phi^{-1}(\zeta_{0})-z\right|^{2}}\frac{1}{\left|\phi^{\prime}(\phi^{-1}(\zeta_{0}))\right|}+\int_{\partial\mathbb{D}}\frac{1-\lvert z\rvert^{2}}{\left|\zeta-z\right|^{2}}d\tau_{\zeta_{0}}(\zeta),\qquad z\in\mathbb{D},\]
where \(\tau_{\zeta_{0}}(\{\eta_{0}\})=0\). Here \(\phi^{\prime}(\phi^{-1}(\zeta_{0}))=\phi^{\prime}(\eta_{0})\) denotes the _angular derivative_ of \(\phi\) at \(\eta_{0}\), which is conventionally declared to be \(\infty\) if it does not exist. Since \(\partial\Omega\) is rectifiable, it follows from the F. and M. Riesz theorem that \(\phi\) has an angular derivative at almost every point on \(\partial\mathbb{D}\) with respect to the Lebesgue measure. As before, one can show that \(d\tau_{\zeta_{0}}\) is absolutely continuous with respect to \(dm\). Using the above expressions, we can write
\[-\log\lvert(S_{\mu}\circ\phi)(z)\rvert=\int_{\phi(\partial\mathbb{D})}\frac{1-\lvert z\rvert^{2}}{\lvert\phi^{-1}(\xi)-z\rvert^{2}}\frac{d\mu(\xi)}{\lvert\phi^{\prime}(\phi^{-1}(\xi))\rvert}+\int_{\phi(\partial\mathbb{D})}\int_{\partial\mathbb{D}}\frac{1-\lvert z\rvert^{2}}{\lvert\zeta-z\rvert^{2}}d\tau_{\xi}(\zeta)\,d\mu(\xi)+\int_{\partial\mathbb{D}\setminus\phi(\partial\mathbb{D})}\int_{\partial\mathbb{D}}\frac{1-\lvert z\rvert^{2}}{\lvert\zeta-z\rvert^{2}}d\sigma_{\xi}(\zeta)\,d\mu(\xi).\]
The discussion above indicates that the singular part should be contained within the first term. Indeed, this is proven to be the case in Theorem 2.1 of [3]. In fact, carefully following the argument provided there, one actually obtains the following precise description of the singular inner factor of \(S_{\mu}\circ\phi\), which we now phrase for future reference.
**Theorem 3.1** (Theorem 2.1, [3]).: _Let \(\Omega\subseteq\mathbb{D}\) be a domain containing the origin and enclosed by a Jordan curve, and let \(\phi:\mathbb{D}\to\Omega\) be a conformal mapping with \(\phi(0)=0\). Suppose \(\mu\) is a positive finite singular Borel measure on \(\partial\mathbb{D}\) and denote by \(S_{\mu}\) the associated singular inner function. Then the singular inner factor of \(S_{\mu}\circ\phi\) is given by_
\[\exp\left(-\int_{\phi^{-1}(\Gamma)}\frac{\zeta+z}{\zeta-z}\frac{1}{\lvert\phi^{\prime}(\zeta)\rvert}d(\mu\circ\phi)(\zeta)\right),\qquad z\in\mathbb{D},\]
_where \(\Gamma:=\{\zeta\in\partial\Omega\cap\partial\mathbb{D}:\lvert\phi^{\prime}(\phi^{-1}(\zeta))\rvert<\infty\}\) and \((\mu\circ\phi)(E):=\mu(\phi(E))\) for Borel sets \(E\subseteq\partial\mathbb{D}\)._
We also refer the reader to [15] for recent investigations on a related topic.
### Construction of Privalov star-domains
We shall now construct certain Privalov star-domains with sufficiently regular boundaries. To this end, fix \(\alpha>0\) and let \(E\subset\partial\mathbb{D}\) be a closed set of Lebesgue measure zero. We define the simply connected domain
\[\Omega^{\alpha}_{E}:=\left\{r\zeta\in\mathbb{D}:\zeta\in\partial\mathbb{D},\ 0 \leq r<1-\operatorname{dist}(\zeta,E)^{1+\alpha}\right\}. \tag{13}\]
Alternatively, \(\Omega^{\alpha}_{E}\) can be described as the simply connected domain enclosed by the Jordan curve \(\gamma_{E}:\partial\mathbb{D}\to\mathbb{D}\) defined by
\[\gamma_{E}(\zeta)=\zeta\left(1-\operatorname{dist}(\zeta,E)^{1+\alpha}\right),\qquad\zeta\in\partial\mathbb{D}. \tag{14}\]
By construction, it is straightforward to verify that \(\gamma_{E}\) is a \(C^{1+\alpha}\)-smooth Jordan curve with \(C^{\alpha}\)-smooth tangents. An application of Kellogg's Theorem (for instance, see Theorem 4.3 in [12]) implies that any conformal map \(\phi:\mathbb{D}\to\Omega^{\alpha}_{E}\) extends to \(\overline{\mathbb{D}}\) with \(C^{1+\alpha}\)-smoothness and \(\phi^{\prime}\neq 0\) on \(\overline{\mathbb{D}}\).
### Embeddings into Hardy spaces on Privalov star-domains
We now make use of the Privalov star-domains \(\Omega_{E}^{\alpha}\) constructed in the previous subsection in order to establish a continuous embedding from \(\mathcal{G}_{w}\) into the Hardy space \(H^{\infty}(\Omega_{E}^{\alpha})\). Given a \(w\)-set \(E\subset\partial\mathbb{D}\) and a number \(N>0\) to be specified in a moment, we define an associated outer function
\[G_{E}(z):=\exp\left(N\int_{\partial\mathbb{D}}\frac{\zeta+z}{\zeta-z}\log w( \operatorname{dist}(\zeta,E))dm(\zeta)\right),\qquad z\in\mathbb{D}. \tag{15}\]
Note that the \(w\)-set assumption on \(E\) ensures that \(\log w(\operatorname{dist}(\zeta,E))\) belongs to \(L^{1}(\partial\mathbb{D})\), and thus \(G_{E}\) is indeed well-defined. Moreover, it is straightforward to check that \(G_{E}:\mathbb{D}\to\mathbb{D}\) with
\[|G_{E}(\zeta)|=w\left(\operatorname{dist}(\zeta,E)\right)^{N}\qquad\zeta\in \partial\mathbb{D},\]
hence it extends continuously to \(\partial\mathbb{D}\) with \(G_{E}=0\) precisely on \(E\). Fix \(z\in\mathbb{D}\) and let \(z^{*}=z/|z|\) denote the radial projection of \(z\). Denote by \(J_{z}\) the arc centered at \(z^{*}\) of length \(1-|z|\) and note that whenever \(\zeta\in J_{z}\)
\[\operatorname{dist}(\zeta,E)\leq|\zeta-z^{*}|+\operatorname{dist}(z^{*},E) \leq(1-|z|)+\operatorname{dist}(z^{*},E)\leq 2\operatorname{dist}(z^{*},E).\]
Using this and in conjunction with the fact that \(w\) is sub-additive, we obtain
\[|G_{E}(z)|\leq\exp\left(2N\int_{J_{z}}P_{z}(\zeta)\log w(\operatorname{dist}(z^{*},E))dm(\zeta)\right)\leq w\left(\operatorname{dist}(z^{*},E)\right)^{2Nc^{\prime}} \tag{16}\]
for all \(z\in\mathbb{D}\), where \(c^{\prime}>0\) is an absolute constant such that \(\int_{J_{z}}P_{z}dm\geq c^{\prime}\). Now fix an arbitrary point \(z\in\partial\Omega^{\alpha}_{E}\cap\mathbb{D}\) and recall that by definition
\[1-|z|=\operatorname{dist}(z^{*},E)^{1+\alpha}.\]
Now applying this observation to the estimate of \(G_{E}\) in conjunction with the assumption \((A_{1})\), where \(\alpha,\beta\in(0,1)\) and \(C>0\) are the specified parameters, we obtain
\[|G_{E}(z)|\leq Cw(1-|z|)^{2\beta Nc^{\prime}}\leq Cw(1-|z|),\qquad z\in \partial\Omega_{E}^{\alpha}\cap\mathbb{D}, \tag{17}\]
if \(N>0\) is chosen sufficiently large. An important consequence of (17) is that multiplication by \(G_{E}\) acts as a continuous linear embedding from \(\mathcal{G}_{w}\) into the Hardy spaces on \(\Omega_{E}\). For the sake of future references, we phrase the result below:
**Proposition 3.2**.: _Let \(M_{E}(g)(z)=G_{E}(z)g(z)\) denote the multiplication operator by \(G_{E}\). Then the linear map \(M_{E}\) maps \(\mathcal{G}_{w}\) into \(H^{\infty}(\Omega_{E})\) continuously._
Proof.: Let \(Q\) be an analytic polynomial and observe that by (17), we have that
\[\sup_{z\in\partial\Omega_{E}\cap\mathbb{D}}|G_{E}(z)Q(z)|\leq C\|Q\|_{\mathcal{ G}_{w}}.\]
Now since \(G_{E}\) is continuous up to \(\partial\mathbb{D}\), the maximum principle implies that
\[\sup_{z\in\Omega_{E}}|G_{E}(z)Q(z)|\leq C\|Q\|_{\mathcal{G}_{w}}.\]
However, since the analytic polynomials are dense in \(\mathcal{G}_{w}\), it follows that \(M_{E}(g)=G_{E}g\) uniquely extends to a continuous linear operator on all of \(\mathcal{G}_{w}\) which maps into \(H^{\infty}(\Omega_{E})\).
### The permanence property
Combining the tools developed in the previous subsections, we now turn to the proof of Theorem 1.2, which to a large extent revolves around establishing the permanence property in \((ii)\) of the statement.
_Proof of Theorem_ 1.2.: Notice that the proof of part \((i)\) follows from Theorem 1.1 in conjunction with the well-known fact that outer functions in \(H^{\infty}\) are weak-star sequentially cyclic in \(H^{\infty}\) (see Theorem 7.4 in Chap. II of [11]) and Lemma 2.6. It thus remains to prove \((ii)\) and that \(\left[f\right]_{\mathcal{G}_{w}}=\left[BS_{\mu_{P}}\right]_{\mathcal{G}_{w}}\).
_Step 1: Support on a single \(w\)-set:_
We shall first show that if \(\nu\) is a positive finite singular Borel measure supported on a \(w\)-set \(E\subset\partial\mathbb{D}\), then \(S_{\nu}\) satisfies the permanence property in \(\mathcal{G}_{w}\), that is
\[\left[S_{\nu}\right]_{\mathcal{G}_{w}}\cap H^{2}\subseteq S_{\nu}H^{2}.\]
Let \(f\) be a member of \(\left[S_{\nu}\right]_{\mathcal{G}_{w}}\cap H^{2}\) and pick a sequence of analytic polynomials \(Q_{n}\) such that \(S_{\nu}Q_{n}\to f\) in \(\mathcal{G}_{w}\). Now by Proposition 3.2, we have that
\[Q_{n}(z)G_{E}(z)S_{\nu}(z)\to G_{E}(z)f(z)\]
uniformly on \(\Omega_{E}\) and thus also in \(H^{2}(\Omega_{E})\). However, this means that the function \(G_{E}f\) belongs to the \(M_{z}\)-invariant subspace of \(H^{2}(\Omega_{E})\) generated by \(G_{E}S_{\nu}\). Recall that \(G_{E}\) is outer in \(H^{\infty}\), hence applying Beurling's theorem in \(H^{2}(\Omega_{E})\), we conclude that \(f\circ\phi_{E}\) is divisible by the singular inner factor \(S_{\widetilde{\nu}}\) of \(S_{\nu}\circ\phi_{E}\), explicitly given in Theorem 3.1; here \(\phi_{E}:\mathbb{D}\to\Omega_{E}\) denotes a conformal map with \(\phi_{E}(0)=0\). Composing with \(\phi_{E}^{-1}\) on the right, we get that \(f\) is divisible by the singular inner factor of \(S_{\widetilde{\nu}}\circ\phi_{E}^{-1}\). Now applying Theorem 3.1 to \(\widetilde{\nu}\) with \(\phi_{E}^{-1}\) replacing \(\phi\), we actually get that \(f\) is divisible by \(S_{\nu}\), hence the desired claim is established.
_Step 2: Support on unions of \(w\)-sets:_
Let \(E=\cup_{n}E_{n}\) be a carrier of \(\mu_{P}\), where \(E_{n}\subset E_{n+1}\) and each \(E_{n}\) is a \(w\)-set. For the sake of abbreviation, set \(\nu_{n}=\mu_{P}|_{E_{n}}\) and note that the above argument readily shows that each singular inner function \(S_{\nu_{n}}\) satisfies the permanence property. Combining with the fact that each \(S_{\nu_{n}}\) divides \(S_{\mu_{P}}\), we get the set inclusion
\[\left[S_{\mu_{P}}\right]_{\mathcal{G}_{w}}\cap H^{2}\subseteq\bigcap_{n}\left[S_{\nu_{n}}\right]_{\mathcal{G}_{w}}\cap H^{2}\subseteq\bigcap_{n}S_{\nu_{n}}H^{2}.\]
Hence for any \(f\in\left[S_{\mu_{P}}\right]_{\mathcal{G}_{w}}\cap H^{2}\) and for any \(n\), there exists \(h_{n}\in H^{2}\) such that \(f=S_{\nu_{n}}h_{n}\). But the sequence \(\{h_{n}\}_{n}\) is uniformly bounded in the \(H^{2}\)-norm:
\[\left\|h_{n}\right\|_{H^{2}}^{2}=\int_{\partial\mathbb{D}}\left|h_{n}\right|^{ 2}dm=\int_{\partial\mathbb{D}}\left|f\right|^{2}dm=\left\|f\right\|_{H^{2}}^{2},\]
we may extract a subsequence \(\{h_{n_{k}}\}_{k}\) which converges weakly to some \(h\) in \(H^{2}\). But since the sequence \(\{\nu_{n}\}_{n}\) increases up to \(\mu_{P}\), so does any subsequence and we conclude
that \(f=S_{\mu_{P}}h\). This proves that \(S_{\mu_{P}}\) satisfies the permanence property. Of course, multiplying \(S_{\mu_{P}}\) by any Blaschke product \(B\) leaves the permanence property intact. In fact, this is a simple consequence of the fact that normal families preserve zeros inside \(\mathbb{D}\): if \(BS_{\mu_{P}}Q_{n}\to g\) in \(\mathcal{G}_{w}\) with \(g\in H^{2}\), then the convergence is also locally uniform on \(\mathbb{D}\), so \(g\) vanishes on the zero set of \(B\) (with multiplicities) and is thus divisible by \(B\), while \(S_{\mu_{P}}\) divides \(g\) by the above; hence \(BS_{\mu_{P}}\) divides \(g\). This completes the proof of \((ii)\).
_Step 3: Showing that \([f]_{\mathcal{G}_{w}}=[BS_{\mu_{P}}]_{\mathcal{G}_{w}}\):_
The inclusion \(\subseteq\) is trivial, and for the reverse inclusion we may, on account of \((i)\), pick a sequence of analytic polynomials \(\{Q_{n}\}_{n}\) such that \(FS_{\mu_{C}}Q_{n}\) converges to \(1\) in \(\mathcal{G}_{w}\). Multiplying with \(BS_{\mu_{P}}\in H^{\infty}\), we conclude that \(fQ_{n}\) converges to \(BS_{\mu_{P}}\) in \(\mathcal{G}_{w}\), which completes the proof.
With Theorem 1.2 at hand, we now turn to the proof of Corollary 1.3.
Proof of Corollary 1.3.: _Step 1: Proof of \((i)\):_
Let \(f\in\mathcal{N}\) be an element of \(\mathcal{G}_{w}\) with Nevanlinna factorization \(f=BS_{\mu}F/S_{\nu}H\), where \(B\) denotes its Blaschke factor, \(F,H\in H^{\infty}\) are outer, and \(\mu,\nu\) are mutually singular. Since analytic polynomials are dense in \(\mathcal{G}_{w}\), we can find a sequence of analytic polynomials \(\{Q_{n}\}_{n}\) which converges to \(f\) in \(\mathcal{G}_{w}\). Multiplying by \(S_{\nu}H\in H^{\infty}\) and using Theorem 1.2, we conclude that \(BS_{\mu}F\) belongs to the \(M_{z}\)-invariant subspace \([HS_{\nu}]_{\mathcal{G}_{w}}=[S_{\nu_{P}}]_{\mathcal{G}_{w}}\). But then the permanence property in \((ii)\) of Theorem 1.2 implies that \(S_{\nu_{P}}\) divides \(S_{\mu}\), which by the assumption that \(\mu,\nu\) are mutually singular forces \(\nu_{P}\equiv 0\). This shows that \(\nu=\nu_{C}\), and thus \(S_{\nu}\) is cyclic in \(\mathcal{G}_{w}\) by Theorem 1.1.
_Step 2: Proof of \((ii)\):_
Suppose \(f\in\mathcal{N}\cap\mathcal{G}_{w}\) is cyclic in \(\mathcal{G}_{w}\) and let \(\{Q_{n}\}_{n}\) be a sequence of analytic polynomials such that \(fQ_{n}\) converges to \(1\) in \(\mathcal{G}_{w}\). Multiplying by \(S_{\nu}H\) and using the identity \(fS_{\nu}H=BS_{\mu}F\), we get that \(S_{\nu}H\in[BS_{\mu}F]_{\mathcal{G}_{w}}\). An application of Theorem 1.2 gives
\[S_{\nu}H\in[BS_{\mu}F]_{\mathcal{G}_{w}}\cap H^{\infty}=[BS_{\mu_{P}}]_{ \mathcal{G}_{w}}\cap H^{\infty}\subseteq BS_{\mu_{P}}H^{\infty}.\]
But since \(\nu,\mu\) were assumed to be mutually singular, we conclude that \(BS_{\mu_{P}}\equiv 1\), and thus \(f=S_{\mu_{C}}F/S_{\nu_{C}}H\) is of the required form. Conversely, suppose now that \(f=S_{\mu_{C}}F/S_{\nu}H\) with \(\nu=\nu_{C}\). Since \(S_{\nu}H\in H^{\infty}\), there exists a sequence \(\{q_{k}\}_{k}\) of analytic polynomials which converges to \(S_{\nu}H\) in the weak-star topology of \(H^{\infty}\). We now claim that this implies
\[\lim_{k}\lVert fq_{k}-fS_{\nu}H\rVert_{\mathcal{G}_{w}}=0. \tag{18}\]
Fix an arbitrary \(0<\varepsilon<1\) and note that since \(q_{k}\) converges to \(S_{\nu}H\) uniformly on compacts, we have
\[\sup_{|z|\leq 1-\varepsilon}w(1-|z|)|f(z)q_{k}(z)-f(z)S_{\nu}(z)H(z)|\leq \lVert f\rVert_{\mathcal{G}_{w}}\sup_{|z|\leq 1-\varepsilon}\lvert q_{k}(z)-S_{ \nu}(z)H(z)\rvert\to 0,\]
as \(k\to\infty\). On the other hand, since \(\sup_{k}\lVert q_{k}\rVert_{H^{\infty}}\leq C<\infty\), we also have
\[\sup_{|z|>1-\varepsilon}w(1-|z|)|f(z)q_{k}(z)-f(z)S_{\nu}(z)H(z)|\leq(C+\lVert H \rVert_{H^{\infty}})\sup_{|z|>1-\varepsilon}w(1-|z|)|f(z)|.\]
Letting \(\varepsilon\to 0+\) establishes the desired claim in (18). Using this in conjunction with the identity \(fS_{\nu}H=S_{\mu_{C}}F\), we conclude that
\[\left[S_{\mu_{C}}F\right]_{\mathcal{G}_{w}}\subseteq\left[f\right]_{\mathcal{G}_{w}}.\]
But according to \((i)\) of Theorem 1.2, \(S_{\mu_{C}}F\) is cyclic in \(\mathcal{G}_{w}\), and thus we conclude that \(f\) must also be cyclic in \(\mathcal{G}_{w}\).
_Step 3: Proof of \(\left[f\right]_{\mathcal{G}_{w}}=\left[BS_{\mu_{P}}\right]_{\mathcal{G}_{w}}\):_
Note that the containment \(\left[f\right]_{\mathcal{G}_{w}}\subseteq\left[BS_{\mu_{P}}\right]_{\mathcal{G}_{w}}\) is immediate from the assumption that \(U:=S_{\mu_{C}}F/S_{\nu_{C}}H\) belongs to \(\mathcal{G}_{w}\), together with the identity \(f=BS_{\mu_{P}}U\). On the other hand, since \(U\) is cyclic in \(\mathcal{G}_{w}\) by the previous step of the proof, a straightforward argument involving the same identity settles the reverse containment \(\left[BS_{\mu_{P}}\right]_{\mathcal{G}_{w}}\subseteq\left[f\right]_{\mathcal{G}_{w}}\). This finishes the entire proof.
### A technical improvement
Our purpose in this section is to weaken the assumption \((A_{1})\) on the majorant \(w\) in such a way that Theorem 1.2 remains intact. Let \(\Omega\subset\mathbb{D}\) be a domain which is enclosed by a Jordan curve and contains the origin, and denote by \(\phi:\mathbb{D}\to\Omega\) a conformal map with \(\phi(0)=0\). We now make our first crucial observation. Recall that according to Theorem 3.1, the singular inner factor of \(S_{\mu}\circ\phi\) is concentrated at points \(e^{is}=\phi^{-1}(e^{it})\) with \(e^{it}\in\partial\Omega\cap\partial\mathbb{D}\) for which \(\phi\) has a finite angular derivative, that is, \(\phi^{\prime}\) has a finite non-tangential limit at \(e^{is}\). Equivalently, the singular inner factor of \(S_{\mu}\circ\phi\) is concentrated at the points \(e^{it}\in\partial\Omega\cap\partial\mathbb{D}\) at which the inverse conformal map \(\psi:=\phi^{-1}:\Omega\to\mathbb{D}\) has a non-zero angular derivative. It is well-known that the existence of non-zero angular derivatives is an intrinsic property of the domain \(\Omega\) and the specific point \(e^{it}\in\partial\Omega\cap\partial\mathbb{D}\); in particular, it does not depend on the specific choice of conformal map \(\phi\). Therefore, we declare that \(\Omega\) is "thick" at \(e^{it}\in\partial\Omega\cap\partial\mathbb{D}\) if any (equivalently, every) conformal map of \(\mathbb{D}\) onto \(\Omega\) has a finite angular derivative at the preimage of \(e^{it}\). The terminology "thick" stems from the fact that non-zero angular derivatives can only occur at points \(e^{it}\in\partial\Omega\cap\partial\mathbb{D}\) for which the following holds: for any angle \(0<\vartheta<\pi\), the domain \(\Omega\) encompasses a truncated Stolz angle of opening \(\vartheta\) with vertex at \(e^{it}\). For a more elaborate treatment of this topic, we refer the reader to the recent work in [15] and references therein.
Our principal goal is to modify the special Privalov star-domains \(\Omega_{E}\) of subsection 3.2 in such a way that the Jordan curve enclosing \(\Omega_{E}\) has as little regularity as possible, while the points of \(E\) remain "thick" for \(\Omega_{E}\). To this end, let \(\rho:[0,1]\to[0,1]\) be a majorant satisfying the Dini-condition
\[\int_{0}^{1}\rho(t)\frac{dt}{t}<\infty,\]
and, given a closed set \(E\subset\partial\mathbb{D}\) of Lebesgue measure zero, consider the continuous function \(h:\partial\mathbb{D}\to[0,1/2]\) defined by
\[h(\zeta)=\frac{1}{2}\text{dist}(\zeta,E)\cdot\rho\left(\text{dist}(\zeta,E) \right),\qquad\zeta\in\partial\mathbb{D}.\]
We now form the \(h\)-modified Privalov star-domain associated to \(E\) by
\[\Omega^{h}_{E}:=\{r\zeta\in\mathbb{D}:\zeta\in\partial\mathbb{D},\ 0\leq r<1-h( \zeta)\}\,.\]
The following result goes back to the work of Rodin and Warschawski on the characterization of "thick" points (for instance, see Theorem V.5.7 in [12]), and takes the following accessible form for star-like domains.
**Theorem 3.3** ([15], Theorem 2.5 ).: _Let \(\Omega:=\{r\zeta\in\mathbb{D}:\zeta\in\partial\mathbb{D},\ 0\leq r<1-h(\zeta)\}\) be a domain where \(h:\partial\mathbb{D}\to[0,1/2]\) is continuous and satisfies the doubling condition:_
\[h(\zeta)\geq ch(\eta),\qquad\text{whenever}\ |\zeta-\eta|<ch(\zeta),\]
_for some \(c>0\). Then the domain \(\Omega\) is "thick" at \(\zeta_{0}\in\partial\Omega\cap\partial\mathbb{D}\) if and only if_
\[\int_{\partial\mathbb{D}}\frac{h(\zeta)}{\left|\zeta-\zeta_{0}\right|^{2}}dm (\zeta)<\infty.\]
Now it is straightforward to verify that for majorants \(\rho\), the associated function \(h\) satisfies the doubling condition appearing in Theorem 3.3. Applying the above theorem and using the Dini-condition on \(\rho\), we conclude that \(\Omega^{h}_{E}\) is "thick" at every point in \(\partial\Omega^{h}_{E}\cap\partial\mathbb{D}=E\). With this at hand, let \(G_{E}\) be the outer function defined in (15) and recall that from (16) in subsection 3.3 we deduced that there exists an absolute constant \(c>0\), such that
\[|G_{E}(z)|\leq w\left(\operatorname{dist}(z^{*},E)\right)^{Nc},\qquad z\in \mathbb{D},\]
where \(z^{*}=z/|z|\) denotes the radial projection of \(z\). We now make the following assumption, which shall serve as a substitute for \((A_{1})\): there exist constants \(\beta,C>0\) such that
\((A_{1}^{\prime})\)

\[w(t)\leq Cw(t\rho(t))^{\beta},\qquad 0<t<1.\]
For instance, one can make the specific choice of \(\rho(t)=\log^{-(1+\varepsilon)}(e/t)\) with \(\varepsilon>0\), which evidently weakens the previous assumption \((A_{1})\). Now fix an arbitrary point \(z\in\partial\Omega^{h}_{E}\cap\mathbb{D}\) and note that by construction of the modified Privalov star-domain, we have
\[1-|z|=\operatorname{dist}(z^{*},E)\rho(\operatorname{dist}(z^{*},E)).\]
With this at hand, applying the assumption \((A^{\prime}_{1})\) in conjunction with the estimate of \(G_{E}\) in the previous paragraphs, we obtain
\[|G_{E}(z)|\leq Cw(1-|z|)^{\beta Nc}\leq w(1-|z|),\qquad z\in\partial\Omega^{h} _{E}\cap\mathbb{D} \tag{19}\]
whenever \(N>0\) is sufficiently large. Using (19) we derive that the multiplication operator \(M_{E}(g)(z):=G_{E}(z)g(z)\) maps \(\mathcal{G}_{w}\) continuously into \(H^{\infty}(\Omega^{h}_{E})\) (see the previous Proposition 3.2). We now conclude the section by the following final observations. Since \(\Omega^{h}_{E}\) is "thick" at any point \(\partial\Omega^{h}_{E}\cap\partial\mathbb{D}\), we may apply Theorem 3.1 and run the same argument provided in the proof of Theorem 1.2 verbatim, to conclude that Theorem 1.2 (and consequently Corollary 1.3 as well) holds under the weaker assumption \((A^{\prime}_{1})\). As a summary we obtain the following.
**Corollary 3.4**.: _Theorem 1.2 and Corollary 1.3 remain true for majorants \(w\) satisfying condition \((A_{1}^{\prime})\), determined by any Dini-continuous majorant \(\rho\)._
## 4 Regular functions in model spaces
This section is devoted to establishing the results announced in Theorem 1.4. Throughout this section, we assume that all majorants \(w\) satisfy condition \((A_{2})\), unless explicitly stated otherwise, and we let \(0<\gamma<1\) denote the constant appearing in condition \((A_{2})\).
### Cauchy dual reformulations in terms of model spaces
We shall need to identify the Cauchy dual \((\mathcal{G}_{w})^{\prime}\) of the growth space \(\mathcal{G}_{w}\). Since \(\mathcal{G}_{w}\) is a Banach space, it of course has an abstract Banach space dual, but our purpose is to identify these continuous linear functionals with a certain class of analytic functions on \(\mathbb{D}\), considered in the Cauchy \(H^{2}\)-pairing on \(\partial\mathbb{D}\). In other words, we seek a class of analytic functions \(f\) on \(\mathbb{D}\) for which the limit
\[\ell_{f}(g):=\lim_{r\to 1-}\int_{\partial\mathbb{D}}g(r\zeta)\overline{f(r \zeta)}\,dm(\zeta) \tag{20}\]
exists for any \(g\in\mathcal{G}_{w}\) and defines a bounded linear functional on \(\mathcal{G}_{w}\). To this end, note that if \(f\) is analytic in \(\mathbb{D}\) and \(g\in\mathcal{G}_{w}\), an application of Green's formula gives
\[\int_{\partial\mathbb{D}}g(r\zeta)\overline{f(r\zeta)}\,dm(\zeta)=g(0) \overline{f(0)}+r\int_{\mathbb{D}}g(rz)\overline{f^{\prime}(rz)}dA(z),\qquad 0 <r<1.\]
We claim that any analytic function \(f\) in \(\mathbb{D}\) satisfying
\[\|f\|_{\mathcal{F}_{w}}:=|f(0)|+\int_{\mathbb{D}}|f^{\prime}(z)|\frac{dA(z)}{ w(1-|z|)}<\infty\]
induces a bounded linear functional on \(\mathcal{G}_{w}\) in the Cauchy \(H^{2}\)-pairing given by (20). We shall denote this class of functions by \(\mathcal{F}_{w}\). To see this, fix an \(f\in\mathcal{F}_{w}\) and apply Green's formula to the difference of dilations \(g_{r}-g_{\rho}\) with \(g\in\mathcal{G}_{w}\) to obtain
\[\left|\int_{\partial\mathbb{D}}\left(g(r\zeta)-g(\rho\zeta)\right)\overline{f (r\zeta)}\,dm(\zeta)\right|\leq C\|g_{r}-g_{\rho}\|_{\mathcal{G}_{w}}\|f\|_{ \mathcal{F}_{w}}.\]
Since dilations are continuous in \(\mathcal{G}_{w}\), the Cauchy criterion shows that \(\ell_{f}(g)\) exists for any \(g\in\mathcal{G}_{w}\) whenever \(f\in\mathcal{F}_{w}\). Summarizing, we conclude that
\[\mathcal{F}_{w}\subseteq\left(\mathcal{G}_{w}\right)^{\prime}.\]
This inclusion will be enough for our purposes, but one can in fact show that \(\mathcal{F}_{w}\) is isomorphic to \(\left(\mathcal{G}_{w}\right)^{\prime}\) with equivalent norms. The next result allows us to rephrase the problem of cyclicity for inner functions \(\Theta\) in terms of the existence of certain functions in the associated model spaces \(K_{\Theta}\).
**Lemma 4.1**.: _Let \(X\supset H^{\infty}\) be a Banach space in which the analytic polynomials are dense, and let \(X^{\prime}\) denote its Cauchy dual. Then an inner function \(\Theta\) is cyclic in \(X\) if and only if \(X^{\prime}\cap K_{\Theta}=\{0\}\)._
Proof.: We first note that \(X\supset H^{\infty}\) implies that \(X^{\prime}\subseteq H^{2}\), hence Cauchy dual elements of \(X\) have well-defined boundary values \(m\)-a.e. on \(\partial\mathbb{D}\). Now if \(\Theta\) is cyclic in \(X\), then whenever \(f\in X^{\prime}\) with
\[\int_{\partial\mathbb{D}}f(\zeta)\overline{\Theta(\zeta)\zeta^{n}}dm(\zeta)=0,\qquad\forall n\geq 0, \tag{21}\]
we must have \(f\equiv 0\), which is equivalent to \(X^{\prime}\cap K_{\Theta}=\{0\}\). On the other hand, if \(\Theta\) is not cyclic in \(X\), then by the Hahn-Banach separation theorem, there exists a non-trivial function \(f\in X^{\prime}\) satisfying (21). But the containment \(X^{\prime}\subseteq H^{2}\) in conjunction with the F. and M. Riesz Theorem readily implies that \(f\in K_{\Theta}\), and consequently the intersection \(X^{\prime}\cap K_{\Theta}\) contains \(f\) and is thus non-trivial.
With similar techniques, one can also establish the following density version of Lemma 4.1: An inner function \(\Theta\) satisfies the permanence property in \(X\), that is
\[\left[\Theta\right]_{X}\cap H^{\infty}\subseteq\Theta H^{\infty}\]
if and only if \(X^{\prime}\cap K_{\Theta}\) is dense in \(K_{\Theta}\). See Theorem 1.3 in [19] for elaborate details on this matter.
The next lemma in line is a special case of a general result in the theory of reproducing kernel Hilbert spaces, which will be useful for our purposes. For instance, see Lemma 7.2 in [20] for a detailed proof.
**Lemma 4.2**.: _Let \(\{\Theta_{n}\}_{n\geq 0}\) be a sequence of inner functions with the properties that each \(\Theta_{n}\) is a divisor of \(\Theta_{0}\) and \(\lim_{n\to\infty}\Theta_{n}(z)=\Theta_{0}(z)\) for all \(z\in\mathbb{D}\). Then \(\cup_{n\geq 1}K_{\Theta_{n}}\) is a dense subset of \(K_{\Theta_{0}}\). Moreover, if \(S\subseteq H^{2}\) is a set with the property that \(S\cap K_{\Theta_{n}}\) is dense in \(K_{\Theta_{n}}\) for each \(n\), then \(S\cap K_{\Theta_{0}}\) is dense in \(K_{\Theta_{0}}\)._
### Regularity properties of functions in \(\mathcal{A}_{w}\)
The following lemma shows that if \(w\) is a majorant for which some power \(w^{A}\) of \(w\) with \(A>0\) is Dini-continuous, then the Cauchy projection does not distort the modulus of continuity too much.
**Lemma 4.3**.: _Let \(w\) be a majorant satisfying_
\[\int_{0}^{1}w^{A}(t)\frac{dt}{t}<\infty,\]
_for some \(A>0\). Then the Cauchy projection \(P_{+}\) maps \(C_{w^{A+1}}(\partial\mathbb{D})\) continuously into \(\mathcal{A}_{w}\)._
Proof.: Recall that the Cauchy projection \(P_{+}\) maps \(C_{w^{A+1}}(\partial\mathbb{D})\) into \(\mathcal{A}_{\widetilde{w}}\), where \(\widetilde{w}\) is a majorant satisfying
\[\widetilde{w}(\delta)\leq C\int_{0}^{\delta}\frac{w^{A+1}(t)}{t}dt+\delta\int_{ \delta}^{1}\frac{w^{A+1}(t)}{t^{2}}dt,\qquad 0\leq\delta\leq 1\]
for some constant \(C>0\) independent of \(\delta\) (for instance, see Theorem 1.3, Chap. III of [11]). It immediately follows from the Dini-condition of \(w^{A}\) and the assumption that \(w(t)/t\) is non-increasing that
\[\int_{0}^{\delta}\frac{w^{A+1}(t)}{t}dt+\delta\int_{\delta}^{1}\frac{w^{A+1}(t )}{t^{2}}dt\leq w(\delta)\int_{0}^{1}\frac{w^{A}(t)}{t}dt=:C^{\prime}w(\delta).\]
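For the reader's convenience, the displayed estimate can be checked directly, under the standard majorant assumptions that \(w\) is non-decreasing and \(w(t)/t\) is non-increasing. For \(0<t\leq\delta\),

\[\frac{w^{A+1}(t)}{t}=w(t)\cdot\frac{w^{A}(t)}{t}\leq w(\delta)\cdot\frac{w^{A}(t)}{t},\]

while for \(\delta\leq t\leq 1\),

\[\delta\cdot\frac{w^{A+1}(t)}{t^{2}}=\delta\cdot\frac{w(t)}{t}\cdot\frac{w^{A}(t)}{t}\leq\delta\cdot\frac{w(\delta)}{\delta}\cdot\frac{w^{A}(t)}{t}=w(\delta)\cdot\frac{w^{A}(t)}{t};\]

integrating the first estimate over \((0,\delta)\) and the second over \((\delta,1)\) yields the stated bound.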
This shows the inclusion \(\mathcal{A}_{\widetilde{w}}\subseteq\mathcal{A}_{w}\) and thus \(P_{+}:C_{w^{A+1}}(\partial\mathbb{D})\to\mathcal{A}_{w}\) continuously.
Next we shall need a lemma which asserts that the derivative of a function in \(\mathcal{A}_{w}\) satisfies a certain radial growth restriction.
**Lemma 4.4**.: _Let \(w\) be a majorant (not necessarily satisfying \((A_{1})\) and \((A_{2})\)). Then there exists a universal constant \(C>0\), such that any function \(f\in\mathcal{A}_{w}\) enjoys the growth restriction_
\[|f^{\prime}(z)|\leq C\|f\|_{\mathcal{A}_{w}}\frac{w(1-|z|)}{1-|z|},\qquad z\in \mathbb{D}. \tag{22}\]
A neat and short proof of this fact is provided by Lemma 2 in [23]. With this lemma at hand, we now verify that functions in \(\mathcal{A}_{w}\) belong to the Cauchy duals of the growth spaces \(\mathcal{G}_{w^{p}}\) for a certain range of \(p>0\).
**Lemma 4.5**.: _Let \(w\) be a majorant satisfying condition \((A_{2})\) and \(0<p<1-\gamma\). Then we have the containment \(\mathcal{A}_{w}\subseteq\mathcal{F}_{w^{p}}\)._
Proof.: Pick an arbitrary function \(f\in\mathcal{A}_{w}\). According to Lemma 4.4, we have
\[|f^{\prime}(z)|\leq C\|f\|_{\mathcal{A}_{w}}\frac{w(1-|z|)}{1-|z|},\qquad z\in \mathbb{D}.\]
Using this we get
\[\int_{\mathbb{D}}\lvert f^{\prime}(z)\rvert\frac{dA(z)}{w^{p}(1-|z|)}\leq C\|f\|_{\mathcal{A}_{w}}\int_{\mathbb{D}}\frac{w^{1-p}(1-|z|)}{1-|z|}dA(z)\leq C^{\prime}\|f\|_{\mathcal{A}_{w}}\int_{0}^{1}\frac{w^{1-p}(t)}{t}dt.\]
The assumption \(p<1-\gamma\) and condition \((A_{2})\) ensure that the last integral is finite; hence \(\|f\|_{\mathcal{F}_{w^{p}}}<\infty\) and we conclude that \(\mathcal{A}_{w}\subseteq\mathcal{F}_{w^{p}}\).
### A fine construction by Shirokov
Here we shall need a modified version of a construction due to N.A. Shirokov in [26], which will allow us to construct \(\mathcal{A}_{w}\)-functions in model spaces \(K_{S_{\mu}}\) for singular measures \(\mu\) supported on \(w\)-sets. To this end, let \(E\subset\partial\mathbb{D}\) be a \(w\)-set and suppose \(\mu\) is a positive finite singular Borel measure supported on \(E\). Denote by \(\{J_{n}\}_{n}\) the connected components of \(\partial\mathbb{D}\setminus E\) and let \(\xi_{n}\) be the center of the arc \(J_{n}\). Shirokov's principal idea is to construct a sequence of positive numbers \(\{\lambda_{n}\}_{n}\) with the following properties:
1. \(0<\lambda_{n}\leq\min\left\{w(m(J_{n})),w(1/\big{|}S^{\prime}_{\mu}(\xi_{n}) \big{|})\right\}\) for each \(n\),
2. \[\sum_{n}m(J_{n})\log\frac{1}{\lambda_{n}}<\infty,\]
3. There exists a constant \(c>0\), independent of \(n\), such that \[c\cdot m(J_{n})^{2}\leq\lambda_{n}\leq\log\frac{1}{v_{n}},\] where \[v_{n}=\int_{\partial\mathbb{D}\setminus J_{n}}\frac{v(\zeta)}{\left|\zeta-\xi_ {n}\right|^{2}}dm(\zeta)\] and the function \(v\) is defined by the formula \(v(\zeta)=\log(1/\lambda_{k})+\log\frac{m(J_{k})^{2}}{\big{|}(\zeta-e^{ia_{k}}) (e^{ib_{k}}-\zeta)\big{|}}\) for \(\zeta\in J_{k}:=\{e^{it}:a_{k}<t<b_{k}\}\).
Using this, Shirokov then goes ahead and defines the outer function \(F_{E}\), whose modulus on \(\partial\mathbb{D}\) equals
\[\left|F_{E}(\zeta)\right|=\lambda_{k}\frac{\left|(\zeta-e^{ia_{k}})(e^{ib_{k} }-\zeta)\right|^{2}}{m(J_{k})^{4}},\qquad\zeta\in J_{k},\]
for each \(k\), and then verifies that \(F_{E}\in\mathcal{A}_{w}\) with \(F_{E}=0\) precisely on \(E\). In fact, Shirokov proves that \(f:=F_{E}S_{\mu}\) belongs to \(\mathcal{A}_{w}\) by showing that \(f\) belongs to \(C_{w}(\partial\mathbb{D})\) and applying Tamrazov's Theorem [27]. Moreover, following the argument provided in [26] almost verbatim, one can actually establish that \(F_{E}\overline{S_{\mu}}\) also belongs to \(C_{w}(\partial\mathbb{D})\), and thus so does \(\overline{F_{E}}S_{\mu}\) by means of taking complex conjugates. For future reference, we now summarize the conclusion of this discussion.
**Theorem 4.6** (N. A. Shirokov, [26]).: _Let \(w\) be a majorant and \(E\subset\partial\mathbb{D}\) be a \(w\)-set. Then for any positive finite Borel singular measure \(\nu\) supported on \(E\), there exists an outer function \(F_{E}\in\mathcal{A}_{w}\) which vanishes precisely on \(E\), such that \(\overline{F_{E}}S_{\nu}\) belongs to \(C_{w}(\partial\mathbb{D})\)._
### Connection to model spaces
_Proof of Theorem 1.4._ The proof is divided into several steps.
_Step 1: Existence of \(\mathcal{A}_{w}\) functions:_
We first prove that if \(\mu\) does not charge any \(w\)-set, then \(\mathcal{A}_{w}\cap K_{S_{\mu}}=\{0\}\). Recall that a set is a \(w\)-set if and only if it is a \(w^{p}\)-set for any \(p>0\); hence, by Theorem 1.1, \(S_{\mu}\) is cyclic in \(\mathcal{G}_{w^{p}}\) for any \(p>0\). Now by Lemma 4.1 this is equivalent to
\[(\mathcal{G}_{w^{p}})^{\prime}\cap K_{S_{\mu}}=\{0\},\qquad p>0.\]
Choosing \(0<p<1-\gamma\), it now follows from the containment \(\mathcal{A}_{w}\subseteq\mathcal{F}_{w^{p}}\) in Lemma 4.5 that \(\mathcal{A}_{w}\cap K_{S_{\mu}}=\{0\}\) as well.
Conversely, suppose that \(\mu\) charges a \(w\)-set \(E\subset\partial\mathbb{D}\) and set \(\nu:=\mu|_{E}\). We first show that \(\mathcal{A}_{w}\cap K_{S_{\nu}}\) contains a non-trivial function; once we do that, part \((i)\) of Theorem 1.4 is established. Again, observe that since \(E\) is a \(w\)-set, it is also a \(w^{p}\)-set for any \(p>0\). Fix \(p>\gamma+1\) and recall that by Theorem 4.6 there exists an outer function \(f:\mathbb{D}\to\mathbb{D}\) in \(\mathcal{A}_{w^{p}}\) such that \(\overline{f}S_{\nu}\in C_{w^{p}}(\partial\mathbb{D})\). In particular, if \(k_{\lambda}(\zeta):=1/(1-\overline{\zeta}\lambda)\) denotes the Cauchy kernel at \(\lambda\in\mathbb{D}\), then each function \(g_{\lambda}:=\overline{f}S_{\nu}k_{\lambda}\) belongs to \(C_{w^{p}}(\partial\mathbb{D})\). According to Lemma 4.3, it follows that \(P_{+}(g_{\lambda})\in\mathcal{A}_{w^{p-\gamma}}\subset\mathcal{A}_{w}\) for each \(\lambda\in\mathbb{D}\). Now since Toeplitz operators with co-analytic symbol \(\overline{f}\) preserve smooth functions on \(\partial\mathbb{D}\), the function \(T_{\overline{f}}(k_{\lambda})\) is also smooth on \(\partial\mathbb{D}\), hence in particular belongs to \(\mathcal{A}_{w}\). Now if \(\kappa_{S_{\nu}}(z,\lambda)=\frac{1-\overline{S_{\nu}(\lambda)}S_{\nu}(z)}{1 -\overline{\lambda}z}\) with \(z,\lambda\in\mathbb{D}\) denotes the reproducing kernel of the model space \(K_{S_{\nu}}\), we conclude that for any \(\lambda\in\mathbb{D}\),
\[T_{\overline{f}}(\kappa_{S_{\nu}}(\cdot,\lambda))(z)=T_{\overline{f}}(k_{ \lambda})(z)-\overline{S_{\nu}(\lambda)}P_{+}(g_{\lambda})(z)\]
is a function belonging to \(\mathcal{A}_{w}\cap K_{S_{\nu}}\), and hence to \(\mathcal{A}_{w}\cap K_{S_{\mu}}\), since \(S_{\nu}\) divides \(S_{\mu}\) and thus \(K_{S_{\nu}}\subseteq K_{S_{\mu}}\). This establishes that \(\mu\) does not charge any \(w\)-set if and only if \(\mathcal{A}_{w}\cap K_{S_{\mu}}=\{0\}\).
_Step 2: Density of \(\mathcal{A}_{w}\) functions:_
We now turn to the proof of the statement about density. To this end, we first establish the claim that the linear span of \(\{T_{\overline{f}}(\kappa_{S_{\nu}}(\cdot,\lambda))\}_{\lambda\in\mathbb{D}}\) defined in the previous paragraphs is in fact a dense subset of \(K_{S_{\nu}}\). To see this, let \(g\in K_{S_{\nu}}\) be an annihilator of this set, that is
\[\int_{\partial\mathbb{D}}f(\zeta)g(\zeta)\overline{\kappa_{S_{\nu}}(\zeta, \lambda)}dm(\zeta)=0,\qquad\forall\lambda\in\mathbb{D}.\]
But this implies that \(fg\in(K_{S_{\nu}})^{\perp}=S_{\nu}H^{2}\), and since \(f\) is outer, \(g\in S_{\nu}H^{2}\); hence \(g\in K_{S_{\nu}}\cap S_{\nu}H^{2}=\{0\}\). This proves that \(\mathcal{A}_{w}\cap K_{S_{\nu}}\) is dense in \(K_{S_{\nu}}\). In fact, since any finite Blaschke product \(B_{0}\) extends analytically across \(\partial\mathbb{D}\), the same argument as before actually shows that \(\mathcal{A}_{w}\cap K_{B_{0}S_{\nu}}\) is dense in \(K_{B_{0}S_{\nu}}\) for any finite Blaschke product \(B_{0}\) and singular measure \(\nu\) supported on a \(w\)-set.
_Step 3: \(\mathcal{A}_{w}\ \cap K_{BS_{\mu_{P}}}\) dense in \(K_{BS_{\mu_{P}}}\):_
To pass to the general claim, fix an arbitrary Blaschke product \(B\) and let \(\{B_{N}\}_{N}\) denote a sequence of finite truncations of \(B\) which converge to \(B\) uniformly on compact subsets of \(\mathbb{D}\). Recall that by definition of \(\mu_{P}\), there exists an increasing sequence of \(w\)-sets \(\{E_{N}\}_{N}\) (a finite union of \(w\)-sets is again a \(w\)-set), such that \(\nu_{N}:=\mu|_{E_{N}}\) increases up to \(\mu_{P}\) in total variation norm. We now form the sequence of inner functions \(\{\Theta_{N}\}_{N}\) defined by \(\Theta_{N}=B_{N}S_{\nu_{N}}\), and easily verify that \(\Theta_{N}\) converges to \(BS_{\mu_{P}}\) pointwise on \(\mathbb{D}\) and that each \(\Theta_{N}\) is by construction a divisor of \(BS_{\mu_{P}}\). By the development in Step 2, \(\mathcal{A}_{w}\cap K_{\Theta_{N}}\) is dense in \(K_{\Theta_{N}}\) for each \(N\), hence invoking Lemma 4.2 we conclude that \(\mathcal{A}_{w}\cap K_{BS_{\mu_{P}}}\) is dense in \(K_{BS_{\mu_{P}}}\), which establishes one direction of the statement about density.
_Step 4: Showing that the closure of \(\mathcal{A}_{w}\cap K_{\Theta}\) equals \(K_{BS_{\mu_{P}}}\):_
In order to show the converse of the previously established claim, it remains to deduce the statement phrased in the heading of this step. For the sake of abbreviation, set \(\Theta_{P}:=BS_{\mu_{P}}\) and \(\Theta_{C}=S_{\mu_{C}}\) and observe the orthogonal decomposition
\[K_{\Theta}=K_{\Theta_{P}}\oplus\Theta_{P}K_{\Theta_{C}}.\]
Now pick an arbitrary element \(g\in H^{\infty}\cap K_{\Theta_{C}}\) (for instance, a reproducing kernel in \(K_{\Theta_{C}}\)) and note that by \((i)\) of Theorem 1.2, we can find a sequence of analytic polynomials \(\{Q_{n}\}_{n}\) such that \(Q_{n}\Theta_{C}\) converges to \(g\) in \(\mathcal{G}_{w^{p}}\) for any \(p>0\). Multiplying by \(\Theta_{P}\), we conclude that \(\Theta Q_{n}\) converges to \(\Theta_{P}g\) in \(\mathcal{G}_{w^{p}}\). Now using Lemma 4.5, we get that \(\mathcal{A}_{w}\subset(\mathcal{G}_{w^{p}})^{\prime}\) for \(0<p<1-\gamma\), and hence for any \(f\in\mathcal{A}_{w}\cap K_{\Theta}\) we have
\[\int_{\partial\mathbb{D}}\Theta_{P}(\zeta)g(\zeta)\overline{f(\zeta)}dm(\zeta )=\lim_{n}\int_{\partial\mathbb{D}}\Theta(\zeta)Q_{n}(\zeta)\overline{f(\zeta )}dm(\zeta)=0.\]
This argument shows that for any \(g\in H^{\infty}\cap K_{\Theta_{C}}\) the product \(\Theta_{P}g\) is contained in \(\left(\mathcal{A}_{w}\cap K_{\Theta}\right)^{\perp}\). Taking closures and using the fact that reproducing kernels are dense, we obtain the containment \((K_{\Theta_{P}})^{\perp}=\Theta_{P}K_{\Theta_{C}}\subseteq\left(\mathcal{A}_{ w}\cap K_{\Theta}\right)^{\perp}\), which implies that the closure of \(\mathcal{A}_{w}\cap K_{\Theta}\) in \(K_{\Theta}\) is contained in \(K_{\Theta_{P}}\). Using this in conjunction with the previously established claim that \(\mathcal{A}_{w}\cap K_{\Theta_{P}}\subseteq\mathcal{A}_{w}\cap K_{\Theta}\) is dense in \(K_{\Theta_{P}}\), we conclude that the closure of \(\mathcal{A}_{w}\cap K_{\Theta}\) equals \(K_{\Theta_{P}}\). This completes the proof of the theorem.
We remark that in order to show that \(\mathcal{A}_{w}\cap K_{\Theta_{P}}\) is dense in \(K_{\Theta_{P}}\), it is sufficient to require that the majorant \(w\) satisfies
\[\int_{0}^{1}w^{A}(t)\frac{dt}{t}<\infty\]
for some \(A>0\). Under this weaker assumption and using the embedding provided by Lemma 4.5, one can also establish that \(\mathcal{A}_{w^{A^{\prime}}}\cap K_{S_{\mu_{C}}}=\{0\}\) for any \(A^{\prime}>A\).
## Acknowledgements
This work has benefited profoundly from the encouragement and close conversations with Artur Nicolau, and from the author's insightful discussions and previous collaborative work with Bartosz Malman. It is therefore a great pleasure to thank them both for their support.
2301.03414 | Multimodal Transportation Pricing Alliance Design: Large-Scale
Optimization for Rapid Gains | Transit agencies have the opportunity to outsource certain services to
established Mobility-on-Demand (MOD) providers. Such alliances can improve
service quality, coverage, and ridership; reduce public sector costs and
vehicular emissions; and integrate the passenger experience. To amplify the
effectiveness of such alliances, we develop a fare-setting model that jointly
optimizes fares and discounts across a multimodal network. We capture
commuters' travel decisions with a discrete choice model, resulting in a
large-scale, mixed-integer, non-convex optimization problem. To solve this
challenging problem, we develop a two-stage decomposition with the pricing
decisions in the first stage and a mixed-integer linear optimization of fare
discounts and passengers' travel decisions in the second stage. To solve the
decomposition, we develop a new solution approach combining tailored coordinate
descent, parsimonious second-stage evaluations, and interpolations using
special ordered sets. This approach, enhanced by acceleration techniques based
on slanted traversal, randomization and warm-start, significantly outperforms
algorithmic benchmarks. Different alliance priorities result in qualitatively
different fare designs: flat fares decrease the total vehicle-miles traveled,
while geographically-informed discounts improve passenger happiness. The model
responds appropriately to equity-oriented and passenger-centric priorities,
improving system utilization and lowering prices for low-income and
long-distance commuters. Our profit allocation mechanism improves outcomes for
both types of operators, thus incentivizing profit-oriented MOD operators to
adopt transit priorities. | Kayla Cummings, Vikrant Vaze, Özlem Ergun, Cynthia Barnhart | 2023-01-09T15:09:19Z | http://arxiv.org/abs/2301.03414v2 | Multimodal Transportation Alliance Design with Endogenous Demand: Large-Scale Optimization for Rapid Gains
###### Abstract
Transit agencies have the opportunity to outsource certain services to well-established platform-based Mobility on Demand (MOD) providers. Such alliances can improve service quality, coverage, and ridership; reduce public sector costs and vehicular emissions; and integrate the passenger experience. To amplify the effectiveness of such alliances, we develop a fare-setting model that jointly optimizes discounted fares across a multimodal network. We capture commuters' travel choices with a discrete choice model, resulting in a large-scale, mixed-integer, non-convex optimization problem. To solve this challenging problem, we develop a two-stage decomposition with the pricing decisions in the first stage and a mixed-integer linear optimization problem optimizing fare discounts and the induced passenger behaviors in the second stage. To solve the decomposition, we develop a new solution approach combining tailored coordinate descent, parsimonious second-stage evaluations, and interpolations using special ordered sets. This approach, enhanced by acceleration techniques based on slanted traversal, randomization and warm-start, significantly improves system-wide practical outcomes over algorithmic benchmarks. Different alliance priorities result in qualitatively different fare designs: flat fares decrease the total vehicle-miles traveled, while geographically-informed discounts improve passenger happiness. The model responds appropriately to equity-oriented and passenger-centric priorities, improving system utilization and lowering prices for low-income residents and long-distance commuters. Finally, our revenue allocation mechanism improves outcomes for both operators, thus incentivizing profit-oriented MOD operators to adopt transit priorities.
Public transit, transportation pricing, alliance design, mixed-integer non-convex optimization
## 1 Introduction
Cities face critical challenges in the quest to improve urban mobility. Prior to the pandemic, congestion was steadily rising, translating to $160 billion annual costs to U.S. cities and record-breaking contributions to greenhouse gas emissions (Schrank et al., 2015). Recent declines in transit ridership demonstrate the inability of transit's static infrastructure to accommodate rapidly evolving commuting patterns (The Economist, 2018). Private ride-sharing apps from Transportation Network Companies (TNCs) like Uber and Lyft have
challenged this fixed-infrastructure status quo. TNCs transported 2.6 billion passengers in 2017, more than doubling the ride-sharing market since 2012 (Schaller 2018). The majority of urban TNC patrons admit that they would have otherwise walked, biked, taken public transit, or not made the trip, coinciding with tens of millions in annual transit revenue losses, worsening congestion, higher emissions, lower navigability of cities, and reduced accessibility to affordable public options (Gehrke and Reardon 2018, Schaller 2018).
Mobility-on-demand (MOD) services have the potential to service _transit deserts_--low-density areas disconnected from public transit. However, cost presents a key barrier: while all public transit modes operate at a loss, MOD services administered by transit agencies incur the highest average per-trip costs ($23.10 vs. the next-highest $11.19 for commuter rail) (Kane, Tomer, and Puentes 2016). High labor needs, outdated technology, and coordination difficulties lead to inefficient, expensive operations. Notably, the average TNC trip costs $13, a full $10 less than the average agency-sponsored MOD trip. Outsourcing all 223 million on-demand transit trips to TNCs could hypothetically save billions of dollars for US transit agencies (Kane, Tomer, and Puentes 2016). Thus, pricing alliances between TNCs and transit agencies have the potential to improve service quality and coverage, while reducing costs and decreasing citywide vehicle-miles travelled (VMT).
### Pricing Alliances in the Real World
TNCs have the infrastructure to provide more cost-effective MOD services supplementing fixed-route transit. Microtransit platforms like BRIDJ and Via--differentiated from TNCs due to fleets comprising mini-vans or shuttles as opposed to sedans--have oriented their business model toward complementing transit (Via Transp. 2023, BRIDJ 2023). The Federal Transit Administration's MOD Sandbox program has provided millions in funding to transit agencies in US cities such as Dallas, San Francisco, and Los Angeles to develop on-demand pilots that fill service gaps in their service regions (Federal Transp. Administration 2016). Rather than designing a complementary MOD system from scratch and incurring high fixed costs, transit agencies could outsource MOD services to TNC platforms that are well-established, highly connected, and widely trusted; or to microtransit platforms that more closely align with transit agency goals.
This section formalizes such alliances within a rigorous conceptual framework. We define a _pricing alliance_ as a cooperative pricing scheme between a transit agency and an MOD operator with independently operated infrastructures serving overlapping or adjacent regions. A pricing alliance seeks to improve each operator's own prioritized metrics whilst also improving system-wide benefits through integration. Indeed, real-world pricing alliance pilots have shown great promise in improving regional mobility, providing alternatives with higher service levels, lower fares and increased ridership (The Boston Globe 2022, Mag 2021).
Pricing alliances are characterized by the intended service populations and the relationship of the MOD operator's system to the fixed-route transit network. Service population may be a targeted demographic, e.g. persons with limited mobility, low-income people, or senior citizens; the service population might also constitute residents of a particular geographic area, e.g. residents of a transit desert or people traveling
within a given radius of a transit hub. The MOD infrastructure may complement, substitute, or extend fixed-route options. Once participating operators establish the nature of the pricing alliance, the alliance can select a joint pricing scheme. Carefully designed fares influence passenger behavior, incentivizing choices that benefit the entire system. Table 1 surveys these traits of recent pricing alliances:
* _MOD operator_: This can be a TNC like Uber or Lyft, or a microtransit platform like Via.
* _Service population_: Recent alliances have served people with limited mobility, seniors, essential workers, or simply everyone.
* _Fare structure_: For passengers traveling within the system, some alliances charge a flat fare and/or a variable fare based on distance travelled. Selectively applied, interpretable discount structures for jointly offered routes can encourage multimodal travel and engineer outcomes desired by the operators.
* _Route structure_: Alliances often require specific trip geography: point-to-point (PTP) (trips must occur within a given geographic region), zone-based (partitions a larger region into small zones and requires intra-zonal trips), and hub-based (at least one trip endpoint must be anchored at specified locations). Figure 1 illustrates zone-based and hub-based route structures.
* _Integration into public transit network:_ The MOD portion of the network might integrate into the transit network in several ways: complementary (provides another mode option to improve service quality), substitutive (replaces existing fixed-route transit), first-/last-mile (FLM) (connects travelers to the fixed-route network), and extension (serves transit desert regions).
### Literature Review
This work sits at the intersection of literature on FLM system design and operations management, integrated multimodal transport system design, and horizontal cooperation among competing transportation operators.
Figure 1: Route structures of recent pricing alliances. In Plano (Dallas Area Rapid Transit 2020), passengers spend up to $3 to travel anywhere within a color-blocked region. In the _NewMo_ Pilot (City of Newton, MA 2021), passengers could travel anywhere within town limits for $2, as long as either trip endpoint was one of seven hubs. _NewMo_ now allows passengers to travel anywhere within Newton.
**FLM system design and operations**: Research on demand responsive connector (DRC) systems develops analytical models to evaluate service quality and determine first-mile system parameters. In particular, such work specifies optimal zone size and headways, identifies transition points between regions best serviced by fixed-route vs. flexible services, and establishes best practices for inter-zone transfer coordination (Chandra and Quadrifoglio, 2013; Kim et al., 2019; Kim and Schonfeld, 2014; Lee and Savelsbergh, 2017; Li and Quadrifoglio, 2010; Lu et al., 2017; Lu et al., 2014). The tactical question of how to operate a first-mile system is also well-studied. The Dial-A-Ride Problem (DARP) encompasses the vehicle routing problem faced by transit agencies, given a set of trip requests and a vehicle fleet (Ho et al., 2018; Molenbruch et al., 2017). The Integrated DARP (IDARP) designs vehicle routes and schedules to meet trip requests, allowing transfers with fixed-route timetabled service (Posada et al., 2017). Closely related to IDARP is the problem of matching individual carpoolers and integrating their trips with transit timetables (Stiglic et al., 2018). Finally, many studies design strategies for routing and scheduling (Wang, 2019), pricing (Chen and Wang, 2018), and trip request acceptance (Agussurja et al., 2019) for FLM transportation systems.
**Multimodal network optimization with endogenous demand**: Our work is related to literature on optimal design and operation of transportation systems that acknowledges and leverages endogenous demand. Past research has modeled decision-making travelers with preferences. One-to-one and many-to-one assignment problems among travelers and suppliers have been addressed with preference-based stable matchings to prevent participants from leaving ride-sharing systems (Wang et al., 2018) or transit systems (Rasulkhani and Chow, 2019). Passenger decisions are also often captured by discrete choice models. Bertsimas et al. (2020) jointly determine frequencies and prices for multimodal transit to minimize wait times, subject to passenger mode and route choices. Cadarso et al. (2017) optimize airline scheduling, fleet assignment, and fares while capturing the effects of competing high-speed rail service, taking passengers'
\begin{table}
\begin{tabular}{l l l l l l l l}
**Program** & **City** & **Transit agency** & **MOD Op.** & **Service population** & **Fares** & **Routes** & **Integration** \\ \hline _GoLink_ & Dallas, TX & DART & Via & Everyone & Mode & Zone & Complementary, FLM \\ _RTC On-demand Pilot_ & Las Vegas, NV & RTC & Lyft & Paratransit & Distance & PTP & Substitutive \\ _Via To Transit_ & Seattle, WA & King County Metro & Via & Everyone & Flat rate & Hub & Complementary, FLM \\ _The RIDE Flex_ & Boston, MA & MBTA & Uber, Lyft & Paratransit & Flat rate & PTP & Substitutive \\ _NewMo_ Pilot & Newton, MA & City of Newton\({}^{*}\) & Via & Everyone, seniors\({}^{**}\) & Flat rate & Hub & Extension, FLM \\ No program name & Jersey City, NJ & NJ TRANSIT & Via & Everyone & Distance & Zone & Complementary, extension \\ No program name & St. Louis, MO & St. Louis Metro & Lyft & Everyone & Distance & Hub & Complementary, FLM \\ _MARKConnect_ & Atlanta, GA & MARTA & Uber, Lyft & Everyone (closures)\({}^{***}\) & Distance & PTP & Extension \\ _IndyGo + Uber_ & Indianapolis, IN & IndyGo & Uber & Essential workers\({}^{***}\) & Flat rate & PTP & Substitutive \\ \multicolumn{6}{l}{} \\ \end{tabular}
\end{table}
Table 1: Survey of recent pricing alliances. PTP: point-to-point. Flat fare: same price for every passenger. Mode: fares vary by travel mode. Distance: fares increase with distance traveled. (Dallas Area Rapid Transit 2020, Regional Transp. Commission of Southern Nevada 2021, City of Seattle, WA 2021, MBTA 2021, City of Newton, MA 2021, City of Jersey City, NJ 2021, St. Louis Metro 2021, MARTA 2021, Indianapolis Public Transp. Corporation 2021) \({}^{*}\) Not operated by a transit agency. \({}^{**}\) Specially marketed for senior citizens. \({}^{***}\) Available to everyone, but only in case of transit closures. \({}^{***}\) Transported essential workers at the beginning of the COVID-19 pandemic.
mode choices into consideration. Wei, Vaze, and Jacquillat (2020) develop fixed-route transit timetables to maximize welfare, subject to competition with ride-sourcing companies, and congestion effects from passengers' mode switching. Wang, Jacquillat, and Vaze (2022) optimize a network of vertiports for supporting urban aerial mobility, with passenger mode choices described by two alternative models, including a multinomial logit model. Banerjee et al. (2021) tackle a welfare-maximizing system design and pricing problem for centrally coordinated multimodal transport networks with price-dependent demand, and formulate it using mixed-integer convex optimization. In contrast, we tackle a multi-objective pricing alliance design problem with a practically suitable pricing scheme that enables transparent price communication to passengers, but also prevents its convexification and, in turn, heightens the computational challenge.
**Horizontal Cooperation**: Finally, we review cooperation models among competing operators. Literature on horizontal cooperation in logistics and airline scheduling is particularly mature (Cruijssen, Dullaert, and Fleuren 2007, Guajardo and Ronnqvist 2016, Wright, Groenevelt, and Shumsky 2010, Hu, Caldentey, and Vulcano 2013). Chun, Kleywegt, and Shapiro (2017) design a liner shipping alliance with endogenous linear demand for a homogeneous product; shipping companies first trade physical capacity on respective networks, and then compete to sell substitutable products in an overlapping market. Our work also involves joint products over a shared network subject to endogenous demand, but the allied operators offer those products together rather than exchanging capacity to compete. Algaba et al. (2019) formulate an urban transportation network flow game, using exogenous passenger and cost information to coordinate a single-fare payment among competing operators. Bian and Liu (2019, 2019) design mechanisms for the first-mile problem incorporating personalized passenger requirements. Siddiq, Tang, and Zhang (2021) investigate incentive mechanisms to inspire commuters to use public transportation, modeling commuters, transit agency, ride-sharing platform, municipal government, and local private enterprises as stakeholders.
Liu and Chow (2022) investigate whether competing transit agencies can share data to improve selfish outcomes when setting frequencies, subject to user equilibrium passenger flows. Policymakers can leverage results of their Bayesian game and coalition formation model to inform decisions about establishing mandatory data-sharing amongst transit operators, but the model is not amenable to large-scale operations management. The most similar study to ours in this branch of literature is by Schlicher and Lurkin (2022), who formulate a transport choice game in which operators cooperatively price their pooled resources, subject to passengers making travel choices according to a multinomial logit model. They design a market share exchange allocation rule that ensures a stable grand coalition. Their study differs from ours in that each operator offers homogeneous products with a single price to travelers with unspecified origins and destinations, thus entirely ignoring network effects. In summary, most existing studies individually model either operator or passenger incentives when designing integrated, multimodal urban transportation systems; to our knowledge, studies incorporating both strategic operators and passengers provide only general high-level intervention recommendations and rules of thumb. Our work differs in that we provide a prescriptive and strategic design framework to build pricing alliances at scale and in full operational detail.
### Contributions
We propose a prescriptive pricing alliance to enable incentive-aligned collaboration between transit agencies and established ride-sharing operators. A fare-setting model is formulated to maximize total system-wide benefits across the integrated network. Our framework helps operators navigate competing alliance objectives: (1) enhancing access to high-quality public transportation options for underserved populations, (2) lowering vehicle emissions and congestion from single-occupancy vehicle trips, and (3) maintaining the financial well-being of participating operators to ensure that the profit-oriented operators are incentivized to participate. A key technical challenge when optimizing these objectives lies in capturing interdependencies between fares and commuters' travel choices. In response, our model integrates a discrete choice model of passengers' route and mode decisions based on prices and non-pricing attributes like travel times.
From a technical standpoint, our fare-setting model is a large-scale, mixed-integer, non-convex optimization problem--a challenging class of problems. Our first technical contribution is to design a two-stage decomposition in which the first-stage pricing decisions parameterize second-stage fare discounts and the induced passenger behaviors. The second stage becomes a more tractable mixed-integer linear optimization problem that can be solved with commercial solvers. To solve the full model, we develop a new solution approach combining tailored coordinate descent, parsimonious second-stage evaluations, and interpolations using Special Ordered Sets of type 2 (SOS2) (Misener and Floudas 2010). We also develop acceleration techniques based on slanted coordinate traversal and search direction randomization. This solution approach--our second technical contribution--is applicable to any two-stage formulation with a low-dimensional, convex, continuous first-stage and any computationally expensive black-box second stage. This solution approach is found to significantly improve outcomes, for passengers and operators, compared to those obtained with state-of-the-art benchmarks based on Bayesian Optimization (Mockus 2012).
From a practical standpoint, we design a large-scale case study focused on the morning commute in the Greater Boston Area. We find that our model sets fares that are in realistic ranges and have interpretable connections to alliance goals. For example, an alliance with a greater focus on minimizing total VMT prefers flat rather than distance-varying fares to increase system utilization by long-distance commuters. On the other hand, alliances with a greater emphasis on increasing transit access will set discounts with greater geographic variation to make alliance routes more attractive to heterogeneous populations. The clear alignment between operator goals and passenger choices achieved by our fare structures illustrates the value of modeling endogenous demand. Moreover, analysis of our results shows that the model is appropriately responsive to equity-oriented objectives: it enables the alliance to lower fares for, and increase utilization by, low-income and long-distance commuters. Finally, when compared to non-cooperative pricing, our fares and our tailored revenue allocation mechanism together incentivize revenue-oriented MOD operators not only to participate in the alliance but also to adopt the transit operator's priorities.
Section 2 presents the allied fare-setting model formulation, its two-stage decomposition enabling tractable solutions, as well as our revenue allocation mechanism. Section 3 describes our parsimonious SOS2-based coordinate descent approach, whose computational performance is compared against benchmarks in Section 4. We present our practical insights in Section 5 and conclude in Section 6.
## 2 Pricing Alliance Design Problem
We now present our design pipeline for the _Pricing Alliance Design Problem_ (PADP). In the PADP, the alliance--i.e., the jointly acting operators--sets a fare structure that optimizes joint operator priorities over the integrated multimodal network, subject to the passengers' endogenous route choice decisions. The individual operators must then decide whether or not to participate in the alliance they have designed based on the optimized fares and a revenue allocation mechanism.
### Assumptions
Before formulating the allied fare-setting model, we specify our characterization of system-wide benefits, our model of passenger decision-making, and our assumptions about static fares.
_System-wide benefits._ The alliance cooperatively sets fares over an integrated network with the objective of maximizing overall benefits to society, including travelers, operators, and the rest of society (Daganzo 2012). Consistent with the motivation of this work, we assume that the transit agency's own objective is identical to that of the alliance. We characterize an operator's benefits as its fare revenue and a passenger's benefits as their average utility across all available travel options. High passenger utility corresponds to the availability of many high-quality travel options. Finally, there are many ways to capture the system's impact on the rest of society, defined as everyone except the travelers and operators. Most people who take alternative travel options choose to drive personal vehicles, contributing to negative externalities, such as air pollution. Because a pricing alliance involves no change in permanent infrastructure but rather better utilization of the existing infrastructure, the key benefits of the alliance to the rest of society are likely to come from single-occupancy VMT reduction under the allied pricing regime. We ultimately compute system-wide benefits as a weighted sum of operator revenue, passenger utility, and a penalty for the outside-option VMT. The weights are determined by the alliance's relative priorities and can be varied to evaluate trade-offs.
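To make the weighted-sum structure concrete, the following sketch evaluates the three benefit components for a single passenger type. The function and variable names are illustrative assumptions, and the precise aggregation used by the model is the second-stage objective introduced later in (20).

```python
import numpy as np

def system_benefits(N_i, shares, share0, prices, utilities, util0, drive_dist,
                    mu_pax, mu_rev, mu_vmt):
    """Weighted system-wide benefit contribution of one passenger type (sketch).
    shares[r], prices[r], utilities[r] range over the type's route options;
    share0 is the outside-option share and drive_dist its driving distance."""
    revenue = N_i * float(np.dot(prices, shares))            # operator fare revenue
    pax_benefit = N_i * (util0 + float(np.sum(utilities)))   # utility of travel options
    vmt_penalty = N_i * drive_dist * share0                  # outside-option VMT
    return mu_rev * revenue + mu_pax * pax_benefit - mu_vmt * vmt_penalty
```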
_Passenger discrete choice model._ We model travelers as rational agents making travel decisions according to a _multinomial logit_ (MNL) discrete choice model. In an MNL model, the choice probabilities are proportional to each option's exponentiated utility, also known as its _attractiveness_(McFadden 1974). The MNL choice model allows us to embed a closed form of the passengers' decision-making process in the alliance fare-setting model, but it also presents limitations related to the independence of irrelevant alternatives (IIA) property. Some have circumvented such inaccuracies by using the _general attraction_ model (GAM), of which the MNL model is a special case (Gallego, Ratliff, and Shebalov 2015). The GAM formulates each choice probability as a function not only of the available options' attractiveness, but also the
shadow attractiveness of unavailable options. In practice, researchers have set the shadow attractiveness values to zero, in the absence of reliable data to estimate these parameters (Wei, Vaze, and Jacquillat 2020). Others leverage the nested MNL (Williams 1977), of which the MNL is _also_ a special case. Lo, Yip, and Wan (2004) and Bertsimas, Ng, and Yan (2020) assume that passengers select travel mode in the first level, and then they select a route under that mode in the second level. In our work, we populate passengers' route choice sets with the fastest route from each available travel mode (transit-only, MOD-only, or transit-MOD hybrid), including the option to drive, referred to in the literature as the _outside option_ or the _no purchase alternative_. Thus, our simplified choice model framework is equivalent to a GAM with zero shadow attractiveness values, or to a mode-route nested MNL with second-level choice sets containing one route each.
_Fare-setting._ The alliance sets fares over the integrated network. Some MOD operators might set time-varying fares on their independently operated network. In particular, TNCs may implement fare multipliers to manage two-sided markets between drivers and riders (Castillo, Knoepfle, and Weyl 2017). In a pricing alliance, however, the MOD operator is a contractor to the transit agency and consequently agrees to set time- and demand-homogeneous fares over allied network. This agreement facilitates transparent communication with passengers who can easily anticipate public sector prices, and it also allows the transit operator to set a budget for the alliance with higher confidence. We also assume that each operator is capable of serving all demand redistribution that occurs as a result of the newly set fares, and that capacity reallocation is therefore unnecessary to consider in the pricing alliance design process. Schlicher and Lurkin (2022) make a similar assumption: they assume a constant marginal cost due to market shares that do not change significantly. From a transit perspective, this implies that the fixed-route options (e.g., buses) have low load factors to start with, especially in and near transit deserts; on the MOD side, it implies that the operators have large driver pools driven by private market dynamics. In summary, we assume that the load difference on the integrated network when transitioning from non-cooperative to allied fare-setting will not impose a large enough change in network utilization to necessitate consideration of the associated resource allocation decisions. In the practical case study of Section 5, we validate this assumption by showing that the potential pricing alliances indeed do not pose a risk of over-saturating the integrated infrastructure.
### Exact Formulation
We now provide notation for formulating our allied fare-setting model with endogenous demand. Passengers select from a set of routes, \(\mathcal{R}\), serviced by a set of operators, \(\mathcal{O}\), which includes a public transit operator and an MOD operator, so that \(|\mathcal{O}|=2\). A _route_ is a sequence of trip legs, each served by some operator's infrastructure. To capture flat and distance-based fares, we define the non-discounted price of route \(r\in\mathcal{R}\) as:
\[\sum_{k\in\mathcal{O}_{r}}(\beta_{k}^{0}+\Delta_{rk}\cdot\beta_{k}^{\Delta}) \tag{1}\]
where \(\mathcal{O}_{r}\subseteq\mathcal{O}\) is the set of operators serving route \(r\); \(\Delta_{rk}\) is the distance of route \(r\) covered by operator \(k\); and \(\beta_{k}^{0}\) and \(\beta_{k}^{\Delta}\), respectively, are the base fare, and markup per unit distance traveled, for operator \(k\)'s
sub-network. We collectively refer to the base fares and distance-based markups of all operators as the _fare parameters_ (\(\beta\)), which are decision variables in our model. Fare parameters are constrained by (non-negative) upper and lower bounds (\((\beta_{\min}^{0},\beta_{\max}^{0})\), \((\beta_{\min}^{\Delta},\beta_{\max}^{\Delta})\)) determined by local legislative or operational requirements.
In addition to the fare parameters, the operators jointly select a set of discounted routes. Only a subset of routes, \(\mathcal{R}^{DE}\subseteq\mathcal{R}\), may be discount-eligible (DE). Rather than deciding whether or not each individual route should receive a discount, the discount-eligible routes may be grouped into _discount activation_ categories. Routes in the same discount activation category may share common geographic components specified by the alliance. By grouping routes into categories, passengers can easily interpret which routes are discounted from a map or a simple set of rules. Example definitions for discount activation categories might include all routes anchored on a particular hub location, or all routes whose origins and destinations are contained in specified regions. Let \(\mathcal{R}_{a}\subset\mathcal{R}\) be the set of routes corresponding to discount activation category \(a\in\mathcal{A}\). The sets \(\mathcal{R}_{a}\) partition \(\mathcal{R}^{DE}\), i.e., \(\cup_{a\in\mathcal{A}}\mathcal{R}_{a}=\mathcal{R}^{DE}\) and \(\mathcal{R}_{a}\cap\mathcal{R}_{b}=\emptyset\) for \(a\neq b\in\mathcal{A}\). Let \(x_{a}\in\{0,1\}\) denote the decision variable that activates discounts on all routes in \(\mathcal{R}_{a}\). This assumption is not restrictive; absence of activation categories can be handled easily by putting each route in its own category: \(|\mathcal{R}_{a}|=1,\forall a\in\mathcal{A}\). We note that the relaxation of this assumption might result in discount rules that are difficult to communicate to passengers in large-scale systems.
\(\Lambda\) is the discount multiplier for the routes selected to receive a discount, with an allowable range of \([\Lambda^{min},\Lambda^{max}]:0\leq\Lambda^{min}\leq\Lambda^{max}\leq 1\). The customer-facing price, \(p_{r}\), of route \(r\in\mathcal{R}\), is given as:
\[p_{r}=\begin{cases}(1-\Lambda\cdot x_{a})\cdot\left(\sum_{k\in\mathcal{O}_{r}} (\beta_{k}^{0}+\Delta_{rk}\cdot\beta_{k}^{\Delta})\right)&\text{if}\;\;r\in \mathcal{R}_{a},a\in\mathcal{A}\\ \sum_{k\in\mathcal{O}_{r}}(\beta_{k}^{0}+\Delta_{rk}\cdot\beta_{k}^{\Delta})& \text{if}\;\;r\in\mathcal{R}\setminus\mathcal{R}^{DE}\end{cases} \tag{2}\]
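As an illustration of the fare structure (1)-(2), the sketch below computes a route's customer-facing price from the fare parameters; all numerical values and names are assumptions made for the example.

```python
def route_price(ops, beta0, beta_dist, dist, discount_active, lam):
    """Customer-facing price of one route: ops are the operators serving it,
    dist[k] the distance covered by operator k, lam the discount multiplier,
    and discount_active indicates whether the route's category is activated."""
    full_price = sum(beta0[k] + dist[k] * beta_dist[k] for k in ops)     # eq. (1)
    return (1.0 - lam) * full_price if discount_active else full_price  # eq. (2)

# Hypothetical hybrid transit+MOD route with an activated discount (made-up data).
beta0 = {"transit": 1.70, "mod": 2.00}       # base fares (illustrative)
beta_dist = {"transit": 0.00, "mod": 0.75}   # per-mile markups (illustrative)
price = route_price({"transit", "mod"}, beta0, beta_dist,
                    {"transit": 4.0, "mod": 2.5}, discount_active=True, lam=0.4)
```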
We consider a set \(\mathcal{N}\) of passenger types. Each _passenger type_\(i\in\mathcal{N}\) is identified by a unique combination of origin, destination, and preference profile as described by their route choice utility coefficients. \(\mathcal{R}_{i}\) is the set of routes available to passengers of type \(i\in\mathcal{N}\). Some passengers are more averse to expensive travel options, whereas others are more sensitive to travel time, constituting different preference profiles. There are \(N_{i}\) passengers of type \(i\in\mathcal{N}\). We denote the utility to a passenger of type \(i\in\mathcal{N}\) of route \(r\in\mathcal{R}_{i}\) as \(u_{ir}+\alpha_{i}\cdot p_{r}\), where \(u_{ir}\) is the utility from non-monetary route attributes and \(\alpha_{i}\leq 0\) is the utility per unit price. The market share \(s_{ir}\) of route \(r\in\mathcal{R}_{i}\) for passenger type \(i\in\mathcal{N}\) is computed according to MNL as:
\[s_{ir}=\frac{\exp(u_{ir}+\alpha_{i}\cdot p_{r})}{\exp(u_{i0})+\sum_{s\in \mathcal{R}_{i}}\exp(u_{is}+\alpha_{i}\cdot p_{s})}, \tag{3}\]
where the outside option--not in set \(\mathcal{R}=\bigcup_{i\in\mathcal{N}}\mathcal{R}_{i}\)--has a utility \(u_{i0}\) and a market share computed as:
\[s_{i0}=\frac{\exp(u_{i0})}{\exp(u_{i0})+\sum_{s\in\mathcal{R}_{i}}\exp(u_{is}+ \alpha_{i}\cdot p_{s})}. \tag{4}\]
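The choice probabilities (3)-(4) admit a direct vectorized computation; the sketch below, with illustrative utilities and prices, returns the market shares for one passenger type.

```python
import numpy as np

def mnl_shares(u, alpha, prices, u0):
    """MNL market shares for one passenger type: u[r] are non-monetary utilities,
    prices[r] customer-facing prices, alpha <= 0 the price sensitivity, and
    u0 the utility of the outside (driving) option."""
    attractiveness = np.exp(u + alpha * prices)   # intrasystem routes, eq. (3)
    a0 = np.exp(u0)                               # outside option, eq. (4)
    denom = a0 + attractiveness.sum()
    return attractiveness / denom, a0 / denom     # (s_ir for r in R_i), s_i0

# Example with made-up data: three route options for one passenger type.
s, s0 = mnl_shares(np.array([1.2, 0.8, 0.5]), -0.3, np.array([3.0, 2.0, 4.5]), u0=1.0)
```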
Finally, the operators' relative priorities over the system-wide performance metrics are captured by non-negative objective function weights: \(\mu^{PAX},\mu^{REV},\mu^{VMT}\), respectively, corresponding to passenger benefits,
operator benefits, and the benefits from negative externality reduction. Table 2 summarizes all notation. Model PADP-FS (5)-(13) provides the exact formulation for the PADP fare-setting model. It jointly sets fares and discounts to maximize system-wide benefits across the integrated network (objective function (5)). Discounts are applied on selected routes (Constraints (6) and (7)) and utility-maximizing passengers make route selections according to an MNL (Constraints (8) and (9)). Fare parameters and the discount multipliers obey bounds (Constraints (10)-(12)).
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Component** & **Type** & **Description** \\ \hline \(\mathcal{A}\) & Set & Discount activation categories \\ \(\mathcal{N}\) & Set & Passenger types \\ \(\mathcal{O}\) & Set & Operators \\ \(\mathcal{R}\) & Set & Intrasystem routes, not including the outside option \\ \(\mathcal{O}_{r}\) & Set & Operators who help service route \(r\in\mathcal{R}\) \\ \(\mathcal{R}_{i}\) & Set & Route options available to passengers of type \(i\in\mathcal{N}\) \\ \(\mathcal{R}_{a}\) & Set & Routes in discount activation category \(a\in\mathcal{A}\) \\ \(\mathcal{R}^{DE}\) & Set & Discount-eligible routes, i.e., \(\bigcup_{a\in\mathcal{A}}\mathcal{R}_{a}\) \\ \hline \(N_{i}\) & Param. & Number of passengers of type \(i\in\mathcal{N}\) \\ \(\Delta_{i0}\) & Param. & Driving distance for a passenger of type \(i\in\mathcal{N}\) \\ \(\Delta_{rk}\) & Param & Distance the passenger travels with operator \(k\in\mathcal{O}\) on route \(r\in\mathcal{R}\) \\ \(u_{ir}\) & Param. & Non-monetary utility accrued by a passenger of type \(i\in\mathcal{N}\) on route \(r\in\mathcal{R}_{i}\) \\ \(u_{i0}\) & Param. & Utility accrued by a passenger of type \(i\in\mathcal{N}\) by driving \\ \(\alpha_{i}\) & Param. & Utility per unit price to a passenger of type \(i\in\mathcal{N}\) \\ \(\beta^{0}_{min},\beta^{0}_{max}\) & Param. & Minimum and maximum allowable base fares \\ \(\beta^{\Delta}_{min},\beta^{\Delta}_{max}\) & Param. & Minimum and maximum allowable distance-based markups \\ \(\Lambda^{min},\Lambda^{max}\) & Param. & Minimum and maximum allowable values of discount multipliers \\ \(\mu^{PAX},\mu^{REV},\mu^{VMT}\) & Param. & Relative priority weights of system-wide performance metrics \\ \hline \(x_{a}\) & Var. & Binary. Whether to activate discount option \(a\in\mathcal{A}\) \\ \(\beta^{0}_{k},\beta^{\Delta}_{k}\) & Var. & Continuous. Base fare and markup of operator \(k\in\mathcal{O}\) \\ \(p_{r}\) & Var. & Continuous. Customer-facing price of route \(r\in\mathcal{R}\) \\ \(\Lambda\) & Var. & Continuous. Discount multiplier applied to routes with activated discounts \\ \(s_{ir}\) & Var. & Continuous. Proportion of passengers of type \(i\in\mathcal{N}\) who choose route \(r\in\mathcal{R}_{i}\) \\ \(s_{i0}\) & Var. & Continuous. Proportion of passengers of type \(i\in\mathcal{N}\) who choose the outside option \\ \hline \hline \end{tabular}
\end{table}
Table 2: Notation.
\[s_{i0}=\frac{\exp(u_{i0})}{\exp(u_{i0})+\sum_{s\in\mathcal{R}_{i}} \exp(u_{is}+\alpha_{i}\cdot p_{s})} i\in\mathcal{N} \tag{9}\] \[\beta^{0}_{min}\leq\beta^{0}_{k}\leq\beta^{0}_{max} k\in\mathcal{O}\] (10) \[\beta^{\Delta}_{min}\leq\beta^{\Delta}_{k}\leq\beta^{\Delta}_{max} k\in\mathcal{O}\] (11) \[\Lambda^{min}\leq\Lambda\leq\Lambda^{max}\] (12) \[x_{a}\in\{0,1\} a\in\mathcal{A} \tag{13}\]
### Two-stage Decomposition
The PADP-FS model is a non-convex mixed-integer nonlinear optimization problem (MINLOP). There are no commercial solvers that accommodate non-convex MINLOPs, and no open-source solvers accept non-convex MINLOPs at practically large scale. Therefore, we propose a different solution approach. We decompose the formulation to tractably obtain high-quality solutions for practically sized problems (tens of thousands of variables and hundreds of thousands of constraints in our case study). By letting first-stage pricing decisions parameterize second-stage discount activations and induced passenger behaviors, the second stage can be formulated as a more tractable mixed integer linear optimization problem (MILOP).
Let \(\mathcal{B}:=[\beta^{0}_{min},\beta^{0}_{max}]^{2}\times[\beta^{\Delta}_{min}, \beta^{\Delta}_{max}]^{2}\) and \(\mathcal{L}:=[\Lambda^{min},\Lambda^{max}]\) respectively be the sets of allowable fare parameters and discount multipliers. We parameterize the second-stage problem by \((\widehat{\boldsymbol{\beta}},\widehat{\Lambda})\in\mathcal{B}\times \mathcal{L}\) and define \(\mathcal{S}(\widehat{\boldsymbol{\beta}},\widehat{\Lambda})\) as the feasible region parameterized by \((\widehat{\boldsymbol{\beta}},\widehat{\Lambda})\). We utilize the sales-based linear optimization formulation by Gallego, Ratliff, and Shebalov (2015) to reformulate the choice model constraints. The premise of the reformulation rests on proportionality constraints. Let \(\gamma_{ir}=\exp(u_{i0})/\exp(u_{ir}+\alpha_{i}\cdot p_{r})\) be the ratio of the attractiveness values of the outside option and route \(r\in\mathcal{R}_{i}\). Since the fare parameters are determined in the first stage, \(\gamma_{ir}\) is a constant in the second stage formulation for the non-discount-eligible routes. Then constraints (8) and (9) are reformulated as follows:
\[s_{i0}=\gamma_{ir}\cdot s_{ir} i\in\mathcal{N},r\in\mathcal{R}_{i} \tag{14}\] \[s_{i0}+\sum_{r\in\mathcal{R}_{i}}s_{ir}=1 i\in\mathcal{N}\] (15) \[s_{i0}\geq 0 i\in\mathcal{N}\] (16) \[s_{ir}\geq 0 i\in\mathcal{N},r\in\mathcal{R}_{i} \tag{17}\]
Equation (14) ensures that the market share of each route is proportional to its attractiveness. Constraints (15), (16), and (17) ensure that the market shares are non-negative and sum to 1. Note that Equation (14) still includes bilinearities for discount-eligible routes \(r\in\mathcal{R}_{i}\cap\mathcal{R}^{DE}\). For a type \(i\) passenger, \(\gamma_{ir}\) is evaluated either at the discounted price (\(\gamma_{ir}=\gamma_{ir}(\widehat{\boldsymbol{\beta}},\widehat{\Lambda})\)) or at the full price (\(\gamma_{ir}=\gamma_{ir}(\widehat{\boldsymbol{\beta}},0)\)), depending on whether the model selects the discount for route \(r\). Let \(\mathcal{N}_{a}\subset\mathcal{N}\) be the set of passenger types with at least one route option corresponding to discount activation category \(a\in\mathcal{A}\). We linearize constraint (14) as (18) using big-\(M\) constraints, letting \(M^{s}_{ir}=\gamma_{ir}(\widehat{\boldsymbol{\beta}},0)\geq 0\).
\[\begin{cases}s_{i0}\leq\gamma_{ir}(\widehat{\mathbf{\beta}},0)\cdot s_{ir}\\ s_{i0}\geq\gamma_{ir}(\widehat{\mathbf{\beta}},0)\cdot s_{ir}-M_{ir}^{s}\cdot x_{a}\\ s_{i0}\leq\gamma_{ir}(\widehat{\mathbf{\beta}},\widehat{\Lambda})\cdot s_{ir}+M_{ir }^{s}\cdot(1-x_{a})\\ s_{i0}\geq\gamma_{ir}(\widehat{\mathbf{\beta}},\widehat{\Lambda})\cdot s_{ir}\\ \end{cases}\qquad a\in\mathcal{A},i\in\mathcal{N}_{a},r\in\mathcal{R}_{i} \cap\mathcal{R}_{a} \tag{18}\]
We similarly handle the bilinearities presented by the revenue terms in the objective function. We define a new decision variable \(w_{ir}=p_{r}\cdot s_{ir}\) for passenger types \(i\in\mathcal{N}\) with discount-eligible routes \(r\in\mathcal{R}_{i}\cap\mathcal{R}^{DE}\). The linearized constraints (19) set the value of \(w_{ir}\) with \(M_{ir}^{w}=\widehat{\Lambda}\cdot\left(\sum_{k\in\mathcal{O}_{r}}(\widehat{ \beta}_{k}^{0}+\Delta_{rk}\cdot\widehat{\beta}_{k}^{\Delta})\right)\geq 0\).
\[\begin{cases}w_{ir}\leq\left(\sum_{k\in\mathcal{O}_{r}}(\widehat{ \beta}_{k}^{0}+\Delta_{rk}\cdot\widehat{\beta}_{k}^{\Delta})\right)\cdot s_{ir }\\ w_{ir}\geq\left(\sum_{k\in\mathcal{O}_{r}}(\widehat{\beta}_{k}^{0}+\Delta_{rk} \cdot\widehat{\beta}_{k}^{\Delta})\right)\cdot s_{ir}-M_{ir}^{w}\cdot x_{a}\\ w_{ir}\leq(1-\widehat{\Lambda})\cdot\left(\sum_{k\in\mathcal{O}_{r}}(\widehat{ \beta}_{k}^{0}+\Delta_{rk}\cdot\widehat{\beta}_{k}^{\Delta})\right)\cdot s_{ir }+M_{ir}^{w}\cdot(1-x_{a})\\ w_{ir}\geq(1-\widehat{\Lambda})\cdot\left(\sum_{k\in\mathcal{O}_{r}}(\widehat{ \beta}_{k}^{0}+\Delta_{rk}\cdot\widehat{\beta}_{k}^{\Delta})\right)\cdot s_{ir }\end{cases}\qquad a\in\mathcal{A},i\in\mathcal{N}_{a},r\in\mathcal{R}_{i}\cap \mathcal{R}_{a} \tag{19}\]
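To illustrate how Constraints (14)-(15) and (18)-(19) can be assembled in a modeling layer, the sketch below builds the second-stage blocks for a single passenger type using gurobipy. The solver choice, the function and variable names, and the per-type treatment of the activation variables (which are shared across passenger types in the full model) are assumptions made for exposition only.

```python
import gurobipy as gp
from gurobipy import GRB

def add_type_blocks(m, routes, category, gamma_full, gamma_disc, price_full, lam_hat, x):
    """Adds the share variables and Constraints (14)-(15), (18)-(19) for one passenger
    type. category maps a route to its activation category (None if not in R^DE);
    gamma_full[r] = gamma_ir(beta_hat, 0), gamma_disc[r] = gamma_ir(beta_hat, Lambda_hat);
    price_full[r] is the non-discounted price; x[a] are the shared binary activations."""
    s0 = m.addVar(lb=0.0, name="s0")                                 # (16)-(17) via lb=0
    s = {r: m.addVar(lb=0.0, name=f"s_{r}") for r in routes}
    w = {}
    m.addConstr(s0 + gp.quicksum(s.values()) == 1)                   # (15)
    for r in routes:
        a = category.get(r)
        if a is None:
            m.addConstr(s0 == gamma_full[r] * s[r])                  # (14), gamma constant
            continue
        Ms, Mw = gamma_full[r], lam_hat * price_full[r]              # big-M values
        m.addConstr(s0 <= gamma_full[r] * s[r])                      # (18)
        m.addConstr(s0 >= gamma_full[r] * s[r] - Ms * x[a])
        m.addConstr(s0 <= gamma_disc[r] * s[r] + Ms * (1 - x[a]))
        m.addConstr(s0 >= gamma_disc[r] * s[r])
        w[r] = m.addVar(lb=0.0, name=f"w_{r}")                       # w_ir = p_r * s_ir
        m.addConstr(w[r] <= price_full[r] * s[r])                    # (19)
        m.addConstr(w[r] >= price_full[r] * s[r] - Mw * x[a])
        m.addConstr(w[r] <= (1 - lam_hat) * price_full[r] * s[r] + Mw * (1 - x[a]))
        m.addConstr(w[r] >= (1 - lam_hat) * price_full[r] * s[r])
    return s0, s, w
```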
Table 3 summarizes the additional and modified notation for the second-stage model.
In response to fare parameters set in the first stage, the second-stage problem activates discounts that optimize system-wide performance metrics, subject to induced passenger decisions. The optimal value of the second-stage problem is denoted by \(W\) in equation (20).
\[W(\widehat{\mathbf{\beta}},\widehat{\Lambda}):=\max_{(\mathbf{x},\mathbf{s},\mathbf{w},\mathbf{p} )\in\mathcal{S}(\widehat{\mathbf{\beta}},\widehat{\Lambda})}\sum_{i\in\mathcal{N}}N_{i}\cdot\left[\mu^{PAX}\cdot\left(u_{i0}+\sum_{r\in\mathcal{R}_{i}}(u_{ir}+ \alpha_{i}\cdot p_{r})\right)+\mu^{REV}\cdot\sum_{r\in\mathcal{R}_{i}}w_{ir}- \mu^{VMT}\cdot(\Delta_{i0}\cdot s_{i0})\right]\!, \tag{20}\]
where the second-stage feasible region \(\mathcal{S}(\widehat{\mathbf{\beta}},\widehat{\Lambda})\) is given by
\[\mathcal{S}(\widehat{\mathbf{\beta}},\widehat{\Lambda})\equiv\Bigl\{(\mathbf{x},\mathbf{s},\mathbf{w},\mathbf{p})\in\{0,1\}^{|\mathcal{A}|}\times\mathbb{R}_{+}^{\sum_{i\in\mathcal{N}}(|\mathcal{R}_{i}|+1)}\times\mathbb{R}^{\sum_{i\in\mathcal{N}}|\mathcal{R}_{i}\cap\mathcal{R}^{DE}|}\times\mathbb{R}^{|\mathcal{R}|}:\]
\[\text{the prices }\mathbf{p}\text{ satisfy (2) at }(\widehat{\mathbf{\beta}},\widehat{\Lambda}),\ \text{Constraints (15)--(17) hold},\ \text{(14) holds for }r\in\mathcal{R}_{i}\setminus\mathcal{R}^{DE},\ \text{and (18)--(19) hold}\Bigr\}.\]
Then we define the PADP-FS2SD model, the two-stage decomposition of the PADP-FS model.
\[\mathrm{(PADP-FS2SD)}\qquad\max W(\mathbf{\beta},\Lambda) \tag{26}\] \[\mathrm{s.t.}\quad\mathbf{\beta}\in\mathcal{B}\] (27) \[\Lambda\in\mathcal{L} \tag{28}\]
Lemma 1: _Formulations PADP-FS and PADP-FS2SD are equivalent._
The proof of Lemma 1 is in Appendix A.
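The decomposition is consumed by searching over the low-dimensional continuous first stage while treating \(W(\boldsymbol{\beta},\Lambda)\) as an expensive black box. The bare-bones sketch below uses a naive cyclic line search for illustration only; the actual algorithm of Section 3 replaces it with SOS2-based interpolations, slanted traversal, randomization, and warm-starts.

```python
import numpy as np

def first_stage_search(W, bounds, n_grid=7, n_rounds=3):
    """Cycle through the first-stage coordinates (base fares, markups, Lambda),
    improving one at a time by a 1-D grid search; each call to W solves the
    second-stage MILP for the corresponding (beta, Lambda)."""
    point = np.array([(lo + hi) / 2.0 for lo, hi in bounds])   # start at midpoints
    best = W(point)
    for _ in range(n_rounds):
        for j, (lo, hi) in enumerate(bounds):
            for val in np.linspace(lo, hi, n_grid):
                trial = point.copy()
                trial[j] = val
                obj = W(trial)                                  # expensive evaluation
                if obj > best:
                    best, point = obj, trial
    return point, best
```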
eq eq: eq eq: eq: eq eq: eq: eq: eq eq:
The proof of Lemma 2 is in Appendix C. Because the alliance's priorities may also include benefits to passengers and/or benefits to the rest of the society in the form of reduced VMT, the alliance may earn less revenue than the operators' combined revenue in the non-cooperative regime. Despite this, the transit operator can choose to guarantee that, by cooperating, the revenue-oriented MOD operator earns at least as much as it would have earned otherwise. We assume that the MOD operator will participate in the alliance if its non-cooperative and allied revenues are equal. In the event that the alliance accrues strictly more revenue than that in the non-cooperative regime, the operators split the surplus evenly.
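For concreteness, the revenue-allocation rule just described can be expressed as a short computational sketch. The following Julia snippet is an illustration only (the variable names are ours, not the paper's notation): it guarantees the MOD operator its non-cooperative revenue and splits any remaining surplus evenly, with the transit operator absorbing any shortfall.

```
# Hypothetical illustration of the revenue allocation mechanism described above.
# r_alliance   : total fare revenue collected by the alliance
# r_mod_nc     : MOD operator's revenue in the non-cooperative equilibrium
# r_transit_nc : transit operator's revenue in the non-cooperative equilibrium
function allocate_revenue(r_alliance, r_mod_nc, r_transit_nc)
    surplus = r_alliance - (r_mod_nc + r_transit_nc)
    if surplus >= 0
        # MOD participates: it recovers its outside option plus half the surplus.
        mod_alloc = r_mod_nc + surplus / 2
    else
        # Alliance earns less than the combined non-cooperative revenue; the
        # transit operator absorbs the shortfall to keep the MOD operator whole.
        mod_alloc = r_mod_nc
    end
    transit_alloc = r_alliance - mod_alloc
    return (mod_operator = mod_alloc, transit_operator = transit_alloc)
end

allocate_revenue(1_200_000.0, 500_000.0, 600_000.0)
# => (mod_operator = 550000.0, transit_operator = 650000.0)
```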
## 3 Solution Approach
### Motivation
Section 2.3 presented a two-stage decomposition of the allied fare-setting formulation, with the second stage characterized as a mixed-integer linear optimization problem and the first stage as a low-dimensional decision problem over a convex space. Without an analytic closed form of \(W\), its gradients are inaccessible, which rules out gradient-based approaches. Bayesian Optimization is applicable and has been leveraged in recent urban transportation studies of MOD systems (Liu et al., 2019), but it does not provide clear convergence criteria. PADP-2SD is also not amenable to Benders decomposition because of the nonlinear interdependencies between first- and second-stage decisions. Finally, our problem lacks the simpler centralized welfare-maximization structure tackled by Banerjee et al. (2021), so their convexification strategy cannot be applied either.
Our solution strategy approximates gradient descent for solving the first-stage problem. Because the first-stage feasible space is low-dimensional and convex, we begin with a _coordinate descent_ framework, which takes turns fixing all fare parameters except one and greedily optimizing along the free dimension. Even one-dimensional search is difficult because the search space is a continuous spectrum of optimal MILOP solutions. While solving the second-stage problem is fast enough to be a useful tool (at most 5 seconds on average; see Section 4), it is also slow enough to warrant a judicious selection of first-stage evaluation points. Thus, our tailored coordinate descent approach scans each search direction by solving a model that approximates the best solution along that direction. After evaluating a few points along the free search direction with the second-stage MILOP, an auxiliary model interpolates intermediate solutions along that direction using Special Ordered Sets of type 2 (SOS2) (Misener and Floudas, 2010). The process terminates when no improvement is found along any coordinate direction. We define this basic SOS2 Coordinate Descent (SOS2-CD) approach in Section 3.2.
To further improve final solution quality given a computational budget, we develop three acceleration strategies that build upon SOS2-CD. First, rather than using an arbitrary search direction sequence, we introduce more opportunities to escape local optima by randomizing search direction order. Second, we exploit the fact that the SOS2 approximation model is valid along _any search direction_ through the first-stage problem's search space, and not just those parallel to coordinate axes. Natural search direction candidates are
those where each operator's base fare and markup are jointly varied while all other parameters are held constant. By running SOS2 coordinate descent over such slanted directions, we unlock directions that navigate trade-offs between high base fares with low markups and low base fares with high markups, which would be unavailable with single-coordinate search directions. Finally, we show how to mitigate SOS2-CD's sensitivity to random initializations by leveraging warm-start solutions. Section 3.3 describes the final algorithm, including acceleration strategies, initialization procedures, and the incorporation of time limits. Computational results in Section 4 demonstrate the effectiveness of SOS2-CD and all acceleration strategies.
### SOS2 Coordinate Descent
Let \(\mathcal{Y}:=\mathcal{B}\times\mathcal{L}\) denote the search space over which to optimize \(W\), and let \(\mathbf{y}:=(\mathbf{\beta},\Lambda)\in\mathcal{Y}\) denote a solution. Let \(\mathcal{Y}_{i}(\mathbf{y})=\{z\in\mathbb{R}\,:\,(y_{1},\cdots,y_{i-1},z,y_{i+1},\cdots,y_{n})\in\mathcal{Y}\}\) be the subset of the feasible space with all dimensions other than the \(i^{th}\) fixed to those of solution \(\mathbf{y}\in\mathcal{Y}\). We define \(\mathcal{S}(\mathbf{y})\) and \(\mathcal{S}^{*}(\mathbf{y})\subseteq\mathcal{S}(\mathbf{y})\) to be, respectively, the sets of feasible and optimal second-stage decisions given fare parameters \(\mathbf{y}\in\mathcal{Y}\):
\[\mathcal{S}^{*}(\mathbf{y}):=\arg\max_{(\mathbf{x},\mathbf{s},\mathbf{w},\mathbf{p})\in\mathcal{S }(\mathbf{y})}\sum_{i\in\mathcal{N}}N_{i}\cdot\Big{[}\mu^{PAX}\cdot\big{(}u_{i0}+ \sum_{r\in\mathcal{R}_{i}}(u_{ir}+\alpha_{i}\cdot p_{r})\big{)}+\mu^{REV} \cdot\sum_{r\in\mathcal{R}_{i}}w_{ir}-\mu^{VMT}\cdot(\Delta_{i0}\cdot s_{i0}) \Big{]}.\]
**Basic Coordinate Descent.** Coordinate descent is a greedy method that successively optimizes a multivariate function along coordinate axes. Starting from an initial point, it cyclically optimizes along every coordinate direction while holding all other dimensions fixed. Algorithm 1 presents basic coordinate descent for solving the PADP. Optimizing even a single dimension of \(W\) is hard, because it entails navigating a continuous spectrum of optimal MILOP solutions, which has no closed analytic form. We therefore propose a rigorous method for tractably replacing Step 7 of Algorithm 1 using _SOS2 interpolation_.
```
1:arg\(\mathbf{y}^{0}\) : Initial solution in \(\mathcal{Y}\)
2:procedureCoordinate Descent(\(\mathbf{y}^{0}\))
3:\(objPrev\leftarrow-\infty\); \(objCur\gets W(\mathbf{y}^{0})\); \(k\gets 0\); \(n\leftarrow\dim(\mathbf{y}^{0})\)
4:while\(objCur-objPrev>\epsilon\)do
5:\(k\gets k+1;objPrev\gets objCur;\mathbf{y}^{k}\leftarrow\mathbf{y}^{k-1}\)
6:for\(i\in\{1,\dots,n\}\)do
7:\(y_{i}^{k}=\arg\max_{z\in\mathcal{Y}_{i}(\mathbf{y}^{k})}W(y_{1}^{k},\cdots,y_{ i-1}^{k},z,y_{i+1}^{k},\cdots,y_{n}^{k})\)
8:\(objCur\gets W(\mathbf{y}^{k})\)
9:return\(\mathbf{y}^{k}\)
```
**Algorithm 1** Coordinate Descent for maximization of \(W\)
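For readers who prefer executable pseudocode, Algorithm 1 translates directly into the following Julia sketch, where \(W\) is treated as a black box and the helper `maximize_along` is a crude grid-search stand-in for the one-dimensional optimization in Step 7 (which our method replaces with SOS2 interpolation below); the bounds, tolerance, and iteration limit are illustrative assumptions.

```
# Illustrative translation of Algorithm 1 for a black-box objective W.
function coordinate_descent(W, y0; tol = 1e-6, maxiter = 100)
    y = copy(y0)
    obj_prev, obj_cur = -Inf, W(y)
    k = 0
    while obj_cur - obj_prev > tol && k < maxiter
        k += 1
        obj_prev = obj_cur
        for i in eachindex(y)
            y[i] = maximize_along(W, y, i)   # Step 7: 1-D search along coordinate i
        end
        obj_cur = W(y)
    end
    return y, obj_cur
end

# Crude stand-in for the 1-D search: evaluate W on an evenly spaced grid in [lo, hi].
function maximize_along(W, y, i; lo = 0.0, hi = 10.0, npoints = 21)
    best_z, best_val = y[i], W(y)
    for z in range(lo, hi; length = npoints)
        trial = copy(y); trial[i] = z
        val = W(trial)
        if val > best_val
            best_val, best_z = val, z
        end
    end
    return best_z
end
```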
**SOS2 Interpolation.** Our SOS2 interpolation procedure performs approximate local search along a specified direction to produce the next candidate solution. First, we solve \(D\) second-stage models at evenly spaced points along the search direction, obtaining a sequence of fare parameter values acting as _anchors_ for the SOS2 interpolation. We denote the anchors by \(\boldsymbol{y}^{d}:=(\boldsymbol{\beta}^{d},\Lambda^{d})\) and their corresponding solutions by \((\boldsymbol{x}^{d},\boldsymbol{s}^{d},\boldsymbol{w}^{d},\boldsymbol{p}^{d}) \in\mathcal{S}^{*}(\boldsymbol{y}^{d}),\forall d\in\{1,\ldots,D\}\). Let \(\Omega=\{\boldsymbol{y}^{d}\,:\,d\in\{1,\ldots,D\}\}\) be the ordered set of anchors. Larger \(D\) values interpolate more accurately, but the solution is also more computationally expensive.
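A minimal sketch of the anchor-generation step is given below; the function name, box bounds, and default values are our own illustrative assumptions rather than the exact subroutine of Appendix D.

```
# Evenly spaced anchors along coordinate `i`, spanning the feasible interval
# [lo, hi] for that fare parameter; the current solution is appended so the
# interpolation step can never do worse than staying put (cf. Algorithm 2).
function generate_anchors(y, i, D; lo = 0.0, hi = 10.0)
    anchors = Vector{typeof(y)}()
    for z in range(lo, hi; length = D)
        a = copy(y); a[i] = z
        push!(anchors, a)
    end
    cur = copy(y)
    any(a -> a == cur, anchors) || push!(anchors, cur)
    sort!(anchors; by = a -> a[i])   # keep anchors ordered along the direction
    return anchors
end
```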
Figure 2 visualizes the selection of the next candidate solution using SOS2 variables. \(W\) is exactly evaluated at every anchor and approximated between the anchors using the interpolated anchor solutions. The next candidate solution is selected where the approximation of \(W\) is maximized. Rather than directly interpolating \(W\), or \(\boldsymbol{w}_{ir}\) variables linearizing \(p_{r}s_{ir}\) terms, we interpolate the price and market share variables, \(\boldsymbol{p}\) and \(\boldsymbol{s}\). Otherwise, the interpolated values of \(W\) will be convex combinations of anchor valuations (straight line segments connecting consecutive anchors in Figure 2), eliminating any chance of selecting fare parameters between anchors. Moreover, since the objective function's nonlinearities are quadratic in nature due to the multiplicative revenue terms, we capture them with this SOS2 approximation.
For a given number of anchor points \(D\), the SOS2 model (Misener and Floudas 2010) is algebraically specified as \(SOS2(D)\equiv\Big\{\,(\boldsymbol{\lambda},\boldsymbol{z})\in\mathbb{R}_{+}^{D}\times\{0,1\}^{D-1}\,:\,\sum_{d=1}^{D}\lambda_{d}=1,\sum_{d=1}^{D-1}z_{d}=1,\lambda_{1}\leq z_{1},\lambda_{d}\leq z_{d-1}+z_{d}\,\forall d\in\{2,\ldots,D-1\},\lambda_{D}\leq z_{D-1}\Big\}\). Here, the \(\boldsymbol{\lambda}\) variables are the convex combination weights for outputs at fare parameters \((\boldsymbol{\beta}^{d},\Lambda^{d})\), and each binary variable \(z_{d}\) indicates whether to select the segment between anchors \(d\) and \(d+1\). We now use the SOS2 variables to approximate \(W\) along a given coordinate axis. Expression (31) presents the set of fare parameters, \(SOS2^{*}(\Omega)\), that optimize the approximated \(W\) given the ordered anchor set \(\Omega\). Expression (32) denotes optimal solutions at all anchors. The optimal SOS2 variables are selected to maximize the approximated \(W\) function in Constraint (33). Finally, the approximately optimal fares are interpolated in equation (34). The approximated objective function is quadratic, making
Figure 2: SOS2 interpolation of \(W\) value and the selection of next candidate first-stage solution \(\boldsymbol{y}^{*}\).
(33) a mixed-integer quadratic optimization problem. Fortunately, it can be solved almost instantly to global optimality with commercial solvers, because \(D\) is small by design.
\[SOS2^{*}(\Omega):= \tag{31}\] \[\left\{\boldsymbol{y}\,:(\boldsymbol{x}^{d},\boldsymbol{s}^{d}, \boldsymbol{w}^{d},\boldsymbol{p}^{d})\in\mathcal{S}^{*}(\boldsymbol{y}^{d}), \quad\forall\boldsymbol{y}^{d}\in\Omega\right.\] (32) \[\left.\qquad(\boldsymbol{\lambda}^{*},\boldsymbol{z}^{*})\in \operatorname*{arg\,max}_{(\boldsymbol{\lambda},\boldsymbol{z})\in SOS2(D)}\quad\sum_{i\in\mathcal{N}}N_{i}\cdot\left( \mu^{PAX}\cdot\left(u_{i0}+\sum_{r\in\mathcal{R}_{i}}(u_{ir}+\alpha_{i}\cdot \sum_{d\in\mathcal{D}}p_{r}^{d}\cdot\lambda_{d})\right)+\right.\right.\] \[\left.\qquad\qquad\left.\mu^{REV}\cdot\sum_{r\in\mathcal{R}_{i}} \Big{(}\sum_{d\in\mathcal{D}}(p_{r}^{d}\cdot\lambda_{d})\cdot\sum_{d\in \mathcal{D}}(s_{ir}^{d}\cdot\lambda_{d})\Big{)}-\mu^{VMT}\cdot\Delta_{i0} \cdot\sum_{d\in\mathcal{D}}(s_{i0}^{d}\cdot\lambda_{d})\Big{)}\right.\] (33) \[\left.\boldsymbol{y}= \sum_{\boldsymbol{y}^{d}\in\Omega}\lambda_{d}^{*}\boldsymbol{y}^{d}\right. \tag{34}\]
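To make the selection step (31)-(34) concrete, the following JuMP sketch builds and solves the SOS2 interpolation model for one search direction, assuming the anchor data have been precomputed into arrays (the linear objective pieces `lin`, anchor prices `P`, anchor market shares `S`, and anchor fare vectors `Y`); it is a simplified illustration in our own notation, not the exact implementation used in the experiments.

```
using JuMP, Gurobi

# lin[d]  - objective terms that are linear in the weights (passenger-utility
#           and VMT pieces evaluated at anchor d, population-weighted)
# P[d, r] - optimal price of route r at anchor d
# S[d, r] - (population-weighted) market share of route r at anchor d
# Y[d, :] - fare-parameter vector of anchor d
function sos2_step(lin, P, S, Y; mu_rev = 1.0)
    D, R = size(P)
    model = Model(Gurobi.Optimizer)
    set_silent(model)
    # The product of interpolated prices and shares is an indefinite quadratic,
    # so Gurobi's nonconvex MIQP mode is needed.
    set_optimizer_attribute(model, "NonConvex", 2)

    @variable(model, lambda[1:D] >= 0)          # convex-combination weights
    @variable(model, z[1:D-1], Bin)             # segment selectors

    @constraint(model, sum(lambda) == 1)
    @constraint(model, sum(z) == 1)
    @constraint(model, lambda[1] <= z[1])
    @constraint(model, [d = 2:D-1], lambda[d] <= z[d-1] + z[d])
    @constraint(model, lambda[D] <= z[D-1])

    # Linear terms are interpolated directly; the revenue term multiplies the
    # interpolated price by the interpolated share, giving a quadratic objective.
    @objective(model, Max,
        sum(lin[d] * lambda[d] for d in 1:D) +
        mu_rev * sum(sum(P[d, r] * lambda[d] for d in 1:D) *
                     sum(S[d, r] * lambda[d] for d in 1:D) for r in 1:R))

    optimize!(model)
    w = value.(lambda)
    return Y' * w    # interpolated fare parameters, cf. equation (34)
end
```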
**Summary of SOS2 Coordinate Descent.** We present SOS2-CD in Algorithm 2, which replaces the one-dimensional optimization in Step 7 of Algorithm 1 with the SOS2-based approximation. The subroutine Search Directions provides a comprehensive ordered list of search directions that can potentially be multidimensional and/or randomized (options further discussed in the next subsection); the default is to cycle through coordinate axes, i.e. to return \(searchDirections=\{1,\ldots,\dim(\boldsymbol{y}^{0})\}\) when \(random\) and \(multidim\) are both set to \(false\). The subroutine Generate Anchors returns evenly spaced SOS2 anchors along the specified search direction. The current solution is included in the anchor set to ensure that the new solution is at least as good as the previous one. Appendix D presents subroutines Search Directions and Generate Anchors in full detail. After generating the anchors in Step 7, Step 8 computes an optimal solution for each anchor, uses these anchor solutions for interpolation, and picks the solution that maximizes the approximated \(W\) over the given search direction. Step 9 computes the true value of \(W\) at the new candidate solution and updates the current solution if necessary. The algorithm iterates until convergence.
### Final Algorithm
We now present three strategies that provide SOS2-CD with additional opportunities to escape local optima and thus improve solution quality. The first strategy relaxes the assumption of a deterministic search direction order. Because the order of search directions is arbitrary, we can randomize it after each iteration. This strategy is selected by setting the \(random\) argument to \(true\). Second, since the SOS2 approximation model is valid along _any search direction_ intersecting the current solution, not just the coordinate axes, an operator's fare parameter pair (base fare and markup) defines a natural subset of dimensions to search simultaneously. Given such a pair of dimensions, this strategy samples, uniformly at random, an affine line that intersects the current solution and spans the selected dimensions. It then drops anchors at evenly spaced points along the sampled line and obtains the next candidate solution by maximizing the approximated value of \(W\). While there are
many possibilities for multidimensional search directions, we restrict attention to each operator's fare parameter pair. Thus, when the algorithm's \(multidim\) argument is set to \(true\), the list of search directions contains three items: (1) transit parameters, (2) MOD parameters, and (3) the discount multiplier. Whenever an operator's fare parameters are selected as the search direction, we sample a new affine line with the aforementioned procedure, as sketched below. Appendix D fully specifies the subroutine Search Directions.
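As an illustration of the slanted-direction variant, the sketch below samples a random affine direction in the plane spanned by one operator's base fare (index \(i\)) and markup (index \(j\)); parameterizing the line by an angle is our own modeling choice.

```
# Sample a random direction in the (i, j) plane through the current solution y.
# Candidate points along the line are y .+ t .* dir; anchors are then dropped
# at evenly spaced values of t within the fare bounds.
function sample_slanted_direction(y, i, j)
    theta = rand() * pi          # slope of the affine line, chosen uniformly
    dir = zeros(length(y))
    dir[i] = cos(theta)          # component along the base-fare axis
    dir[j] = sin(theta)          # component along the markup axis
    return dir
end
```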
The last acceleration strategy incorporates a timed warm-start procedure. The outcome of a single round of SOS2-CD may depend on the initial solution. Starting from a random set of initial solutions, the basic implementation repeats SOS2-CD until a computational time budget has elapsed. Each repetition of SOS2-CD is called a _trajectory_, and the best fare parameters found across all trajectories are returned. Convergence to higher-quality solutions may be more likely given intelligent initializations. We can warm-start the algorithm by first obtaining a few samples in the region with a specified \(warmStartProcedure\), and then selecting the best starting points from them. The \(warmStartProcedure\) might simply be uniform sampling from the region, or it can search the space in a more principled way, such as with Bayesian Optimization.
Algorithm 3 presents the overall solution algorithm. \(\tau^{WS}\) and \(\tau\) each define the time limits devoted to the warm-start and SOS2-CD procedures, respectively. The arguments \(random\) and \(multidim\) are Booleans indicating whether randomized and/or multidimensional search directions will be used. The \(warmStartProcedure\) specifies the procedure for generating informed initializations.
```
1:args\(\mathbf{y}^{0}\) : Initial solution in \(\mathcal{Y}\); \(D\): Number of SOS2 anchors; \(random\): Boolean, whether to randomize search directions; \(multidim\): Boolean, whether to use multidimensional slanted search directions
2:procedure SOS2 Coordinate Descent(\(\mathbf{y}^{0},D,random,multidim\))
3:\(objPrev\leftarrow-\infty\); \(objCur\gets W(\mathbf{y}^{0})\); \(k\gets 0\)
4:while\(objCur-objPrev>\epsilon\)do
5:\(k\gets k+1;objPrev\gets objCur;\mathbf{y}^{k}\leftarrow\mathbf{y}^{k-1}\)
6:for\(i\in\) Search Directions (\(random\), \(multidim\)) do
7:\(\Omega\leftarrow\textsc{GenerateAnchors}(\mathbf{y}^{k},i,D)\)
8: Interpolate an optimal solution: draw some \(\mathbf{y}^{*}\) from \(SOS2^{*}(\Omega)\)
9:\(\mathbf{y}^{k}\leftarrow\mathbf{y}^{*}\) if \(W(\mathbf{y}^{*})>objCur\) else \(\mathbf{y}^{k}\)
10:\(objCur\gets W(\mathbf{y}^{k})\)
11:return\(\mathbf{y}^{k}\)
```
**Algorithm 2** SOS2 Coordinate Descent for maximization of \(W\)
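For completeness, the overall timed, warm-started procedure of Algorithm 3 can be sketched as the following higher-order Julia routine; the callables `sos2_cd`, `init`, and `warm_start` stand in for the subroutines described above, and the details of the actual implementation may differ.

```
# Sketch of the overall timed procedure: spend tau_ws seconds on warm-starting,
# then repeat SOS2-CD trajectories until the remaining budget tau is exhausted.
# `sos2_cd` runs one trajectory of Algorithm 2 from a given start, `init` draws
# a random initial solution, and `warm_start` (optional) returns promising starts.
function padp_solve(W, sos2_cd, init; tau_ws = 0.0, tau = 3600.0, warm_start = nothing)
    starts = warm_start === nothing ? Any[] : warm_start(W, tau_ws)
    best_y, best_obj = nothing, -Inf
    t0 = time()
    while time() - t0 < tau
        y0 = isempty(starts) ? init() : popfirst!(starts)
        y = sos2_cd(y0)                     # one trajectory
        obj = W(y)
        if obj > best_obj
            best_obj, best_y = obj, y
        end
    end
    return best_y, best_obj
end
```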
## 4 Computational Results
We now discuss the accuracy and tractability of our approach through several computational experiments using a large-scale profit maximization case study of the Greater Boston Area (see Section 5.1 for details).
All optimization models are solved with Gurobi v9.0 and the JuMP package in Julia v1.4 (Dunning, Huchette, and Lubin 2017).
### Comparisons under 1-Hour Computational Time Budget
We now demonstrate the superior computational performance of our approach (Algorithm 3). Table 4 compares different versions of our approach, with different combinations of acceleration strategies, including multidimensional search (SOS2-CD-MD), randomized search directions (SOS2-CD-R), both (SOS2-CD-MD-R) and neither (SOS2-CD). None of these four approaches use intelligent warm-starts. We establish two algorithmic benchmarks against which to compare our computational results. The first benchmark is Brute-Force Coordinate Descent (BF-CD). BF-CD differs from SOS2-CD in the way it conducts each iteration of coordinate descent. BF-CD uses a much higher number of "anchors" along the search direction and solves a second-stage model at each anchor. Instead of the SOS2-based interpolation, it just selects the anchor with the highest value of the second-stage objective function as the new candidate solution. The trade-off at each iteration is a drastic computation time increase for a more accurate evaluation of the points along the search direction. We implement BF-CD using a 1% granularity for the discount multiplier and a $0.01 granularity for both base fares and markups. The second benchmark is Bayesian Optimization (BO)--a global optimization method for black-box functions that are computationally expensive to evaluate and
may not have gradients (Mockus 2012). Our black-box function is \(W\). BO places a prior belief over the possible objective values of \(W\) and updates it into a posterior based on the candidate solutions evaluated so far. The posterior distribution decides which candidate solution to evaluate next, so that the sequential search both explores unseen regions of the decision space and exploits regions that are more likely to host global optima. Appendix F includes a detailed account of BO, including all hyperparameter settings.
We also tested time-limited SOS2-CD-MD-R with BO warm-starts, with varying time limit allocations to the warm-starts. In other words, we execute Algorithm 3 where the \(warmStartProcedure\) is Bayesian Optimization. The warm-start trials have names ending in BO-\(TL\), where \(TL\) is the BO warm-start time limit in minutes. Table 4 presents the performance statistics across 50 trials, each with a 1-hour limit. All outcomes are expressed in surplus USD over the average 1-hour BO benchmark performance.
First, we observe that all four variations of our approach, even without warm-starts, significantly outperform the BO benchmark in terms of average (by $13.5K-$19.3K) and best-case (by $5.1K-$5.8K) performance. Moreover, our approaches with multidimensional search (SOS2-CD-MD-R and SOS2-CD-MD) also beat the BO benchmark on worst-case performance across the 50 trials (by $7.5K-$22.3K). Note that the average-case and worst-case performance of the approaches with either acceleration strategy (MD, R, or both) was superior to that of the basic SOS2-CD approach. The BF-CD approach never terminated within the one-hour time limit; in fact, BF-CD could not even evaluate one full set of anchors in all but 7 cases. Furthermore, our approaches with warm-starts perform even better than those without. In particular, a 40-minute BO warm-start drastically outperforms the benchmarks in the worst case and provides the best average-case performance, while a 20-minute BO warm-start provides the strongest best-case performance. In summary, all our approaches significantly beat the benchmarks, and all three acceleration strategies (random search, slanted search, and warm-starts) were found to enhance the performance of our basic SOS2-CD solution approach.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & & & & & & Objective (Thousand \$) \\ Algorithm & \(random\) & \(multidim\) & \(warmStartProcedure\) & \(\tau^{WS}\) & \(\tau\) & Min & Avg. & Max \\ \hline BO & - & - & - & - & - & \(-28.0\) & \(0.0^{*}\) & \(18.6\) \\ SOS2-CD-MD-R-BO-\(50\) & Yes & Yes & BO & 50 & 10 & \(-11.9\) & \(15.4\) & \(24.1\) \\ SOS2-CD-MD-R-BO-\(40\) & Yes & Yes & BO & 40 & 20 & \(\mathbf{12.6}\) & \(\mathbf{20.3}\) & \(24.1\) \\ SOS2-CD-MD-R-BO-\(30\) & Yes & Yes & BO & 30 & 30 & \(0.0\) & \(19.7\) & \(24.0\) \\ SOS2-CD-MD-R-BO-\(20\) & Yes & Yes & BO & 20 & 40 & \(-4.3\) & \(19.1\) & \(\mathbf{24.6}\) \\ SOS2-CD-MD-R-BO-\(10\) & Yes & Yes & BO & 10 & 50 & \(-9.8\) & \(20.0\) & \(24.2\) \\ SOS2-CD-MD-R & Yes & Yes & - & - & 60 & \(-20.5\) & \(19.0\) & \(23.7\) \\ \hline BF-CD & - & - & - & - & 60 & - & - & \(4.8\) \\ SOS2-CD & No & No & - & - & 60 & \(-103.8\) & \(13.5\) & \(24.1\) \\ SOS2-CD-MD & No & Yes & - & - & 60 & \(-5.7\) & \(19.3\) & \(24.1\) \\ SOS2-CD-R & Yes & No & - & - & 60 & \(-77.6\) & \(17.1\) & \(24.4\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Objective function statistics with 1-hour time limits and 50 trials each, expressed in terms of surplus compared to average 1-hour BO performance. \({}^{*}\) Average BO performance = $3,634,074.
### Comparisons under Higher Computational Time Budgets
All comparisons in the previous subsection assumed a 1-hour computational time budget and showed the significant superiority of our basic approach over the benchmarks, as well as the value of our acceleration strategies. A question emerges as to whether these findings hold when longer computational budgets are available. To answer this question, Table 5 compares performance under three time budgets: 1 hour, 6 hours, and 12 hours. We additionally provide statistics on the number of trajectories. All versions of our approach under all time budgets outperform the BO benchmark on average. This is especially true for the versions of SOS2-CD with acceleration strategies. The larger time budgets allow accelerated SOS2-CD to offer robust performance. In general, the trajectories of SOS2-CD with multidimensional search (that is, SOS2-CD-MD and SOS2-CD-MD-R) converge more quickly, allowing more trajectories to be computed within a given time limit. BF-CD is extremely slow and did not terminate before the 12-hour time limit in any of our runs. We report the performance statistics for BF-CD corresponding to the best solutions found within the computational time budgets, prior to termination. While the best-case runs of BF-CD provide a slight edge over all benchmarks (of merely $200), the average-case and worst-case performance is significantly worse than that of our methods. While BF-CD is more thorough for a single random initialization, it is too computationally intensive to properly explore the search region. Note that warm-starts did not provide much additional value for longer time budgets, and hence warm-start approaches are omitted from Table 5. For additional analyses of the solution times and SOS2 optimality gaps, see Appendix E.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline Time & Algorithm & \multicolumn{3}{c}{Trajectories} & \multicolumn{3}{c}{Objective (Thousand USD)} \\ limit & & Min & Avg. & Max & Min & Avg. & Max \\ \hline \multirow{8}{*}{1 hour} & BO & - & - & - & \(-28.0\) & \(0^{*}\) & \(18.6\) \\ & BF-CD & 0 & 0 & 0 & - & - & \(4.8\) \\ & SOS2-CD & \(2\) & \(3.8\) & \(5\) & \(-103.8\) & \(13.5\) & \(24.1\) \\ & SOS2-CD-MD & \(2\) & \(4.0\) & \(7\) & \(\mathbf{-5.7}\) & \(\mathbf{19.3}\) & \(24.1\) \\ & SOS2-CD-R & \(2\) & \(3.8\) & \(5\) & \(-77.6\) & \(17.1\) & \(\mathbf{24.4}\) \\ & SOS2-CD-MD-R & \(2\) & \(4.2\) & \(6\) & \(-20.5\) & \(19.0\) & \(23.7\) \\ \hline \multirow{8}{*}{6 hours} & BO & - & - & - & \(7.7\) & \(17.9\) & \(23.9\) \\ & BF-CD & 0 & 0 & 0 & \(-2132.7\) & \(-791.9\) & \(\mathbf{24.6}\) \\ & SOS2-CD & 16 & 20.1 & 25 & \(-2.7\) & \(22.3\) & \(24.3\) \\ & SOS2-CD-MD & 15 & 21.3 & 28 & 22.0 & \(23.3\) & \(24.1\) \\ & SOS2-CD-R & 13 & 19.7 & 24 & \(\mathbf{22.4}\) & \(\mathbf{23.8}\) & \(24.4\) \\ & SOS2-CD-MD-R & 15 & 21.4 & 26 & 21.1 & \(23.3\) & \(24.3\) \\ \hline \multirow{8}{*}{12 hours} & BO & - & - & - & \(18.7\) & \(21.9\) & \(23.9\) \\ & BF-CD & 0 & 0 & 0 & \(-2003.4\) & \(-112.7\) & \(\mathbf{24.6}\) \\ \cline{1-1} & SOS2-CD & 32 & 39.6 & 50 & 22.0 & \(23.7\) & \(24.3\) \\ \cline{1-1} & SOS2-CD-MD & 32 & 42.4 & 54 & 22.8 & \(23.7\) & \(24.2\) \\ \cline{1-1} & SOS2-CD-R & 26 & 38.2 & 47 & \(\mathbf{23.5}\) & \(\mathbf{24.0}\) & \(24.4\) \\ \cline{1-1} & SOS2-CD-MD-R & 31 & 42.7 & 52 & 22.2 & \(23.7\) & \(24.4\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Objective function statistics with varying time budgets and 50 trials each, expressed in terms of surplus compared to 1-hour BO performance. \({}^{*}\) Average BO performance = $3,634,074.
## 5 Insights from Practical Case Study
To inform real-world policymakers' decisions, we obtain practical results with the PADP model over the Greater Boston Area case study described in Section 5.1. Section 5.2 confirms that our model yields interpretable outputs with prices in realistic ranges. An equity-oriented case study in Section 5.3 underlines the value of accurately capturing passenger preferences. Section 5.4 demonstrates the value of cooperative pricing. All results are obtained with the SOS2-CD-MD-R approach and a 12-hour time limit.
### Greater Boston Area Case Study
We model a potential pricing alliance between the Massachusetts Bay Transportation Authority (MBTA) and a TNC like Uber or Lyft in the Greater Boston Area. We use Lyft data for generating case study inputs because of fare parameter data availability (Lyft Inc. 2020). The MBTA subsidizes Uber and Lyft trips as part of its on-demand paratransit program called The RIDE Flex (MBTA 2021). We model an alliance with a wider passenger scope that aligns with MBTA goals outlined in a recent report (MDOT 2019). In this report, the MBTA identified 14 towns (called "urban gateways") adjacent to the commuter rail network whose residents had the greatest likelihood of utilizing--and benefiting from--targeted transit expansion efforts. We identify these 14 towns as the _service region_ of the potential pricing alliance, as depicted in Figure 3.
Our case study integrates many datasets describing travel characteristics in the Greater Boston Area during the weekday morning commute (6-10 am). We consider passenger travel patterns for those who
Figure 3: Partial map of urban gateways, as identified by the MBTA (MDOT 2019). The regions are identified with solid-line bounding boxes. The dashed-line bounding box demarcates the region that we identified as the inner city. (Urban gateway not depicted: Brockton, located south of the inner city.)
commute from the service region to the inner city (Boston and Cambridge), or those who commute locally within the service region. We define a _local commute_ as either working in the town of residence or in an adjacent town that is also part of the service region. For example, commutes between Salem and Lynn or between Burlington and Melrose are considered local. We use Origin-Destination Employment Statistics from the Longitudinal Employer-Household Dynamics (LODES) datasets provided by the U.S. Census Bureau to approximate the commuting population at a census tract level (U.S. Census Bureau 2017).
We obtain MBTA's commuter rail network data using MBTA General Transit Feed Specification (GTFS) data (MBTA 2018), while the MOD operator corresponds to all potential direct travel options and first-mile connections in the service region. We construct route choice sets for each passenger type by first executing Yen's \(k\)-shortest paths algorithm (Yen 1970). We then include in each passenger type's route choice set their fastest option of each mode: transit-only, MOD-only, hybrid (MOD first mile to transit), and driving (which corresponds to the outside option). We represent the utility of each route option as a linear combination of travel and wait time; incurred costs, including fare, gasoline, and parking fees as appropriate; and mode discomfort relative to the convenience of driving. The discount activation categories correspond to town pairs. In other words, a discount might be activated from any town in the service region to the inner city, to an adjacent town that is also part of the service region, or to itself. In total, there are 77 discount activation categories in the case study. We allow each operator to set fares up to a maximum of $10 for base fares, $5 per mile for distance-based markups, and a maximum of 0.5 for the discount multiplier. Transit-only routes are not eligible for discounts, while all others are. Appendix G provides more details about the case study.
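As an indicative sketch of how the fixed (fare-independent) utility component of a route is assembled from these ingredients, one could write the following; all coefficient values and field names are placeholders, not the calibrated case-study values of Appendix G.

```
# Illustrative construction of the fixed utility component u_ir for a route
# (everything except the alliance-set fare p_r, which enters the choice model
# separately through the price sensitivity alpha_i).
struct RouteOption
    travel_time::Float64   # in-vehicle time, minutes
    wait_time::Float64     # waiting / transfer time, minutes
    fixed_cost::Float64    # gasoline, parking, and other non-fare costs
    discomfort::Float64    # mode discomfort relative to driving
end

function fixed_utility(r::RouteOption; b_time = -0.1, b_wait = -0.15,
                       b_cost = -0.5, b_disc = -1.0)
    return b_time * r.travel_time + b_wait * r.wait_time +
           b_cost * r.fixed_cost + b_disc * r.discomfort
end
```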
### Model Validation
First, we will demonstrate that the allied fare-setting model sets route prices in realistic and reasonable ranges from a practical standpoint. Further, we find that the optimal fares intuitively reflect various portfolios of alliance priorities. We vary the objective function coefficients \(\boldsymbol{\mu}\), i.e., relative weights among the three performance metrics: revenue, passenger utility, and VMT. In particular, we focus on regimes with varying combinations of priorities between revenue and passenger utility (i.e., setting \(\mu^{VMT}=0\)), as well as between revenue and VMT (i.e., setting \(\mu^{PAX}=0\)). We do not emphasize regimes that completely exclude revenue as a priority, because they intuitively result in zero fares and are not interesting from an analysis standpoint. Thus, all experiments have \(\mu^{REV}>0\). We also do not analyze regimes that vary all three metrics for reasons explained later in this section. Table 6 presents summary statistics about route prices, system utilization, and system-wide performance metrics across the tested priority regimes. Figure 4 depicts optimal fare parameters and discount multipliers, demonstrating the different fare-setting strategies of each regime.
We extract a few representative solutions from Table 6 and present them in Table 7 alongside real-world fares, and the corresponding ridership values obtained by our model for the real-world fares. We compute the MOD base fare by combining Lyft's published minimum fare and service fee, and we compute their
\begin{table}
\begin{tabular}{c c c|c c c|c c c|c} \hline \multicolumn{6}{c}{Objective weights} & \multicolumn{4}{c|}{Route price (\$)} & \multicolumn{4}{c|}{Performance} & System \\ \(\mu^{PAX}\) & \(\mu^{REV}\) & \(\mu^{VMT}\) & Min. & Mean & Max. & PAX & REV & VMT & util. \% \\ \hline
[MISSING_PAGE_POST]
& 1.0 & \$0.00 & \$0.00 & \$0.00 & 100.00\% & 0.00\% & 100.00\% & 50.32\% \\ \hline \end{tabular}
\end{table}
Table 6: Aggregate metrics for different operator priority regimes. System-wide performance metrics are normalized against best possible values. System utilization (util.) is the alliance's total market share, i.e. the percentage of travelers electing to travel on a transit, MOD, or hybrid option instead of driving a single-occupancy vehicle.
Figure 4: Optimal fares across varying alliance priority regimes.
markup by combining the published markups per unit distance and time, assuming an average vehicle speed of 25 mph (Lyft Inc. 2020). We ignore fare multipliers utilized to manage the two-sided market, since they are outside the scope of this work; note that this may lead to slight undercounting of real MOD fares and slight overcounting of the real ridership. The real-world MBTA commuter rail base fare and markup are interpolated from its zone-based pricing structure, which assigns higher prices to farther zones (MBTA 2020). While all regimes have slightly lower ridership than that under real fares, all benchmarks achieve non-negligible improvements in system-wide metrics. In particular, the REV, REV+PAX, and REV+VMT regimes achieve objective value increases of 47.4%, 5.6%, and 1.8%, respectively.
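For illustration, folding a per-minute charge into a per-mile markup at an assumed average speed of 25 mph amounts to the following small calculation (the rates shown are placeholders, not Lyft's published values):

```
# Hypothetical illustration of converting published per-mile and per-minute
# rates into a single effective distance-based markup at 25 mph.
per_mile_rate    = 1.00        # $ per mile (placeholder)
per_minute_rate  = 0.30        # $ per minute (placeholder)
minutes_per_mile = 60 / 25     # at an assumed average speed of 25 mph
effective_markup = per_mile_rate + per_minute_rate * minutes_per_mile
# => 1.72 dollars per mile
```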
As shown in Table 6, each set of priorities induces interpretable optimal prices and passenger decisions. The minimum, mean, and maximum real-world route prices in the service region are respectively $4, $10.23, and $54.46, while those given by our model are in the range $0-$60.37; thus optimal fares are set at the correct order of magnitude across all regimes. Intuitively, route prices are highest in the revenue-maximizing regime, and they decrease gradually as the importance of VMT or passenger utility increases. System-wide performance metrics are normalized against the best possible values across tested regimes, naturally achieved by each metric's corresponding single-objective optimization. The lowest revenue is achieved in regimes that solely maximize passenger utility or minimize VMT, because very low prices generate very little revenue but increase passenger happiness and entice more passengers away from single-occupancy vehicles. Analogously, the highest fares achieve the highest revenue, with more passengers electing to travel outside the system, lowering overall passenger utility. System-wide outcomes vary smoothly with gradually changing alliance priorities. Note that even under single-objective revenue maximization, route prices remain in the ballpark of real-world fares. While both base fares and the discount multiplier reach their upper limits, both optimal markups stay in the interior of the allowable range. We attribute the model's realism to the incorporation of the endogenous passenger choice model into the fare-setting model, as opposed to modeling exogenous demand parametrically. These intuitive observations confirm that our fare-setting model is suitable for generating trustworthy qualitative insights.
\begin{table}
\begin{tabular}{l|l l l l l l l} \hline \hline
**Priority regime** & \multicolumn{2}{l}{**Base fares (\$)**} & \multicolumn{2}{l}{**Markups (\$/mile)**} & \multicolumn{2}{l}{**Discount**} & \multicolumn{1}{l}{**\% routes**} & \multicolumn{1}{l}{**Number of**} \\ & **MOD** & **Transit** & **MOD** & **Transit** & **Multiplier** & **discounted** & **travelers** \\ \hline Real fares & $4.53 & $4.50 & $1.07 & $0.16 & 0\% & 0\% & 25,816 \\ REV & $10.00 & $10.00 & $2.37 & $0.63 & 50\% & 31.20\% & 18,309 \\ REV+PAX & $10.00 & $8.50 & $0.56 & $0.25 & 31\% & 70.13\% & 23,793 \\ REV+VMT & $10.00 & $1.83 & $0.16 & $0.00 & 50\% & 6.50\% & 25,051 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Summary statistics of representative priority regimes. REV, REV+PAX, and REV+VMT regimes respectively have objective weights (\(\mu^{REV},\mu^{PAX},\mu^{VMT}\)) equal to \((1,0,0),(1,1,0)\), and \((1,0,1)\). “% routes discounted” provides the proportion of routes in the system with activated discounts. “Number of travelers” is the total alliance passenger count originating within the alliance service region.
At a quick glance, minimizing VMT and maximizing passenger utility seem to achieve similar outcomes in Table 6--lower prices and higher system utilization--raising the question of why it is worth modeling them separately. We turn to Figure 4 to illustrate how each objective yields qualitatively very different designs. As VMT minimization increases in importance (moving from the middle towards the right in Figure 4a), the markup is zeroed out, equalizing fares across longer and shorter routes. The elimination of a distance-based markup entices more longer-distance commuters to travel on the allied network, thus lowering VMT. When the system maximizes passenger utility, a more nuanced fare structure emerges to address heterogeneous passenger preferences. All pricing levers are employed: base fares, markups, discount multipliers, and discount activations. Higher markups and base fares are coupled with more numerous discounts across hybrid options, illustrated in the left halves of both Figures 4a and 4b. Thus, prioritization of each objective (PAX versus VMT) results in similar system-wide performance metrics by qualitatively different means. To extract the corresponding fare designs, we consider case studies prioritizing at most one of PAX and VMT at a time, with varying weights for revenue. Section 5.3 further investigates geographic factors.
Finally, note that the variation in system utilization due to allied fare-setting is small compared to the integrated network's total loads. Before the pandemic, MBTA's bus load factor during peak hours was already below 75% (Hicks 2017). MBTA commuter rail transported around 120k passengers on an average weekday in 2018 (MBTA 2022), with 81.2% of inbound ridership on peak trains (BRMPO 2012), yielding approximately 49k travelers on MBTA commuter rail during the AM rush. Moreover, approximately 116k daily TNC rides were destined for Boston in 2018, while another 15.3k originated in the alliance service region every day (MDPU 2018). In contrast, the ridership numbers in Table 7 show that the alliance's ridership under real fares is a small proportion of the entire integrated network and that the system is capable of accommodating all demand redistribution resulting from allied fare-setting. In fact, Table 7 shows that the aggregate alliance ridership is slightly lower than under real fares. Thus, drops on other routes will compensate for the slight ridership increases that may happen on certain routes under our proposed pricing alliance, so system-wide transit load factors and MOD detour times are expected to remain largely unchanged as a result of the pricing alliance. We conclude that the linked resource reallocation problem need not be considered when looking for rapid gains through pricing alliance formation.
### Ensuring Equitable Access through a Refined Income-Aware Model Specification
Our fare-setting model captures passengers' travel preferences and travel decisions when designing fares, a critical step toward satisfying passenger needs. However, different groups of passengers may have very different preferences. Ignoring such differences can lead to inequitable and socially undesirable outcomes. After all, equity is a key driver for integrating on-demand services into public transportation options. Acknowledging this challenge, in this section we further refine our choice model and quantify the impact of this nuanced model specification on system-wide metrics compared to an aggregated, average-case choice model.
To this end, we augment our case study. Towns targeted for transit expansion by the MBTA in our case study have wide-ranging median household incomes, translating into varying price sensitivities. Affluent travelers' route choice decisions are less susceptible to changing fares than those of low-income travelers. To partly account for such passenger heterogeneity, we compute a ratio of each town's median household income to the average of the median household incomes across the entire service region (U.S. Census Bureau 2018). We use this income ratio to scale passengers' price sensitivities. See Appendix G for details.
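A minimal sketch of this refinement is given below; the inverse scaling of the price sensitivity by the income ratio is our own illustrative reading of the refinement, and the exact functional form is deferred to Appendix G.

```
# Income-aware scaling of price sensitivities (illustrative only).
# `hhi` maps each town to its median household income.
function scaled_price_sensitivity(alpha_base, town, hhi::Dict{String,Float64})
    income_ratio = hhi[town] / (sum(values(hhi)) / length(hhi))
    # Assumed direction: lower-income towns are more price sensitive, so the
    # (negative) sensitivity is amplified when income_ratio < 1.
    return alpha_base / income_ratio
end

hhi = Dict("Lawrence" => 41_600.0, "Winchester" => 159_500.0)  # values from Table 8
scaled_price_sensitivity(-0.5, "Lawrence", hhi)   # more negative than -0.5
```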
We compare the allied system ridership across priority regimes and towns under fares corresponding to the base (i.e., income-agnostic) as well as the refined (i.e., income-aware) choice models. We first calculate fares using the fare-setting model incorporating the income-agnostic and income-aware choice models separately, and then evaluate both fare designs by calculating (and reporting in Table 8) ridership using only the income-aware choice model. Due to the use of the income-aware model, the system ridership in the REV+PAX regime increases by 13% on average for the towns with below-average median household income (HHI), compared to a less than 2% average increase for the towns with above-average median HHI. While ridership increases across the board due to generally lower fares, the greatest increases occur in Lawrence and Lowell, which are the two towns with the lowest median household incomes. The lower middle-income bracket (Lynn, Brockton, Salem, Haverhill, and Framingham) sees the next-highest ridership increase. Similar trends are observed in the REV regime: ridership increases across all towns because higher revenue can be achieved with lower fares and higher volume. The above-average income towns gain ridership by only 10% on average, while the below-average income towns have a 37% gain.
\begin{table}
\begin{tabular}{l c c|c c c|c c c|c c c|c} \hline \hline & \multicolumn{2}{c|}{**HHI**} & \multicolumn{2}{c|}{**Dist.**} & \multicolumn{3}{c|}{**REV+PAX**} & \multicolumn{3}{c|}{**REV**} & \multicolumn{3}{c|}{**REV+VMT**} & \multicolumn{1}{c}{} \\
**Town** & **\$K** & **Miles** & **Base** & **Refined** & **Diff.** & **Base** & **Refined** & **Diff.** & **Base** & **Refined** & **Diff.** & **Real** \\ \hline Lawrence & 41.6 & 32.4 & 1585 & 1982 & 125\% & 918 & 1881 & 205\% & 1331 & 2065 & 155\% & 1566 \\ Lowell & 52.0 & 30.8 & 2000 & 2525 & 126\% & 1285 & 1565 & 122\% & 1852 & 2633 & 142\% & 2115 \\ Lynn & 54.6 & 13.5 & 3539 & 3769 & 107\% & 2283 & 2560 & 112\% & 3615 & 3188 & 88\% & 4023 \\ Brockton & 55.1 & 24.6 & 3498 & 3709 & 106\% & 1949 & 3364 & 173\% & 3416 & 3919 & 115\% & 3872 \\ Salem & 65.6 & 18.9 & 1885 & 2043 & 108\% & 1189 & 1382 & 116\% & 1961 & 1786 & 91\% & 2171 \\ Haverhill & 67.6 & 39.1 & 1605 & 1746 & 109\% & 1068 & 1214 & 114\% & 1630 & 1809 & 111\% & 1801 \\ Framingham & 79.1 & 26.1 & 1203 & 1327 & 110\% & 780 & 936 & 120\% & 1093 & 1117 & 102\% & 1198 \\ Waltham & 85.7 & 11.8 & 1974 & 1873 & 95\% & 1597 & 1713 & 107\% & 2101 & 1946 & 93\% & 2249 \\ Woburn & 88.7 & 14.3 & 1795 & 1828 & 102\% & 1479 & 1633 & 110\% & 2047 & 1910 & 93\% & 2199 \\ Stoneham & 94.8 & 11.6 & 761 & 785 & 103\% & 610 & 691 & 113\% & 880 & 830 & 94\% & 948 \\ Wakefield & 95.3 & 13.3 & 1004 & 1007 & 100\% & 783 & 889 & 114\% & 1140 & 1076 & 94\% & 1228 \\ Melrose & 103.7 & 9.8 & 810 & 821 & 101\% & 637 & 711 & 112\% & 889 & 842 & 95\% & 949 \\ Burlington & 105.4 & 17.6 & 1091 & 1144 & 105\% & 975 & 1053 & 108\% & 1264 & 1193 & 94\% & 1340 \\ Reading & 112.6 & 16.6 & 846 & 871 & 103\% & 703 & 787 & 112\% & 972 & 927 & 95\% & 1038 \\ Winchester & 159.5 & 10.8 & 197 & 207 & 105\% & 193 & 198 & 103\% & 218 & 210 & 96\% & 227 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Allied system’s daily morning rush ridership with (Refined) and without (Base) the choice model refinement, their percentage difference (Diff.), real-world ridership (Real), median household income (HHI), and average distance (Dist.) of alliance routes originating in the corresponding town and ending in the inner city.
In contrast to the 7% and 23% average ridership gains in REV+PAX and REV regimes, average ridership grows by less than 4% in the REV+VMT regime. But interestingly, this regime has significantly larger ridership increases for longer distance passengers. In particular, the five towns that are farthest from the inner city area--Lawrence, Lowell, Brockton, Haverhill, and Framingham--are the exact five towns with ridership increases. Their average increase is about 25% while the remaining 10 towns see an average 7% drop in ridership. Thus, the REV+VMT regime increases access to the allied network for commuters who are farther from the inner city. Overall, the prioritized system-wide objectives improved by 19.16%, 0.35% and 0.99% respectively for REV+PAX, REV, and REV+VMT regimes compared to the real-world fares.
These results underline the importance of capturing passenger preference heterogeneity to amplify our model's practical impact. A choice model that reflects passenger preferences more accurately improves system-wide outcomes and especially maximizes passenger benefits. We conclude that our income-aware refined model improves transportation equity for passengers--as compared to the income-agnostic aggregate model--making it a valuable tool for transit agencies to incorporate into strategic decision-making.
### Quantifying the Value of Cooperation
To quantify the value of operator cooperation for operators and passengers, we solve the non-cooperative fare-setting model for all allied priorities depicted in Table 6. In each experiment, the transit operator's priorities are identical to alliance priorities, whereas the MOD operator always maximizes revenue. Thus, our experiment reflects an assumption that the prospective alliance will adopt the transit operator's priorities.
Figure 5 depicts the non-cooperative equilibrium fares. Discount multipliers are not included since they are not applicable in the non-cooperative setting. Table 9 compares allied outcomes to corresponding non-cooperative outcomes. In particular, we report the percentage increase in the alliance objective compared to that computed under the non-cooperative setting (transit obj. % inc.), and the percentage increase in MOD operator revenue due to the alliance (MOD rev. % inc.). We also provide the revenue allocations to both operators as determined by the revenue allocation mechanism in Section 2.4. The non-cooperative system utilization and non-cooperative average route prices are also provided for each experiment. Figure 6 illustrates passenger mode choices across all tested regimes for both allied and non-cooperative fares.
Figure 5: Non-cooperative fare parameters for different transit operator priorities.
Figure 5 illustrates that the MOD operator's revenue-maximizing strategy remains relatively constant, regardless of the transit operator's priorities. Still, when the transit operator prioritizes VMT minimization or passenger benefits maximization, the MOD operator selects higher fare parameters (mainly through higher markups) than in the corresponding allied settings. Thus, the average route price (Route price ($) Mean) columns in Tables 6 and 9 show that the non-cooperative average route prices are higher than average allied route prices in every regime except for the one where the transit operator only prioritizes revenue.
In scenarios that partially maximize passenger benefits or minimize VMT, the alliance sets MOD fare parameters lower than in the non-cooperative regime (Figure 4a vs. Figure 5). Figure 6 shows that MOD-only market shares decrease and hybrid market shares increase in the non-cooperative regime as passenger benefits or VMT are increasingly prioritized. We can conclude that, although fewer passengers utilize MOD-only
Figure 6: Market shares by mode across priority regimes for allied and non-cooperative fare-setting models.
\begin{table}
\begin{tabular}{c c c|c c c c c|c c} \hline \hline \multicolumn{3}{c|}{Transit obj. weights} & \multicolumn{2}{c}{Transit obj.} & \multicolumn{2}{c}{MOD rev.} & \multicolumn{2}{c}{MOD allied} & \multicolumn{2}{c}{Transit allied} & \multicolumn{2}{c}{System} & \multicolumn{2}{c}{Route price} \\ \(\mu_{TR}^{PAX}\) & \(\mu_{TR}^{REV}\) & \(\mu_{TR}^{VMT}\) & \% inc. & \% inc. & rev. alloc. & rev. alloc. & util. \% & ($) Mean \\ \hline
[MISSING_PAGE_POST]
& 1.0 & 1.90\% & 0.00\% & $501.90K & -$501.90K & 35.01\% & $12.83 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Allied vs. non-cooperative outcomes. Transit operator’s priorities represented as \(\mu_{TR}\).
options in non-cooperative scenarios where the transit operator is VMT- or passenger-oriented, a higher volume of passengers selects hybrid options due to the very low (or free) transit fares observed in Figure 5. Thus, the MOD operator earns more revenue in those non-cooperative scenarios in which the transit operator is more altruistic. This is reflected in the revenue allocation mechanism: observe in Table 9 that the MOD operator earns strictly more revenue in almost all regimes where the transit operator is not solely a revenue maximizer, even though the system as a whole generates strictly less total revenue, as seen in comparison with Table 6. The revenue allocation mechanism ensures that the MOD operator receives its non-cooperative earnings, despite the lower MOD fares that the alliance sets to achieve lower VMT or higher passenger benefits. This in turn reduces the transit operator's revenue allocation as high VMT or low passenger benefits are increasingly penalized. On the other hand, the transit operator always strictly improves its objective of optimizing total system-wide performance, however it chooses to define it. Interestingly, the MOD operator thus finds it in its interest to adopt the transit operator's priorities as the latter increasingly diverges from revenue maximization. In other words, _the revenue-maximizing MOD operator would not prefer a revenue-maximizing alliance_. In fact, the MOD operator would benefit most from total altruism on transit's side (an exclusive focus on either passenger utility or VMT reduction). Passengers win due to the strictly lower prices and higher system utilization that result from such alliances.
The transit operator must ultimately set the ceiling in terms of the price they are willing to pay for the alliance benefits. Table 9 shows that the transit agency runs a deficit to appease the MOD operator if its revenue emphasis is too low. A deficit alone might not be enough to dissuade the transit agency from participating in the alliance: as we have noted, every public transit mode operates at a loss, especially on-demand options (Kane, Tomer, and Puentes 2016). To weigh the financial implications of an alliance, the agency may compare the magnitude of the loss to the cost of the analogous MOD system operated by transit on their own in the absence of outsourcing through an alliance. Oftentimes, transit agencies also receive grants to fund pricing alliances (Federal Transp. Administration 2016), and the daily deficit rate can be compared to the grant award amount and intended duration. To enable a daily deficit rate that keeps pace with financial resources over time, the transit agency may propose to reset overall performance metric goals to induce different optimal fares, or to adjust the geographic and/or temporal scope of the alliance. In the end, each transit agency is expected to choose the trade-off point between its financial, passenger-focused, and environmental goals that most closely aligns with their overall policy and various practical and financial constraints. Regardless, our allied and non-cooperative fare-setting models together with our revenue allocation mechanism jointly provide a toolkit usable by transit agencies to weigh these trade-offs as they evaluate a potential pricing alliance.
## 6 Conclusions, Limitations and Future Directions
We contribute a pricing alliance design framework to enable incentive-aligned collaboration between transit agencies and MOD operators. Our allied fare-setting model captures the interdependent decisions of passengers and operators, allowing the alliance to maximize benefits for all stakeholders across the integrated
network. The prescriptive pricing framework can be generalized to different types of large-scale alliances with varying MOD operators, service populations, fare structures, service goals, and network configurations. We achieve scalability by developing a tractable two-stage fare-setting formulation equivalent to the original mixed-integer non-convex optimization problem, which we then solve with a tailored SOS2 coordinate descent approach. From a technical standpoint, our approach consistently selects significantly higher-quality solutions than benchmarks based on Bayesian Optimization, enabling additional system-wide benefits worth tens of thousands of dollars per day over the service region. Practically speaking, the high-quality solutions from our allied fare-setting model and our dedicated revenue allocation mechanism work together to align revenue-oriented MOD operators with transit goals of passenger utility and single-occupancy VMT reduction. In other words, cooperative pricing results in win-win-win outcomes for passengers, MOD operators, and transit agencies. Finally, the income-aware nuanced version of our fare-setting model helps enhance passenger equity-related goals: by tuning passenger route choice models, the alliance can prioritize lower fares and higher utilization for low-income or long-distance commuters.
Our analysis is based on a few assumptions which can be relaxed in future work. We assumed that the MOD operator serves as a contractor to the transit agency and agrees to set static fares for trips in the integrated system. While this is one good way to ensure public sector prices are transparently communicated, it may also be possible to set fare schemes that allow MOD operators, especially TNCs, to maintain their dynamic prices, perhaps through the transit operator subsidizing the cost of a passenger's trip up to a fixed dollar amount. Additionally, we model average-case travel times over a fixed set of route options for the MOD operator's portion of the network to represent average-case operations, which simplifies their typically dynamic routing scheme. In other words, we consider the case where the alliance sets one permanent fare scheme that is optimized for average-case performance. Future research may consider optimizing for the worst-case performance, and/or integrating dynamic routing into the fare-setting model if the alliance prefers dynamic fares, requiring the integration of two complex problem classes. Finally, we did not incorporate joint resource reallocation into the pricing scheme, due to the observation that the system is capable of accommodating all demand re-distributions attributed to changing prices. This assumption works well for our case study over the Greater Boston Area because the allied system's ridership is a small fraction of the larger service region with a high-capacity existing network. Future research could consider relaxing this assumption to generalize the analysis by jointly modeling the capacity allocation and pricing decisions.
## Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant no. 1122374.
|
2303.12360 | Automatically Predict Material Properties with Microscopic Image Example
Polymer Compatibility | Many material properties are manifested in the morphological appearance and
characterized with microscopic image, such as scanning electron microscopy
(SEM). Polymer miscibility is a key physical quantity of polymer material and
commonly and intuitively judged by SEM images. However, human observation and
judgement for the images is time-consuming, labor-intensive and hard to be
quantified. Computer image recognition with machine learning method can make up
the defects of artificial judging, giving accurate and quantitative judgement.
We achieve automatic miscibility recognition utilizing convolution neural
network and transfer learning method, and the model obtains up to 94% accuracy.
We also put forward a quantitative criterion for polymer miscibility with this
model. The proposed method can be widely applied to the quantitative
characterization of the microstructure and properties of various materials. | Zhilong Liang, Zhenzhi Tan, Ruixin Hong, Wanli Ouyang, Jinying Yuan, Changshui Zhang | 2023-03-22T07:51:32Z | http://arxiv.org/abs/2303.12360v2 | # Automatically Predict Material Properties with Microscopic Image:
###### Abstract
Many material properties are manifested in morphological appearance and characterized with microscopic images, such as scanning electron microscopy (SEM). Polymer compatibility is a key physical quantity of polymer materials and is commonly and intuitively judged from SEM images. However, human observation and judgement of the images is time-consuming, labor-intensive and hard to quantify. Computer image recognition with machine learning methods can compensate for the shortcomings of manual judgement, giving accurate and quantitative results. We achieve automatic compatibility recognition utilizing convolutional neural networks and transfer learning, and the model obtains up to 94% accuracy. We also put forward a quantitative criterion for polymer compatibility with this model. The proposed method can be widely applied to the quantitative characterization of the microstructure and properties of various materials.
Machine Learning, Transfer Learning, Polymer Compatibility, SEM, Computer Image Recognition
## 1 Introduction
Methods for computing and characterizing material properties have developed rapidly in recent years. In terms of material property calculations, methods such as molecular dynamics (MD) simulations have achieved good results in free energy calculation[1, 2] and drug discovery[3, 4]. In addition to material property calculation, material property characterization is also a crucial part of materials research. Among the various characterization methods, scanning electron microscopy (SEM) is one of the most important means of observing microscopic structure to determine macroscopic and intrinsic properties of materials. At present, this relies heavily on human observation and judgement, which is time-consuming, labor-intensive and hard to quantify. The development of artificial intelligence and machine learning makes automatic image recognition applicable[5]. With the help of computer image recognition technology, we can build an effective bridge from the micrograph to macroscopic properties. Allen et al.[6] used convolutional neural networks (CNNs) to identify the common functional groups in gas-phase FTIR (Fourier transform infrared spectroscopy), simplifying the recognition of infrared spectra. Lee et al.[7] developed a GA-based image analysis method to accurately analyze the morphological properties of nanoparticles and achieved 99.75% accuracy. Gao et al.[8] obtained a functional model relating material T\({}_{\text{g}}\) to image characteristics through machine learning by extracting microscopic image features of materials.
However, directly bridging microscopic images and macroscopic material properties remains a challenge worth exploring. Considering that polymer composite materials are important engineering materials and that their compatibility is a key index of their performance, we chose quantitative judgement of polymer compatibility from SEM images as an example. Polymer blend materials involve blending two or more polymers to produce a new material. These blends have been extensively researched due to their capacity to combine the advantageous properties of different polymers and their ease of synthesis and processing. Researchers have utilized machine
2302.05535 | K-Spectral Sets | We use results in [M. Crouzeix and A. Greenbaum,Spectral sets: numerical
range and beyond, SIAM Jour. Matrix Anal. Appl., 40 (2019), pp. 1087-1101] to
derive a variety of K-spectral sets and show how they can be used in some
applications. We compare the K-values derived here to those that can be derived
from a straightforward application of the Cauchy integral formula, by replacing
the norm of the integral by the integral of the resolvent norm. While, in some
cases, the new upper bounds on the optimal K-value are much tighter than those
from the Cauchy integral formula, we show that in many cases of interest, the
two values are of the same order of magnitude, with the bounds from the Cauchy
integral formula actually being slightly smaller. We give a partial explanation
of this in terms of the numerical range of the resolvent at points near an
ill-conditioned eigenvalue. | Anne Greenbaum, Natalie Wellen | 2023-02-10T22:28:58Z | http://arxiv.org/abs/2302.05535v2 | # \(K\)-Spectral Sets +
###### Abstract
We use results in [M. Crouzeix and A. Greenbaum, _Spectral sets: numerical range and beyond_, SIAM Jour. Matrix Anal. Appl., 40 (2019), pp. 1087-1101] to derive a variety of \(K\)-spectral sets and show how they can be used in some applications. We compare the \(K\) values derived here to those that can be derived from a straightforward application of the Cauchy integral formula, by replacing the norm of the integral by the integral of the resolvent norm. While, in some cases, the new upper bounds on the optimal \(K\) value are _much_ tighter than those from the Cauchy integral formula, we show that in many cases of interest, the two values are of the same order of magnitude, with the bounds from the Cauchy integral formula actually being slightly smaller. We give a partial explanation of this in terms of the numerical range of the resolvent at points near an ill-conditioned eigenvalue.
2000 Mathematical subject classifications : 47A25 ; 47A30
**Keywords :** numerical range, spectral set
## 1 Introduction
Let \(A\) be an \(n\) by \(n\) matrix or a bounded linear operator on a complex Hilbert space \((H,\langle\cdot,\cdot\rangle,\|\cdot\|)\). A closed set \(\Omega\subset\mathbb{C}\) is a \(K\)-spectral set for \(A\) if the spectrum of \(A\) is contained in \(\Omega\) and if, for all rational functions \(f\) bounded in \(\Omega\), the following inequality holds:
\[\|f(A)\|\leq K\|f\|_{\Omega}, \tag{1}\]
where \(\|\cdot\|\) on the left denotes the norm in \(H\) and \(\|\cdot\|_{\Omega}\) on the right denotes the \(\infty\)-norm on \(\Omega\). It was shown in [5] that the closure of the numerical range
\[W(A):=\{\langle Aq,q\rangle:q\in H,\;\|q\|=1\} \tag{2}\]
is a \((1+\sqrt{2})\)-spectral set for \(A\). This was extended in [4] to show that other regions in the complex plane are \(K\)-spectral sets. In particular, it was shown that the numerical range with a circular hole or cutout is a \((3+2\sqrt{3})\)-spectral set.
In this paper, we use theorems proved in [4] to derive values of \(K\) for which (1) holds for other regions \(\Omega\). A simple way to find such a \(K\) value for a given region \(\Omega\) containing the spectrum
of \(A\) in its interior is to use the Cauchy integral formula, replacing the norm of the integral by the integral of the resolvent norm:
\[f(A)=\frac{1}{2\pi i}\int_{\partial\Omega}(\zeta I-A)^{-1}f(\zeta)\,d\zeta \Rightarrow\|f(A)\|\leq\frac{1}{2\pi}\left(\int_{\partial\Omega}\|(\zeta I-A)^ {-1}\|\ |d\zeta|\right)\|f\|_{\Omega}.\]
Thus one can always take
\[K=\frac{1}{2\pi}\int_{\partial\Omega}\|(\zeta I-A)^{-1}\|\ |d\zeta|. \tag{3}\]
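As a concrete illustration of (3), the sketch below (our own Python/NumPy code, not from the paper; the matrix, the circular contour, and the discretization level are arbitrary choices) approximates the bound by discretizing a circle enclosing the spectrum and summing resolvent norms.

```python
# Illustrative sketch (not from the paper): approximate the bound (3) for a
# circular contour |z - center| = radius enclosing Sp(A).
import numpy as np

def cauchy_K(A, center, radius, n_points=2000):
    """(1/2pi) * integral of ||(zI - A)^{-1}||_2 over the circle, by a Riemann sum."""
    n = A.shape[0]
    thetas = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    zs = center + radius * np.exp(1j * thetas)
    ds = 2.0 * np.pi * radius / n_points                       # arc-length element
    total = sum(np.linalg.norm(np.linalg.inv(z * np.eye(n) - A), 2) for z in zs)
    return total * ds / (2.0 * np.pi)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) / np.sqrt(8)                   # spectrum roughly in the unit disk
print(cauchy_K(A, center=0.0, radius=3.0))
```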
The main goal of [4] was to produce \(K\) values that are independent of \(A\) for certain regions \(\Omega\) (that do depend on \(A\)), but it was also hoped that the values derived there would be smaller than those in (3). We will compare these \(K\) values for various sets \(\Omega\). For some sets, we will also compare these values to what we believe to be the optimal \(K\) value. This is computed numerically using an optimization code and, at least, provides a _lower bound_ on \(K\).
The main theorem in [4] (Theorem 1 in this paper) relates the value of \(K\) not to \(\frac{1}{2\pi}\) times a boundary integral of the resolvent norm but to \(\frac{1}{\pi}\) times a boundary integral of the absolute value of the minimum point in the spectrum of the Hermitian part of a certain unit scalar times the resolvent. This is \(\frac{1}{\pi}\) times the infimum of the real part of the numerical range of this unit scalar times the resolvent. If the absolute value of this infimum turns out to be much less than the _numerical radius_ (the supremum of the absolute values of points in the numerical range of the resolvent, which is between \(\frac{1}{2}\) and \(1\) times the norm of the resolvent), then Theorem 1 may give a much smaller \(K\) value than that in (3); on the other hand, if the absolute value of this infimum turns out to be almost equal to the numerical radius of the resolvent, then the two \(K\) values may be close, with formula (3) actually producing a somewhat smaller value. We show that this latter situation holds in a number of cases of interest and we give a partial explanation as to why.
The organization of this paper is as follows. In section 2 we establish notation and review results from [4]. In section 3 we extend these results slightly and show how they can be applied to an arbitrary region containing the spectrum of \(A\) to determine a value of \(K\) for which the region is a \(K\)-spectral set. In section 4 we apply the extended results to a variety of problems. We consider block diagonal matrices and show how the numerical range can be divided into disjoint components that constitute a \(K\)-spectral set for the matrix. We also consider relevant \(K\)-spectral sets for describing the behavior of continuous and discrete time dynamical systems. In section 5 we give further comparisons between the values of \(K\) determined by the methods of sections 2 and 3 and those defined by (3), and we provide an explanation for the observed results.
## 2 Results from [4]
### Notation
Let \(f\) be a rational function bounded in a closed set \(\Omega\) containing the spectrum of \(A\). Assume that the boundary \(\partial\Omega\) is rectifiable and has a finite number of connected components. From the Cauchy integral formula, we can write
\[f(z)=\frac{1}{2\pi i}\int_{\partial\Omega}\frac{f(\zeta)}{\zeta-z}\,d\zeta, \ \ f(A)=\frac{1}{2\pi i}\int_{\partial\Omega}(\zeta I-A)^{-1}f(\zeta)\,d\zeta.\]
Letting \(s\) denote arc length, going in a counter-clockwise direction along \(\partial\Omega\), and letting \(\partial\omega\subset\mathbb{R}\) denote the values of \(s\) as \(\zeta(s)\) traverses \(\partial\Omega\), the above equations can be written in the form
\[f(z)=\frac{1}{2\pi i}\int_{\partial\omega}\frac{f(\zeta(s))}{\zeta(s)-z}\zeta^ {\prime}(s)\,ds,\ \ f(A)=\frac{1}{2\pi i}\int_{\partial\omega}(\zeta(s)I-A)^{-1}f(\zeta(s)) \zeta^{\prime}(s)\,ds.\]
We will also use the Cauchy transform of the complex conjugate \(\bar{f}\):
\[g(z):=C(\overline{f},z):=\frac{1}{2\pi i}\int_{\partial\omega}\frac{\overline {f(\zeta(s))}}{\zeta(s)-z}\zeta^{\prime}(s)\,ds,\ \ g(A):=\frac{1}{2\pi i}\int_{\partial\omega}(\zeta(s)I-A)^{-1}\overline{f( \zeta(s))}\zeta^{\prime}(s)\,ds.\]
Finally we define the transform of \(f\) by the double layer potential kernel,
\[\mu(\zeta(s),z):=\frac{1}{\pi}\frac{d}{ds}(\arg(\zeta(s)-z))=\frac{1}{2\pi i} \left(\frac{\zeta^{\prime}(s)}{\zeta(s)-z}-\frac{\overline{\zeta^{\prime}(s)} }{\overline{\zeta(s)}-\bar{z}}\right), \tag{4}\]
\[\mu(\zeta(s),A)=\frac{1}{2\pi i}\left((\zeta(s)I-A)^{-1}\zeta^{\prime}(s)-( \overline{\zeta(s)}I-A^{*})^{-1}\overline{\zeta^{\prime}(s)}\right). \tag{5}\]
With these definitions, we can write
\[S(f,z):=f(z)+\overline{g(z)}=\int_{\partial\omega}f(\zeta(s))\mu(\zeta(s),z) \,ds,\]
\[S(f,A):=f(A)+g(A)^{*}=\int_{\partial\omega}f(\zeta(s))\mu(\zeta(s),A)\,ds.\]
Further, note that \(S(1,A)=2I\) since
\[\int_{\partial\omega}\mu(\zeta(s),A)\,ds=\frac{1}{2\pi i}\int_{\partial\omega }(\zeta(s)I-A)^{-1}\zeta^{\prime}(s)\,ds+\left(\frac{1}{2\pi i}\int_{\partial \omega}(\zeta(s)I-A)^{-1}\zeta^{\prime}(s)\,ds\right)^{*}=I+I^{*}=2I.\]
### Main Results from [4]
Define
\[c_{1}:=\sup\{\max_{z\in\Omega}|C(\bar{f},z)|:f\text{ a rational function},\|f\|_{\Omega}\leq 1\}.\]
It is shown in [4, Lemma 1] that \(c_{1}\) satisfies
\[c_{1}\leq\operatorname*{supess}_{\zeta_{0}\in\partial\Omega}\int_{\partial \omega}|\mu(\zeta(s),\zeta_{0})|\,ds. \tag{6}\]
Define
\[c_{2}:=\frac{1}{2}\sup\{\|S(f,A)\|:f\text{ a rational function},\|f\|_{\Omega} \leq 1\}. \tag{7}\]
Following is (a part of) the main theorem of [4, Theorem 2]:
**Theorem 1**.: _With \(c_{1}\) and \(c_{2}\) as defined above, \(\Omega\) is a \(K\)-spectral set for \(A\), where_
\[K=c_{2}+\sqrt{c_{2}^{2}+c_{1}}.\]
One can use (6) and definition (4) to bound \(c_{1}\) in the theorem. If we fix \(\zeta_{0}\in\partial\Omega\) and let \(\zeta(s)\) move around a curve \(\Gamma_{j}\) that is all or part of \(\partial\Omega\) then, from the definition in (4), \(\int_{s:\zeta(s)\in\Gamma_{j}}|\mu(\zeta(s),\zeta_{0})|\,ds\) is equal to \(\frac{1}{\pi}\) times the total variation in the argument of \(\zeta(s)-\zeta_{0}\). For example, if \(\partial\Omega\) is a circle or the boundary of a convex set such as in Figure 1(a), then the argument of \(\zeta(s)-\zeta_{0}\) changes by \(\pi\) as \(\zeta(s)\) traverses the curve \(\partial\Omega\) so that \(\int_{\partial\omega}|\mu(\zeta(s),\zeta_{0})|\,ds=1\). If \(\zeta_{0}\) lies inside a circle or the boundary curve of a convex set such as in Figure 1(b), then the integral of \(|\mu(\zeta(s),\zeta_{0})|\) over that piece of the boundary is \(2\). If \(\zeta_{0}\) lies outside a circle of radius \(r\) such as in Figure 1(c), then the argument of \(\zeta(s)-\zeta_{0}\) goes from its initial value, say, \(0\) to \(\arcsin(r/R)\), where \(R\) is the distance from \(\zeta_{0}\) to the center of the circle, back to \(0\), to \(-\arcsin(r/R)\), and back to \(0\), for a total change of \(4\arcsin(r/R)\). Note that for any region \(\Omega\), the upper bound (6) on \(c_{1}\) can be computed numerically, by testing many points \(\zeta_{0}\in\partial\Omega\) and finding the one that leads to the largest total variation in the argument of \(\zeta(s)-\zeta_{0}\), as \(\zeta(s)\) traverses \(\partial\Omega\).
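A minimal Python/NumPy sketch of this procedure (illustrative, not the authors' code) for a single closed boundary curve supplied as an ordered array of points: each node is taken in turn as \(\zeta_{0}\), and the traversal starts just after \(\zeta_{0}\) and stops just before it, so that the artificial jump across \(\zeta_{0}\) itself is not counted.

```python
# Illustrative sketch: estimate the bound (6) on c_1 for one closed curve.
import numpy as np

def c1_bound(boundary):
    m = len(boundary)
    best = 0.0
    for i in range(m):
        path = np.roll(boundary, -i)[1:] - boundary[i]      # zeta(s) - zeta_0, skipping zeta_0
        dargs = np.abs(np.angle(path[1:] / path[:-1]))      # |change of argument| per step
        best = max(best, dargs.sum())
    return best / np.pi                                     # (6) is (1/pi) * total variation

theta = np.linspace(0, 2 * np.pi, 800, endpoint=False)
print(c1_bound(np.exp(1j * theta)))                         # circle: approximately 1
```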
To obtain upper bounds on \(c_{2}\), we first note that if \(\mu(\zeta(s),A)\) is positive semidefinite (PSD) for \(s\in[s_{min},s_{max}]\), then
\[\left\|\int_{s_{min}}^{s_{max}}f(\zeta(s))\mu(\zeta(s),A)\,ds\right\|\leq\max _{s\in[s_{min},s_{max}]}|f(\zeta(s))|\,\,\left\|\int_{s_{min}}^{s_{max}}\mu( \zeta(s),A)\,ds\right\|. \tag{8}\]
A proof can be obtained by noting that
\[\left\|\int_{s_{min}}^{s_{max}}f(\zeta(s))\mu(\zeta(s),A)\,ds\right\|=\sup_{ \|x\|=\|y\|=1}\left|\int_{s_{min}}^{s_{max}}f(\zeta(s))\,\langle\mu(\zeta(s), A)y,x\rangle\,\,ds\right|,\]
and following the arguments in [5, Lemma 2.3]. Thus if \(\mu(\zeta,A)\) is PSD for all \(\zeta\in\partial\Omega\), then \(c_{2}\leq 1\), since for any rational function \(f\) with \(\|f\|_{\Omega}\leq 1\),
\[\|S(f,A)\|\leq\left\|\int_{\partial\omega}\mu(\zeta(s),A)\,ds\right\|=\|2I\|=2,\]
Figure 1: Various boundary configurations. The blue asterisk represents \(\zeta_{0}\), and the red lines show how the angle of the vector \(\zeta(s)-\zeta_{0}\) changes as \(\zeta(s)\) traverses the boundary curve.
and from definition (7), \(c_{2}\) is bounded by half this value. For \(\Omega\) a convex set containing \(W(A)\), Theorem 1 yields the Crouzeix-Palencia result [5] that \(\Omega\) is a \((1+\sqrt{2})\)-spectral set for \(A\), since in this case \(c_{1}\leq 1\) and \(c_{2}\leq 1\).
When \(\mu(\zeta(s),A)\) is not PSD, we will add a multiple of the identity to \(\mu(\zeta(s),A)\) to obtain a PSD operator. For this, we need bounds on the minimum value in the spectrum of \(\mu(\zeta(s),A)\):
\[\lambda_{min}(\mu(\zeta(s),A)):=\min\{\lambda:\lambda\in\text{Sp}(\mu(\zeta(s),A))\}. \tag{9}\]
First, fix a point \(\zeta_{0}=\zeta(s_{0})\) in \(\partial\Omega\) where the unit tangent \(\zeta_{0}^{\prime}:=\left.\frac{d\zeta}{ds}\right|_{s_{0}}\) exists. Since \(\mu(\zeta(s),A)\) depends on \(\zeta^{\prime}(s)\), when we fix a point \(\zeta_{0}\), we will write \(\mu(\zeta_{0},\zeta_{0}^{\prime},A)\) to make this dependence clear. Since the magnitude of \(\zeta_{0}^{\prime}\) is \(1\), it can be written in the form \(e^{i\theta_{0}}\) for some \(\theta_{0}\in[0,2\pi)\). Therefore, using definition (5), we can write
\[\mu(\zeta_{0},\zeta_{0}^{\prime},A)=\frac{1}{2\pi}\left[e^{i(\theta_{0}-\pi/2) }(\zeta_{0}I-A)^{-1}+e^{-i(\theta_{0}-\pi/2)}\left((\zeta_{0}I-A)^{-1}\right)^ {*}\right]. \tag{10}\]
It follows that \(\lambda_{min}(\mu(\zeta_{0},\zeta_{0}^{\prime},A))\) is \(\frac{1}{\pi}\) times the smallest real part of points in \(\text{cl}(W(e^{i(\theta_{0}-\pi/2)}(\zeta_{0}I-A)^{-1}))\) (where \(\text{cl}(\cdot)\) denotes the closure), and therefore \(\lambda_{min}(\mu(\zeta_{0},\zeta_{0}^{\prime},A))\) is greater than or equal to \(-\frac{1}{\pi}\) times the numerical radius of this matrix, \(w(e^{i(\theta_{0}-\pi/2)}(\zeta_{0}I-A)^{-1}):=\sup\{|z|:z\in W(e^{i(\theta_{0 }-\pi/2)}(\zeta_{0}I-A)^{-1})\}\). Equivalently, we can write
\[\lambda_{min}(\mu(\zeta_{0},\zeta_{0}^{\prime},A))\geq-\frac{1}{\pi}w((\zeta_ {0}I-A)^{-1})). \tag{11}\]
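In computations, \(\lambda_{min}(\mu(\zeta_{0},\zeta_{0}^{\prime},A))\) is simply the smallest eigenvalue of the Hermitian matrix on the right-hand side of (10). The sketch below (Python/NumPy; the matrix, the boundary point, and the counter-clockwise unit-circle tangent are arbitrary illustrative choices, not taken from the paper) computes it and checks the lower bound (11).

```python
# Illustrative sketch: lambda_min(mu(zeta_0, zeta_0', A)) from (10) and the bound (11).
import numpy as np

def mu_lambda_min(A, zeta0, tangent):
    """Smallest eigenvalue of mu(zeta_0, zeta_0', A); tangent = e^{i theta_0}."""
    n = A.shape[0]
    R = np.linalg.inv(zeta0 * np.eye(n) - A)
    B = (tangent / 1j) * R                    # e^{i(theta_0 - pi/2)} (zeta_0 I - A)^{-1}
    return np.linalg.eigvalsh((B + B.conj().T) / (2 * np.pi)).min()

def numerical_radius(B, n_angles=360):
    """Estimate w(B) as the max over theta of lambda_max of the Hermitian part of e^{i theta} B."""
    return max(np.linalg.eigvalsh((np.exp(1j * t) * B + (np.exp(1j * t) * B).conj().T) / 2).max()
               for t in np.linspace(0, 2 * np.pi, n_angles, endpoint=False))

rng = np.random.default_rng(1)
A = 0.3 * rng.standard_normal((6, 6))
zeta0 = np.exp(0.3j)                          # a point on the unit circle
lam = mu_lambda_min(A, zeta0, 1j * zeta0)     # counter-clockwise tangent i * zeta_0
w = numerical_radius(np.linalg.inv(zeta0 * np.eye(6) - A))
print(lam, ">=", -w / np.pi)                  # inequality (11)
```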
The following theorem is from [4, Lemmas 5, 7, and 8]. Again letting \(\zeta_{0}\) denote a point on \(\partial\Omega\), the half-plane \(\Pi_{0}:=\{z\in\mathbb{C}:\text{Im}(\zeta_{0}^{\prime}(\overline{\zeta_{0}}- \bar{z}))\geq 0\}\) has the same outward normal as \(\Omega\) at \(\zeta_{0}\). For a disk about a point \(\xi\) of radius \(r\), the assumption \(\zeta_{0}-\xi=ir\zeta_{0}^{\prime}\) in the theorem means that \(\partial\Omega\) and the boundary of the disk are tangent at \(\zeta_{0}\) and the outward normal to \(\Omega\), \(\zeta_{0}^{\prime}/i\), is the same as the inward normal to the disk.
**Theorem 2**.: _If \(W(A)\subset\Pi_{0}\), then \(\lambda_{min}(\mu(\zeta_{0},\zeta_{0}^{\prime},A))\geq 0\), with equality if \(\zeta_{0}\in\partial W(A)\). If, for some \(\xi\in\mathbb{C}\backslash\text{Sp}(A)\), \(\zeta_{0}-\xi=ir_{1}\zeta_{0}^{\prime}\), where \(r_{1}\leq 1/\|(A-\xi I)^{-1}\|\), then \(\lambda_{min}(\mu(\zeta_{0},\zeta_{0}^{\prime},A))\geq-\frac{1}{2\pi r_{1}}\). If \(\zeta_{0}-\xi=ir_{2}\zeta_{0}^{\prime}\), where \(r_{2}\leq 1/w((A-\xi I)^{-1})\), then \(\lambda_{min}(\mu(\zeta_{0},\zeta_{0}^{\prime},A))\geq-\frac{1}{\pi r_{2}}\)._
Note that the interior of the disks \(\{z\in\mathbb{C}:|z-\xi|<1/\|(A-\xi I)^{-1}\|\}\) and \(\{z\in\mathbb{C}:|z-\xi|<1/w((A-\xi I)^{-1})\}\) alluded to in the theorem contain no points in the spectrum of \(A\) since \(\|(A-\xi I)^{-1}\|\geq w((A-\xi I)^{-1})\geq|(\lambda-\xi)^{-1}|\) for all \(\lambda\in\text{Sp}(A)\); that is, the inverses of these quantities, which are the radii of the disks, are less than or equal to \(|\lambda-\xi|\).
Theorems 1 and 2 can be used together to obtain \(K\) values for certain types of sets, such as the numerical range with a circular hole or cutout, where it will be shown in the next subsection that \(K\leq 3+2\sqrt{3}\). Unlike the \(K\) value in (3), this value, \(3+2\sqrt{3}\), is a fixed constant independent of \(A\) and does not require knowledge of the resolvent norm on the boundary of the set. However, one must have information about the resolvent at the point \(\xi\) where the disk is to be removed in order to know how large a disk can be removed. The resolvent, \((A-\xi I)^{-1}\), now plays a role in the definition of the set, rather than in the constant \(K\) as in (3).
For any set \(\Omega\) containing the spectrum of \(A\), Theorem 1 can be used directly to derive a value of \(K\) that depends on \(\lambda_{min}(\mu(\zeta,A))\). Since \(\lambda_{min}(\mu(\zeta,A))\) is twice the minimum real part of all points in the closure of the numerical range of \((\zeta^{\prime}/(2\pi i))(\zeta I-A)^{-1}\), it follows that \(|\lambda_{min}(\mu(\zeta,A))|\leq(1/\pi)w((\zeta I-A)^{-1})\in[(1/(2\pi))\|( \zeta I-A)^{-1}\|,(1/\pi)\|(\zeta I-A)^{-1}\|]\). If it turns out that \(|\lambda_{min}(\mu(\zeta,A))|\approx(1/\pi)w((\zeta I-A)^{-1})\) for \(\zeta\in\partial\Omega\), then the value of \(K\) in (3) may
be somewhat smaller than that from Theorem 1, since the value in (3) involves the integral of \((1/(2\pi))\|(\zeta I-A)^{-1}\|\). If \(|\lambda_{min}(\mu(\zeta,A))|\ll(1/\pi)w((\zeta I-A)^{-1})\) (as, for example, if \(\Omega=W(A)\), where \(\lambda_{min}(\mu(\zeta,A))=0\)), then the \(K\) value from Theorem 1 may be _much_ smaller than that in (3).
### Example from [4]
Using these results, it is shown in [4] that if \(\Omega=\Omega_{0}\backslash\mathcal{D}(\xi,r)\), where \(\Omega_{0}\) is a convex domain containing \(\mathrm{cl}(W(A))\) and \(\mathcal{D}(\xi,r)\) is the disk about a point \(\xi\in\mathbb{C}\backslash\mathrm{Sp}(A)\) of radius \(r\), where \(r\leq 1/w((A-\xi I)^{-1})\), then \(\Omega\) is a \((3+2\sqrt{3})\)-spectral set for \(A\). This assumes that \(\partial\mathcal{D}(\xi,r)\subset\Omega_{0}\) or the number of intersection points of \(\partial\Omega_{0}\) and \(\partial\mathcal{D}(\xi,r)\) is finite.
To bound \(c_{1}\) in this case, suppose first that \(\partial\mathcal{D}(\xi,r)\subset\Omega_{0}\). If \(\zeta_{0}\in\partial\Omega_{0}\), then as \(\zeta(s)\) traverses \(\partial\Omega_{0}\), the argument of \(\zeta(s)-\zeta_{0}\) changes by \(\pi\), as illustrated in Figure 1(a). As \(\zeta(s)\) traverses \(\partial\mathcal{D}(\xi,r)\), the argument of \(\zeta(s)-\zeta_{0}\) changes by \(4\arcsin(r/|\zeta_{0}-\xi|)<2\pi\), as illustrated in Figure 1(c). Thus, in this case,
\[\int_{\{s:\zeta(s)\in\partial\Omega_{0}\}}|\mu(\zeta(s),\zeta_{0})|\,ds=1,\ \ \int_{\{s:\zeta(s)\in\partial\mathcal{D}(\xi,r)\}}|\mu(\zeta(s),\zeta_{0})|\,ds <2.\]
[To simplify notation, throughout the rest of the paper we will write simply \(\int_{\partial\Omega_{j}}\ldots\,ds\) in place of \(\int_{\{s:\zeta(s)\in\partial\Omega_{j}\}}\ldots\,ds\).] Now suppose \(\zeta_{0}\in\partial\mathcal{D}(\xi,r)\). Then as \(\zeta(s)\) traverses \(\partial\Omega_{0}\), the argument of \(\zeta(s)-\zeta_{0}\) changes by \(2\pi\), as illustrated in Figure 1(b), while as \(\zeta(s)\) traverses \(\partial\mathcal{D}(\xi,r)\), the argument of \(\zeta(s)-\zeta_{0}\) changes by \(\pi\), as illustrated in Figure 1(a). Thus, in this case, we have
\[\int_{\partial\Omega_{0}}|\mu(\zeta(s),\zeta_{0})|\,ds=2,\ \ \int_{\partial \mathcal{D}(\xi,r)}|\mu(\zeta(s),\zeta_{0})|\,ds=1.\]
It follows that for \(\zeta_{0}\) anywhere on the boundary of \(\Omega\), the change in argument of \(\zeta(s)-\zeta_{0}\) as \(\zeta(s)\) traverses \(\partial\Omega\) is at most \(3\pi\); that is, \(c_{1}\leq 3\). If, instead, the disk \(\mathcal{D}(\xi,r)\) intersects \(\partial\Omega_{0}\) as in Figure 1(d), then it is clear that the total variation in the argument of \(\zeta(s)-\zeta_{0}\) as \(\zeta(s)\) traverses \(\partial\Omega\) is smaller and thus \(c_{1}\) is again bounded by 3.
To bound \(c_{2}\), let \(\Gamma_{0}=\partial\Omega_{0}\backslash\mathrm{cl}(\mathcal{D}(\xi,r))\) and let \(\Gamma_{1}=\partial\mathcal{D}(\xi,r)\cap\mathrm{cl}(\Omega_{0})\), so that \(\partial\Omega=\Gamma_{0}\cup\Gamma_{1}\). Let \(f\) be a function with \(\|f\|_{\Omega}\leq 1\) and write \(S(f,A)=S_{0}+S_{1}+S_{2}\), where
\[S_{0}=\int_{\Gamma_{0}}f(\zeta(s))\mu(\zeta(s),A)\,ds,\ S_{1}=\int_{\Gamma_{1} }f(\zeta(s))\left(\mu(\zeta(s),A)+\frac{1}{\pi r}I\right)\,ds,\ S_{2}=-\frac{1}{ \pi r}\int_{\Gamma_{1}}f(\zeta(s))I\,ds.\]
It follows from Theorem 2 that for \(\zeta\in\partial\Omega_{0}\), \(\mu(\zeta,A)\) is PSD. Since adding PSD operators to a PSD operator does not decrease the norm, we can extend the integral over \(\Gamma_{0}\) to an integral over the entire boundary \(\partial\Omega_{0}\) to obtain:
\[\|S_{0}\|\leq\left\|\int_{\partial\Omega_{0}}\mu(\zeta(s),A)\,ds\right\|=\|2I \|=2.\]
If \(\zeta\in\partial\mathcal{D}(\xi,r)\), since \(r\leq 1/w((A-\xi I)^{-1})\), Theorem 2 shows that \(\mu(\zeta,A)+\frac{1}{\pi r}I\) is PSD, and hence
\[\|S_{1}\|\leq\left\|\int_{\Gamma_{1}}\left(\mu(\zeta(s),A)+\frac{1}{\pi r}I \right)\,ds\right\|\leq\left\|\int_{\partial\mathcal{D}(\xi,r)}\left(\mu( \zeta(s),A)+\frac{1}{\pi r}I\right)\,ds\right\|=\frac{1}{\pi r}\int_{\partial \mathcal{D}(\xi,r)}ds=2.\]
Here we have used the fact that the spectrum of \(A\) lies outside \(\mathcal{D}(\xi,r)\) and hence \(\int_{\partial\mathcal{D}(\xi,r)}\mu(\zeta(s),A)\,ds=0\). It is clear that \(\|S_{2}\|\leq 2\), since the length of \(\Gamma_{1}\) is less than or equal to the length of \(\partial\mathcal{D}(\xi,r)\), which is \(2\pi r\). Thus \(\|S(f,A)\|\leq 6\) and \(c_{2}\leq 3\). Applying Theorem 1 with \(c_{1}=c_{2}=3\) yields the result from [4] that \(\Omega\) is a \((3+2\sqrt{3})\)-spectral set for \(A\).
## 3 Some Simple Extensions
The arguments in section 2.3 can be extended in some simple ways.
Suppose, for example, that \(\Omega=\Omega_{0}\backslash\mathcal{D}(\xi,r)\) where \(\Omega_{0}\) and \(\mathcal{D}(\xi,r)\) are as in section 2.3, but where the intersection of \(\Omega_{0}\) and \(\mathcal{D}(\xi,r)\) is at most a half-disk, as pictured in Figure 1(d). The greatest variation in the argument of \(\zeta(s)-\zeta_{0}\) can be attained when \(\zeta_{0}\) is in the position of the asterisk in the figure. Then the argument of \(\zeta(s)-\zeta_{0}\) could change by as much as \(\pi/2\) as \(\zeta(s)\) traverses \(\Gamma_{1}\). It changes by the same amount as \(\zeta(s)\) moves along \(\Gamma_{0}\) to the point where the argument of \(\zeta(s)-\zeta_{0}\) matches \(\zeta_{0}^{\prime}\) or \(-\zeta_{0}^{\prime}\), with a change of \(\pi\) in between. The total change could therefore be as large as \(2\pi\). It follows that in this case, for any \(\zeta_{0}\) on \(\partial\Omega\),
\[\int_{\partial\Omega_{0}}|\mu(\zeta(s),\zeta_{0})|\,ds\leq 2,\]
and therefore \(c_{1}\leq 2\) when at most a half-disk is removed from \(\Omega_{0}\). Using the same definitions of \(S_{0}\), \(S_{1}\), and \(S_{2}\) as in section 2.3, we now observe that the length of \(\Gamma_{1}\) is at most \(\pi r\) instead of \(2\pi r\), so that \(\|S_{2}\|\leq 1\), leading to the estimate \(\|S(f,A)\|\leq 5\) and \(c_{2}\leq 5/2\). Using these values of \(c_{1}\) and \(c_{2}\) in Theorem 1 leads to the result that \(\Omega\) is a \((2.5+\sqrt{8.25})\)-spectral set for \(A\).
If the radius \(r\) of the disk removed from \(\Omega_{0}\) satisfies \(r\leq 1/\|(A-\xi I)^{-1}\|\), then from Theorem 2, it follows that \(\lambda_{min}(\mu(\zeta_{0},A))\geq-\frac{1}{2\pi r}\). In this case, we can replace \(S(f,A)=S_{0}+S_{1}+S_{2}\) by \(S(f,A)=S_{0}+\tilde{S}_{1}+\tilde{S}_{2}\), where
\[\tilde{S}_{1}=\int_{\Gamma_{1}}f(\zeta(s))\left(\mu(\zeta(s),A)+\frac{1}{2\pi r }I\right)\,ds,\ \ \tilde{S}_{2}=-\frac{1}{2\pi r}\int_{\Gamma_{1}}f(\zeta(s))I\,ds.\]
Now
\[\|\tilde{S}_{1}\|\leq\left\|\int_{\Gamma_{1}}\left(\mu(\zeta(s),A)+\frac{1}{2 \pi r}I\right)\,ds\right\|\leq\frac{1}{2\pi r}\int_{\partial\mathcal{D}(\xi, r)}ds=1,\]
and \(\|\tilde{S}_{2}\|\leq 1\). With \(c_{1}=3\) and \(c_{2}=2\), it follows from Theorem 1 that \(\Omega\) is a \((2+\sqrt{7})\)-spectral set, and if the intersection of \(\Omega_{0}\) and \(\mathcal{D}(\xi,r)\) is at most a half-disk, then with \(c_{1}=2\), and \(\|\tilde{S}_{2}\|\leq 1/2\), we can take \(c_{2}=7/4\), and then it follows from Theorem 1 that this is a 4-spectral set for \(A\).
### Removing More Disks
The techniques of section 2.3 can be used to bound \(K\) when multiple disks are removed from \(\Omega_{0}\supset\operatorname{cl}(W(A))\). Consider the simplest case, where the disks \(\mathcal{D}_{1}(\xi_{1},r_{1}),\ldots,\mathcal{D}_{m}(\xi_{m},r_{m})\) do not overlap and lie entirely inside \(\Omega_{0}\). For \(\zeta_{0}\in\partial\Omega_{0}\), the total variation in \(\arg(\zeta(s)-\zeta_{0})\) becomes
\[\pi+4\sum_{j=1}^{m}\arcsin\left(\frac{r_{j}}{|\zeta_{0}-\xi_{j}|}\right)\leq\pi+2m\pi.\]
If \(\zeta_{0}\) lies on \(\partial\mathcal{D}_{i}\), then the change in \(\arg(\zeta(s)-\zeta_{0})\) is \(2\pi\) as \(\zeta(s)\) traverses \(\partial\Omega_{0}\) and \(\pi\) as \(\zeta(s)\) traverses \(\partial\mathcal{D}_{i}\). The total change is
\[3\pi+4\sum_{\begin{subarray}{c}j=1\\ j\neq i\end{subarray}}^{m}\arcsin\left(\frac{r_{j}}{|\zeta_{0}-\xi_{j}|}\right)\leq 3\pi+2(m-1)\pi.\]
In either case, the total variation of \(\arg(\zeta(s)-\zeta_{0})\) is at most \((2m+1)\pi\), so that \(c_{1}\leq 2m+1\).
To bound \(c_{2}\), write \(S(f,A)=S_{0}+\sum_{j=1}^{m}S_{j}+\sum_{j=1}^{m}S_{m+j}\), where
\[S_{0}=\int_{\partial\Omega_{0}}f(\zeta(s))\mu(\zeta(s),A)\,ds,\ \ S_{j}=\int_{ \partial{\cal D}_{j}}f(\zeta(s))\left(\mu(\zeta(s),A))+\frac{p_{j}}{2\pi r_{j}} I\right)\,ds,\]
\[S_{m+j}=-\frac{p_{j}}{2\pi r_{j}}\int_{\partial{\cal D}_{j}}f(\zeta(s))I\,ds,\ \ j=1,\ldots,m,\]
where \(p_{j}=1\) if \(r_{j}=1/\|(A-\xi_{j}I)^{-1}\|\) and \(p_{j}=2\) if \(r_{j}=1/w((A-\xi_{j}I)^{-1})\). Then
\[\|S_{0}\|\leq 2,\ \ \|S_{j}\|\leq p_{j},\ \ \|S_{m+j}\|\leq p_{j},\ \ j=1, \ldots,m.\]
It follows that
\[\|S(f,A)\|\leq 2+2\sum_{j=1}^{m}p_{j},\]
and \(c_{2}\leq 1+\sum_{j=1}^{m}p_{j}\). Assuming, for simplicity, that each \(p_{j}\) is the same, say, \(p_{j}=p\), and applying Theorem 1 with \(c_{1}=2m+1\) and \(c_{2}=1+mp\), we find that \(\Omega\) is a
\[\left(1+mp+\sqrt{(1+mp)^{2}+2m+1}\right) \tag{12}\]
spectral set for \(A\). This bound on \(K\) holds for other configurations where disks overlap or only partially intersect with \(\Omega_{0}\), although better bounds on \(c_{1}\) and/or \(c_{2}\) may be attainable by considering each geometry individually.
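For instance, formula (12) with \(m=2\) and \(p=1\) (two removed disks of radii \(1/\|(A-\xi_{j}I)^{-1}\|\)) gives \(K=1+2+\sqrt{(1+2)^{2}+5}=3+\sqrt{14}\approx 6.74\), the value quoted for the two-disk configurations in Figures 3 and 4 below.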
### Other \(K\)-Spectral Sets
In the previous subsection, we made use of Theorem 2 to derive values of \(K\) that are independent of the operator \(A\) for special types of regions \(\Omega\) (that _do_ depend on \(A\)). For a given operator \(A\) and region \(\Omega\) containing the spectrum of \(A\), one can use Theorem 1 directly to derive \(K\) values, but in most cases, these values will have to be computed numerically. A bound on the parameter \(c_{1}\) depends only on the geometry of \(\Omega\), while \(c_{2}\) can be bounded using computed values of \(\lambda_{min}(\mu(\zeta(s),A))\).
For continuous time systems of differential equations, the solution to the initial value problem \(y^{\prime}(t)=Ay(t)\), \(t>0\), is \(y(t)=e^{tA}y(0)\). If the spectrum of \(A\) lies in the open left half-plane, then \(\lim_{t\to\infty}y(t)=0\), but if \(W(A)\) extends into the right half-plane, then initially \(\|y(t)\|\) may grow at a rate determined by the numerical abscissa of \(A\): \(\alpha(A):=\sup\{\mbox{Re}(z):z\in W(A)\}\). If the left half-plane, or the part of \(W(A)\) that lies in the left half-plane, is a \(K\)-spectral set for \(A\), however, then \(K\) is an upper bound on the growth of \(\|y(t)\|\). When dealing with powers of a matrix or operator \(A\), it is well-known that if the spectrum of \(A\) lies inside the open unit disk, then \(A^{k}\to 0\) as \(k\to\infty\). If the numerical range of \(A\) extends outside the unit disk, however, then \(\|A^{k}\|\leq 2w(A)^{k}\) may grow with \(k\) before asymptotically decreasing to \(0\). If the unit disk, or the intersection of \(W(A)\) with the unit disk, is a \(K\)-spectral set for \(A\), however, then the growth of \(\|A^{k}\|\) is limited to a factor of \(K\).
In either of these cases, the set \(\Omega=W(A)\cap(\mbox{left half-plane})\) or \(\Omega=W(A)\cap(\mbox{unit disk})\) is convex, so \(c_{1}=1\). To bound \(c_{2}\), let \(\Gamma_{0}\) denote the part of \(\partial W(A)\) that is retained as part of \(\partial\Omega\) and let \(\Gamma_{1}\) denote the line segment or circular arc resulting from the intersection of \(W(A)\) with the imaginary axis or the unit circle. Then \(\partial\Omega=\Gamma_{0}\cup\Gamma_{1}\). For \(f\in{\cal A}(\Omega)\) with \(\|f\|_{\Omega}\leq 1\), define
\[S_{0}=\int_{\Gamma_{0}}f(\zeta(s))\mu(\zeta(s),A)\,ds,\ S_{1}=\int_{\Gamma_{1} }f(\zeta(s))(\mu(\zeta(s),A)+\gamma(s)I)\,ds,\ S_{2}=-\int_{\Gamma_{1}}f(\zeta (s))\gamma(s)I\,ds,\]
where \(\gamma(s)\geq-\lambda_{min}(\mu(\zeta(s),A))\). Proceeding as in section 2.3, since \(\mu(\zeta(s),A)\) is PSD for \(\zeta(s)\in\partial W(A)\), we can write
\[\|S_{0}\|\leq\left\|\int_{\Gamma_{0}}\mu(\zeta(s),A)\,ds\right\|\leq\left\|\int_ {\partial W(A)}\mu(\zeta(s),A)\,ds\right\|=\|2I\|=2.\]
Similarly, since \(\mu(\zeta(s),A)+\gamma(s)I\) is PSD on \(\Gamma_{1}\) and \(\mu(\zeta(s),A)\) is PSD on \(\partial W(A)\), if we let \(\Gamma_{2}\) denote the part of \(\partial W(A)\) that was discarded and define \(\gamma(s)\) to be \(0\) on \(\Gamma_{2}\), then we have
\[\|S_{1}\|\leq\left\|\int_{\Gamma_{1}}(\mu(\zeta(s),A)+\gamma(s)I)\,ds\right\| \leq\left\|\int_{\Gamma_{1}\cup\Gamma_{2}}(\mu(\zeta(s),A)+\gamma(s)I)\,ds \right\|=\left|\int_{\Gamma_{1}\cup\Gamma_{2}}\gamma(s)\right|=\int_{\Gamma_ {1}}|\gamma(s)|\,ds.\]
Finally, we can write
\[\|S_{2}\|\leq\int_{\Gamma_{1}}|\gamma(s)|\,ds.\]
Since \(S(f,A)=S_{0}+S_{1}+S_{2}\), it follows that \(\|S(f,A)\|\leq 2+2\int_{\Gamma_{1}}|\gamma(s)|\,ds\) and therefore
\[c_{2}\leq 1+\int_{\Gamma_{1}}|\gamma(s)|\,ds. \tag{13}\]
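Once the endpoints of \(\Gamma_{1}\) are known, the bound (13) is straightforward to evaluate numerically. Below is a sketch (our own Python/NumPy illustration) for \(\Omega=W(A)\cap(\mbox{left half-plane})\); it assumes the endpoints \(y_{min},y_{max}\) of the segment \(W(A)\cap i\mathbb{R}\) have been computed separately, and it uses \(\zeta^{\prime}=i\) since a counter-clockwise traversal of \(\partial\Omega\) crosses this segment upward.

```python
# Illustrative sketch: evaluate the bound (13) for Omega = W(A) ∩ (left half-plane),
# approximating the integral over Gamma_1 by a simple Riemann sum.
import numpy as np

def mu_lambda_min(A, zeta0, tangent):
    # same helper as in the sketch after (11)
    n = A.shape[0]
    R = np.linalg.inv(zeta0 * np.eye(n) - A)
    B = (tangent / 1j) * R
    return np.linalg.eigvalsh((B + B.conj().T) / (2 * np.pi)).min()

def c2_bound_left_halfplane(A, y_min, y_max, n_points=400):
    """y_min, y_max: endpoints of W(A) ∩ (imaginary axis), found separately."""
    ys = np.linspace(y_min, y_max, n_points)
    gammas = np.array([max(0.0, -mu_lambda_min(A, 1j * y, 1j)) for y in ys])
    return 1.0 + gammas.sum() * (ys[1] - ys[0])              # the bound (13)
```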
In general, suppose a set \(\Omega\) consists of \(m\) disjoint regions \(\Omega_{1},\ldots,\Omega_{m}\) with boundaries \(\Gamma_{1},\ldots,\Gamma_{m}\). An example might be the \(\epsilon\)-pseudospectrum of \(A\):
\[\Lambda_{\epsilon}(A):=\{z\in\mathbb{C}:\|(zI-A)^{-1}\|>\epsilon^{-1}\}\]
For this set, the value (3) is easy to compute:
\[K=\frac{\mathcal{L}(\partial\Lambda_{\epsilon})}{2\pi\epsilon},\]
where \(\mathcal{L}(\cdot)\) denotes the length of the curve. In this case, it may be difficult to come up with an analytic expression for the bound (6) on \(c_{1}\). This bound can be estimated numerically (to any desired accuracy), however, by first discretizing \(\partial\Lambda_{\epsilon}(A)\), then considering each discretization point as a possible value for \(\zeta_{0}\) in (6), determining the total variation of the argument of \(\zeta(s)-\zeta_{0}\) as \(\zeta(s)\) traverses the discretized \(\partial\Lambda_{\epsilon}(A)\), and finally taking \(c_{1}\) to be \(\frac{1}{\pi}\) times the maximum value of this total variation. To compute a bound on \(c_{2}\), let \(f\) be any rational function with \(\|f\|_{\Lambda_{\epsilon}(A)}\leq 1\), and write \(S(f,A)=S_{1}+S_{2}\), where
\[S_{1}=\int_{\cup_{j}\Gamma_{j}}f(\zeta(s))(\mu(\zeta(s),A)+\gamma(s)I)\,ds,\ \ S_{2}=-\int_{\cup_{j}\Gamma_{j}}f(\zeta(s))\gamma(s)I\,ds.\]
Taking \(\gamma(s)\) to be greater than or equal to \(-\lambda_{min}(\mu(\zeta(s),A))\), so that \(\mu(\zeta(s),A)+\gamma(s)I\) is PSD, we can write
\[\|S_{1}\|\leq\left\|\int_{\cup_{j}\Gamma_{j}}(\mu(\zeta(s),A)+\gamma(s)I)\,ds \right\|\leq 2+\left\|\int_{\cup_{j}\Gamma_{j}}\gamma(s)I\,ds\right\|\leq 2+ \int_{\cup_{j}\Gamma_{j}}|\gamma(s)|\,ds,\]
and similarly,
\[\|S_{2}\|\leq\int_{\cup_{j}\Gamma_{j}}|\gamma(s)|\,ds.\]
In this case, \(\|S(f,A)\|\leq 2+2\int_{\cup_{j}\Gamma_{j}}|\gamma(s)|\,ds\) and therefore
\[c_{2}\leq 1+\int_{\cup_{j}\Gamma_{j}}|\gamma(s)|\,ds.\]
## 4 Applications
Throughout this section and the next, we will always assume that the space \(H\) in which we are working is Euclidean space and the norm of interest is the 2-norm, which will be denoted as \(\|\cdot\|_{2}\).
### Block Diagonal Matrices
If \(A\) is a block diagonal matrix, say,
\[A=\left[\begin{array}{cc}A_{11}&0\\ 0&A_{22}\end{array}\right],\]
then since
\[f(A)=\left[\begin{array}{cc}f(A_{11})&0\\ 0&f(A_{22})\end{array}\right],\]
it is clear that \(\|f(A)\|_{2}\) can be bounded based on the size of \(f\) on \(W(A_{11})\cup W(A_{22})\). Yet \(W(A)\) is a possibly larger set: the convex hull of \(W(A_{11})\cup W(A_{22})\). Of course, if one knew that \(A\) was block diagonal, then one could take advantage of this property, but the same observation holds when \(A\) is unitarily similar to a block diagonal matrix, and then it is an NP-hard problem to identify the blocks [7]. Instead, one might start with \(W(A)\) and try to remove one or more disks that would cut the region into disjoint pieces corresponding to the blocks of \(A\).
An example is illustrated in Figure 2. For this matrix, \(A_{11}\) was a real random 4 by 4 matrix and \(A_{22}\) was equal to \(8I\) plus a real random 4 by 4 matrix, where the random matrix entries were drawn from a standard normal distribution. The disk removed was centered at \(\xi=3.5\) and had radius \(1/w((\xi I-A)^{-1})\). According to the results of section 2.3, the remaining region (outlined with a thick black line in the figure) is a \((3+2\sqrt{3})\)-spectral set for \(A\). For comparison, if one evaluates the resolvent norm integral in (3) over the boundary of this set, one obtains the slightly larger value of 8.01. Also shown in red in the figure are the numerical ranges of each block.
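For experiments of this kind one needs boundary points of \(W(A)\). A standard approach, sketched below in Python/NumPy, is to rotate \(A\), take the top eigenvector of the Hermitian part, and evaluate the quadratic form \(v^{*}Av\); applying the same routine to each diagonal block exhibits the disjoint sets \(W(A_{11})\) and \(W(A_{22})\) inside \(W(A)\). The block sizes and the shift \(8I\) mirror the example of Figure 2 but are otherwise arbitrary choices of ours.

```python
# Illustrative sketch: boundary points of W(A) via top eigenvectors of rotated
# Hermitian parts, for a block diagonal matrix like the one in Figure 2.
import numpy as np

def numerical_range_boundary(A, n_angles=360):
    pts = []
    for theta in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        H = (np.exp(-1j * theta) * A + (np.exp(-1j * theta) * A).conj().T) / 2
        _, V = np.linalg.eigh(H)
        v = V[:, -1]                       # eigenvector of the largest eigenvalue of H
        pts.append(v.conj() @ A @ v)       # a boundary point of W(A) in direction theta
    return np.array(pts)

rng = np.random.default_rng(2)
A11 = rng.standard_normal((4, 4))
A22 = 8 * np.eye(4) + rng.standard_normal((4, 4))
A = np.block([[A11, np.zeros((4, 4))], [np.zeros((4, 4)), A22]])
bdry_A = numerical_range_boundary(A)
bdry_11, bdry_22 = numerical_range_boundary(A11), numerical_range_boundary(A22)
```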
For a matrix with more diagonal blocks, one could remove more disks from \(W(A)\) and obtain a \(K\)-spectral set with three or more disjoint regions, where \(K\) is bounded by expression (12).
Figure 2: Eigenvalues and numerical range of a block diagonal matrix cut into two pieces by removing a disk about \(\xi=3.5\) of radius \(1/w((A-\xi I)^{-1})\). Resulting region is outlined in black; numerical ranges of the blocks are shown in red.
In other cases, a single disk may not be wide enough to split the numerical range into disjoint pieces. Then multiple disks could be removed, and \(K\) would again be bounded by expression (12). A better bound might be obtained by using Theorem 1 directly and numerically determining bounds on \(c_{1}\) and \(c_{2}\), as described in section 3.2. Figures 3 and 4 show additional illustrations, along with the \(K\) value obtained from formula (12) and one computed directly from Theorem 1.
### Bounding Solutions to the Initial Value Problem
The results from section 3.2 can be used to bound the solutions to both continuous and discrete time dynamical systems, assuming that the spectrum of \(A\) lies in the left half-plane or the unit disk, respectively, by determining a \(K\) value for the set \(\Omega\) equal to the intersection of \(W(A)\) with the left half-plane or the unit disk.
In this case, since \(\Omega\) is simply connected, one _may_ be able to find the _optimal_\(K\) value numerically. If \(A\) is an \(n\) by \(n\) matrix, then the form of the function \(f\) with \(\|f\|_{\Omega}=1\) that maximizes \(\|f(A)\|\) is known; it is of the form \(B\circ\varphi\), where \(\varphi\) is any conformal mapping from \(\Omega\) to the unit disk and \(B\) is a finite Blaschke product of degree at most \(n-1\):
\[B(z)=\prod_{j=1}^{n-1}\frac{z-\alpha_{j}}{1-\bar{\alpha}_{j}z},\ \ |\alpha_{j}|\leq 1.\]
We use the Kerzmann-Stein procedure [8, 9] as implemented in chebfun [6] to conformally map \(\Omega\) to the unit disk. We then try many different initial guesses for the roots \(\alpha_{j}\) of \(B\) and use the optimization code fmincon in MATLAB to search for roots that maximize \(\|B(\varphi(A))\|_{2}\). We can check a number of conditions that are known to hold for the optimal Blaschke product \(B\) to give us some confidence that we have indeed found the global maximum. See [2] for details. Still, these conditions are not sufficient to guarantee a global maximum, but at least the maximum
Figure 3: \(A\) is a block diagonal matrix with three blocks. Each block is the sum of a multiple of the identity and a real random matrix \(R\) with entries from a standard normal distribution. Block \(A_{11}=-20I+R_{1}\) is 10 by 10, block \(A_{22}=R_{2}\) is 5 by 5, and block \(A_{33}=20I+R_{3}\) is 10 by 10. The disks removed had radii \(1/\|(\xi_{1,2}I-A)^{-1}\|_{2}\), where \(\xi_{1}=-9.5\) and \(\xi_{2}=10\). Based on formula (12), the remaining region is a \(K=3+\sqrt{14}\approx 6.74\) spectral set, and using Theorem 1 directly, as described in section 3.2, we computed \(c_{1}\leq 2.60\), \(c_{2}\leq 1.78\), and \(K=4.19\). Using formula 3, the value of \(K\) was computed to be \(11.88\).
value of \(\|B(\varphi(A))\|_{2}\) returned by the optimization code is a _lower bound_ on the optimal \(K\) value for the region \(\Omega\).
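A rough Python/SciPy analogue of this optimization (our own sketch, not the authors' MATLAB/chebfun/fmincon setup) for the special case where \(\Omega\) is the unit disk, so the conformal map \(\varphi\) can be taken to be the identity: it assumes \(\mathrm{Sp}(A)\) lies in the open unit disk and maximizes \(\|B(A)\|_{2}\) over Blaschke products of a given degree, returning a lower bound on the optimal \(K\).

```python
# Illustrative sketch: lower bound on K for Omega = unit disk, by maximizing
# ||B(A)||_2 over Blaschke products B with roots inside the open unit disk.
import numpy as np
from scipy.optimize import minimize

def blaschke_of_matrix(A, roots):
    n = A.shape[0]
    B = np.eye(n, dtype=complex)
    for a in roots:
        B = B @ (A - a * np.eye(n)) @ np.linalg.inv(np.eye(n) - np.conj(a) * A)
    return B

def lower_bound_K(A, degree, n_restarts=20, seed=0):
    rng = np.random.default_rng(seed)
    best = 1.0                                      # K >= 1 always (take f constant)
    for _ in range(n_restarts):
        def neg_norm(x):
            # parameterize the roots so that they stay inside the open unit disk
            roots = np.tanh(x[:degree]) * np.exp(1j * x[degree:])
            return -np.linalg.norm(blaschke_of_matrix(A, roots), 2)
        res = minimize(neg_norm, rng.standard_normal(2 * degree), method="Nelder-Mead")
        best = max(best, -res.fun)
    return best          # e.g. lower_bound_K(A, degree=A.shape[0] - 1)
```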
As an example, the left plot in Figure 5 shows the behavior of \(\|e^{tA}\|_{2}\) for a matrix \(A\) from [10] that models the ecosystem of Tuesday Lake in Wisconsin after the introduction of pisciverous largemouth bass. The plot shows initial growth and then decay of the phosphorus turnover rate. The right plot in the figure shows the eigenvalues and numerical range of the matrix and the part of the numerical range in the left half-plane. In this case we found, by integrating \(|\lambda_{min}(\mu(\zeta(s),A)|\) along the segment of the imaginary axis inside \(W(A)\) and using Theorem 1, that \(K\) could be bounded by \(2.66\), while formula (3) gave the slightly larger value \(K=3.72\). Based on results from our optimization code, we believe that the optimal value of \(K\) for this region is \(1.95\), and, as noted earlier, this is at least a lower bound on \(K\). In this case the different bounds on \(K\) are all very close and somewhat larger than the maximum value of \(\|e^{tA}\|_{2}\), \(t>0\).
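Transient curves such as the one in the left plot of Figure 5 can be generated directly; here is a minimal sketch (Python/SciPy; the time horizon and step count are arbitrary, and the Tuesday Lake matrix itself is not reproduced).

```python
# Minimal sketch: the transient growth curve ||exp(tA)||_2 for any square matrix A.
import numpy as np
from scipy.linalg import expm

def transient_curve(A, t_max=10.0, n_steps=200):
    ts = np.linspace(0.0, t_max, n_steps)
    return ts, np.array([np.linalg.norm(expm(t * A), 2) for t in ts])
```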
Figure 4: \(A\) is a block diagonal matrix with two blocks. Each block is the sum of a multiple of the identity and a real random matrix \(R\) with entries from a standard normal distribution. Block \(A_{11}=-5I+R_{1}\) is \(10\) by \(10\), and block \(A_{22}=(10+5i)I+R_{2}\) is \(10\) by \(10\). Two disks of radius \(1/\|(\xi_{1,2}I-A)^{-1}\|_{2}\), where \(\xi_{1}=4+1.5i\) and \(\xi_{2}=3+4i\), were needed to split the numerical range of \(A\) into two disjoint sets. Based on formula (12), the remaining region is a \(3+\sqrt{14}\approx 6.74\) spectral set, and using Theorem 1 directly, as described in section 3.2, we computed \(c_{1}\leq 3.20\), \(c_{2}\leq 1.73\), and \(K=4.21\). Using formula 3, the value of \(K\) was computed to be \(7.94\).
Figure 5: Matrix modeling the ecosystem in Tuesday Lake after introducing piscivores [10]. Left plot shows \(\|e^{tA}\|_{2}\) growing before decaying; right plot shows \(W(A)\) extending into the right half-plane (dashed curve) and eigenvalues (\(x\)’s) in the left half-plane.
As another example, we consider the matrix transient_demo(20) available in the eigtool package [13]. The upper left plot in Figure 6 shows the behavior of \(\|e^{tA}\|_{2}\), \(t>0\), which grows to about \(16.61\) before starting to decrease. The upper right plot shows the eigenvalues, in the left half-plane, and the numerical range, extending into the right half-plane, together with the region \(\Omega\) consisting of the part of \(W(A)\) in the left half-plane. Integrating \(|\lambda_{min}(\mu(\zeta(s),A))|\) along the segment of the imaginary axis forming the right boundary of \(\Omega\) and using Theorem 1, we determined that \(K=c_{2}+\sqrt{c_{2}^{2}+c_{1}}\approx 2c_{2}=40.13\). In this case, formula (3) gave a smaller value, \(K=27.95\). The reason for this smaller value can be seen in the lower plots of Figure 6. The large values of \(|\lambda_{min}(\mu(\zeta(s),A))|\) and of \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\) occur on the segment of the imaginary axis, and, while \(|\lambda_{min}(\mu(\zeta(s),A)|\) is always less than or equal to \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\), the difference is small. Since the value of \(K\) from Theorem 1 is approximately equal to \(2c_{2}\), which is approximately twice the integral of \(|\lambda_{min}(\mu(\zeta(s),A))|\) over this segment, and the value of \(K\) from (3) is the integral of \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\) over this segment (and over the remainder of \(\partial\Omega\), where \(\|(\zeta I-A)^{-1}\|_{2}\) is much smaller), the result is a smaller value of \(K\) from formula (3). The lower right plot shows why \(|\lambda_{min}(\mu(\zeta,A))|\) might be almost as large as \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\). It shows the numerical ranges of several of the matrices \(\frac{\zeta^{\prime}}{2\pi i}(\zeta I-A)^{-1}=\frac{1}{2\pi}(\zeta I-A)^{-1}\) for \(\zeta\) on this segment of the imaginary axis. While the smaller numerical ranges lie mostly in the right half-plane, for the larger ones, the absolute value of the real part of the leftmost point in these numerical ranges (which is \(\frac{1}{2}|\lambda_{min}(\mu(\zeta,A))|\)) is almost as large as the numerical radius. We will later see why this might be expected when \(\zeta\) is close to an ill-conditioned eigenvalue. In this example, our optimization code found a function \(B\circ\varphi\) for which \(\|B(\varphi(A))\|_{2}=21.54\), and we believe that this is the optimal value of \(K\) for this set \(\Omega\).
Using the same matrix, transient_demo(20), we computed norms of powers of \(A\) and found that they grew to about \(20.72\) before starting to decrease, as shown in the upper left plot of Figure 7. The upper right plot shows the numerical range of the matrix, which extends beyond \(\mathcal{D}(0,1)\), and the eigenvalues which all lie within \(\mathcal{D}(0,1)\). If we take \(\Omega\) to be \(W(A)\cap\mathcal{D}(0,1)\), whose boundary is the wide solid line in the upper-right plot, then we can use Theorem 1 to calculate a value of \(K\) for which \(\Omega\) is a \(K\)-spectral set. Integrating \(|\lambda_{min}(\mu(\zeta(s),A))|\) along the arc of the unit circle inside \(W(A)\), we determined that \(K=c_{2}+\sqrt{c_{2}^{2}+c_{1}}=70.44\). Again in this case, formula (3) gave a smaller value, \(K=36.03\). The reason can be seen in the lower plots of Figure 7. The large values of \(|\lambda_{min}(\mu(\zeta,A))|\) and of \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\) occur on the arc of the unit circle inside \(W(A)\), as shown in the lower left plot. In this case, \(|\lambda_{min}(\mu(\zeta(s),A)|\) is greater than \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\). The lower right plot shows why \(|\lambda_{min}(\mu(\zeta,A))|\) might be larger than \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\). It shows the numerical ranges of several of the matrices \(\frac{\zeta^{\prime}}{2\pi i}(\zeta I-A)^{-1}=\frac{e^{i\theta}}{2\pi}(\zeta I -A)^{-1}\) for \(\zeta=e^{i\theta}\) on this arc of the unit circle. For the larger numerical ranges, the absolute value of the real part of the leftmost point in these numerical ranges (which is \(\frac{1}{2}|\lambda_{min}(\mu(\zeta,A))|\)) is almost as large as the numerical radius. Again, we will give a partial explanation for this in the last section. In this example, our optimization code found a function \(B\circ\varphi\) for which \(\|B(\varphi(A))\|_{2}=21.06\), and we believe that this is the optimal value of \(K\) for this set \(\Omega\).
## 5 Further Comparisons and Some Explanations
It was noted earlier that if \(\Omega=W(A)\), then Theorem 1 yields the Crouzeix-Palencia result [5] that \(K=1+\sqrt{2}\approx 2.415\). For a normal matrix, some of the eigenvalues lie on the boundary of \(W(A)\), and if \(\Omega\) is taken to be a slightly larger set containing \(W(A)\), then as \(\Omega\to W(A)\), the integral in (3) may approach \(\infty\). Clearly, Theorem 1 provides a _much_ smaller bound on \(K\). The same holds for large Jordan blocks. For an \(n\) by \(n\) Jordan block, the numerical range is a disk about the eigenvalue of radius \(\cos(\pi/(n+1))\), and for \(z\) on the boundary of \(W(A)\), \(\|(zI-A)^{-1}\|_{2}\) increases with the matrix size. Thus, the value \(K=1+\sqrt{2}\) is _much_ smaller than the bound from (3). Table 1 shows a comparison of the bound from Theorem 1 when \(\Omega=W(A)\) with that from (3) for a variety of matrices, including the transient_demo matrix mentioned earlier and a boeing_demo matrix also from eigtool [13].
When we start applying Theorem 1 to other regions, however, it is less clear that it will offer a significant improvement over (3), as illustrated in the examples of the previous section. This was observed earlier in [3], where it was shown that \(|\lambda_{min}(\mu(\zeta,A))|\) may grow very quickly as one moves inside \(W(A)\). An example in [4] involved the Grcar matrix of size 100 (gallery('grcar',100) in MATLAB), where \(\Omega\) was taken to be \(W(A)\backslash\mathcal{D}(0,1/w(A^{-1}))\). This is pictured in Figure 8. The results in [4] show that this is a \((3+2\sqrt{3})\approx 6.46\)-spectral set for \(A\), and direct application of Theorem 1 shows that it is a \(4.35\)-spectral set, with \(c_{1}\leq 2.84\) and \(c_{2}\leq 1.85\). In contrast, evaluating \(K\) in (3) gives the result \(34.86\).
Figure 6: Matrix from the eigtool command transient_demo(20) [13]. Upper left shows \(\|e^{tA}\|_{2}\) growing before decaying; upper right shows \(W(A)\) extending into the right half-plane (dashed curve) and eigenvalues of \(A\) (x’s) in the left half-plane. Lower left shows \(|\lambda_{\min}(\mu(\zeta),A))|\) and \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\) for \(\zeta\) on the segment of the imaginary axis forming the right boundary of \(\Omega\). Lower right shows numerical ranges of several of the matrices \(\frac{1}{2\pi}(\zeta I-A)^{-1}\) for \(\zeta\) on this segment of the imaginary axis; for the larger numerical ranges, the absolute value of the minimal real part, which is \(\frac{1}{2}|\lambda_{min}(\mu(\zeta,A))|\), is almost as large as the numerical radius, explaining why \(|\lambda_{min}(\mu(\zeta,A))|\) is of the same order of magnitude as \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\).
Using a somewhat smaller size Grcar matrix (\(n=32\)), we computed the \(\epsilon\)-pseudospectrum for \(\epsilon=10^{-3}\). This has multiple components, as pictured in Figure 9. For this region we also computed a value of \(K\) from Theorem 1 and found that \(c_{1}\leq 3.02\), \(c_{2}\leq 2.10\times 10^{3}\), and \(K=4.20\times 10^{3}\). Evaluating \(K\) in (3), however, gives the smaller value \(K=2.12\times 10^{3}\). As in some of the examples of section 4, the value of \(K\) from Theorem 1 is almost twice that from (3).
In summary, while for some matrices \(A\) and some regions \(\Omega\) (like \(\Omega=W(A)\)), Theorem 1 provides _much_ smaller \(K\) values than (3), for other regions, the \(K\) value from Theorem 1 may be slightly larger or about the same size as that in (3). This most often happens when the eigenvalues of the matrix are ill-conditioned and the boundary of \(\Omega\) comes close to some eigenvalues. Unfortunately, the boundary of \(\Omega\) is near ill-conditioned eigenvalues for many matrices and regions of interest in applications.
To see why this might lead to larger \(K\) values in Theorem 1, we will argue that in these cases, the numerical range of \((\zeta I-A)^{-1}\) is close to a disk about a value whose distance from the origin is much less than the radius of the disk so that every point on the boundary of this disk has absolute value close to the numerical radius of \((\zeta I-A)^{-1}\). First note that if \(x\) and \(y\) are two unit vectors that are orthogonal to each other, then the rank one matrix \(xy^{*}\) has numerical range equal to a disk about the origin of radius \(\frac{1}{2}\). To see this, consider a unitary similarity transformation \(Q^{*}xy^{*}Q\), where the columns of \(Q\) are \([x,y,q_{3},\ldots,q_{n}]\). The matrix \(Q^{*}xy^{*}Q\) is the direct sum of a 2 by 2 Jordan block with eigenvalue 0 and an \(n-2\) by \(n-2\) block of zeros, and the numerical range of this matrix is a disk about the origin of radius \(\frac{1}{2}\).
Let \(\lambda\) be a simple eigenvalue of \(A\) and suppose that a point \(\zeta\in\partial\Omega\) is _very_ close to \(\lambda\). Then
Figure 7: Matrix from the eigtool command transient_demo(20) [13]. Upper left shows \(\|A^{k}\|_{2}\) growing before decaying; upper right shows \(W(A)\) extending beyond \(\mathcal{D}(0,1)\) (dashed curve) and eigenvalues of \(A\) (x’s) in the unit disk. Lower left shows \(|\lambda_{\min}(\mu(\zeta,A))|\) and \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\) for \(\zeta\) on the arc of the unit circle inside \(W(A)\). Lower right shows numerical ranges of several of the matrices \(\frac{e^{i\theta}}{2\pi}(\zeta I-A)^{-1}\) for \(\zeta=e^{i\theta}\) on this arc of the unit circle; for the larger numerical ranges, the absolute value of the minimal real part, which is \(\frac{1}{2}|\lambda_{min}(\mu(\zeta,A))|\), is almost as large as the numerical radius, explaining why \(|\lambda_{min}(\mu(\zeta,A))|\) is of the same order of magnitude as \(\frac{1}{2\pi}\|(\zeta I-A)^{-1}\|_{2}\).
the resolvent \((\zeta I-A)^{-1}\) is close to the rank one matrix
\[\frac{1}{\zeta-\lambda}\frac{xy^{*}}{y^{*}x}, \tag{14}\]
where \(x\) and \(y\) are normalized right and left eigenvectors, respectively, corresponding to the eigenvalue \(\lambda\): \(Ax=\lambda x\), \(y^{*}A=\lambda y^{*}\). The condition number of \(\lambda\) is defined as \(1/|y^{*}x|\), and if \(\lambda\) is ill-conditioned this means that \(|y^{*}x|\) is tiny. Let \(q_{1}=(x-\frac{1}{2}(y^{*}x)y)/\|x-\frac{1}{2}(y^{*}x)y\|\), \(q_{2}=(y-\frac{1}{2}(x^{*}y)x)/\|y-\frac{1}{2}(x^{*}y)x\|\), and let \(q_{3},\ldots,q_{n}\) be any orthonormal vectors that are orthogonal to \(q_{1}\) and \(q_{2}\). Note that if \(|y^{*}x|\) is small, then \(q_{1}\) is almost orthogonal to \(q_{2}\):
\[q_{2}^{*}q_{1}=\frac{\frac{1}{4}(y^{*}x)|y^{*}x|^{2}}{1-\frac{3}{4}|y^{*}x|^{2 }}.\]
The rank one matrix in (14) is therefore very nearly unitarily similar to the direct sum of the following 2 by 2 matrix and an \(n-2\) by \(n-2\) block of zeros:
\[\frac{1}{(\zeta-\lambda)(y^{*}x)}\left[\begin{array}{cc}(q_{1}^{*}x)(y^{*}q _{1})&(q_{1}^{*}x)(y^{*}q_{2})\\ (q_{2}^{*}x)(y^{*}q_{1})&(q_{2}^{*}x)(y^{*}q_{2})\end{array}\right]=\frac{1}{ (\zeta-\lambda)(y^{*}x)}\left[\begin{array}{cc}\frac{1}{2}(y^{*}x)&1\\ 0&\frac{1}{2}(y^{*}x)\end{array}\right]+O(|y^{*}x|^{2}).\]
Ignoring the \(O(|y^{*}x|^{2})\) terms, the numerical range of this matrix is \(\frac{1}{(\zeta-\lambda)(y^{*}x)}\) times a disk of radius \(\frac{1}{2}\) about \(\frac{1}{2}y^{*}x\).
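This effect is easy to reproduce numerically. In the sketch below (Python/NumPy; the 2 by 2 matrix and the point \(\zeta\) are our own illustrative choices), the eigenvalue \(\lambda=0\) has condition number about 2000, and for \(\zeta\) near it the smallest real part of points in \(W((\zeta I-A)^{-1})\) is almost as large in absolute value as the numerical radius.

```python
# Illustrative numerical check: near an ill-conditioned eigenvalue, W of the
# resolvent is nearly a disk whose radius dwarfs the modulus of its center.
import numpy as np

A = np.array([[0.0, 1.0e3], [0.0, 0.5]])          # eigenvalue 0 is ill-conditioned
zeta = 0.01                                       # a point close to lambda = 0
R = np.linalg.inv(zeta * np.eye(2) - A)

herm = lambda M: (M + M.conj().T) / 2
min_re = np.linalg.eigvalsh(herm(R)).min()        # smallest real part of W(R)
w_R = max(np.linalg.eigvalsh(herm(np.exp(-1j * t) * R)).max()
          for t in np.linspace(0, 2 * np.pi, 720, endpoint=False))
print(abs(min_re) / w_R)                          # close to 1 for this example
```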
If the boundary of \(\Omega\) comes only _fairly_ close to an ill-conditioned eigenvalue, as in the examples of section 4, the matrix in (14) may not be close to \((\zeta I-A)^{-1}\) because the other nearby eigenvalues still have an effect. The closest rank one matrix to \((\zeta I-A)^{-1}\) is \(\sigma_{1}u_{1}v_{1}^{*}\), where \(\sigma_{1}\) is the largest singular value of \((\zeta I-A)^{-1}\) and \(u_{1}\) and \(v_{1}\) are the associated left and right singular vectors, respectively. In this case, if \(u_{1}\) and \(v_{1}\) are almost orthogonal to each other, then the same argument shows that if \((\zeta I-A)^{-1}\approx\sigma_{1}u_{1}v_{1}^{*}\), then the numerical range of
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline matrix & \(n\) & \(\kappa(V)\) & \(W(A)\) & \(\frac{1}{2\pi}\int_{\partial W(A)}\|R\|_{2}\) & CP \\ \hline normal & \(\infty\) & 1 & \(\mathrm{conv}(\mathrm{Sp}(A))\) & \(\infty\) & 2.415 \\ \hline normal + E, & 50 & 1.01 & \(\sim\)\(\mathrm{conv}(\mathrm{Sp}(A))\) & 44.2 & 2.415 \\ \(\|E\|=1.e-3\) & & & & & \\ \hline \hline randn(n)/\(\sqrt{n}\) & 50 & 75.3 & \(\sim\mathcal{D}(0,\sqrt{2})\) & 3.72 & 2.415 \\ \hline transient & 50 & 6.5e+6 & \(\sim\mathcal{D}(-.5,.78)\) & 13.83 & 2.415 \\ demo & & & & & \\ \hline boeing & 55 & 9.5e+6 & \(\sim\mathcal{D}(-433,\) & 2.415 & 2.415 \\ demo & & & \(8.5e+6)\) & & \\ \hline random complex & 50 & 1.3e+12 & \(\sim\mathcal{D}(0,9.2)\) & 3.60 & 2.415 \\ triangular & & & & & \\ \hline \hline Jordan block & 50 & \(\infty\) & \(\mathcal{D}(0,0.998)\) & 33.41 & 2.415 \\ \hline Jordan block & \(\infty\) & \(\infty\) & \(\mathcal{D}(0,1)\) & \(\infty\) & 2.415 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of \(K\) value from Theorem 1 when \(\Omega=W(A)\) with that from (3). Column 2 shows the size of the matrix and column 3 gives the condition number of a matrix of normalized eigenvectors. Column 4 gives an exact or approximate (\(\sim\)) description of \(W(A)\). Column 5 contains the value of \(K\) from (3) and column 6 contains the Crouzeix-Palencia bound (i.e., the bound from Theorem 1) on \(K\).
\((\zeta I-A)^{-1}\) is approximately equal to \(\sigma_{1}\) times a disk of radius \(\frac{1}{2}\) about \(\frac{1}{2}v_{1}^{*}u_{1}\). Again, the radius is much larger than the absolute value of the center, so all points on the boundary of this disk have absolute value close to the numerical radius.
To see that the right and left singular vectors corresponding to the largest singular value of \((\zeta I-A)^{-1}\) are almost orthogonal to each other when \(\zeta\) is close to a simple but ill-conditioned eigenvalue \(\lambda\) of \(A\), we can use a theorem of Stewart [11]. First note that these are the left and right singular vectors corresponding to the _smallest_ singular value of \(\zeta I-A\). Let us start with the matrix \(\lambda I-A\), which has a null space of dimension one. The normalized right and left eigenvectors, \(x\) and \(y\), corresponding to the eigenvalue \(\lambda\) of \(A\) satisfy \((\lambda I-A)x=0\) and \((\lambda I-A)^{*}y=0\). It follows that these are right and left singular vectors of \(\lambda I-A\) corresponding to the smallest singular value, \(0\). Write the SVD of \(\lambda I-A\) as \(Y\Sigma X^{*}\), where \(X=[x,X_{2}]\) and \(Y=[y,Y_{2}]\), and we have put the smallest singular value first. Define \(E:=(\zeta-\lambda)I\) so that
Figure 8: Matrix from MATLAB command gallery('grcar',100). Eigenvalues (dots) and \(W(A)\) and \(\mathcal{D}(0,1/w(A^{-1}))\) (dashed curves) and \(W(A)\backslash\mathcal{D}(0,1/w(A^{-1}))\) (thick solid curve). It was shown in [4] that this is a \((3+2\sqrt{3})\approx 6.46\)-spectral set for \(A\). Direct application of Theorem 1 shows that it is a \(4.38\)-spectral set for \(A\), and the value of \(K\) from (3) is \(34.85\).
Figure 9: Matrix from MATLAB command gallery('grcar',32). Eigenvalues (dots) and components of the \(10^{-3}\)-pseudospectrum (solid curves). Direct application of Theorem 1 shows that this is a \(4.20\times 10^{3}\)-spectral set for \(A\), but the value of \(K\) from (3) is \(2.12\times 10^{3}\).
\((\lambda I-A)+E=\zeta I-A\). Define
\[\gamma:=\left\|\left[\begin{array}{c}Y_{2}^{*}Ex\\ X_{2}^{*}E^{*}y\end{array}\right]\right\|_{F}=\left\|\left[\begin{array}{c}(\zeta-\lambda)Y_{2}^{*}x\\ (\bar{\zeta}-\bar{\lambda})X_{2}^{*}y\end{array}\right]\right\|_{F}\leq\sqrt{2}\ |\zeta-\lambda|,\]
\[\delta := \sigma_{n-1}(\lambda I-A)-\|y^{*}Ex\|_{2}-\|Y_{2}^{*}EX_{2}\|_{2}\] \[= \sigma_{n-1}(\lambda I-A)-|\zeta-\lambda|\,(|y^{*}x|+\|Y_{2}^{*}X _{2}\|_{2})\] \[\geq \sigma_{n-1}(\lambda I-A)-|\zeta-\lambda|(1+|y^{*}x|),\]
where \(\sigma_{n-1}(\lambda I-A)\) is the second smallest singular value of \(\lambda I-A\). Assuming that \(\gamma/\delta<1/2\), it is shown in [11, Theorem 6.4] that there are vectors \(p\) and \(q\) satisfying
\[\left\|\left[\begin{array}{c}p\\ q\end{array}\right]\right\|_{F}<2\frac{\gamma}{\delta}\]
such that \(x+X_{2}p\) and \(y+Y_{2}q\) are (multiples of) right and left singular vectors of \((\lambda I-A)+E=\zeta I-A\), corresponding to the smallest singular value; i.e., they are left and right singular vectors of \((\zeta I-A)^{-1}\), corresponding to the largest singular value. It follows that if \(x\) and \(y\) are almost orthogonal to each other and if \(\|p\|_{2}\) and \(\|q\|_{2}\) are small, then the singular vectors \(u_{1}\) and \(v_{1}\) corresponding to the largest singular value of \((\zeta I-A)^{-1}\) are almost orthogonal to each other:
\[\left|\frac{(x+X_{2}p)^{*}(y+Y_{2}q)}{\|x+X_{2}p\|_{2}\|y+Y_{2}q\|_{2}}\right| =\frac{|x^{*}y+x^{*}Y_{2}q+p^{*}X_{2}^{*}y+p^{*}X_{2}^{*}Y_{2}q|}{\|x+X_{2}p\|_ {2}\|y+Y_{2}q\|_{2}}\leq\frac{|x^{*}y|+\|q\|_{2}+\|p\|_{2}+\|p\|_{2}\|q\|_{2}}{ \sqrt{(1-\|p\|_{2}^{2})(1-\|q\|_{2}^{2})}}.\]
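The near-orthogonality described here can also be observed directly on a small example. The following sketch is illustrative only: the 2 by 2 matrix is a stand-in with an ill-conditioned simple eigenvalue, not one of the matrices discussed above. It compares \(|y^{*}x|\) with the inner product of the singular vectors of \(\zeta I-A\) associated with its smallest singular value, i.e. the singular vectors associated with the largest singular value of \((\zeta I-A)^{-1}\).

```python
import numpy as np

# A 2x2 matrix with an ill-conditioned simple eigenvalue lambda = 0.
M, d = 1.0e3, 1.0e-1
A = np.array([[0.0, M],
              [0.0, d]])
lam = 0.0
x = np.array([1.0, 0.0])                 # right eigenvector, A x = 0
y = np.array([d, -M]) / np.hypot(d, M)   # left eigenvector, y^* A = 0
print("|y^*x| =", abs(y @ x))            # tiny => ill-conditioned eigenvalue

zeta = lam + 1e-3                        # a point close to lambda
U, s, Vh = np.linalg.svd(zeta * np.eye(2) - A)
u1, v1 = U[:, -1], Vh[-1, :]             # singular vectors for the smallest
                                         # singular value of zeta*I - A
print("sigma_min =", s[-1])
print("|v1^* u1| =", abs(v1 @ u1))       # almost orthogonal, comparable to |y^*x|
```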
|
2308.10605 | Traversable wormholes in $f(R)$ gravity sourced by a cloud of strings | Wormhole solutions in General Relativity (GR) require \textit{exotic} matter
sources that violate the null energy condition (NEC), and it is well known that
higher-order modifications of GR and some alternative matter sources can
support wormholes. In this study, we explore the possibility of formulating
traversable wormholes in $f(R)$ modified gravity, which is perhaps the most
widely discussed modification of GR, with two approaches. First, to investigate
the effects of geometrical constraints on the global characteristics, we gauge
the $rr$-component of the metric tensor, and employ Pad\`{e} approximation to
check whether a well-constrained \textit{shape function} can be formulated in
this manner. We then derive the field equations with a background of string
cloud, and numerically analyse the energy conditions, stability, and amount of
exotic matter in this space-time. Next, as an alternative source in a simple
$f(R)$ gravity model, we use the background cloud of strings to estimate the
wormhole shape function, and analyse the relevant properties of the space-time.
These results are then compared with those of wormholes threaded by normal
matter in the simple $f(R)$ gravity model considered. The results demonstrate
that wormholes with NEC violations are feasible; however, the wormhole
space-times in the simple $f(R)$ gravity model are unstable. | Parangam Goswami, Anshuman Baruah, Atri Deshamukhya | 2023-08-21T10:02:31Z | http://arxiv.org/abs/2308.10605v1 | # Traversable wormholes in \(f(R)\) gravity sourced by a cloud of strings
###### Abstract
Wormhole solutions in General Relativity (GR) require _exotic_ matter sources that violate the null energy condition (NEC), and it is well known that higher-order modifications of GR and some alternative matter sources can support wormholes. In this study, we explore the possibility of formulating traversable wormholes in \(f(R)\) modified gravity, which is perhaps the most widely discussed modification of GR, with two approaches. First, to investigate the effects of geometrical constraints on the global characteristics, we gauge the \(rr\)-component of the metric tensor, and employ Pade approximation to check whether a well-constrained _shape function_ can be formulated in this manner. We then derive the field equations with a background of string cloud, and numerically analyse the energy conditions, stability, and amount of exotic matter in this space-time. Next, as an alternative source in a simple \(f(R)\) gravity model, we use the background cloud of strings to estimate the wormhole shape function, and analyse the relevant properties of the space-time. These results are then compared with those of wormholes threaded by normal matter in the simple \(f(R)\) gravity model considered. The results demonstrate that wormholes with NEC violations are feasible; however, the wormhole space-times in the simple \(f(R)\) gravity model are unstable.
_Keywords--_ Wormhole, string cloud, modified gravity, energy conditions
###### Contents
* 1 Introduction
* 2 Traversable wormholes
* 2.1 A novel shape function
* 3 Traversable wormholes in \(f(R)\) gravity
* 3.1 Cloud of Strings as a source
* 3.2 Traversable wormholes in \(f(R)=\alpha R^{m}-\beta R^{n}\) gravity with a string cloud background
* 3.3 Traversable wormholes in \(f(R)=\alpha R^{m}-\beta R^{n}\) gravity with ordinary matter
* 4 Discussion and Conclusion
## 1 Introduction
The governing equation of the theory of General Relativity (GR) is the Einstein field equation (EFE), and solutions to this set of coupled differential equations have been remarkably successful in accounting for phenomena ranging from black holes [1] to the evolution of the universe. Spherical symmetry is an important physically relevant constraint in solving the EFEs, and wormholes are an intriguing prospect among spherically symmetric solutions, explored since the early days of GR [2, 3]. Ellis [4] and Bronnikov [5] independently reported the first traversable Lorentzian wormhole solution, and the geometric requirements were established in detail by Morris & Thorne in 1988 [6]. Wormholes are exact solutions to the EFEs, and can be interpreted as bridges connecting two different asymptotically flat regions of space-time via a _throat_. The topology of the wormhole interior is non-trivial, while the topology at the boundaries remains simple [7]. While theoretical as of now, wormholes are of crucial importance in fundamental physics, especially considering quantum entanglement [8] and quantum gravity. Moreover, it has recently been shown that wormholes may actually mimic astrophysical black holes in some observations [57]. A fundamental problem in such space-times is that for the _throat_ to remain open for signal propagation, the behavior of the matter sources supporting such a space-time is _exotic_ in that the null energy condition (NEC) is violated [6]. Specifically, this implies that observers in an inertial frame measure negative energy densities at the throat, which is unphysical. Such behaviour can be avoided in modified gravity theories.
Despite its success, GR fails to explain cosmological phenomena such as late-time accelerated cosmic expansion [9, 10] and the so called 'inflationary epoch' [11]. Modified theories of gravity address these shortcomings [12, 13, 14], and the extra degrees of freedom in modified gravity theories also enables one to evade NEC violations in wormhole space-times. Moreover, wormhole solutions are an inherent feature of most modifications of GR. One of the simplest modifications to GR is \(f(R)\) modified gravity, where the Ricci scalar \(R\) in the Einstein-Hilbert action is replaced by some arbitrary function of it [15]. This simple modification to GR can lead to a host of models that can independently meet both cosmological and solar system tests [16]. It has been shown that the extra degrees of freedom in \(f(R)\) gravity arising from higher order curvature terms may lead to scenarios where the matter content satisfies the NEC in wormhole space-times [17, 18]. Wormholes in the framework of \(f(R)\) gravity have been studied extensively in different iterations of \(f(R)\) gravity (for instance, in Refs. [19, 20, 21, 22, 23, 24, 25, 26, 27, 28]).
Another inherent limitation of GR is that it provides only a classical description of gravity, and the theory is non-renormalisable at high energy (small length) scales. String theory [29] is perhaps the strongest contender for a unified description of gravity and the other fundamental forces, and it posits that the fundamental constituents of matter and energy are extended objects, instead of point-like ones. Precisely, the extended objects are considered as one-dimensional relativistic strings, and the interactions of strings on a classical level provide better models of several fundamental interactions [30, 31, 32]. To this end, string clouds as a gravitational source have been studied extensively in literature. A general solution to the EFEs for a spherically symmetric cloud of strings was first reported in [33], with an emphasis on the energy conditions. Properties of compact objects such as black holes in the background of a string cloud have been reported previously [34, 35, 36]. Traversable wormholes in the background of a string cloud have been reported [37], with an emphasis on the amount of exotic matter and stability of the wormhole configuration against radial perturbations. Recently, a detailed study of the properties of traversable wormholes surrounded by a string cloud in the framework of \(f(R)\) gravity has been reported with analysis of the quasi-normal modes (QNMs) of the wormhole solutions [38].
In this study, we investigate traversable wormholes in \(f(R)\) gravity with two motivations: First, to investigate the effects of geometrical constraints, we gauge the \(rr\)-component of the metric tensor, and employ Pade approximation to check whether a well-constrained _shape function_ can be formulated in this manner. We then derive the field equations with the background of a string cloud, and numerically analyse the energy conditions, stability, and amount of exotic matter in this space-time. Next, we use a background of string cloud to estimate the wormhole shape function in a simple \(f(R)\) gravity model, with \(f(R)=\alpha R^{m}-\beta R^{n}\)[39], and analyse the relevant properties of the space-time as in the previous case. These results are then compared with those of wormholes threaded by normal matter in the same modified gravity model considered.
The remainder of this manuscript is organised as follows. In Sec. 2, we discuss the traversable wormhole geometry. A novel shape function is proposed using Pade approximation in Sec. 2.1. In Sec. 3, we present the modified EFEs in the general framework of \(f(R)\) gravity. In Sec. 3.1 we analyse the various energy conditions, the stability in terms of TOV equation, and the amount of exotic matter required to sustain traversible wormholes in the framework of \(f(R)\) gravity with a background of string cloud. Next, in Sec. 3.2 we estimate the shape function in the considered form of \(f(R)=\alpha R^{m}-\beta R^{n}\) gravity model [39] with a string cloud background and analyse the properties of the wormhole space-time. For a comparative analysis, in Sec. 3.3 we present wormhole solutions supported by ordinary matter for the \(f(R)\) model [39] considered in Sec. 3.2 using our proposed shape function obtained by employing Pade approximation in Sec. 2.1. Finally, we conclude the work with some remarks in Sec. 4. We adhere to use the natural system of units (\(G=c=1\)) throughout the work.
## 2 Traversable wormholes
Morris & Thorne [6] used the following metric ansatz to describe a static, spherically symmetric space-time
\[ds^{2}=-e^{2\Phi(r)}dt^{2}+\frac{dr^{2}}{1-\frac{b(r)}{r}}+r^{2}d\theta^{2}+r^{2 }sin^{2}\theta d\phi^{2} \tag{1}\]
Eq. (1) is the line element of a traversable wormhole. The proper radial coordinate \(l(r)=\int_{r_{0}}^{r}\frac{dr}{\sqrt{1-\frac{b}{r}}}\) should be well behaved throughout the space-time in order to avoid singularities. It imposes the constraint \(\frac{b}{r}\leq 1\) at the throat. This space-time is a special case of the Ellis-Bronnikov [4, 5] space-time described in terms of \(l(r)\) as
\[ds^{2}=-dt^{2}e^{2\Phi(l)}+dl^{2}+r^{2}(l)(d\theta^{2}+r^{2}sin^{2}\theta d\phi ^{2}) \tag{2}\]
where the throat is located at some minimum \(r(l)\).
The metric function \(\Phi(r)\) in Eq. (1) is known as the red-shift function, and the first coefficient of the line element in Eq. (1) provides a measure of the gravitational red-shift. The topological configuration of the space-time is determined by the second coefficient of the line element in Eq. (1), and the metric function \(b(r)\) is known as the shape function. At some minimum value of the radial co-ordinate \(r\), the throat of the wormhole is located at some arbitrary value \(r_{0}\). A significant aspect of traversable wormholes is that the throat should not be surrounded by an event horizon. Horizons in spherically symmetric space-times are described by physically non-singular surfaces at \(g_{00}=-e^{2\Phi}\to 0\), and this results in the constraint that throughout the space-time \(\Phi(r)\) should be well defined. Moreover, the geometric constraints on the shape function \(b(r)\) demanded by traversability are: (i) \(b(r_{o})=r_{o}\), (ii) \(\frac{b(r)-b^{\prime}(r)r}{b^{2}}>0\), (iii) \(b^{\prime}(r_{o})-1\leq 0\), (iv) \(\frac{b(r)}{r}<1,\forall r>r_{o}\), (v) \(\frac{b(r)}{r}\to 0\) as \(r\rightarrow\infty\), where prime denotes a derivative with respect to the radial co-ordinate \(r\). The energy density \(\rho\), radial pressure \(p_{r}\), and transverse pressure \(p_{t}\) of the matter sources are constrained by these conditions on the metric functions through the EFEs. Therefore, while constructing traversable wormhole configurations violations of the energy conditions appear owing to these constraints. Next, we propose a novel form the shape function and check the viability of the shape function by analysing the various constraints. In addition, in this work we consider tideless traversable wormhole solutions described by a constant red-shift function \(\Phi^{\prime}(r)=0\).
### A novel shape function
We examine the following functional form as a probable shape function
\[b(r)=r_{0}\left[\log\left(\frac{r}{r_{0}}\right)+\coth(r_{0})\tanh(r)\right] ^{a} \tag{3}\]
where \(r_{0}\) is the throat radius and \(a\) is a free parameter. The viability of the shape function \(b(r)\) depends on several constraints, as discussed in Sec. 2. Analysing these constraints for the functional form of Eq. (3), insights can be obtained about the throat radius \(r_{0}\) and the parameter \(a\). However, it can be observed that Eq. (3) does not satisfy the condition \(b(r_{0})=r_{0}\), which is an essential requirement for a traversable wormhole; the other conditions required for a viable shape function, such as asymptotic flatness and the flaring-out condition, are also not satisfied. Thus, in order to obtain a plausible form of \(b(r)\), the next step is to adopt the Pade approximation for the functional form in Eq. (3) and check whether a viable form of \(b(r)\) can be obtained that satisfies all the required constraints on a shape function. The use of rational expansions based on Pade approximants is common in the existing literature [40, 41, 42, 43]. The Pade approximation is built on the Taylor series expansion. For a given function \(f(z)=\sum_{i=1}^{\infty}c_{i}z^{i}\), expanding the series with coefficients \(c_{i}\), the \((n,m)\) Pade approximant is given as [44],
\[P_{n,m}(z)=\frac{\sum_{k=0}^{n}a_{k}z^{k}}{1+\sum_{\sigma=0}^{m}b_{\sigma}z^{ \sigma}} \tag{4}\]
The simplest forms of the Pade approximant are those of orders \((1,0)\) and \((0,1)\). Thus, considering the \((1,0)\) approximation and expanding Eq. (3) about the throat radius \(r_{0}\), the shape function is
\[b(r)=r_{0}-a(r_{0}-r)\left[1+r_{0}\ \text{cosech}(r_{0})\text{sech}(r_{0})\right] \tag{5}\]
With the form described in Eq. (5), the various conditions required for viability are analysed, and it becomes evident that the function in Eq. (5) satisfies all the necessary conditions to be a shape function for \(r_{0}=3\) and \(0<a<1\). Moreover, the function satisfies the throat condition \(b(r_{0})=r_{0}\). We therefore propose \(b(r)=r_{0}-a(r_{0}-r)\left[1+r_{0}\ \text{cosech}(r_{0})\text{sech}(r_{0})\right]\) as a new viable shape function.
The plots of the asymptotic flatness \(b(r)/r\to 0\) as \(r\rightarrow\infty\), and flaring out condition \(\frac{b(r)-b^{\prime}(r)r}{b^{2}}>0\) are shown in Figure 1.
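These checks are straightforward to reproduce. The following short Python sketch is an illustration added here (it is not the computation used to produce Figure 1); it evaluates Eq. (5) for the values \(r_{0}=3\) and \(a=0.5\) used below, printing the throat value, \(b^{\prime}(r_{0})\), and the flaring-out combination at a few radii.

```python
import math

r0, a = 3.0, 0.5                                  # throat radius and parameter from the text
C = 1.0 + r0 / (math.sinh(r0) * math.cosh(r0))    # 1 + r0*cosech(r0)*sech(r0)

b = lambda r: r0 - a * (r0 - r) * C               # Eq. (5)
db = a * C                                        # b'(r), constant for this linear b

print(b(r0))   # throat condition: b(r0) = r0
print(db)      # b'(r0) ~ 0.51, so b'(r0) - 1 <= 0

for r in (3.0, 5.0, 10.0, 50.0):
    flare = (b(r) - db * r) / b(r) ** 2           # flaring-out combination of Sec. 2
    print(f"r={r:5.1f}  b/r={b(r)/r:.3f}  flaring-out={flare:.3f}")
```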
With the obtained viable shape function, next the traversable wormhole solutions are analysed in the framework of \(f(R)\) gravity with a background of string cloud.
## 3 Traversable wormholes in \(f(R)\) gravity
A general form of the action in \(f(R)\) modified theories is [19]
\[S=\int d^{4}x\sqrt{-g}\left[f(R)+\mathcal{L}_{m}\right] \tag{6}\]
Using the metric formalism of \(f(R)\) gravity, the modified EFEs are obtained as
\[FR_{\mu\nu}-\frac{1}{2}f(R)\,g_{\mu\nu}-\nabla_{\mu}\nabla_{\nu}F+g_{\mu\nu} \square F=\,T^{m}_{\mu\nu}\,, \tag{7}\]
where \(F\equiv df/dR\), and \(T^{m}_{\mu\nu}=\frac{-2}{\sqrt{-g}}\frac{\delta\mathcal{L}_{m}}{\delta g^{\mu \nu}}\) is the stress-energy tensor of background matter. We consider that the spherically symmetric space-time is represented by the line-element in Eq. (1). Contracting Eq. (7) yields:
\[FR-2f(R)+3\square F=T \tag{8}\]
Here, \(T\) is the trace of the stress-energy tensor of matter and \(\square F\) is:
\[\square F=\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}g^{\mu\nu}\partial_{\nu} F)=\left(1-\frac{b}{r}\right)\left[F^{\prime\prime}-\frac{b^{\prime}r-b}{2r^{2}(1- b/r)}\,F^{\prime}+\frac{2F^{\prime}}{r}\right] \tag{9}\]
with \(F^{\prime}=dF/dr\) and \(b^{\prime}=d\,b(r)/dr\). Substituting Eq. (8) in Eq. (7) yields the modified EFEs as:
\[G_{\mu\nu}\equiv R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=T^{\rm eff}_{\mu\nu} \tag{10}\]
Here, \(T^{\rm eff}_{\mu\nu}\) is the effective stress-energy tensor, responsible for the energy condition violations and is generally interpreted as a gravitational fluid. \(T^{\rm eff}_{\mu\nu}\) comprises of the matter stress energy tensor \(T^{m}_{\mu\nu}\) and the curvature stress-energy tensor \(T^{c}_{\mu\nu}\) given by
\[T^{c}_{\mu\nu}=\frac{1}{F}\left[\nabla_{\mu}\nabla_{\nu}F-\frac{1}{4}g_{\mu \nu}\left(RF+\square F+T\right)\right] \tag{11}\]
Assuming that the geometry of the wormhole is threaded by an anisotropic distribution of matter
\[T_{\mu\nu}=(\rho+p_{t})U_{\mu}\,U_{\nu}+p_{t}\,g_{\mu\nu}+(p_{r}-p_{t})\chi_{ \mu}\chi_{\nu}\,, \tag{12}\]
where \(U^{\mu}\) is the four-velocity, and \(\chi^{\mu}\) represents a unit space-like vector.
With the line element in Eq. (1), the modified EFEs can be expressed as the following [19]
\[\rho=\frac{Fb^{\prime}}{r^{2}} \tag{13}\]
\[p_{r}=-\frac{bF}{r^{3}}+\frac{F^{\prime}}{2r^{2}}(b^{\prime}r-b)-F^{\prime\prime} \left(1-\frac{b}{r}\right) \tag{14}\]
\[p_{t}=-\frac{F^{\prime}}{r}\left(1-\frac{b}{r}\right)+\frac{F}{2r^{3}}(b-b^{ \prime}r) \tag{15}\]
The Ricci scalar is given as \(R=\frac{2b^{\prime}}{r^{2}}\). With the explicit forms of energy density \(\rho\), radial pressure \(p_{r}\), and transverse pressure \(p_{t}\) in Eqs. (13)-(15), the various energy conditions, stability and the amount of exotic matter required for the wormhole configuration can be analysed.
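For reference, Eqs. (13)-(15) are simple to encode symbolically. The sketch below is an illustration (the helper name is ours, and sympy is assumed); it returns \(\rho\), \(p_{r}\), \(p_{t}\) and the Ricci scalar for any choice of \(b(r)\) and \(F(r)\), under the constant red-shift assumption used above.

```python
import sympy as sp

r = sp.symbols('r', positive=True)

def wormhole_fluid(b, F):
    """Density, pressures and Ricci scalar of Eqs. (13)-(15) and R = 2 b'/r^2
    for given b(r) and F(r), with a constant redshift function."""
    db, dF = sp.diff(b, r), sp.diff(F, r)
    rho = F * db / r**2
    p_r = -b * F / r**3 + dF * (db * r - b) / (2 * r**2) - sp.diff(F, r, 2) * (1 - b / r)
    p_t = -dF / r * (1 - b / r) + F * (b - db * r) / (2 * r**3)
    ricci = 2 * db / r**2
    return [sp.simplify(e) for e in (rho, p_r, p_t, ricci)]

# For any constant F the F' and F'' terms drop out; e.g. with F = 1 the
# expressions reduce to rho = b'/r^2, p_r = -b/r^3, p_t = (b - b' r)/(2 r^3).
b_generic = sp.Function('b')(r)
print(wormhole_fluid(b_generic, sp.Integer(1)))
```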
### Cloud of Strings as a source
A cloud of strings is analogous to the perfect fluid models of gas and dust. However, the difference is that it comprises one-dimensional objects extended along some specific direction. The string cloud can exist in different geometrical configurations such as planar, axisymmetric, or spherical [45]. A general solution for the spherical distribution of the cloud of strings was reported in [45]. In addition, thermodynamic properties of string gas has been reported in [46]. For a spherically symmetric string cloud in four-dimensions, the energy momentum tensor has the following non-null components [45]
\[T^{t}{}_{t}=T^{r}{}_{r}=-\frac{\eta^{2}}{r^{2}} \tag{16}\]
where \(\eta\) is a constant related to the total energy of the string cloud. With the form of the shape function as described in Eq. (5), and using Eq. (13), we get
\[F(r)=-\frac{\eta^{2}}{a\left(r_{0}\ \text{cosech}(r_{0})\ \text{sech}\ (r_{0})+1\right)} \tag{17}\]
With the obtained \(F(r)\) and the field equations Eqs. (13)-(15), the numerical analyses are conducted to obtain the energy conditions, check the stability, and estimate the amount of exotic matter.
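As a cross-check of the profiles reported below, note that the constant \(F\) of Eq. (17) makes the derivative terms of Eqs. (14)-(15) vanish, so the energy-condition combinations can be tabulated in a few lines. The sketch below is our illustration (not the authors' code), using the values \(r_{0}=3\), \(a=0.5\) and \(\eta=0.5\) quoted in the text.

```python
import numpy as np

r0, a, eta = 3.0, 0.5, 0.5
C = 1.0 + r0 / (np.sinh(r0) * np.cosh(r0))   # 1 + r0*cosech(r0)*sech(r0)
F = -eta**2 / (a * C)                        # Eq. (17): a negative constant

b = lambda r: r0 - a * (r0 - r) * C          # Eq. (5)
db = a * C                                   # b'(r)

# With constant F, the F' and F'' terms of Eqs. (14)-(15) vanish.
rho = lambda r: F * db / r**2
p_r = lambda r: -b(r) * F / r**3
p_t = lambda r: F * (b(r) - db * r) / (2 * r**3)

for r in (3.0, 4.0, 6.0, 10.0):
    print(f"r={r:4.1f}  rho={rho(r):+.4f}  rho+p_r={rho(r)+p_r(r):+.4f}"
          f"  rho+p_t={rho(r)+p_t(r):+.4f}")
# At the throat: rho < 0 (WEC violated), rho+p_r > 0, rho+p_t < 0,
# consistent with the signs reported for Figures 2 and 3.
```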
For any observer traversing a time-like curve to detect the energy density of the matter field to be positive, the stress-energy tensor of matter must adhere to some sets of inequalities known as the energy conditions in GR [47, 18]. The weak energy condition (WEC) implies \(\rho\geq 0\), the NEC implies \(\rho+p_{r}\geq 0\), and \(\rho+p_{t}\geq 0\), whereas the strong energy condition (SEC) implies \(\rho+p_{r}+2p_{t}\geq 0\). The profile of energy conditions with a background of string cloud is presented with the throat radius fixed at \(r_{0}=3\). Emphasis is made on the effect of the string cloud constant \(\eta\) on the energy conditions. Results are discussed by fixing the parameter \(a=0.5\). For the whole range of \(a\), \(0<a<1\), the energy conditions show similar behaviour. Previous studies indicate that the constant associated with the total energy of the string cloud should be small [33, 37].
Figure 2 shows the profile of the NEC terms \(\rho+p_{r}\) and \(\rho+p_{t}\). It can be observed that the first NEC term is satisfied at the throat \(r_{0}=3\) for the considered values of the \(\eta\) as
\(\eta=0.1\), \(0.3\), and \(0.5\). However, the second NEC term \(\rho+p_{t}\) is violated for all the values of \(\eta\). Owing to the violation of the second NEC term, as a whole the NEC is inferred to be violated.
From Figure 3, it can be observed that the WEC is violated at the wormhole throat for all the values of \(\eta\). This is also evident from the fact that the non-null components of the stress-energy tensor of the string cloud come with a minus sign, as shown in Eq. (16). Moreover, it is seen that the SEC exhibits an oscillatory (indeterminate) behaviour along the radial coordinate \(r\). With increasing \(r\), the oscillation between positive and negative values decreases; however, the behaviour extends asymptotically.
Figure 3: Profile of the (a) WEC \(\rho\), and (b) SEC \(\rho+p_{r}+2p_{t}\) vs. \(r\) respectively with \(b(r)\) as in Eq. 5
Figure 2: Profile of the NEC terms (a) \(\rho+p_{r}\), and (b) \(\rho+p_{t}\) respectively vs. \(r\) with \(b(r)\) as in Eq. 5
In addition to the energy conditions, analysing two other parameters, viz. the equation of state (EoS) parameter \(\omega=p_{r}/\rho\), and anisotropy parameter \(\Delta=p_{t}-p_{r}\) turns out to be useful. Information about the nature of the matter source threading the wormhole geometry can be obtained from the EoS parameter, and the attractive or repulsive nature of the space-time geometry (geometrical viability) can be understood by the anisotropy parameter.
Figure 4 shows the behaviour of the EoS and the anisotropy parameters. Near the wormhole throat, the EoS parameter is \(\omega<-1\), signifying a phantom-like behaviour of the string cloud for all values of \(\eta\). The anisotropy parameter \(\Delta<0\) near the wormhole throat, signifying an attractive nature of the space-time geometry. A summary of the energy conditions is presented in Table 1.
First reported in the context of neutron stars [48, 49], the Tolman-Oppenheimer-Volkoff (TOV) equation provides information regarding the stability of stellar structures. In order to probe the stability of wormholes in terms of the hydrostatic, gravitational, and anisotropic forces in the space-time, a more generalized version of the formalism was developed in [50]. The generalized TOV equation [50, 51] is given as
\[-\frac{dp_{r}}{dr}-\frac{\epsilon^{\prime}(r)}{2}(\rho+p_{r})+\frac{2}{r}(p_{ t}-p_{r})=0, \tag{18}\]
where \(\epsilon(r)=2\Phi(r)\). \(F_{\rm h}\) represents the hydrostatic force, \(F_{\rm g}\) the gravitational force, and \(F_{\rm a}\), the anisotropic force. These three terms of the TOV equation can determine the equilibrium anisotropic mass distribution [51] in that stable stellar structures satisfy Eq. (18).
\[F_{\rm h}=-\frac{dp_{r}}{dr},\hskip 28.452756ptF_{\rm a}=\frac{2}{r}(p_{t}-p_{r }),\hskip 28.452756ptF_{\rm g}=-\frac{\epsilon^{\prime}}{2}(\rho+p_{r}) \tag{19}\]
Owing to the constant red-shift function \(\Phi^{\prime}(r)=0\), the gravitational force is \(F_{\rm g}=0\) in the analysis.
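A quick numerical check of this force balance, again with the constant \(F\) of Eq. (17) and the parameter values used in the text, is sketched below (an illustration, not the original computation); the hydrostatic term is evaluated by a central difference.

```python
import numpy as np

r0, a, eta = 3.0, 0.5, 0.5
C = 1.0 + r0 / (np.sinh(r0) * np.cosh(r0))
F = -eta**2 / (a * C)                              # Eq. (17)

b = lambda r: r0 - a * (r0 - r) * C                # Eq. (5)
p_r = lambda r: -b(r) * F / r**3
p_t = lambda r: F * (b(r) - a * C * r) / (2 * r**3)

def F_h(r, h=1e-6):                                # hydrostatic force, -dp_r/dr
    return -(p_r(r + h) - p_r(r - h)) / (2 * h)

def F_a(r):                                        # anisotropic force, 2(p_t - p_r)/r
    return 2.0 * (p_t(r) - p_r(r)) / r

# F_g = 0 here because the red-shift function is constant.
for r in (3.0, 5.0, 10.0, 30.0):
    print(f"r={r:5.1f}  F_h={F_h(r):+.3e}  F_a={F_a(r):+.3e}  sum={F_h(r)+F_a(r):+.1e}")
```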
Figure 4: Profile of the (a) EoS parameter \(\omega\), and (b) anisotropy parameter \(\Delta\) vs. \(r\) respectively with \(b(r)\) as in Eq. 5
Using the averaged null energy condition, \(\int_{\lambda_{1}}^{\lambda_{2}}T_{ij}k^{i}k^{j}d\lambda\geq 0\), evaluated along the radial coordinate \(r\), the amount of exotic matter in wormhole space-times can be estimated. However, the amount of energy condition violating matter can be estimated in a more generalised manner by using a volume integral instead of the line integral, namely the volume integral quantifier (VIQ) [52, 53, 54], which is defined as
\[I_{v}=\oint[\rho+p_{r}]dV=8\pi\int_{r_{0}}^{s}(\rho+p_{r})r^{2}dr \tag{20}\]
After matching the wormhole space-time with an exterior metric and cutting off the stress-energy tensor at some \(r=s\) away from the throat, an estimate of the amount of NEC violating matter can be obtained by using the VIQ. For arbitrarily small quantities of NEC violating matter, the requirement is \(I_{v}\to 0\) as \(s\to r_{0}\)[52, 53].
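The VIQ of Eq. (20) can be evaluated directly for the present configuration. A minimal sketch (illustrative; scipy's quad routine is assumed for the radial integral) showing how \(I_{v}\) behaves as the cut-off \(s\) approaches the throat:

```python
import numpy as np
from scipy.integrate import quad

r0, a, eta = 3.0, 0.5, 0.5
C = 1.0 + r0 / (np.sinh(r0) * np.cosh(r0))
F = -eta**2 / (a * C)                              # Eq. (17)

b = lambda r: r0 - a * (r0 - r) * C                # Eq. (5)
rho = lambda r: F * a * C / r**2
p_r = lambda r: -b(r) * F / r**3

def I_v(s):
    """Volume integral quantifier of Eq. (20) with the cut-off at r = s."""
    val, _ = quad(lambda r: (rho(r) + p_r(r)) * r**2, r0, s)
    return 8.0 * np.pi * val

for s in (3.001, 3.1, 4.0, 10.0):
    print(f"s={s:6.3f}  I_v={I_v(s):+.4f}")        # I_v -> 0 as s -> r0
```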
Figure 5 shows the terms of the TOV equation and the profile of the VIQ. It is seen that the hydrostatic force \(F_{\rm h}\) and the anisotropic force \(F_{\rm a}\) cancel each other asymptotically, signifying a stable configuration. In addition, it can be observed that the VIQ satisfies \(I_{v}\to 0\) as \(s\to r_{0}\), indicating that the wormhole solution is feasible with arbitrarily small amounts of exotic matter. The terms of the TOV and VIQ equations are shown for \(\eta=0.5\); for \(\eta=0.1\) and \(0.3\) the corresponding terms depict a similar behaviour.
Thus, wormhole solutions can be formulated in a model-independent manner using the novel shape function, Eq. (5), in the framework of \(f(R)\) gravity with a string cloud background. The next step is to check whether a feasible form of the shape function can be obtained in a particular \(f(R)\) gravity model with the string cloud background.
\begin{table}
\begin{tabular}{l c c} \hline Terms & Result & Interpretation \\ \hline \multirow{2}{*}{\(\rho\)} & \(<0\) near throat, & WEC violated \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \multirow{2}{*}{\(\rho+p_{r}\)} & \(>0\) near throat, & NEC satisfied \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \multirow{2}{*}{\(\rho+p_{t}\)} & \(<0\) near throat, & NEC violated \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \multirow{2}{*}{\(\rho+p_{r}+2p_{t}\)} & oscillates, & \\ & for \(\eta=0.1,0.3,0.5\) & SEC indeterminate \\ \hline \multirow{2}{*}{\(\omega\)} & \(<-1\) near throat, & phantom-like source \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \multirow{2}{*}{\(\Delta\)} & \(<0\) near throat, & attractive geometry \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the energy conditions discussed in Sec. 3.1
### Traversable wormholes in \(f(R)=\alpha R^{m}-\beta R^{n}\) gravity with a string cloud background
In this section, we obtain a form of the shape function \(b(r)\) by fixing \(f(R)\) and analyse its viability. For the analysis, we consider a simple form of \(f(R)\) gravity model as [39]
\[f(R)=\alpha R^{m}-\beta R^{n} \tag{21}\]
where \(\alpha\) and \(\beta\) are positive constants and \(m\) and \(n\) are positive integers satisfying the condition \(m>n\)[39]. Using Eq. (16) in Eq. (13), we obtain
\[-\frac{\eta^{2}}{r^{2}}=\frac{Fb^{\prime}}{r^{2}} \tag{22}\]
Considering the \(f(R)\) model as described in Eq. (21), and obtaining \(F\), the shape function can be found from Eq. (22). For simplifying the calculations, the integers \(m\) and \(n\) are set as \(m=2\) and \(n=1\) satisfying the condition \(m>n\). Again, using the expression for the Ricci scalar \(R=\frac{2b^{\prime}}{r^{2}}\), \(F\) is found in terms of \(r\). Eq. (22) reduces to a quadratic equation in \(b^{\prime}\) of the form
\[4\alpha{b^{\prime}}^{2}-b^{\prime}\beta r^{2}+\eta^{2}r^{2}=0 \tag{23}\]
Solving Eq. (23) and considering only the positive root for mathematical feasibility, we get
\[b^{\prime}=\frac{\beta r^{2}+\sqrt{\beta^{2}r^{4}-16\alpha\eta^{2}r^{2}}}{8 \alpha} \tag{24}\]
Integrating Eq. (24), we get
\[b=\frac{1}{8\alpha}\left[\frac{\beta r^{3}}{3}+\frac{\left(\beta^{2}r^{2}-16 \alpha\eta^{2}\right)\sqrt{\beta^{2}r^{4}-16\alpha\eta^{2}r^{2}}}{3\beta^{2}r }\right]+c \tag{25}\]
where \(c\) is a constant of integration. In order to evaluate the constant \(c\), we use the condition at the wormhole throat, \(b(r_{0})=r_{0}\). This leads to the following form of the shape function
\[b =r_{0}+\frac{1}{24\alpha\beta^{2}}\left[\frac{\beta^{3}r^{4}+\left( \beta^{2}r^{2}-16\alpha\eta^{2}\right)\sqrt{\beta^{2}r^{4}-16\alpha\eta^{2}r^{ 2}}}{r}\right.\] \[-\left.\frac{\beta^{3}r_{0}^{4}+\left(\beta^{2}r_{0}^{2}-16 \alpha\eta^{2}\right)\sqrt{\beta^{2}r_{0}^{4}-16\alpha\eta^{2}r_{0}^{2}}}{r_{0 }}\right] \tag{26}\]
With the shape function in Eq. (26), the various conditions required for the viability of the shape function, as described in Sec. 2, are analysed. It is interesting to note that the asymptotic flatness \(\frac{b}{r}\to 0\) as \(r\rightarrow\infty\), and the flaring out condition \(\frac{b-b^{\prime}r}{b^{2}}>0\), are satisfied only for negative values of the constants \(\alpha\) and \(\beta\) (\(\alpha=-0.5\) and \(\beta=-0.8\)). The profile of the asymptotic flatness and flaring out condition are shown in Figure 6 with different values of \(\eta\).
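For completeness, Eq. (26) and the root in Eq. (24) can be evaluated numerically. The sketch below is an illustration using the parameter values quoted above (\(\alpha=-0.5\), \(\beta=-0.8\), \(\eta=0.5\), \(r_{0}=1\)); it prints the throat value together with \(b/r\), \(b^{\prime}\), and the flaring-out combination at a few radii.

```python
import numpy as np

alpha, beta, eta, r0 = -0.5, -0.8, 0.5, 1.0   # values quoted in the text

def term(r):
    """Bracketed expression of Eq. (26), evaluated at radius r."""
    return (beta**3 * r**4
            + (beta**2 * r**2 - 16 * alpha * eta**2)
            * np.sqrt(beta**2 * r**4 - 16 * alpha * eta**2 * r**2)) / r

def b(r):                                     # Eq. (26)
    return r0 + (term(r) - term(r0)) / (24 * alpha * beta**2)

def db(r):                                    # Eq. (24), the positive root of Eq. (23)
    return (beta * r**2 + np.sqrt(beta**2 * r**4 - 16 * alpha * eta**2 * r**2)) / (8 * alpha)

print("b(r0) =", b(r0))                       # equals r0 by construction
for r in (1.0, 2.0, 5.0, 10.0):
    flare = (b(r) - db(r) * r) / b(r)**2
    print(f"r={r:5.1f}  b/r={b(r)/r:+.3f}  b'(r)={db(r):+.3f}  flaring-out={flare:+.3f}")
```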
With the shape function in Eq. (26), and the \(f(R)\) model described by Eq. (21), the energy conditions, stability and the amount of exotic matter required for the wormhole configuration are analysed next.
Using Eqs. (13)-(15), the various energy conditions are analysed with the throat at \(r_{0}=1\), and by fixing the model parameters of the \(f(R)\) gravity model (Eq. (21)) as, \(m=2\), \(n=1\), \(\alpha=-0.5\) and \(\beta=-0.8\).
Figure 7 shows the NEC terms \(\rho+p_{r}\) and \(\rho+p_{t}\). It can be observed that the first NEC term \(\rho+p_{r}\) is violated at the wormhole throat, while the second NEC term \(\rho+p_{t}\) is satisfied there. However, owing to the violation of the first NEC term, the NEC as a whole is inferred to be violated.
Figure 8 shows the WEC and the SEC. It can be seen that the WEC is violated at the wormhole throat, and this violation can again be attributed to the negative sign associated
with the non-null components of the stress-energy tensor of the string cloud as in Eq. (16). The SEC is satisfied at the throat and also asymptotically, which is again an interesting point to note as in \(f(R)\) gravity, the SEC should be asymptotically violated to account for the late-time accelerated expansion of the universe.
Figure 9 shows the variation of the EoS parameter \(\omega\) and the anisotropy parameter \(\Delta\). It can be seen that the EoS parameter satisfies \(\omega>0\) near the wormhole throat for all values of \(\eta\), signifying that the string cloud as the source behaves like ordinary matter without any phantom-like behaviour. The anisotropy parameter \(\Delta\) is positive near the wormhole throat for all the values of \(\eta\), signifying a repulsive geometry at the throat. A summary of the energy conditions is presented in Table 2.
Figure 10 shows the terms of the TOV equation and the VIQ. It is seen that the hydrostatic force \(F_{\rm h}\) and the anisotropic force \(F_{\rm a}\) do not cancel each other out, signifying that the wormhole configuration is unstable. From the VIQ, it is evident that \(I_{v}\to 0\) as \(s\to r_{0}\), indicating that the wormhole configuration can be obtained with arbitrarily small amounts of exotic matter.

Figure 8: Profile of the (a) WEC \(\rho\) and (b) SEC \(\rho+p_{r}+2p_{t}\) vs. \(r\) with \(b(r)\) as in Eq. 26
With the results presented in Secs. 3.1 and 3.2, it is clear that wormhole solutions in \(f(R)\) gravity with a string cloud background are feasible with characteristic violation of the NEC. However, with \(f(R)=\alpha R^{m}-\beta R^{n}\) gravity, the wormhole configuration is not stable. In order to gain a better understanding, the next section presents results for wormhole solutions in \(f(R)=\alpha R^{m}-\beta R^{n}\) gravity supported by ordinary matter, with the form of the shape function as in Eq. (5).
\begin{table}
\begin{tabular}{l c c} \hline Terms & Result & Interpretation \\ \hline \multirow{2}{*}{\(\rho\)} & \(<0\) near throat, & WEC violated \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \multirow{2}{*}{\(\rho+p_{r}\)} & \(<0\) near throat, & NEC violated \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \multirow{2}{*}{\(\rho+p_{t}\)} & \(>0\) near throat, & NEC satisfied \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \multirow{2}{*}{\(\rho+p_{r}+2p_{t}\)} & \(>0\) near throat, & SEC satisfied \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \multirow{2}{*}{\(\omega\)} & \(>0\) near throat, & normal matter like source \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \multirow{2}{*}{\(\Delta\)} & \(>0\) near throat, & repulsive geometry \\ & for \(\eta=0.1,0.3,0.5\) & at throat \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the energy conditions discussed in Sec. 3.2
Figure 9: Profile of the (a) EoS parameter \(\omega\) and (b) anisotropy parameter \(\Delta\) vs. \(r\) with \(b(r)\) as in Eq. 26
### Traversable wormholes in \(f(R)=\alpha R^{m}-\beta R^{n}\) gravity with ordinary matter
With the shape function in Eq. (5), the field equations Eqs. (13)-(15) are analysed with the form of the \(f(R)\) model as in Eq. (21). We assume that the wormhole geometry is threaded by ordinary matter with the stress-energy tensor \(T^{\mu}_{\nu}=\text{diag}[-\rho(r),p_{r}(r),p_{t}(r),p_{t}(r)]\). Owing to the properties of the shape function, the throat radius is fixed at \(r_{0}=3\), and the free parameter \(a\) is considered for the whole range \(0<a<1\). The model parameters of the \(f(R)=\alpha R^{m}-\beta R^{n}\) gravity are fixed as \(\alpha=0.8\), \(\beta=0.5\), and \(m=2\), \(n=1\). The various energy conditions, stability of the wormhole space-time and the VIQ are presented below.
Figure 11 shows the profile of the NEC terms \(\rho+p_{r}\) and \(\rho+p_{t}\). It is evident that the first NEC term \(\rho+p_{r}\) is violated at the wormhole throat. However, the second NEC term \(\rho+p_{t}\) is satisfied at the throat. Owing to the violation of the first NEC term, the NEC as a whole is considered to be violated.
Figure 12 depicts the WEC and the SEC. It is seen that the WEC is violated near the wormhole throat. The SEC is satisfied at the wormhole throat and asymptotically for the whole range of \(a\), \(0<a<1\). It is interesting to note that even with ordinary matter as the source, the SEC is satisfied asymptotically for \(f(R)=\alpha R^{m}-\beta R^{n}\) gravity.
Figure 13 shows the variation of the EoS parameter and the anisotropy parameter. It can be observed that as expected, the EoS parameter is \(\omega>0\) near the wormhole throat signifying that the source threading the wormhole geometry has no phantom-like behaviour. The anisotropy parameter \(\Delta>0\) near the throat, signifying a repulsive geometry at the throat. The energy conditions are summarised in Table 1.
Figure 14 shows the corresponding terms of the TOV equation and the VIQ. It is seen that the hydrostatic force \(F_{\text{h}}\) and the anisotropic force \(F_{\text{a}}\) do not cancel each other out asymptotically for the whole range of \(0<a<1\), signifying that the wormhole configuration
Figure 10: Profile of the (a) \(F_{\text{h}}\), and \(F_{\text{a}}\) vs. \(r\) and (b) VIQ with \(b(r)\) as in Eq. 26
is not stable. From the VIQ it is evident that \(I_{v}\to 0\) as \(s\to r_{0}\), indicating that the wormhole solution can be obtained with arbitrarily small amounts of exotic matter. However, the VIQ is only shown for \(a=0.5\); for the whole range of \(0<a<1\), the VIQ has a similar profile.
## 4 Discussion and Conclusion
In this study, we investigated traversable wormholes in the framework of \(f(R)\) gravity with a background of string clouds. A novel form of the shape function using Pade approximation was proposed for the analysis. Using this shape function, a specific form of \(F(r)\) was obtained in Eq. (17) in order to analyse the EFEs. However, it is observed that \(F(r)\) depends on the parameters of the metric functions and on the string cloud parameter \(\eta\). Eliminating these dependences and obtaining a cosmologically viable form of \(f(R)\) remains an open issue.
Figure 11: Profile of the NEC terms (a) \(\rho+p_{r}\) and (b) \(\rho+p_{t}\) vs. \(r\) with \(b(r)\) as in Eq. 5
Figure 12: Profile of the (a) WEC \(\rho\) and (b) SEC \(\rho+p_{r}+2p_{t}\) vs. \(r\) with \(b(r)\) as in Eq. 5
The results demonstrate that stable wormhole solutions with characteristic violation of the energy conditions (especially the NEC) are feasible in the framework of \(f(R)\) gravity with the new shape function in the background of a string cloud. In addition, the EoS parameter near the wormhole throat indicates that the string cloud has phantom-like properties. The anisotropy parameter \(\Delta<0\) near the throat signifying an attractive geometry. In addition, the SEC shows an indeterminate behaviour as it oscillates between positive and negative values. Therefore, to have a better understanding of the wormhole solutions, traversable wormholes are considered in a string cloud background with a simple form of \(f(R)\) gravity model, \(f(R)=\alpha R^{m}-\beta R^{n}\). It is interesting to note that the shape function in this case can yield viable wormhole geometries for negative values of \(\alpha\) and \(\beta\). The results demonstrate characteristic violations of the NEC. Interestingly, the SEC is satisfied both at the throat
Figure 14: Profile of the (a) \(F_{\rm h}\), and \(F_{\rm a}\) vs. \(r\) and (b) VIQ with \(b(r)\) as in Eq. 5
Figure 13: Profile of the (a) EoS parameter \(\omega\) and (b) anisotropy parameter \(\Delta\) vs. \(r\) with \(b(r)\) as in Eq. 5
and asymptotically. The EoS parameter satisfies \(\omega>0\) at the throat, indicating that the string cloud behaves as normal matter. The anisotropy parameter satisfies \(\Delta>0\) near the wormhole throat, signifying a repulsive geometry at the throat. However, the TOV equation shows that the wormhole space-time is unstable. For a comparative analysis, wormholes threaded by normal matter are analysed in \(f(R)=\alpha R^{m}-\beta R^{n}\) gravity, using the Pade approximate shape function given in Eq. (5). The constants \(\alpha\) and \(\beta\) are considered with positive values. The results demonstrate characteristic violation of the NEC and show that the wormhole space-time is unstable. The SEC is satisfied at the wormhole throat and asymptotically.
Although wormholes have not been detected yet, studying these exact solutions of the EFEs provides far-reaching insight into the nature of space-time and the fundamental building blocks of the universe. Wormholes also play a crucial role in several important issues such as the cosmic censorship conjecture [55]. Studies have reported strong indications for their presence in the form of black hole mimickers [56, 57]. Further study of the observational constraints on these wormhole solutions remains an open issue to be addressed in the near future.
|
2302.12049 | Evaluating Automatic Speech Recognition in an Incremental Setting | The increasing reliability of automatic speech recognition has proliferated
its everyday use. However, for research purposes, it is often unclear which
model one should choose for a task, particularly if there is a requirement for
speed as well as accuracy. In this paper, we systematically evaluate six speech
recognizers using metrics including word error rate, latency, and the number of
updates to already recognized words on English test data, as well as propose
and compare two methods for streaming audio into recognizers for incremental
recognition. We further propose Revokes per Second as a new metric for
evaluating incremental recognition and demonstrate that it provides insights
into overall model performance. We find that, generally, local recognizers are
faster and require fewer updates than cloud-based recognizers. Finally, we find
Meta's Wav2Vec model to be the fastest, and find Mozilla's DeepSpeech model to
be the most stable in its predictions. | Ryan Whetten, Mir Tahsin Imtiaz, Casey Kennington | 2023-02-23T14:22:40Z | http://arxiv.org/abs/2302.12049v1 | # Evaluating Automatic Speech Recognition in an Incremental Setting
###### Abstract
The increasing reliability of automatic speech recognition has proliferated its everyday use. However, for research purposes, it is often unclear which model one should choose for a task, particularly if there is a requirement for speed as well as accuracy. In this paper, we systematically evaluate six speech recognizers using metrics including word-error-rate, latency, and the number of updates to already recognized words on English test data, as well as propose and compare two methods for streaming audio into recognizers for incremental recognition. We further propose Revokes per Second as a new metric for evaluating incremental recognition and demonstrate that it provides insights into overall model performance. We find that, generally, local recognizers are faster and require fewer updates than cloud-based recognizers. Finally, we find Meta's Wav2Vec model to be the fastest, and find Mozilla's DeepSpeech model to be the most stable in its predictions.
Ryan Whetten, Mir Tahsin Imtiaz, Casey Kennington Boise State University
Department of Computer Science
1910 W University Dr., Boise, ID 83725

**Index Terms**: Automatic Speech Recognition, Incremental, Spoken Dialogue Systems
## 1 Introduction
Performance in automatic speech recognition (asr) has improved dramatically in the last decade. Many asr models process _incrementally_ in that they produce word or sub-word output as the recognition unfolds, which is an important requirement for spoken dialogue systems (sds) that are multimodal or part of a robot platform because there is a high expectation of timely interaction from human dialogue partners [1]. Good asr is critical in sds applications because errors and delays produced by the asr propagate to the downstream modules and overall system function. Most asr models use the word-error-rate (wer) metric to evaluate the asr, even in conversational settings--they do not usually consider incremental metrics [2]. [3, 4] propose metrics for evaluation of incremental performance such as Edit Overhead, Word First Correct Response, Disfluency Gain, and Word Survival Rate. All of the metrics can be classified into one of the following three general areas of interest: overall accuracy, speed, and stability, but these metrics focus on discrete word-level output.
In this paper, we make three contributions: (1) we evaluate six recent incremental asr models on English data, (2) we propose a continuous metric that computes how much the model changes its output over time, and (3) we compare two methods for combining sub-word output incrementally. Following prior work [5, 6, 7], the evaluations provide a useful guide for deciding which asr model one should use. All of the models are implemented as modules in the ReTiCo framework [8] for ease of use in incremental settings.
## 2 Models & Metrics
Following the evaluation strategy in [4], we adopt the _Incremental Unit_ (iu) framework from [9]. The iu framework is practical because it is well designed and has multiple implementations from which we can build our incremental asr evaluation. The framework is built around _incremental units_, a discrete piece of information that is produced by a specific module. In our case, we focus on the asr model as a module, and output is discretized into words (i.e., strings). The iu framework has provisions for handling cases where the asr output was found to be in error, given new information. The iu framework proposes three operations for ius: add, revoke, and commit. A perfect asr would only add new words to the growing recognition prefix. But as most asr's have errors--particularly when they work incrementally--the revoke operation allows the asr module to remove an erroneous iu and replace it (i.e., through another add operation) in the recognized output. An example is shown in Figure 1.
ReTiCo is a Python implementation of the iu framework [8]. We use a ReTiCo implementation for each of the asr models evaluated. We use six different, readily available asr models; 2 cloud-based and 4 local (i.e., on a local GPU), chosen due to their respective results and accessibility. The cloud-based models are Google Cloud's Speech-to-Text API and Microsoft Azure's Speech SDK. Due to the limited amount of information given about the online asr models, we can not go into depth about the architecture and training behind these models, but we explain the 4 local asr models below. The local models are summarized in Table 1.
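To make the add/revoke bookkeeping concrete, the following toy Python sketch (our illustration; it does not use ReTiCo's actual classes or API) maintains a word prefix under a stream of hypotheses and counts the edits, reproducing the _should_ to _showed_ example of Figure 1.

```python
class IncrementalPrefix:
    """Toy bookkeeping for word-level incremental units: ADD new words,
    REVOKE words that a later hypothesis no longer contains."""

    def __init__(self):
        self.words = []      # current hypothesis prefix
        self.adds = 0
        self.revokes = 0

    def update(self, hypothesis):
        # Keep the longest common prefix, revoke the rest, add the new tail.
        common = 0
        while (common < len(self.words) and common < len(hypothesis)
               and self.words[common] == hypothesis[common]):
            common += 1
        self.revokes += len(self.words) - common
        self.adds += len(hypothesis) - common
        self.words = list(hypothesis)

prefix = IncrementalPrefix()
for hyp in (["the"], ["the", "should"], ["the", "showed"], ["the", "showed", "way"]):
    prefix.update(hyp)
print(prefix.words, "adds:", prefix.adds, "revokes:", prefix.revokes)
```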
**Wav2Vec (W2V)**: We use Meta's Wav2Vec model [10] from a checkpoint provided by HuggingFace where the model
has been pre-trained and fine-tuned on 960 hours of LibriSpeech [11]. This architecture is unique in that it is pre-trained on hours of unlabeled raw audio data. While other models first convert the audio into a spectrogram, Wav2Vec operates directly on audio data.1
Footnote 1: [https://huggingface.co/facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h)
**DeepSpeech (DS)**: Mozilla's DeepSpeech engine is based on work done by [12]. This architecture uses Recurrent Neural Networks that operate on spectrograms of the audio to make predictions. We use the 0.9.3 model and scorer for predictions. This model was trained using a wider variety of data from Fisher, LibriSpeech, Switchboard, Common Voice English, and approximately 1,700 hours of transcribed WAMU (NPR) radio shows explicitly licensed to them to be used as training corpora.2
Footnote 2: [https://deepspeech.readthedocs.io/en/r0.9/](https://deepspeech.readthedocs.io/en/r0.9/)
**PocketSphinx (PS)**: One of the lighter asrs we tested is CMU's PocketSphinx [13]. PS is a light-weight asr that is a part of the open source speech recognition tool kit called the CMUSphinx Project. This model was trained on 1,600 utterances from the RM-1 speaker-independent training corpus. Unlike the previously mentioned models, PS does not use neural networks and is instead based on traditional methods of speech recognition by using hidden Markov models, language models, and phonetic dictionaries.3
Footnote 3: [https://github.com/cmusphinx/pocketsphinx-python](https://github.com/cmusphinx/pocketsphinx-python)
**Vosk**: Alpha Cephei's Vosk (with the vosk-model-en-us-0.22 model) is built on top of Kaldi [14], and like PocketSphinx, uses an acoustic model, language model, and phonetic dictionary. Vosk uses a neural network for the acoustic model part of the engine.4
Footnote 4: [https://alphacephei.com/vosk/](https://alphacephei.com/vosk/)
### Metrics
As mentioned, all previously proposed metrics for evaluating incremental asr can be divided into three broad categories: overall accuracy (using wer), speed, and stability. We review the specific metrics used for the latter two and introduce our new metric which combines these last two categories of speed and stability into a single metric.
#### 2.1.1 Predictive Speed: Latency
In order to measure the general predictive speed of an asr model, we measure the time it takes from when the asr engine receives the audio until the prediction is made. We then take this time and divide by the number of words in that particular prediction. With this, we define latency as the average amount of time per word it takes an asr engine to make a prediction: \(LAT=\frac{Time}{N}\), where time is measured in seconds and N is the total number of words in a given prediction.
#### 2.1.2 Stability: Edit Overhead
For measuring stability, we measure the edit overhead (EO). EO is the total number of revokes divided by the total number of edits (additions and revokes) that the asr engine makes. In an incremental sds setting, this could be thought of as the
\begin{table}
\begin{tabular}{l l l} \hline \hline Name (abbreviation) & Model & Training Data \\ \hline Wav2Vec (W2V) & wav2vec2-base-960h & LibriSpeech \\ DeepSpeech (DS) & 0.9.3 & Fisher, LibriSpeech, Switchboard, Common Voice English \\ PocketSphinx (PS) & N/A & 1600 utterances from the RM-1 \\ Vosk & en-us-0.22 & N/A \\ \hline \hline \end{tabular}
\end{table}
Table 1: Local asr engines along with their used models and training data if available.
Figure 1: An example of adds and revokes. The word _should_ is added, then revoked and replaced by _showed_. The diamonds represent the time when the predictions are made.
Figure 2: In the Sliding Window method, the asr engine makes predictions on partially overlapping portions of audio. Dictionaries are used to join the incoming predictions together.
fraction of text incremental units that are revokes: \(EO=\frac{R}{\#\ of\ Edits}\).
#### 2.1.3 Revokes per Second
Our proposed and final metric is the number of _Revokes per Second_ (RPS). We propose this metric as a way to capture the relationship between both speed and stability in an interpretable fashion. In an incremental sds setting, this is the average number of asr word output ius per second that are labeled as type revoke.
We first calculate the average number of revokes per word, then divide the average number of revokes per word by our metric for latency to get the average number of Revokes per Second. We also look at the inverse _Seconds per Revoke_ (SPR) as a simple adjustment to this metric to see how many seconds will pass by before one can expect to see a revoke. This SPR value is useful in interpretations when the RPS is low. Taken together, the formulas for these metric are as follows:
\[RPS=\frac{R}{N}\frac{N}{Time(s)}=\frac{R}{Time(s)}\]
\[SPR=\frac{Time(s)}{R}=\frac{1}{RPS}\]
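Aggregated over a recognition run, the three quantities can be computed together. The sketch below is a simplified illustration (the function name and the example counts are ours, and the paper averages some quantities per prediction rather than globally).

```python
def incremental_metrics(proc_seconds, n_words, n_adds, n_revokes):
    """Aggregate versions of the metrics in this section, computed from total
    processing time, recognised words, and edit counts."""
    edits = n_adds + n_revokes
    lat = proc_seconds / n_words                 # average seconds per recognised word
    eo = n_revokes / edits                       # edit overhead
    rps = n_revokes / proc_seconds               # revokes per second
    spr = float('inf') if n_revokes == 0 else proc_seconds / n_revokes
    return {'LAT': lat, 'EO': eo, 'RPS': rps, 'SPR': spr}

# e.g. 20 recognised words, 24 adds and 4 revokes over 2.5 s of processing time
print(incremental_metrics(2.5, 20, 24, 4))
```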
### Combining Sub-word Output
Both Google and Azure offer incremental asr results. For these two asrs, the audio files are sent to the cloud services in chunks, and the service returns a prediction with other meta-information. The local asr engines work at word and sub-word levels, necessitating a method of combining the sub-word output into words.5 We apply and compare two methods in this evaluation: Sliding Window and Concatenation.
Footnote 5: We used the same PC with a GTX1080TI GPU for the local models.
For Sliding Window, we pass the audio from the file in chunks that are a bit longer than one second. These are then concatenated together as an audio buffer and given to an asr model until it produces a prediction of at least 5 words or the audio buffer contains about 30 seconds of audio. At this point, we remove the first 35% and repeat. This results in a series of predictions on segments of audio containing around 2 to 5 words. When a prediction is received, it is joined together with previous predictions. Due to overlap in incoming predictions, the way that the predictions are joined together is non-trivial. The lookup method joins predictions using dictionaries from WordNet and NLTK [15, 16].
For the Concatenation method, we present the audio in chunks into an audio buffer in the same manner as the Sliding Window method, except the buffer is a concatenation of all the audio (i.e., no audio ever gets removed from the buffer). Essentially, with this method, the asr model makes a prediction from the very beginning of the file to the most recent audio given to the buffer. This is computationally more expensive and takes more memory because the asr model has to make predictions on longer pieces of audio as time goes on, but this method eliminates the need for joining. Diagrams showing these two methods can be seen in Figures 2 and 3.
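A schematic of the two buffering schemes is sketched below. This is an illustration only: the function, the seconds-based bookkeeping, and the placeholder recogniser are ours, not the actual implementation.

```python
def stream_chunks(chunks, mode="sliding", max_seconds=30.0, chunk_seconds=1.0,
                  min_words=5, drop_fraction=0.35, recognise=lambda audio: []):
    """Feed audio chunks to a recogniser under the two buffering schemes described
    above. `recognise` stands in for any ASR call returning a word list."""
    buffer = []
    hypotheses = []
    for chunk in chunks:
        buffer.append(chunk)
        words = recognise(buffer)
        hypotheses.append(words)
        if mode == "sliding":
            too_long = len(buffer) * chunk_seconds >= max_seconds
            if len(words) >= min_words or too_long:
                # drop the oldest ~35% of the window and keep going
                buffer = buffer[int(len(buffer) * drop_fraction):]
        # in "concatenation" mode the buffer is never trimmed
    return hypotheses

# toy demonstration with a dummy recogniser that "hears" one word per chunk
demo = stream_chunks([f"chunk{i}" for i in range(8)], mode="sliding",
                     recognise=lambda audio: [f"w{j}" for j in range(len(audio))])
print(demo[-1])
```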
## 3 Experiment
In this section, we explain our experiment including the evaluation data we used, and how we systematically produced and evaluated the ius from our asr modules.
### Data & Procedure
For evaluation, we use 2 datasets, LibriSpeech and a recently assembled dialogue dataset of simulated medical conversations [17].6 The LibriSpeech test-clean dataset contains 5.4 hours of speech from 40 different speakers, 20 male and 20 female. This audio is divided into over 2,600 files with an average of about 20 words per file containing a vocabulary of over 8,100 words. To ensure the audio would work on all of our models, we converted the audio files to WAV files.
Footnote 6: We were unable to obtain the Switchboard corpus due to prohibitive costs.
The medical conversation dataset contains 272 audio files with corresponding transcripts. The audio files range from around 7 to 20 minutes in length or about 800 to 2,200 words. Due to the size of these audio files, we split up the files into utterances based on silence and then randomly sample a set of 40 utterances, 17 of which were able to be processed by all 6 asr engines (max 40 seconds, min 0.8 seconds, 6.1 seconds in length on average). This happened due to the length of some of the files and the constraints that each model can handle. The purpose of using this dialogue data is to 1) test each model on domain data that presumably none of them have been trained on (since this dataset was just made public in 2022), and 2) test
Figure 3: In this method, the incremental audio is concatenated together, and a prediction is made on the entire audio that has been given up to that point.
how each engine performs on a dialogue dataset that contains disfluencies such as fillers, corrections, and restarts.
### Results
The results can be seen in Table 2. When using the Sliding Window method, local models had lower latency than both the cloud models. Some of the local asr models using the Concatenation method were also faster than both of the cloud ones, but generally the concatenation tests were slower and had a higher EO than the Sliding Window method. Despite this, the Concatenated versions performed better than their corresponding Sliding Window version in terms of wer. For the cloud models, Google is less accurate and more revoke dependant than Azure. However, Google is considerably quicker which could be crucial in an interactive dialogue setting. The cloud models had surprisingly low latency (though the latency is dependent on the Internet speed), but the local asrs generally had the lowest latency.
The local asr engine that performed best overall in terms of wer was the W2V model using the Concatenation method on the LibriSpeech data and Vosk on the Medical Dialogue data, while the model with the lowest Edit Overhead was the DS model using the Sliding Window method. Though a low wer is generally better, the number of revokes has implications for downstream modules in an sds; keeping both EO and Revokes per Second low while also achieving a low wer means the model was correct early, which is ideal.
Our results are consistent with previous evaluations on Incremental asr[4] that show that Google's asr predictions, although fairly accurate overall, are not as stable as the others, with the highest Edit Overhead of 0.279/0.228 and an average of about 4.5/5.1 Revokes per Second on the LibriSpeech dataset and Medical Dialogue dataset respectively.
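For reference, the following is a minimal sketch of how such incremental metrics can be computed from a sequence of partial hypotheses, assuming the usual IU-style conventions (a revoke is the retraction of a previously emitted word; Edit Overhead is the fraction of add/revoke operations beyond those strictly needed for the final transcript); this approximates rather than reproduces the exact definitions used here.

```python
def incremental_edits(hypotheses):
    """Count add and revoke operations over a sequence of partial hypotheses:
    a word is revoked when it is retracted by the next hypothesis (the common
    prefix shrinks), and added when it is newly emitted."""
    adds = revokes = 0
    prev = []
    for hyp in hypotheses:
        words = hyp.split()
        common = 0
        while common < min(len(prev), len(words)) and prev[common] == words[common]:
            common += 1
        revokes += len(prev) - common   # words taken back
        adds += len(words) - common     # words newly committed
        prev = words
    return adds, revokes


def edit_overhead_and_rps(hypotheses, audio_seconds):
    """EO = (all edits - edits needed for the final output) / all edits;
    Revokes per Second = revokes / length of the audio."""
    adds, revokes = incremental_edits(hypotheses)
    necessary = len(hypotheses[-1].split()) if hypotheses else 0
    total = adds + revokes
    eo = (total - necessary) / total if total else 0.0
    return eo, revokes / audio_seconds


# Example: three partial hypotheses over 2 seconds of audio.
eo, rps = edit_overhead_and_rps(
    ["the cat", "the cap sat", "the cat sat down"], audio_seconds=2.0)
```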
The DS model's wer was higher than that of the other models, but its low EO and infrequent revokes make it a potentially good candidate for an sds that requires a stable output as well as low latency and EO, for example on a robotic platform. We suggest the Concatenation method for live microphones because it is more accurate and does not require a dictionary.
## 4 Conclusion
In this work, we tested six different asr models in an incremental sds setting and evaluated them using final wer, latency, and Edit Overhead. We also proposed a new metric, Revokes per Second. We showed that, generally, online asr (the Google Cloud and Azure cloud services) is not as fast as most of the local asr engines tested, and that while the cloud services are among the most accurate asrs we tested, they both have a relatively high number of Revokes per Second, which, in combination with their latency, could lead to more issues in an incremental setting.
One of the challenges of evaluating asr models is that, as described, the cloud asrs do not publicly describe the precise architecture and training data used, and each of the local asrs differs greatly in architecture and in the training data used. With this many variables and unknowns, we cannot attribute the good wer of a given model to its architecture or to its training data. That being said, we do believe that, in terms of testing _out of box_ performance, our results are conclusive that online asrs tend to have higher latency and Edit Overhead. Furthermore, we also believe that our proposed metric, Revokes per Second, is an interpretable and useful metric that should be used as asr becomes more prevalent in _live_ settings such as Spoken Dialogue Systems on robots or live captioning in online meetings.
In future work, we plan to evaluate on different datasets and in different languages.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{Incremental asr Results on LibriSpeech} \\ \hline & Google & Azure & W2V & W2V (Con.) & DS & DS (Con.) & PS & PS (Con.) & Vosk & Vosk (Con.) \\ \hline
**WER** & 13.2 & 9.1 & 10.6 & **4.0** & 18.3 & 8.4 & _40.4_ & 31.8 & 33.4 & 6.4 \\
**LAT** & 0.197 & 0.539 & **0.099** & 0.127 & 0.181 & _1.443_ & 0.105 & 0.220 & 0.104 & 0.167 \\
**EO** & _0.279_ & 0.065 & 0.011 & 0.093 & **0.001** & 0.013 & 0.014 & 0.147 & 0.072 & 0.019 \\
**R/Sec** & _4.564_ & 0.679 & 0.141 & 1.919 & **0.008** & 0.012 & 0.178 & 1.688 & 0.910 & 0.143 \\
**Sec/R** & _0.219_ & 1.473 & 7.083 & 0.521 & **123.135** & 80.489 & 5.613 & 0.593 & 1.099 & 7.004 \\ \hline \multicolumn{10}{|c|}{Incremental asr Results on Medical Dialogue Dataset} \\ \hline & Google & Azure & W2V & W2V (Con.) & DS & DS (Con.) & PS & PS (Con.) & Vosk & Vosk (Con.) \\ \hline
**WER** & 41.1 & **21.0** & 47.8 & 42.3 & 42.5 & 38.7 & _85.6_ & 80.0 & 38.4 & 23.2 \\
**LAT** & 0.287 & 0.623 & **0.125** & 0.217 & 0.245 & _1.452_ & 0.131 & 0.394 & 0.307 & 1.296 \\
**EO** & _0.243_ & 0.055 & 0.016 & 0.211 & **0.000** & 0.014 & 0.005 & 0.240 & 0.048 & 0.025 \\
**R/Sec** & _5.944_ & 0.207 & 0.253 & 6.376 & **0.000** & 0.013 & 0.046 & 2.447 & 0.215 & 0.079 \\
**Sec/R** & _0.168_ & 4.837 & 3.953 & 0.157 & **inf** & 75.616 & 21.734 & 0.409 & 4.649 & 12.733 \\ \hline \end{tabular}
\end{table}
Table 2: Summary of results. Bold indicates the best and italics the worst performance for the metric named in the far-left column. Local asrs had lower latency than cloud-based asrs. The Concatenation method, shown in the columns marked (_Con._), had higher latency and resulted in a higher EO and RPS, but still not as many revokes as the online asrs. _inf_ indicates that no revokes occurred. |
2302.11540 | Kinetic models for systems of interacting agents with multiple
microscopic states | We propose and investigate general kinetic models with
transition probabilities that can describe the simultaneous change of multiple
microscopic states of the interacting agents. These models can be applied to
many problems in socio-economic sciences, where individuals may change both
their compartment and their characteristic kinetic variable, as for instance
kinetic models for epidemics or for international trade with possible transfers
of agents. Mathematical properties of our kinetic model are proved, as
existence and uniqueness of a solution for the Cauchy problem in suitable
Wasserstein spaces. The quasi-invariant asymptotic regime, leading to simpler
kinetic Fokker-Planck-type equations, is investigated and commented on in
comparison with other existing models. Some numerical tests are performed in
order to show time evolution of distribution functions and of meaningful
macroscopic fields, even in case of non-constant interaction probabilities. | Marzia Bisi, Nadia Loy | 2023-02-22T18:33:05Z | http://arxiv.org/abs/2302.11540v1 | # Kinetic models for systems of interacting agents with multiple microscopic states
###### Abstract
We propose and investigate general kinetic models with transition probabilities that can describe the simultaneous change of multiple microscopic states of the interacting agents. These models can be applied to many problems in socio-economic sciences, where individuals may change both their compartment and their characteristic kinetic variable, as for instance kinetic models for epidemics or for international trade with possible transfers of agents. Mathematical properties of our kinetic model are proved, as existence and uniqueness of a solution for the Cauchy problem in suitable Wasserstein spaces. The quasi-invariant asymptotic regime, leading to simpler kinetic Fokker-Planck-type equations, is investigated and commented on in comparison with other existing models. Some numerical tests are performed in order to show time evolution of distribution functions and of meaningful macroscopic fields, even in case of non-constant interaction probabilities.
**Keywords:** Boltzmann equation, Markov process, multi-agent system, socio-economic modelling
## 1 Introduction
In the literature on kinetic models for multi-agent systems, there is increasing interest in phenomena where the agents are characterized by multiple microscopic states and are, in particular, divided into subpopulations.
Kinetic theory for multi-agent systems has its roots in the classical kinetic theory related to the Boltzmann equation for the description of a rarefied gas [11], in which individuals are molecules identified by a microscopic state \(v\) that is the velocity and that changes because of _binary interactions_. The classical kinetic theory for gas dynamics has been generalized to various kinds of interacting systems, where the microscopic state is not necessarily the velocity, providing reasonable mathematical models for many socio-economic problems, as the evolution of wealth distribution [13, 40], the opinion formation [38, 39], the pedestrian or vehicular traffic dynamics [22, 23], birth and death processes [25, 30] and many others. Also in this field, models describing the interaction of different populations through a system of Boltzmann equations have been proposed for instance in [21, 19] for wealth exchanges, in [20] for opinion formation in presence of leaders, in [8] for multilane traffic models.
The classical Boltzmann equation has been extended some decades ago to mixtures of different gaseous species [12, 27], even in presence of chemical reactions [24], and also consistent BGK
approximations have been proposed and investigated [2, 5, 34]. In these models, when describing bimolecular chemical reactions, a given binary interaction between molecules may simultaneously cause both a change of velocity and the transfer of the involved molecules to different species. Models in which each molecule is characterized by its species, a molecular velocity and a further inner variable modelling the internal energy have also been proposed [9].
In the field of kinetic equations for multi-agent systems applied to socio-economic phenomena, a very interesting class of models in which individuals have a multiple microscopic state corresponds to models in which the total population is divided into subgroups and each agent is also described by a physical quantity (wealth, opinion, viral load, etc.). Each group is then characterized by a distribution function depending on the given microscopic physical quantity that can be exchanged both with individuals of the same subgroup and with individuals of a different subgroup, according to suitable interaction rules. For simplicity, interactions causing exchanges of goods and the ones giving rise to a change of subgroup of one agent are often modelled separately, by means of different kinetic operators [19, 21, 20, 18, 31, 17]. Formally, models belonging to this class have been derived in [30], where the authors describe multi-agent systems in which the agents are characterized by a double microscopic state: a physical quantity \(v\), changing because of binary interactions, and a label \(x\) denoting the subgroup of the agent, changing as a consequence of a Markovian process. The two stochastic processes for the evolution of \(v\) and \(x\) are _independent_ and occur with different frequencies, giving thus rise to different operators, where the one relevant to the variation of \(v\) may be the sum of _inter-_ or _intra-_ species interactions.
However, a class of models worth to be investigated is the one in which the agent changes the microscopic quantity \(v\) and the subgroup simultaneously as a consequence of the _same_ binary interaction. An example is given by the aforementioned bimolecular chemical reactions in gas mixtures. The classical kinetic description for chemically reacting gases was introduced in the pioneering work [36] where each gas species has a different distribution function of the microscopic velocity of its molecules. A pair of molecules belonging to potentially different species may interact exchanging both the velocity and the species. Such bimolecular reaction is described by means of a collisional operator that involves both distributions of reactants and products, and the change of the velocity is included by means of a transformation with unit Jacobian like in the classical strong Boltzmann equation. In the literature of kinetic theory for socio-economic sciences, the recent paper [4] describes the trade among different subpopulations, living in different countries, taking into account also possible transfers of individuals from one country to another, by means of suitable Boltzmann-type operators similar to the ones modelling bimolecular chemical reactions in gas mixtures. In this model, then, binary interactions between individuals may lead to both an exchange of wealth and to a transfer to another subpopulation as a consequence of the _same_ binary interaction, i.e. the microscopic state identifying the wealth and the one related to the label, that denotes the subpopulation, change _simultaneously_.
Another topic that has gained much interest in recent years also in kinetic theory, is the modelling of the spread of an epidemic: in this respect, compartmental Boltzmann models allowing the passage of individuals from an epidemiological compartment to another have been proposed, essentially of SIR type, where a susceptible individual could become infected and then removed because of healing or death [18, 15, 16]. A different kinetic description of infectious diseases consists in modelling interactions among different types of human cells, including the immune cells [31, 17]. For example, in some epidemic models, susceptible (carrying a vanishing viral load \(v\)) and infected individuals (with \(v>0\)) interact exchanging the quantity \(v\), and as soon as the susceptible individual's viral load becomes positive because of the binary interaction, then he/she becomes infected [15, 16]. In these works, the authors, similarly to [30], start from a microscopic description in which each agent is characterized by the microscopic quantity \(v\) and by the label
denoting, in this case, the compartment. The microscopic dynamics is then described through discrete in time stochastic processes in which the new microscopic physical quantity and label are modelled through Markov-type jump processes governed by suitable transition probabilities [15, 16]. As a consequence, the kinetic model implementing the prescribed microscopic dynamics is a kinetic equation with an operator in which the kernel is related to the transition probability.
This formulation of the Boltzmann equation is well known also in kinetic theory for a single gas. Indeed, besides the classical Boltzmann operator in which the kernel has a proper cross section taking into account the intermolecular potential (depending on the relative speed and on the impact angles) [11], other different forms have been used in the literature. The most common one is the so-called Waldmann representation [42], showing in the kernel the probability distribution of the collision process transforming pre-collision velocities \((v,w)\) into the post-collision ones \((v^{\prime},w^{\prime})\), and the Boltzmann integral over the unit sphere is replaced by integrals over post-collision velocity variables. An analogous scattering kernel formulation replaces Waldmann kernel by its integral over the velocity of the partner molecule [37]. The equivalence between these kinetic equations has been proved in [7] for microscopic interactions that conserve the average and the energy. In the literature of kinetic equations for socio-economic sciences, the authors in [30] show the equivalence between the collision-like Boltzmann equation and Markovian jump-processes described by transition probabilities that can be related to the Waldmann (probabilistic) representation of the Boltzmann equation.
In this paper we will present kinetic models for socio-economic problems in which agents have a multiple microscopic state, starting from a microscopic stochastic process ruled by transition probabilities that allow to describe the simultaneous change of all the microscopic variables and by a microscopic state-dependent interaction frequency. In the case in which the agent state is given by a microscopic quantity \(v\) and by a label \(x\) denoting the subgroup, we will show that this approach has some advantages with respect to the classical collision model [4]. Indeed, as explained also in [4], the construction of the gain term of the Boltzmann operators requires the invertibility of the collision process, that obviously holds in gas-dynamics (because of conservation of total momentum and energy in each collision), but not in human interactions, that are also influenced by non-deterministic (random) effects. This invertibility property is not needed in the operator with a transition probability in the kernel, because each single interaction has its own probability, not related to the reverse process. Moreover, realistic situations with non constant interaction probabilities are easier to manage in the stochastic Boltzmann formulation, therefore this approach could have many applications in kinetic modelling of social sciences. For these reasons in this paper we present a formal and organic treatment of kinetic equations involving a microscopic stochastic process that simultaneously changes several internal states of the interacting agents (typically, their compartment and the value of their kinetic variable). In more detail, the paper is organized as follows.
In Section 2 we formally derive the general form of a kinetic model implementing a microscopic dynamics in which each agent is characterized by a set \(\boldsymbol{z}\in\Omega\subset\mathbb{R}^{d}\) of microscopic states which may change simultaneously in each binary interaction, that is described by a transition probability and ruled by a microscopic state-dependent frequency. Even though the procedure is quite classical, stating the discrete in time stochastic process will be useful for writing the Nanbu-Babovski Algorithm for simulating the microscopic dynamics. Then we revise and establish the relation with some well-known models such as the collision-like Boltzmann equation and the kinetic equations describing transfers among different groups due to binary interactions. Finally, we explicitly derive the kinetic equation for a multi-agent system in which a binary interaction causes simultaneously a transfer and an exchange of a microscopic quantity. In Section 3, mathematical properties of the Cauchy problem associated to our general Boltzmann equation are discussed, proving existence and uniqueness of a solution in suitable Wasserstein spaces.
Then, the quasi-invariant limit commonly used to investigate socio-economic kinetic models is adapted to our general frame, allowing to derive suitable Fokker-Planck equations with additional terms taking into account transfers of agents. Section 4 is devoted to the investigation of a specific kinetic model fitting into our general framework, describing international trade with possible transfers of individuals: evolution of number density and mean wealth of each country are computed from the kinetic model, the quasi-invariant limit is performed, and analogies and differences with respect to analogous models for a single population [13] are discussed, with particular reference to the Pareto index of steady distributions. In Section 5 we show some numerical tests, simulating our kinetic equations by means of a Nanbu-Babovski Monte Carlo algorithm implementing the discrete in time stochastic process presented in Section 2: the evolution of distribution functions and of macroscopic quantities are commented on for varying parameters. Section 6 contains some concluding remarks and perspectives.
## 2 Kinetic models for binary interactions processes
In this section, we provide a formal derivation of kinetic equations implementing binary interactions among agents whose microscopic state is a vector, described by means of Markovian processes, where the interaction frequency depends on the microscopic state of the interacting agents. Then, we illustrate the relation to well-known kinetic models for binary interactions leading to exchange of physical quantities (the collision-like Boltzmann equation) and to label-switch processes, also named _transfers_. We eventually present a general framework for describing, through kinetic equations, microscopic binary interaction processes leading to both exchanges of a physical quantity and label-switches.
### Kinetic models with transition probabilities
Let us consider a large system of agents described by a microscopic state \(\mathbf{z}\in\Omega\subset\mathbb{R}^{d}\). We shall suppose that the change of the microscopic state (of all its components simultaneously) is due to stochastic _binary interactions_. A probabilistic description of such interactions may be given by means of _transition probability_ functions
\[T(\mathbf{z}^{\prime}|\mathbf{z},\mathbf{y})>0,\qquad\tilde{T}(\mathbf{y}^{\prime}|\mathbf{z},\mathbf{ y})>0\qquad\forall\mathbf{z},\mathbf{y}\in\Omega,\ t>0, \tag{1}\]
namely the _conditional_ probabilities that, given a binary interaction between an agent \(\mathbf{z}\) and an agent \(\mathbf{y}\), the first changes into \(\mathbf{z}^{\prime}\) while the second into \(\mathbf{y}^{\prime}\), respectively. Such a microscopic description may be assimilated to a _Markov-type jump process_. In order for \(T(\mathbf{z}^{\prime}|\mathbf{z},\mathbf{y}),\tilde{T}(\mathbf{y}^{\prime}|\mathbf{z},\mathbf{y})\) to be conditional probability densities, they have to satisfy the following further property:
\[\int_{\Omega}T(\mathbf{z}^{\prime}|\mathbf{z},\mathbf{y})\,d\mathbf{z}^{\prime}=1,\quad\int_{ \Omega}\tilde{T}(\mathbf{y}^{\prime}|\mathbf{z},\mathbf{y})\,d\mathbf{y}^{\prime}=1\qquad \forall\mathbf{z},\mathbf{y}\in\Omega,\ t>0. \tag{2}\]
The binary interactions may happen with a frequency \(\lambda_{\mathbf{z}\mathbf{y}}\), namely the frequency of the binary interactions between two agents having microscopic states \(\mathbf{z},\mathbf{y}\) depends on the microscopic states themselves. We remark that the two transition probabilities \(T\) and \(\tilde{T}\) are given in order to take into account possible asymmetries in the binary interactions; the symmetry of the binary interactions is here expressed by
\[T(\mathbf{z}^{\prime}|\mathbf{z},\mathbf{y})=\tilde{T}(\mathbf{y}^{\prime}|\mathbf{z},\mathbf{y}), \tag{3}\]
and \(\lambda_{\mathbf{z}\mathbf{y}}=\lambda_{\mathbf{y}\mathbf{z}}\). As classically done [32], a kinetic description of the multi-agent system can be derived by introducing discrete in time stochastic processes. Let \(\mathbf{Z}_{t},\mathbf{Y}_{t}\in\Omega\) be random variables
describing the microscopic state of two agents at time \(t>0\). Let \(f=f(\mathbf{z},t)\) be the probability density function associated to our multi-agent system, i.e. the probability density function of the random variable of a given agent \(\mathbf{Z}_{t}\), thus satisfying
\[\int_{\Omega}f(\mathbf{z},t)\,d\mathbf{z}=1. \tag{4}\]
During a sufficiently small time \(\Delta t>0\) the agents may or may not change their state \(\mathbf{Z}_{t},\mathbf{Y}_{t}\) depending on whether a binary interaction takes place or not. We express this discrete-in-time random process as
\[\mathbf{Z}_{t+\Delta t} =(1-\Theta)\mathbf{Z}_{t}+\Theta\mathbf{Z}_{t}^{\prime}, \tag{5}\] \[\mathbf{Y}_{t+\Delta t} =(1-\Theta)\mathbf{Y}_{t}+\Theta\mathbf{Y}_{t}^{\prime},\]
where \(\mathbf{Z}_{t}^{\prime},\,\mathbf{Y}_{t}^{\prime}\) are random variables describing the new microscopic state of \(\mathbf{Z}_{t}\) and \(\mathbf{Y}_{t}\) respectively after a binary interaction and having _joint_ probability density functions \(g=g(\mathbf{Z}_{t}^{\prime}=\mathbf{z}^{\prime};\mathbf{Z}_{t}=\mathbf{z},\mathbf{Y}_{t}=\mathbf{y}), \tilde{g}=\tilde{g}(\mathbf{Y}_{t}^{\prime}=\mathbf{y}^{\prime};\mathbf{Z}_{t}=\mathbf{z},\bm {Y}_{t}=\mathbf{y})\), while \(\Theta\in\{0,\,1\}\) is a Bernoulli random variable, which we assume to be independent of all the other random variables appearing in (5), discriminating whether a binary interaction takes place (\(\Theta=1\)) or not (\(\Theta=0\)) during the time \(\Delta t\). In particular, we set the probability to change the microscopic state
\[\mathrm{Prob}(\Theta=1)=\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\Delta t, \tag{6}\]
where \(\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\) is the interaction frequency between agents with microscopic states \(\mathbf{Z}_{t}\) and \(\mathbf{Y}_{t}\). Notice that, for consistency, we need \(\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\Delta t\leq 1\).
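As a minimal illustration, the following sketch implements one realisation of the update (5)-(6) together with a Nanbu-Babovski-type sweep over a population; `freq`, `sample_T` and `sample_Ttilde` are placeholders for the model-specific frequency \(\lambda_{\boldsymbol{z}\boldsymbol{y}}\) and samplers of \(T\) and \(\tilde{T}\).

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_step(z, y, freq, sample_T, sample_Ttilde, dt):
    """One realisation of (5)-(6): with probability lambda_{zy} * dt the pair
    interacts and the post-interaction states are drawn from T(.|z, y) and
    Ttilde(.|z, y); otherwise both states are left unchanged."""
    p = freq(z, y) * dt
    assert p <= 1.0, "consistency requires lambda_{zy} * dt <= 1"
    if rng.random() < p:                     # Theta = 1: interaction occurs
        return sample_T(z, y), sample_Ttilde(z, y)
    return z, y                              # Theta = 0: no interaction

def nanbu_sweep(states, freq, sample_T, sample_Ttilde, dt):
    """One Monte Carlo sweep: agents are paired at random and each pair is
    updated independently (Nanbu-Babovski style)."""
    idx = rng.permutation(len(states))
    for a, b in zip(idx[::2], idx[1::2]):
        states[a], states[b] = binary_step(states[a], states[b],
                                           freq, sample_T, sample_Ttilde, dt)
    return states
```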
Let now \(\phi=\phi(\mathbf{z})\) be an observable quantity defined on \(\mathbf{z}\in\Omega\). From (5)-(6), together with the assumed independence of \(\Theta\), we see that the mean variation rate of \(\phi\) in the time interval \(\Delta t\) satisfies
\[\frac{\langle\phi(\mathbf{Z}_{t+\Delta t})\rangle-\langle\phi(\mathbf{Z} _{t})\rangle}{\Delta t}+\frac{\langle\phi(\mathbf{Y}_{t+\Delta t})\rangle-\langle \phi(\mathbf{Y}_{t})\rangle}{\Delta t}=\] \[=\frac{\langle(1-\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\Delta t)\phi(\mathbf{ Z}_{t})\rangle+\Delta t\langle\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\phi(\mathbf{Z}_{t}^{ \prime})\rangle-\langle\phi(\mathbf{Z}_{t})\rangle}{\Delta t}\] \[+\frac{\langle(1-\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\Delta t)\phi(\mathbf{ Y}_{t})\rangle+\Delta t\langle\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\phi(\mathbf{Y}_{t}^{ \prime})\rangle-\langle\phi(\mathbf{Y}_{t})\rangle}{\Delta t}\]
where \(\langle C_{t}\rangle\) denotes the average of the random variable \(C_{t}\) with respect to its probability density function. Whence, we deduce the instantaneous time variation of the average of \(\phi\) in the limit \(\Delta t\to 0^{+}\) as
\[\frac{d}{dt}\langle\phi(\mathbf{Z}_{t})\rangle=\frac{1}{2}\Big{(}\langle\lambda_{ \mathbf{Z}_{t}\mathbf{Y}_{t}}\phi(\mathbf{Z}_{t}^{\prime})\rangle+\langle\lambda_{\mathbf{Z}_ {t}\mathbf{Y}_{t}}\phi(\mathbf{Y}_{t}^{\prime})\rangle-\langle\lambda_{\mathbf{Z}_{t}\mathbf{Y} _{t}}\phi(\mathbf{Z}_{t})\rangle-\langle\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\phi(\mathbf{Y}_ {t})\rangle\Big{)} \tag{7}\]
where we used the fact that \(\langle\phi(\mathbf{Z}_{t})\rangle=\langle\phi(\mathbf{Y}_{t})\rangle\) that implies \(\langle\phi(\mathbf{Z}_{t})\rangle+\langle\phi(\mathbf{Y}_{t})\rangle=2\langle\phi(\bm {Z}_{t})\rangle\).
We now specify the gain terms as
\[\langle\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\phi(\mathbf{Z}_{t}^{\prime})\rangle =\int_{\Omega}\int_{\Omega^{2}}\phi(\mathbf{z}^{\prime})\lambda_{\mathbf{ z}\mathbf{y}}g(\mathbf{z}^{\prime};\mathbf{z},\mathbf{y})\,d\mathbf{z}d\mathbf{y}\,d\mathbf{z}^{\prime},\] \[\langle\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\phi(\mathbf{Y}_{t}^{\prime})\rangle =\int_{\Omega}\int_{\Omega^{2}}\phi(\mathbf{y}^{\prime})\lambda_{\mathbf{ z}\mathbf{y}}\tilde{g}(\mathbf{y}^{\prime};\mathbf{z},\mathbf{y})\,d\mathbf{z}d\mathbf{y}\,d\mathbf{y}^{ \prime},\]
where \(g\) and \(\tilde{g}\) are the joint probability density functions of \(\mathbf{Z}_{t}^{\prime}\) and \(\mathbf{Y}_{t}^{\prime}\), respectively, and of the samples of the random variables \(\mathbf{Z}_{t}=\mathbf{z},\mathbf{Y}_{t}=\mathbf{y}\) at time \(t\). The probability density functions \(g\) and \(\tilde{g}\) are defined as
\[g(\mathbf{z}^{\prime};\mathbf{z},\mathbf{y})=T(\mathbf{z}^{\prime}|\mathbf{z},\mathbf{y})\,f_{2}(\mathbf{z},\mathbf{y},t)\qquad\tilde{g}(\mathbf{y}^{\prime};\mathbf{z},\mathbf{y})=\tilde{T}(\mathbf{y}^{ \prime}|\mathbf{z},\mathbf{y})\,f_{2}(\mathbf{z},\mathbf{y},t), \tag{8}\]
where \(f_{2}(\mathbf{z},\mathbf{y},t)\) is the joint distribution of the couple \((\mathbf{z},\mathbf{y})\) at time \(t\). As typically done in kinetic theory, we assume _propagation of chaos_, i.e. \(\mathbf{z}\) and \(\mathbf{y}\) are independently distributed, which allows us to perform the factorization \(f_{2}(\mathbf{z},\mathbf{y},t)=f(\mathbf{z},t)f(\mathbf{y},t)\), so that we can write
\[g(\mathbf{z}^{\prime};\mathbf{z},\mathbf{y})=T(\mathbf{z}^{\prime}|\mathbf{z},\mathbf{y})\,f(\mathbf{z},t) f(\mathbf{y},t)\qquad\tilde{g}(\mathbf{y}^{\prime};\mathbf{z},\mathbf{y})=\tilde{T}(\mathbf{y}^{ \prime}|\mathbf{z},\mathbf{y})\,f(\mathbf{z},t)f(\mathbf{y},t).\]
It is immediate to verify that \(g\) and \(\tilde{g}\) are probability density functions thanks to (2) and (4). Analogously, the loss terms can be naturally written as
\[\langle\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\phi(\mathbf{Z}_{t})\rangle =\int_{\Omega^{2}}\phi(\mathbf{z})\lambda_{\mathbf{z}\mathbf{y}}f(\mathbf{z},t)f (\mathbf{y},t)\,d\mathbf{z}d\mathbf{y},\] \[\langle\lambda_{\mathbf{Z}_{t}\mathbf{Y}_{t}}\phi(\mathbf{Y}_{t})\rangle =\int_{\Omega^{2}}\phi(\mathbf{y})\lambda_{\mathbf{z}\mathbf{y}}f(\mathbf{z},t)f (\mathbf{y},t)\,d\mathbf{z}d\mathbf{y}.\]
Therefore Eq. (7) can be stated as
\[\frac{d}{dt}\int_{\Omega}f(\mathbf{z},t)\phi(\mathbf{z})\,d\mathbf{z} =\frac{1}{2}\int_{\Omega}\int_{\Omega^{2}}\lambda_{\mathbf{z}\mathbf{y}} T(\mathbf{z}^{\prime}|\mathbf{z},\mathbf{y})\left(\phi(\mathbf{z}^{\prime})-\phi(\mathbf{z}) \right)f(\mathbf{z},t)f(\mathbf{y},t)\,d\mathbf{z}d\mathbf{y}d\mathbf{z}^{\prime} \tag{9}\] \[+\frac{1}{2}\int_{\Omega}\int_{\Omega^{2}}\lambda_{\mathbf{z}\mathbf{y}} \tilde{T}(\mathbf{y}^{\prime}|\mathbf{z},\mathbf{y})\left(\phi(\mathbf{y}^{\prime})\,-\phi( \mathbf{y})\right)f(\mathbf{z},t)f(\mathbf{y},t)\,d\mathbf{z}d\mathbf{y}d\mathbf{y}^{\prime},\]
where we have used (2) in order to write the loss terms.
In the following, we shall illustrate three meaningful examples of kinetic models describing binary interaction processes by means of transition probabilities: \(i)\) binary interactions causing exchange of physical quantities; \(ii)\) binary interactions leading to transfers of individuals; \(iii)\) binary interactions leading to both exchange of physical quantities and transfers of individuals.
### Boltzmann-type description of classical binary interaction dynamics
Let us consider the case in which the microscopic state of the agent is a non-negative physical quantity \(\mathbf{z}=v\in\Omega=\mathbb{R}_{+}\). Extensions to negative and possibly also bounded microscopic states are mostly a matter of technicalities. In general, as classically done in kinetic theory [32], if \(v,w\in\mathbb{R}_{+}\) denote the pre-interaction states of any two interacting agents, their post-interaction states \(v^{\prime},w^{\prime}\) will be given by general interaction rules in the form
\[v^{\prime}=I(v,w)+D(v,w)\eta,\qquad w^{\prime}=\tilde{I}(v,w)+\tilde{D}(v,w) \eta_{*} \tag{10}\]
where \(\eta\) and \(\eta_{*}\) are independent random variables satisfying \(\langle\eta\rangle=\langle\eta_{*}\rangle=0,\langle\eta^{2}\rangle=\langle\eta _{*}^{2}\rangle=1\), namely with zero average and unitary variance. It is known that an aggregate description of the (sole) binary interaction dynamics inspired by the principles of statistical mechanics can be obtained by introducing a probability density function \(f=f(v,t)\geq 0\) such that \(f(v,t)dv\) gives the proportion of agents having at time \(t\) a microscopic state comprised between \(v\) and \(v+dv\). Such a probability density function satisfies a Boltzmann-type kinetic equation, which in weak form reads
\[\frac{d}{dt}\int_{\mathbb{R}_{+}}f(v,t)\phi(v)\,dv=\frac{\lambda}{2}\langle \int_{\mathbb{R}_{+}^{2}}\big{(}\phi(v^{\prime})+\phi(w^{\prime})-\phi(v)-\phi( w)\big{)}f(v,t)f(w,t)\,dvdw\rangle \tag{11}\]
where \(\lambda\) is the interaction frequency that we here assume to be independent of the microscopic states of the agents.
On the other hand, if we want to describe the binary interactions through transition probabilities (1) with \(\mathbf{z}^{\prime}=v^{\prime},\mathbf{y}^{\prime}=w^{\prime}\), we have that (9) can be rewritten as
\[\begin{split}\frac{d}{dt}\int_{\mathbb{R}_{+}}f(v,t)\phi(v)\,dv& =\frac{\lambda}{2}\int_{\mathbb{R}_{+}^{2}}\left(\int_{\mathbb{R} _{+}}\phi(v^{\prime})T(v^{\prime}|v,w)\,dv^{\prime}-\phi(v)\right)f(v,t)f(w,t) \,dvdw\\ &+\frac{\lambda}{2}\int_{\mathbb{R}_{+}^{2}}\left(\int_{\mathbb{R }_{+}}\phi(w^{\prime})\tilde{T}(w^{\prime}|v,w)\,dw^{\prime}-\phi(w)\right)f(v, t)f(w,t)\,dvdw.\end{split} \tag{12}\]
We can define the first two statistical moments of the distribution function \(f\) as:
\[M(t):=\int_{\mathbb{R}_{+}}vf(t,\,v)\,dv,\qquad E(t):=\int_{\mathbb{R}_{+}}v^{ 2}f(t,\,v)\,dv,\]
that represent the average and energy, respectively. As done in [29] in the symmetric case, in order to establish a relation between (11)-(10) and (12), we investigate primarily the trend of the statistical moments as prescribed by the two different models. Setting \(\phi(v)=v,v^{2}\) in (12) and (11)-(10) yields the evolution equations for \(M,E\) for a system of agents obeying the microscopic dynamics expressed in terms of transition probabilities (1) or interaction rules (10), respectively. By comparing the evolution equations of \(M\) and \(E\) prescribed by the two different kinetic models, we see that the evolution is the same if we choose
\[\begin{split} I(v,\,w)=V_{T}(v,\,w),\quad D(v,\,w)=\sqrt{E_{T}(v,\,w)-V_{T}^{2}(v,\,w)}&=:D_{T}(v,\,w),\\ \tilde{I}(v,\,w)=V_{\tilde{T}}(v,\,w),\quad\tilde{D}(v,\,w)=\sqrt{ E_{\tilde{T}}(v,\,w)-V_{\tilde{T}}^{2}(v,\,w)}&=:D_{\tilde{T}}(v, \,w)\end{split}\]
where
\[V_{T}(v,\,w):=\int_{\mathbb{R}_{+}}v^{\prime}T(v^{\prime}\,|\,v,\,w)\,dv^{ \prime},\qquad E_{T}(v,\,w):=\int_{\mathbb{R}_{+}}v^{\prime 2}T(v^{\prime}\,|\,v, \,w)\,dv^{\prime}\]
and
\[V_{\tilde{T}}(v,\,w):=\int_{\mathbb{R}_{+}}w^{\prime}\tilde{T}(w^{\prime}\,| \,v,\,w)\,dw^{\prime},\qquad E_{\tilde{T}}(v,\,w):=\int_{\mathbb{R}_{+}}w^{ \prime 2}\tilde{T}(w^{\prime}\,|\,v,\,w)\,dw^{\prime}\]
denote the mean and the energy, respectively, of \(T\) and \(\tilde{T}\) for a given pair \((v,\,w)\in\mathbb{R}_{+}\times\mathbb{R}_{+}\) of pre-interaction states, while \(D_{T}(v,\,w)\) and \(D_{\tilde{T}}(v,\,w)\) are the standard deviations of \(T\) and \(\tilde{T}\), respectively. Therefore, if dealing with (11) we can consider the collisions
\[v^{\prime}=V_{T}(v,\,w)+D_{T}(v,\,w)\eta,\qquad w^{\prime}=V_{\tilde{T}}(v,\, w)+D_{\tilde{T}}(v,\,w)\eta_{*}, \tag{13}\]
and this choice makes formulations (11) and (12) equivalent _at the macroscopic level_ (at least for the mass, average and energy). As highlighted in [29], in general, (11) with (10) and (12) are not the same kinetic equation, although with the choice (13) they account for the same evolution of the first and second statistical moments of \(f\). Nevertheless, if in (12) we take
\[T(v^{\prime}|v,w)=\delta\Big{(}v^{\prime}-(V_{T}(v,w)+D_{T}(v,w)\eta)\Big{)}, \qquad\tilde{T}(w^{\prime}|v,w)=\delta\Big{(}w^{\prime}-(V_{\tilde{T}}(v,w)+D_ {\tilde{T}}(v,w)\eta_{*})\Big{)} \tag{14}\]
where in the right-hand side \(\delta\) is the Dirac delta, then we can formally show that (12) becomes exactly (11)-(10). Of course, in this case the right hand side (12) is meant to be written in brackets \(\langle\cdot\rangle\).
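As an illustration of the correspondence above, the mean and energy of a given transition kernel \(T(v^{\prime}|v,w)\) can be approximated by simple quadrature, yielding the coefficients \(V_{T}\) and \(D_{T}\) of the equivalent interaction rule (13); the kernel used below is made up for illustration and is not taken from the text.

```python
import numpy as np

def collision_rule_from_kernel(T, v, w, vmax=50.0, npts=4000):
    """Approximate V_T(v,w), E_T(v,w) and D_T(v,w) by quadrature on a
    truncated grid, giving the coefficients of the equivalent rule (13):
    v' = V_T(v,w) + D_T(v,w) * eta."""
    vp = np.linspace(0.0, vmax, npts)
    dv = vp[1] - vp[0]
    Tv = T(vp, v, w)
    Tv = Tv / (Tv.sum() * dv)              # renormalise on the grid
    V = (vp * Tv).sum() * dv               # mean V_T(v, w)
    E = (vp**2 * Tv).sum() * dv            # energy E_T(v, w)
    return V, np.sqrt(max(E - V**2, 0.0))  # (V_T, D_T)

# Made-up kernel, for illustration only: a Gaussian in v' centred at the
# midpoint of the pre-interaction states, restricted to R_+ by the grid.
T_example = lambda vp, v, w: np.exp(-0.5 * (vp - 0.5 * (v + w)) ** 2)
V, D = collision_rule_from_kernel(T_example, v=1.0, w=3.0)   # V ~ 2, D ~ 1
```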
### Label switch process caused by binary interactions
Let us now consider the case in which \(\Omega=\mathcal{I}_{n}=\{1,...,n\}\) and the microscopic discrete variable \(x\in\mathcal{I}_{n}\) is regarded as a _label_, that may denote the belonging of the agent to a certain group or subpopulation. We assume that _label switches_, i.e. migrations across subpopulations, can be caused by binary interactions between agents, causing a transfer of (potentially) both of them, but in such a way that the total mass of the agents in the system is conserved. We say that this process is formally a Markov-type one because the probability to switch from the current labels \(x,y\) to new labels \(x^{\prime},y^{\prime}\) does not depend on how the agents reached previously the labels \(x,y\). In particular we denote
\[P_{xy}^{x^{\prime}y^{\prime}}:=P(x^{\prime},y^{\prime}|x,y) \tag{15}\]
the conditional probability density function of switching to the groups \(x^{\prime},y^{\prime}\) given the pre-interaction labels \(x,y\).
_Remark 2.1_.: Since the variables \(x,y\) are discrete, the mapping \((x^{\prime},y^{\prime})\mapsto P(x^{\prime},y^{\prime}|x,y)\) is a discrete probability measure. Consequently, we actually have
\[\int_{\mathcal{I}_{n}^{2}}P(x^{\prime},y^{\prime}|x,y)\,dx^{\prime}dy^{\prime} =\sum_{x^{\prime},y^{\prime}\in\mathcal{I}_{n}}P(x^{\prime},y^{\prime}|x,y)=1. \tag{16}\]
If we introduce the probability density function \(f=f(x,t)\geq 0\) of the agents with label \(x\) at time \(t\), its evolution can be modelled by a kinetic equation describing a Markov-type jump process:
\[\partial_{t}f(x^{\prime},t)=\lambda\left(\int_{\mathcal{I}_{n}^{3}}P_{xy}^{x^ {\prime}y^{\prime}}f(x,t)f(y,t)\,dxdydy^{\prime}-f(x^{\prime},t)\right), \tag{17}\]
where \(\lambda>0\) is the (constant) switch frequency. In weak form (17) reads
\[\frac{d}{dt}\int_{\mathcal{I}_{n}}\psi(x)f(x,t)\,dx=\lambda\int_{\mathcal{I}_ {n}^{2}}\int_{\mathcal{I}_{n}^{2}}(\psi(x^{\prime})-\psi(x))P_{xy}^{x^{\prime} y^{\prime}}f(x,t)f(y,t)\,dx^{\prime}dy^{\prime}\,dxdy, \tag{18}\]
where \(\psi:\mathcal{I}_{n}\to\mathbb{R}\) is an observable quantity (test function) defined on \(\mathcal{I}_{n}\). Equation (18) can be derived by (9) by setting \(\mathbf{z}=x\) and
\[P_{xy}^{x^{\prime}y^{\prime}}=\frac{T(x^{\prime}|x,y)+\tilde{T}(y^{\prime}|x,y )}{2}.\]
Since \(x\in\mathcal{I}_{n}\) is discrete, we may conveniently represent the distribution function \(f\) as
\[f(x,t)=\sum_{i=1}^{n}f_{i}(t)\delta(x-i), \tag{19}\]
where \(\delta(x-i)\) is the Dirac distribution centred in \(x=i\) and \(f_{i}=f_{i}(t)\geq 0\) is the probability that an agent is labelled by \(x=i\) at time \(t\). In this way, we reconcile the weak form (18) with the convention introduced in Remark 2.1, and (18) actually becomes
\[\sum_{i=1}^{n}\psi(i)f_{i}^{\prime}(t)=\lambda\sum_{i,l=1}^{n}\sum_{j,k=1}^{n} (\psi(i)-\psi(j))P_{jk}^{il}f_{j}(t)f_{k}(t). \tag{20}\]
We have conservation of the total mass thanks to (16), as we can verify setting \(\psi=1\) in (18). Using (19) and, then, setting \(\psi=1\) in (20), this corresponds to
\[\sum_{i=1}^{n}f_{i}(t)=1. \tag{21}\]
Choosing \(\psi\) such that \(\psi(s)=1\) for a certain \(s\in\mathcal{I}_{n}\) and \(\psi(x)=0\) for all \(x\in\mathcal{I}_{n}\setminus\{s\}\) we get in particular
\[f^{\prime}_{s}=\lambda\left(\sum_{j,k,l=1}^{n}P_{jk}^{sl}f_{j}f_{k}-f_{s} \right),\qquad s=1,\,\ldots,\,n, \tag{22}\]
where we have used (16) and (21). If we allow the interaction frequency \(\lambda_{xy}\) to depend on the labels of the interacting agents, then the equation becomes
\[\sum_{i=1}^{n}\psi(i)f^{\prime}_{i}(t)=\sum_{i,l=1}^{n}\sum_{j,k=1}^{n}(\psi(i )-\psi(j))\lambda_{jk}P_{jk}^{il}f_{j}(t)f_{k}(t)\]
so that
\[f^{\prime}_{s}=\sum_{j,k,l=1}^{n}\left(\lambda_{jk}P_{jk}^{sl}f_{j}f_{k}- \lambda_{sk}P_{sk}^{jl}f_{s}f_{k}\right),\qquad s=1,\,\ldots,\,n. \tag{23}\]
In general we define
\[\beta_{ij}^{kl}:=\lambda_{ij}P_{ij}^{kl} \tag{24}\]
the _rate_ of transfer for a couple from the subgroups \((i,j)\) to \((k,l)\) as a consequence of a binary interaction between two agents labelled \(i\) and \(j\).
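As a minimal numerical sketch, once the rates \(\beta_{ij}^{kl}\) in (24) are specified, the system (23) can be integrated with an explicit Euler scheme; the uniform switching probability and unit frequency used below are purely illustrative.

```python
import numpy as np

def label_switch_rhs(f, beta):
    """Right-hand side of (23), with beta[j, k, s, l] = lambda_{jk} P_{jk}^{sl}:
    f'_s = sum_{j,k,l} ( beta[j,k,s,l] f_j f_k - beta[s,k,j,l] f_s f_k )."""
    gain = np.einsum('jksl,j,k->s', beta, f, f)
    loss = f * np.einsum('skjl,k->s', beta, f)
    return gain - loss

def evolve(f0, beta, dt=1e-2, steps=2000):
    """Explicit Euler integration of the label-switch dynamics (23)."""
    f = np.array(f0, dtype=float)
    for _ in range(steps):
        f = f + dt * label_switch_rhs(f, beta)
    return f

# Purely illustrative example with n = 2 groups, lambda = 1 and a uniform
# switching probability; P[j,k,:,:] must sum to 1 for every fixed (j, k).
n = 2
P = np.full((n, n, n, n), 1.0 / n**2)
beta = 1.0 * P                               # beta = lambda * P
f_inf = evolve([0.9, 0.1], beta)             # mass is conserved: sum ~ 1
```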
### Interacting particles with label switch and exchange of physical quantities
Let us now consider the case in which an agent is characterized by a physical quantity \(v\in\mathbb{R}_{+}\) and by a label \(x\in\mathcal{I}_{n}\) that, again, denotes the belonging of the agent to a certain group. Hence, now, the microscopic state is \(\boldsymbol{z}=(x,v)\in\Omega=\mathcal{I}_{n}\times\mathbb{R}_{+}\). The present framework allows to describe a situation in which an agent, as a consequence of a single binary interaction, changes (simultaneously) both the microscopic quantity \(v\) and the label \(x\). Agents within the same group, i.e. with the same label, are assumed to be indistinguishable. We can take into account the possibility that the interactions among agents with the same label differ from those among agents with different labels. In general, if \((x,v),\,(y,w)\in\mathcal{I}_{n}\times\mathbb{R}_{+}\) denote the pre-interaction states of any two interacting agents, their post-interaction quantities \(v^{\prime},\,w^{\prime}\) will be given by (10), where \(I=I_{xy},\tilde{I}=\tilde{I}_{xy},D=D_{xy},\tilde{D}=\tilde{D}_{xy}\) may depend on \(x,y\). Moreover, agents can also migrate to other subgroups \(x^{\prime},y^{\prime}\) and this microscopic transfer process is described by the probability function (15).
We now want to derive a kinetic equation for the joint distribution function \(f=f(x,v,t)\geq 0\), such that \(f(x,v,t)dv\) gives the proportion of agents labelled by \(x\in\mathcal{I}_{n}\) and having microscopic state comprised between \(v\) and \(v+dv\) at time \(t\). The discreteness of \(x\) allows us to represent \(f\) as [30]
\[f(x,v,t)=\sum_{i=1}^{n}f_{i}(v,t)\delta(x-i), \tag{25}\]
where \(f_{i}=f_{i}(v,t)\geq 0\) is the distribution function of the microscopic state \(v\) of the agents with label \(i\) and, in particular, \(f_{i}(v,t)dv\) is the proportion of agents with label \(i\) whose microscopic state is comprised between \(v\) and \(v+dv\) at time \(t\).
Since both the interactions and the label switching conserve the total mass of the system, we may assume that \(f(x,v,t)\) is a probability distribution, namely:
\[\int_{\mathbb{R}_{+}}\int_{\mathcal{I}_{n}}f(x,v,t)\,dx\,dv=\sum_{i=1}^{n} \int_{\mathbb{R}_{+}}f_{i}(v,t)\,dv=1\qquad\forall\,t>0. \tag{26}\]
Notice, however, that the \(f_{i}\)'s are in general not probability density functions because their \(v\)-integral varies in time due to the label switching. We denote by
\[\rho_{i}(t):=\int_{\mathbb{R}_{+}}f_{i}(v,t)\,dv \tag{27}\]
the mass of the group of agents with label \(i\), thus \(0\leq\rho_{i}(t)\leq 1\) and
\[\sum_{i=1}^{n}\rho_{i}(t)=1\qquad\forall\,t>0.\]
Let us also define the first statistical moment of \(f_{i}\)
\[M_{i}(t):=\int_{\mathbb{R}_{+}}v\,f_{i}(v,t)\,dv\]
so that the average of the \(i-\)th group is
\[m_{i}(t):=\frac{M_{i}(t)}{\rho_{i}(t)}.\]
The kinetic evolution equation for \(f(x,v,t)\), expressed as in (25), is given by (9), where now \(\boldsymbol{z}=(x,v)\), which has to hold for every \(\phi=\phi(x,v):\mathcal{I}_{n}\times\mathbb{R}_{+}\to\mathbb{R}\). Hence the evolution equation for \(f\) is
\[\frac{d}{dt}\sum_{i=1}^{n}\int_{\mathbb{R}_{+}}\phi(i,v)f_{i}(v,t )\,dv\] \[\qquad\qquad=\frac{1}{2}\int_{\mathbb{R}_{+}^{2}}\sum_{i,j,k=1}^ {n}\lambda_{jk}\int_{\mathbb{R}_{+}}T((i,v^{\prime})|(j,v),(k,w))\big{(}\phi(i,v^{\prime})-\phi(j,v)\big{)}f_{j}(v,t)f_{k}(w,t)\,dvdwdv^{\prime}\] \[\qquad\qquad+\frac{1}{2}\int_{\mathbb{R}_{+}^{2}}\sum_{l,j,k=1}^ {n}\lambda_{jk}\int_{\mathbb{R}_{+}}\tilde{T}((l,w^{\prime})|(j,v),(k,w))\big{(} \phi(l,w^{\prime})-\phi(k,w)\big{)}f_{j}(v,t)f_{k}(w,t)\,dvdwdw^{\prime}. \tag{28}\]
Choosing \(\phi(x,v)=\psi(x)\varphi(v)\) with \(\psi\) such that \(\psi(s)=1\) for a certain \(s\in\mathcal{I}_{n}\) and \(\psi(x)=0\) for all \(x\in\mathcal{I}_{n}\setminus\{s\}\), we finally obtain the following system of equations for the subgroup distributions \(f_{s}\)
\[\frac{d}{dt}\int_{\mathbb{R}_{+}}\varphi(v)f_{s}(v,t)\,dv =\] \[=\frac{1}{2}\int_{\mathbb{R}_{+}^{2}}\sum_{j,k,i=1}^{n}\int_{ \mathbb{R}_{+}}\Big{(}\lambda_{jk}\varphi(v^{\prime})T((s,v^{\prime})|(j,v),(k,w))f_{j}(v,t)\] \[-\lambda_{sk}\varphi(v)T((i,v^{\prime})|(s,v),(k,w))f_{s}(v,t) \Big{)}f_{k}(w,t)\,dvdwdv^{\prime}\] \[+\frac{1}{2}\int_{\mathbb{R}_{+}^{2}}\sum_{j,k,l=1}^{n}\int_{ \mathbb{R}_{+}}\Big{(}\lambda_{jk}\varphi(w^{\prime})\tilde{T}((s,w^{\prime} )|(j,v),(k,w))f_{k}(w,t)\] \[-\lambda_{js}\varphi(w)\tilde{T}((l,w^{\prime})|(j,v),(s,w))f_{s} (w,t)\Big{)}f_{j}(v,t)dvdwdw^{\prime}. \tag{29}\]
In particular, in order to implement the microscopic process (10)-(15), we choose
\[T((x^{\prime},v^{\prime})|(x,v),(y,w)) =\langle\int_{\mathcal{I}_{n}}P_{xy}^{x^{\prime}y^{\prime}}(v,w) \delta(v^{\prime}-(I_{xy}(v,w)+D_{xy}(v,w)\eta))dy^{\prime}\rangle,\] \[\tilde{T}((y^{\prime},w^{\prime})|(x,v),(y,w)) =\langle\int_{\mathcal{I}_{n}}P_{xy}^{x^{\prime}y^{\prime}}(v,w) \delta(w^{\prime}-(\tilde{I}_{xy}(v,w)+\tilde{D}_{xy}(v,w)\eta_{*}))dx^{ \prime}\rangle, \tag{30}\]
where we remark that \(P_{xy}^{x^{\prime}y^{\prime}}=P_{xy}^{x^{\prime}y^{\prime}}(v,w)\) may depend on the microscopic physical quantities of the interacting agents. Considering (30), (29) becomes
\[\frac{d}{dt}\int_{\mathbb{R}_{+}}\varphi(v)f_{s}(v,t)\,dv =\] \[=\frac{1}{2}\langle\int_{\mathbb{R}_{+}^{2}}\sum_{j,k,l=1}^{n} \Big{(}\beta_{jk}^{sl}\varphi(I_{xy}(v,w)+D_{xy}(v,w)\eta)f_{j}(v,t)\] \[\qquad\qquad\qquad\qquad\qquad\qquad-\beta_{sk}^{jl}\varphi(v)f _{s}(v,t)\Big{)}f_{k}(w,t)\,dvdw\rangle\] \[+\frac{1}{2}\langle\int_{\mathbb{R}_{+}^{2}}\sum_{j,k,i=1}^{n} \Big{(}\beta_{jk}^{is}\varphi(\tilde{I}_{xy}(v,w)+\tilde{D}_{xy}(v,w)\eta_{*} )f_{k}(w,t)\] \[\qquad\qquad\qquad\qquad\qquad\qquad-\beta_{js}^{ik}\varphi(w)f _{s}(w,t)\Big{)}f_{j}(v,t)dvdw\rangle, \tag{31}\]
where we have used (24). We remark that now \(\beta_{ij}^{kl}\) may depend on the microscopic variables \(v\) and \(w\) through \(P_{ij}^{kl}\). Moreover, as done in [28] in a linear case, also the interaction frequency may depend on the microscopic state \(v\).
We remark that in this case where we consider both exchanges (10) (with \(I,\tilde{I},D\) and \(\tilde{D}\) that may depend on the labels) and transfers (15), asymmetric binary interactions arise quite commonly, even if the two processes (10) and (15), separately, are symmetric. This is mainly due to the fact that the microscopic rule (10) depends on the label of the agents. Indeed, if we consider a transfer \((i,j)\to(k,l)\), the reverse transfer \((k,l)\to(i,j)\) may occur with a different probability and with a different interaction law, losing thus the reversibility of the process, usually assumed in classical Boltzmann descriptions. Specifically, in gas mixtures the reversibility is guaranteed by conservations of momentum and total energy, and the post-collision velocities may be uniquely determined in terms of the pre-collision velocities and of the impact angles [12, 27], even in presence of chemical reactions [36]. The break of symmetry between the direct and the reverse collision is known to occur in presence of inelastic collisions (for instance in granular media [6]) causing a decay in time of the kinetic energy of the system. For interactions involving human beings the kinetic approach is much more complicated (even under simplistic assumptions), and exchanges of goods and transfers among different compartments may be non-symmetric. Just to give an example, in socio-economic problems the fraction of the own wealth that each agent is willing to give to the others may depend also on the proper amount of wealth [3], and moreover a transfer from a poor country to a rich one might be much more probable than the reverse transfer [4]. This is why the general approach with generally different transition probabilities \(T\) and \(\tilde{T}\) provides a useful tool for a correct description of this kind of processes. Moreover, it allows to build more easily exchange and transfer operators for a generic number \(n\) of subpopulations. Indeed, the usual way of extending Boltzmann theory to a set of \(n>1\) constituents consists in building up a set of \(n\) Boltzmann equations, each one
for the distribution function of the \(i\)-th constituent, with \(i=1,\ldots,n\)[24, 36, 21]. On the other hand, our transition probability approach includes the label of the individual compartment into the set of microscopic states characterizing the individual, dealing thus with only one kinetic equation, that could be separated into \(n\) different equations only when one needs to compute the pertinent moments of each compartment by choosing the appropriate test function as done in (29). This turns out to be a great advantage even from the computational point of view. As we will see in numerical tests shown in Section 5, in the present approach it is also straightforward to consider transition rates \(\beta_{ij}^{kl}\) explicitly dependent on the microscopic states, both through the binary interaction frequency and /or through the transfer probability \(P_{ij}^{kl}\), investigating thus more realistic cases with respect to classical kinetic descriptions that, for the sake of simplicity, assume constant interaction probabilities in the kernel of the Boltzmann operators. Furthermore, this approach allows to include the stochastic contributions \(\eta\) and \(\eta_{*}\) in the binary interaction rules, as invertibility is not required as in the construction of the operators.
## 3 Formal study of the kinetic equation with transition probabilities
In this section we intend to revise and illustrate some analytical tools that are useful for formally studying equation (9). After briefly stating some results on the existence and uniqueness of the solution, we consider the quasi-invariant limit in various regimes of equation (31) involving label switching and exchange of physical quantities.
### Basic theory of kinetic models with transition probabilities in Wasserstein spaces
The strong form of (9) coupled with an initial condition \(f_{0}(\boldsymbol{z})\) defines the following Cauchy problem
\[\begin{cases}&\frac{\partial}{\partial t}f(\boldsymbol{z},t)=Q^{+}(f,f)-f( \boldsymbol{z},t)\int_{\Omega}\lambda_{\boldsymbol{z}\boldsymbol{y}}f( \boldsymbol{y},t)\,d\boldsymbol{y},\qquad t>0,\quad\boldsymbol{z}\in\Omega,\\ &f(0,\boldsymbol{z})=f_{0}(\boldsymbol{z}),\qquad\boldsymbol{z}\in\Omega, \end{cases} \tag{32}\]
where
\[Q^{+}(f,f)=\frac{1}{2}\int_{\Omega^{2}}\lambda_{\boldsymbol{z}\boldsymbol{y}} T(\boldsymbol{z}|^{\prime}\!\boldsymbol{z},^{\prime}\!\boldsymbol{y})f(^{ \prime}\!\boldsymbol{z},t)f(^{\prime}\!\boldsymbol{y},t)\,d^{\prime}\! \boldsymbol{z}d^{\prime}\!\boldsymbol{y}+\frac{1}{2}\int_{\Omega^{2}}\lambda_ {\boldsymbol{z}\boldsymbol{y}}\tilde{T}(\boldsymbol{z}|^{\prime}\! \boldsymbol{y},^{\prime}\!\boldsymbol{z})f(^{\prime}\!\boldsymbol{z},t)f(^{ \prime}\!\boldsymbol{y},t)\,d^{\prime}\!\boldsymbol{z}d^{\prime}\! \boldsymbol{y}, \tag{33}\]
where \({}^{\prime}\!\boldsymbol{z},^{\prime}\!\boldsymbol{y}\) are the pre-interaction states, with the compatibility condition \(\int_{\Omega}f_{0}(\boldsymbol{z})\,d\boldsymbol{z}=1\) as (4) holds true. We remark that everything could be written for a generic mass \(\rho>0\). Let us now define
\[\bar{\lambda}:=\int_{\Omega}\lambda_{\boldsymbol{z},\boldsymbol{y}}f( \boldsymbol{y},t)\,d\boldsymbol{y}\]
that we assume to be constant throughout the whole text (this assumption includes the case of a constant \(\lambda_{\boldsymbol{z}\boldsymbol{y}}\)). If we multiply both sides of the equation by \(e^{\bar{\lambda}t}\) and we integrate in time we get
\[f(\boldsymbol{z},t)=e^{-\bar{\lambda}t}f_{0}(\boldsymbol{z})+ \int_{0}^{t}e^{\bar{\lambda}(s-t)}\left[\frac{1}{2}\int_{\Omega^{2}}\lambda_{ \boldsymbol{z},^{\prime}\!\boldsymbol{y}}T(\boldsymbol{z}|^{\prime}\! \boldsymbol{z},^{\prime}\!\boldsymbol{y})f(^{\prime}\!\boldsymbol{z},s)f(^{ \prime}\!\boldsymbol{y},s)\,d^{\prime}\!\boldsymbol{z}d^{\prime}\! \boldsymbol{y}\right.\\ \left.+\frac{1}{2}\int_{\Omega^{2}}\lambda_{\boldsymbol{z},^{ \prime}\!\boldsymbol{y}}\tilde{T}(\boldsymbol{z}|^{\prime}\!\boldsymbol{y}, ^{\prime}\!\boldsymbol{z})f(^{\prime}\!\boldsymbol{z},s)f(^{\prime}\! \boldsymbol{y},s)\,d^{\prime}\!\boldsymbol{z}d^{\prime}\!\boldsymbol{y} \right]ds, \tag{34}\]
where we have used (4). Let \((\Omega,d)\) be a Polish space. Analogously to what has been done in [23], we see that an appropriate space in which (34) can be studied is \(X:=\mathcal{C}([0,\bar{t}];\mathcal{M}_{+}(\Omega))\), where \(\bar{t}>0\) is a final time and \(\mathcal{M}_{+}(\Omega)\) is the space of positive measures on \(\Omega\) having unitary mass. Therefore \(f\in X\) is a continuous mapping as a function of time over \([0,\bar{t}]\) and it is a positive measure satisfying (4) as a function of the microscopic state \(\boldsymbol{z}\in\Omega\). In particular, \(X\) is a complete metric space with the distance
\[\sup_{t\in[0,\bar{t}]}W_{1}(f(t,\boldsymbol{z}),g(t,\boldsymbol{y}))\]
where
\[W_{1}(f(t,\boldsymbol{z}),g(t,\boldsymbol{y}))=\inf_{\mu\in\Gamma(f,g)}\int_{\Omega^{2}}d(\boldsymbol{z},\boldsymbol{y})\,\mu(t,\boldsymbol{z},\boldsymbol{y})\,d\boldsymbol{z}d\boldsymbol{y} \tag{35}\]
is the 1-Wasserstein distance between \(f(t,\cdot)\) and \(g(t,\cdot)\in\mathcal{M}_{+}(\Omega)\), where \(\Gamma(f,g)\) is the space of the probability density functions defined on \(\Omega^{2}\) having marginals \(f\) and \(g\).
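For intuition only, when \(\Omega\subset\mathbb{R}\) and the two measures are empirical with the same number of equally weighted atoms, the infimum in (35) is attained by matching order statistics; a minimal sketch of this special case:

```python
import numpy as np

def w1_empirical(x, y):
    """1-Wasserstein distance between two equal-size samples on the real
    line: the optimal coupling matches the order statistics, so W_1 is the
    mean absolute difference of the sorted samples."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    return np.mean(np.abs(x - y))
```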
As done in [23], we shall always assume that the transition probabilities \(T\) and \(\tilde{T}\) satisfy the following Lipschitz continuity property.
_Assumption 3.1_.: Let \(T(\boldsymbol{z}|{}^{\prime}\!\boldsymbol{z},{}^{\prime}\!\boldsymbol{y}),\tilde{T}(\boldsymbol{z}|{}^{\prime}\!\boldsymbol{z},{}^{\prime}\!\boldsymbol{y})\in\mathscr{P}(\Omega)\) for all \({}^{\prime}\!\boldsymbol{z},{}^{\prime}\!\boldsymbol{y}\in\Omega\), where \(\mathscr{P}(\Omega)\) is the space of probability measures on \(\Omega\). We assume that there exists \(\text{Lip}(T)>0\) such that
\[W_{1}(T(\cdot|{}^{\prime}\!\boldsymbol{z},{}^{\prime}\!\boldsymbol{y}),T(\cdot|{}^{\prime}\!\boldsymbol{z}_{*},{}^{\prime}\!\boldsymbol{y}_{*}))\leq\text{Lip}(T)(|{}^{\prime}\!\boldsymbol{z}-{}^{\prime}\!\boldsymbol{z}_{*}|+|{}^{\prime}\!\boldsymbol{y}-{}^{\prime}\!\boldsymbol{y}_{*}|)\]
for all \({}^{\prime}\!\boldsymbol{z},{}^{\prime}\!\boldsymbol{y},{}^{\prime}\!\boldsymbol{z}_{*},{}^{\prime}\!\boldsymbol{y}_{*}\in\Omega\), and that the same holds for \(\tilde{T}\).
The following result holds.
**Theorem 3.2**.: _Let \(f_{0}\in\mathcal{M}_{+}(\Omega)\) and let us assume that \(T\), \(\tilde{T}\) satisfy assumption 3.1, that \(W_{1}(f_{0},T),W_{1}(f_{0},\tilde{T})<\infty\), that \(\bar{\lambda}\) is constant and that \(\lambda_{\boldsymbol{z}\boldsymbol{y}}\) is lower and upper bounded, i.e. \(\exists\,\bar{\lambda}^{m},\bar{\lambda}^{M}>0\) such that \(0<\bar{\lambda}^{m}<\lambda_{\boldsymbol{z}\boldsymbol{y}}<\bar{\lambda}^{M}<\infty\). Then, there exists a unique \(f\in X\) which solves (32), and (32) exhibits continuous dependence on the initial data. Moreover, if \(\bar{\lambda}^{M}\text{Lip}(T),\bar{\lambda}^{M}\text{Lip}(\tilde{T})<\frac{1 }{2}\), then (32) admits a unique equilibrium distribution \(f_{\infty}\), which is a probability measure on \(\Omega\) and which is also globally attractive, i.e._
\[\lim_{t\to\infty}W_{1}(f(t;\cdot);f_{\infty})=0\]
_for every solution \(f\) to (32)._
Proof.: The proof follows the same steps as done in [23] Appendix A, where the authors prove the results in the case of a bounded \(\Omega\) and, thus, use the dual form of the 1-Wasserstein distance due to the Rubinstein-Kantorovitch Theorem [1]. In the present case, as \(\Omega\) is arbitrary, we can use in the proof the definition (35) recalling the hypothesis \(W_{1}(f_{0},T),W_{1}(f_{0},\tilde{T})<\infty\) and \(\bar{\lambda}^{m}<\lambda_{\boldsymbol{z}\boldsymbol{y}}<\bar{\lambda}^{M}\).
Moreover, the following Theorem holds in the case of label switching and exchange of physical quantities.
**Theorem 3.3**.: _Let the transition probability distributions have the form_
\[T(x,v|({}^{\prime}\!x,{}^{\prime}\!v),({}^{\prime}\!y,{}^{\prime}\!w))=\sum_{i= 1}^{n}T_{i}(v|({}^{\prime}\!x,{}^{\prime}\!v),({}^{\prime}\!y,{}^{\prime}\!w)) \delta(x-i)\]
_where \(T_{i}\) satisfies_
\[|T_{i}(v|({}^{\prime}\!x,{}^{\prime}\!v),({}^{\prime}\!y,{}^{\prime}\!w))|<\text{Lip}(T_{i})(|{}^{\prime}\!x-{}^{\prime}\!y|+|{}^{\prime}\!v-{}^{\prime}\!w|)\]
_and let the analogous property hold for \(\tilde{T}\) and \(\tilde{T}_{i}\). Let moreover_
\[f_{0}(x,v)=\sum_{i=1}^{n}f_{i}^{(0)}(v)\,\delta(x-i)\]
_be a prescribed kinetic distribution function at time \(t=0\) over the space of microscopic states \((x,v)\in\mathcal{I}_{n}\times\mathbb{R}_{+}\) such that \(f_{i}\geq 0\), \(\sum_{i=1}^{n}\int_{\mathbb{R}_{+}}f_{i}(v,t)\,dv=1,\,\forall t>0\). Then the unique solution to (32) is of the form_
\[f(x,v,t)=\sum_{i=1}^{n}f_{i}(v,t)\,\delta(x-i)\]
_(analogous to (19)), with coefficients \(f_{i}(v,t)\) given by (23) along with the initial conditions \(f_{i}(v,0)=f_{i}^{(0)}(v)\). In addition, it depends continuously on the initial datum as stated by Theorem 3.2._
### Quasi-invariant limit
One of the most interesting issues in the study of kinetic models is the characterisation of the stationary distributions arising asymptotically for \(t\to\infty\), which depict the emergent behaviour of the system. The jump process model (9) hardly allows one to investigate in detail the trend to equilibrium and the profile of the stationary distributions, and the explicit expression of the steady state \(f_{\infty}\) can be inferred only in particular cases [14, 35, 23].
It is widely known that the classic collisional Boltzmann equation (11)-(10) offers several analytical tools, which often permit to explicitly recover accurate approximations of \(f_{\infty}\) by means of suitable asymptotic procedures. The basic idea of such procedures is to approximate an integro-differential Boltzmann equation with an appropriate partial differential equation, more amenable to analytical investigations, at least in some regimes of the parameters of the microscopic interactions. A prominent framework in which this type of asymptotic analysis is successfully applied to (11) is that of the _quasi-invariant interactions_. This concept was first introduced in the kinetic literature on multi-agent systems in [13, 38] as a reminiscence of the _grazing collisions_ studied in the classical kinetic theory, see [41]. This corresponds to introducing a small parameter \(\epsilon\) such that the microscopic interaction rule can be written as
\[v^{\prime}\approx v+\mathcal{O}(\epsilon) \tag{36}\]
and analyzing the dynamics on a longer time scale, setting a new time variable
\[\tau:=\epsilon t \tag{37}\]
in order to compensate for the smallness of the interactions.
In this spirit, in [29], which investigates the parallelism between the models (11)-(10) and (12), the authors propose a way to translate the concept of quasi-invariance, typically used in the context of collision-like Boltzmann equations (11)-(10), into the language of transition probabilities. The idea is the following. Let \({}^{\prime}\!\mathbf{Z}\), \(\mathbf{Z}\in\mathbb{R}_{+}\) be the random variables representing the pre- and post-interaction states, respectively, of an agent, and \({}^{\prime}\!\mathbf{Z}_{*}\in\mathbb{R}_{+}\) the one representing the pre-interaction state of the other agent involved in the interaction. In the probabilistic description via the transition probabilities, we say that interactions are quasi-invariant if, given \(0<\epsilon\ll 1\),
\[\mathrm{Prob}(|\mathbf{Z}-{}^{\prime}\!\mathbf{Z}|>\epsilon\,|\,^{\prime}\!\mathbf{Z},\,^{ \prime}\!\mathbf{Z}_{*})\leq\epsilon; \tag{38}\]
in other words, if the post-interaction state is, in probability, close to the pre-interaction state, so that the interactions produce a small transfer of microscopic state between the interacting agents.
In the present framework, in order to have a _quasi-invariant transition probability_, we can introduce rescaled transition probabilities defined by the following transform
\[T_{\epsilon}(\mathbf{z}^{\prime}|\mathbf{z},{}^{\prime}\!\mathbf{y})=\mathcal{F}_{\epsilon} [T](\mathbf{z}^{\prime}|\mathbf{z},{}^{\prime}\!\mathbf{y}),\qquad\tilde{T}_{\epsilon}(\bm {y}|{}^{\prime}\!\mathbf{z},{}^{\prime}\!\mathbf{y})=\mathcal{F}_{\epsilon}[\tilde{T}] (\mathbf{y}|{}^{\prime}\!\mathbf{z},{}^{\prime}\!\mathbf{y}) \tag{39}\]
where
\[\mathcal{F}_{\epsilon}:\mathscr{P}_{1}(\mathbb{R}_{+})\longrightarrow\mathscr{P}_{1}(\mathbb{R}_{+})\]
is a family of operators (for \(\epsilon>0\)) defined on the space of probability measures on \(\mathbb{R}_{+}\). We require that \(\mathcal{F}_{\epsilon}\) satisfies the following three properties:
* \(\mathcal{F}_{1}\) is the identity;
* \[\lim_{\epsilon\to 0}W_{1}(\mathcal{F}_{\epsilon}[T],\delta(\mathbf{z}^{ \prime}-\mathbf{z}))=0,\qquad\lim_{\epsilon\to 0}W_{1}(\mathcal{F}_{\epsilon}[ \tilde{T}],\delta(\mathbf{z}^{\prime}-\mathbf{z}))=0\] (40)
* \[\lim_{\epsilon\to 0}W_{1}(\frac{\mathcal{F}_{\epsilon}[T]}{\epsilon},T)=0, \qquad\lim_{\epsilon\to 0}W_{1}(\frac{\mathcal{F}_{\epsilon}[\tilde{T}]}{ \epsilon},\tilde{T})=0\] (41)
meaning that \(\epsilon=1\) corresponds to the basic regime (\(F1\)), that for small values of \(\epsilon\) the microscopic state tends not to change (\(F2\)), and that on the long time scale (37) the dynamics is ruled by \(T\) and \(\tilde{T}\) (\(F3\)) [29]. An example of properly rescaled transition probabilities is
\[T_{\epsilon}(\mathbf{z}^{\prime}|\mathbf{z},\mathbf{y})=(1-\epsilon)\delta(\mathbf{z}^{ \prime}-\mathbf{z})+\epsilon T(\mathbf{z}^{\prime}|\mathbf{z},\mathbf{y}),\qquad\tilde{T}_{ \epsilon}(\mathbf{y}^{\prime}|\mathbf{z},\mathbf{y})=(1-\epsilon)\delta(\mathbf{y}^{\prime}- \mathbf{y})+\epsilon\tilde{T}(\mathbf{y}^{\prime}|\mathbf{z},\mathbf{y}) \tag{42}\]
as introduced in [29], satisfying the properties \(F1\), \(F2\), \(F3\).
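For illustration, the following minimal Python sketch (the Gaussian choice of \(T\) and all numerical values are our own assumptions, not part of the model) samples a post-interaction state from the rescaled transition probability (42): with probability \(1-\epsilon\) the pre-interaction state is kept, otherwise the new state is drawn from \(T\); for small \(\epsilon\) the quasi-invariance condition (38) is then satisfied.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_from_T(z, y):
    # illustrative choice of T(.|'z,'y): a Gaussian around a mixture of the two
    # pre-interaction states, clipped to R_+ (an assumption, not prescribed by the model)
    return max(0.0, rng.normal(0.5 * (z + y), 0.1))

def sample_from_T_eps(z, y, eps):
    # rescaled transition probability (42): keep the state with probability 1-eps,
    # otherwise sample the post-interaction state from the original T
    if rng.random() < 1.0 - eps:
        return z                      # delta(z' - 'z) component
    return sample_from_T(z, y)

z, y, eps = 1.0, 2.0, 1e-2
samples = np.array([sample_from_T_eps(z, y, eps) for _ in range(100_000)])
# quasi-invariance (38): the post-interaction state is close to 'z in probability
print("Prob(|Z - 'Z| > eps) =", np.mean(np.abs(samples - z) > eps), "<= eps =", eps)
```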
In the following we shall investigate the quasi-invariant limit in the three examples illustrated in the previous section.
#### 3.2.1 Boltzmann-type description of classical binary interaction dynamics
The quasi-invariant limit procedure is classically applied to the collisional Boltzmann equation with microscopic interaction rules (11)-(10) (or (11)-(13)). Let us introduce a small parameter \(0<\epsilon\ll 1\), a time scale (37) and a corresponding probability density function \(f^{\epsilon}(\tau,v)=f(\tau/\epsilon,v)\). In view of what was discussed in Section 2, and as shown in [29], the quasi-invariant microscopic rule yielding the same evolution of the average and the energy of both \(f\) and \(f^{\epsilon}\) on the \(t\)- and \(\tau\)-time scale, respectively, is
\[v^{\prime}=V_{T_{\epsilon}}(v,w)+D_{T_{\epsilon}}(v,w)\eta,\]
and analogously for the microscopic rule for \(w^{\prime}\). If we consider the quasi-invariant transition probability (42), we have that \(V_{T_{\epsilon}}(v,w)=v+\epsilon(I(v,w)-v)\) and \(D_{T_{\epsilon}}(v,w)=\sqrt{\epsilon(1-\epsilon)(I(v,w)-v)^{2}+\epsilon D(v, w)^{2}}\) so that the quasi-invariant microscopic rules are
\[v^{\prime}=v+\epsilon(I(v,w)-v)+\sqrt{\epsilon(1-\epsilon)(I(v, w)-v)^{2}+\epsilon D(v,w)^{2}}\,\eta,\] \[w^{\prime}=w+\epsilon(\tilde{I}(v,w)-w)+\sqrt{\epsilon(1- \epsilon)(\tilde{I}(v,w)-w)^{2}+\epsilon\tilde{D}(v,w)^{2}}\,\eta_{*}. \tag{43}\]
Therefore, in terms of transition probabilities, in order to recover the same evolution of the first two moments in the two models (11)-(10) and (12), as shown in the previous section, we
must consider \(T_{\epsilon}\) having average \(\epsilon(I(v,w)-v)\) and variance \(\epsilon(1-\epsilon)(I(v,w)-v)^{2}+\epsilon D(v,w)^{2}\), and analogously for \(\tilde{T}_{\epsilon}\). In particular, in order for (11)-(10) and (12) to be the same model, an appropriate choice is (42) with \(\mathbf{z}=v\), or
\[T_{\epsilon}(v^{\prime}|v,w) =\delta\Big{(}v^{\prime}-(v+\epsilon(I(v,w)-v)+\sqrt{\epsilon(1- \epsilon)(I(v,w)-v)^{2}+\epsilon D(v,w)^{2}}\,\eta)\Big{)},\] \[\tilde{T}_{\epsilon}(w^{\prime}|v,w) =\delta\Big{(}w^{\prime}-(w+\epsilon(\tilde{I}(v,w)-w)+\sqrt{ \epsilon(1-\epsilon)(\tilde{I}(v,w)-w)^{2}+\epsilon\tilde{D}(v,w)^{2}}\,\eta_ {*})\Big{)}. \tag{44}\]
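As a sanity check of this moment-matching argument, the short Python sketch below (the choices of \(I\), \(D\) and the numerical values are our own illustrative assumptions) samples the rescaled rule (43) with a zero-mean, unit-variance noise \(\eta\) and verifies numerically that the increment \(v^{\prime}-v\) has mean \(\epsilon(I(v,w)-v)\) and variance \(\epsilon(1-\epsilon)(I(v,w)-v)^{2}+\epsilon D(v,w)^{2}\), as required of \(T_{\epsilon}\).

```python
import numpy as np

rng = np.random.default_rng(1)

def I(v, w):            # illustrative interaction function (assumption)
    return 0.7 * v + 0.3 * w

def D(v, w):            # illustrative diffusion coefficient (assumption)
    return 0.2 * v

def v_prime(v, w, eps, n_samples):
    eta = rng.standard_normal(n_samples)      # zero-mean, unit-variance noise
    drift = eps * (I(v, w) - v)
    spread = np.sqrt(eps * (1 - eps) * (I(v, w) - v) ** 2 + eps * D(v, w) ** 2)
    return v + drift + spread * eta           # quasi-invariant rule (43)

v, w, eps = 1.0, 2.0, 0.05
samples = v_prime(v, w, eps, 2_000_000)
print("empirical mean of v'-v :", np.mean(samples - v))
print("target   eps*(I-v)     :", eps * (I(v, w) - v))
print("empirical var  of v'-v :", np.var(samples - v))
print("target   eps(1-eps)(I-v)^2 + eps*D^2 :",
      eps * (1 - eps) * (I(v, w) - v) ** 2 + eps * D(v, w) ** 2)
```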
We remark that for \(\epsilon=1\) we recover \(T\) and \(\tilde{T}\), while for \(\epsilon\to 0\) we have that
\[W_{1}(T_{\epsilon}(\cdot|v,w),\delta(\cdot-v)) \leq\int_{\mathbb{R}_{+}^{2}}|v^{\prime}-w^{\prime}|T_{\epsilon}( v^{\prime}|v,w)\delta(w^{\prime}-v)\,dv^{\prime}dw^{\prime}\] \[=|\epsilon(I(v,w)-v)+\sqrt{\epsilon(1-\epsilon)(I-v)^{2}+ \epsilon D^{2}}\,\eta|\longrightarrow_{\epsilon\to 0}0\]
and it can also be easily verified that \(F3\) holds true for \(T_{\epsilon}\) defined by (44), and analogously for \(\tilde{T}_{\epsilon}\). If we consider for a moment the symmetric case (3), for simplicity, then plugging (37) into (12), considering (39) satisfying (41), and letting \(\epsilon\to 0^{+}\) yields
\[\partial_{\tau}f=\int_{\mathbb{R}_{+}}\int_{\mathbb{R}_{+}}T(v\,|\,^{\prime} \!v,\,^{\prime}\!w)f(\tau,\,^{\prime}\!v)f(\tau,\,^{\prime}\!w)\,d^{\prime}\!v \,d^{\prime}\!w-f, \tag{45}\]
which is structurally identical to the very general equation (12) and does not give any further information. Therefore, in spite of the quasi-invariant structure of the interactions, it is in principle no easier to extract from (45) more detailed information about the asymptotic trends.
Instead, if we consider (12) with (44) and (37), then we can perform the quasi-invariant limit, as in [29], and, letting \(\epsilon\to 0^{+}\), we obtain
\[\partial_{\tau}f =\frac{1}{2}\partial_{v}^{2}\left\{\left[\int_{\mathbb{R}_{+}} \Big{(}(V_{T}(v,\,w)-v)^{2}+D_{T}^{2}(v,\,w)\Big{)}\,f(\tau,\,w)\,dw\right]f\right\}\] \[\quad-\partial_{v}\left[\left(\int_{\mathbb{R}_{+}}V_{T}(v,\,w)f( \tau,\,w)\,dw-v\right)f\right], \tag{46}\]
where \(V_{T},D_{T}\) are the average and variance of \(T\) defined in Section 2.2.
#### 3.2.2 Label switch process caused by binary interactions
Let us now consider the transfer process described by the kinetic equation (23). In order to write a quasi-invariant regime, we can express the fact that, given a collision, individuals have a small probability of jumping, i.e.
\[P_{\epsilon_{xy}}^{x^{\prime}y^{\prime}}=\epsilon P_{xy}^{x^{\prime}y^{\prime}} \quad\text{if}\quad(x,y)\neq(x^{\prime},y^{\prime}),\quad P_{\epsilon_{xy}}^{ x^{\prime}y^{\prime}}=1-\epsilon\quad\text{if}\quad(x,y)=(x^{\prime},y^{\prime}). \tag{47}\]
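A minimal Python sketch of the rescaling (47); the particular conditional probabilities below are an arbitrary illustrative choice (with \(P_{xy}^{xy}=0\), so that (47) still defines a probability distribution over the post-interaction labels).

```python
import numpy as np

n = 2
# illustrative conditional probabilities P[x, y, x', y'], with P[x, y, x, y] = 0
P = np.zeros((n, n, n, n))
P[0, 0, 0, 1] = P[0, 0, 1, 0] = 0.5
P[1, 1, 0, 1] = P[1, 1, 1, 0] = 0.5
P[0, 1, 0, 0] = P[0, 1, 1, 1] = 0.5
P[1, 0, 0, 0] = P[1, 0, 1, 1] = 0.5

def rescale(P, eps):
    """Quasi-invariant rescaling (47): eps*P off the diagonal, 1-eps on it."""
    P_eps = eps * P
    for x in range(n):
        for y in range(n):
            P_eps[x, y, x, y] = 1.0 - eps
    return P_eps

P_eps = rescale(P, 1e-2)
# each (x, y) slice is still a probability distribution over (x', y')
print(P_eps.sum(axis=(2, 3)))            # all entries equal 1
```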
Then, considering a long time scale (37) we have that (23) is
\[\frac{df_{s}^{\epsilon}}{d\tau}=\frac{1}{\epsilon}\sum_{j,k,l=1,j\neq s,\,k \neq l}^{n}\Big{(}P_{\epsilon_{jk}}^{sl}\lambda_{jk}f_{j}^{\epsilon}-P_{ \epsilon_{sk}}^{jl}\lambda_{sk}f_{s}^{\epsilon}\Big{)}\,f_{k}^{\epsilon},\qquad s =1,\,\ldots,\,n. \tag{48}\]
Plugging (47) in (48) we obtain
\[\frac{df^{\epsilon}_{s}}{d\tau}=\sum_{j,k,l=1}^{n}\Big{(}\beta_{jk}^{sl}f^{ \epsilon}_{j}f^{\epsilon}_{k}-\beta_{sk}^{jl}f^{\epsilon}_{s}f^{\epsilon}_{k} \Big{)}\,,\qquad s=1,\,\ldots,\,n\]
which is structurally identical to the very general equation (23) with \(\tau\) instead of \(t\), meaning that it is the probability of jumping that rules the dynamics on the long time scale.
#### 3.2.3 Interacting particles with label switch and exchange of physical quantities
Let us now consider the case in which the binary interactions lead both to a transfer and to an exchange of the physical quantity \(v\). Therefore, for the two processes we shall consider a quasi-invariant regime given by (36)-(47). In this case the transition probability (30) may be rescaled as
\[T_{\epsilon}((x^{\prime},v^{\prime})|(x,v),(y,w))=P^{x^{\prime}y^{\prime}}_{ \epsilon_{xy}}\delta\Big{(}v^{\prime}-(v+\epsilon(I_{xy}-v)+\sqrt{\epsilon(1- \epsilon)(I_{xy}-v)^{2}+\epsilon D_{xy}^{2}}\,\eta)\Big{)} \tag{49}\]
and analogously for \(\tilde{T}_{\epsilon}\). Let us consider the symmetric case for simplicity of notation, bearing in mind that the asymmetric case can be treated analogously. Plugging the latter in (31), considering the re-scaling (37) and recalling (47), if we let \(\epsilon\to 0^{+}\) we obtain
\[\frac{d}{d\tau}\int_{\mathbb{R}_{+}}\varphi(v)f_{s}(\tau,v)\,dv=\] \[=\int_{\mathbb{R}_{+}^{2}}\sum_{j,k,l=1,j\neq s\lor k\neq l}^{n} \varphi(v)\big{(}\beta_{jk}^{sl}f_{j}(\tau,v)-\beta_{sk}^{jl}f_{s}(\tau,v) \big{)}f_{k}(\tau,w)\,dvdw \tag{50}\]
which means that the dynamics is ruled by the label switches and gives no further information. In the symmetric case, it is the symmetric form of (31).
Let us now consider a different regime and, in particular, let us only consider a quasi-invariant exchange rule, i.e. (36). Let us then rescale (31) with (37) and let us consider the quasi-invariant transition probability
\[T_{\epsilon}((x^{\prime},v^{\prime})|(x,v),(y,w))=P^{x^{\prime}y^{\prime}}_{ xy}\delta\Big{(}v^{\prime}-(v+\epsilon(I_{xy}-v)+\sqrt{\epsilon(1-\epsilon)(I_{ xy}-v)^{2}+\epsilon D_{xy}^{2}}\,\eta)\Big{)}, \tag{51}\]
i.e. the exchange of the physical quantity is actually quasi-invariant, whilst the label-switch process is not. We obtain
\[\frac{d}{d\tau}\int_{\mathbb{R}_{+}}\varphi(v)f_{s}(\tau,v)\,dv=\] \[=\langle\frac{1}{\epsilon}\int_{\mathbb{R}_{+}^{2}}\sum_{j,k,l=1} ^{n}\big{(}\beta_{jk}^{sl}\varphi(v+\epsilon(I_{jk}-v)+\sqrt{\epsilon(1- \epsilon)(I_{jk}-v)^{2}+\epsilon D_{jk}^{2}}\eta)f_{j}(\tau,v)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\beta_{ sk}^{jl}\varphi(v)f_{s}(\tau,v)\big{)}f_{k}(\tau,w)\,dvdw\rangle. \tag{52}\]
As \(\epsilon\) is small, we can Taylor expand
\[\langle\varphi(v+\epsilon(I_{jk}(v,w)-v)+\sqrt{\epsilon(1- \epsilon)(I_{jk}-v)^{2}+\epsilon D_{jk}^{2}}\eta)\rangle=\] \[=\varphi(v)+\epsilon(I_{jk}(v,w)-v)\varphi^{\prime}(v)+\frac{1}{ 2}\varphi^{\prime\prime}(v)\epsilon\left((1-\epsilon)(I_{jk}-v)^{2}+D_{jk}^{2} \right)+\mathcal{O}(\epsilon^{2})\]
and plugging the latter in (52) we have that
\[\frac{d}{d\tau}\int_{\mathbb{R}_{+}}\varphi(v)f_{s}(\tau,v)\,dv=\] \[=\int_{\mathbb{R}_{+}^{2}}\sum_{j,k,l=1}^{n}\beta_{jk}^{sl}\big{(} \varphi^{\prime}(v)(I_{jk}-v)+\frac{1}{2}((1-\epsilon)(I_{jk}-v)^{2}+D_{jk}^{2 })\varphi^{\prime\prime}(v)\big{)}f_{j}(\tau,v)f_{k}(\tau,w)\,dvdw\] \[+\frac{1}{\epsilon}\int_{\mathbb{R}_{+}}\sum_{j,k,l=1,j\neq s}^{n }\varphi(v)\big{(}\beta_{jk}^{sl}f_{j}(\tau,v)-\beta_{sk}^{jl}f_{s}(\tau,v) \big{)}\rho_{k}(\tau)\,dv\]
where we recall that \(\rho_{k}\) is the mass of the \(k\)-th population. Let us now consider an expansion for the probability density function of the whole population \(f\)
\[f(\tau,x,v)=f^{(0)}(\tau,x,v)+\epsilon f^{(1)}(\tau,x,v)+\mathcal{O}(\epsilon^ {2}) \tag{53}\]
where the zero-th and first order moments satisfy
\[\int_{\mathcal{I}_{n}\times\mathbb{R}_{+}}fdxdv=\rho^{(0)}:=\int _{\mathcal{I}_{n}\times\mathbb{R}_{+}}f^{(0)}\,dxdv,\qquad\rho^{(1)}:=\int_{ \mathcal{I}_{n}\times\mathbb{R}_{+}}f^{(1)}\,dxdv=0,\] \[\int_{\mathcal{I}_{n}\times\mathbb{R}_{+}}fvdxdv=M^{(0)}:=\int_{ \mathcal{I}_{n}\times\mathbb{R}_{+}}f^{(0)}v\,dxdv,\qquad M^{(1)}:=\int_{ \mathcal{I}_{n}\times\mathbb{R}_{+}}f^{(1)}v\,dxdv=0. \tag{54}\]
For each \(f_{i}\) this translates into
\[f_{i}(\tau,v)=f_{i}^{(0)}(\tau,v)+\epsilon f_{i}^{(1)}(\tau,v)+\mathcal{O}( \epsilon^{2})\]
and (54) translates to
\[\rho^{(0)}=\sum_{i=1}^{n}\rho_{i}^{(0)}=\int_{\mathcal{I}_{n} \times\mathbb{R}_{+}}fdxdv,\qquad\sum_{i=1}^{n}\rho_{i}^{(1)}=0. \tag{55}\]
Comparing equal powers of \(\epsilon\) and assuming symmetry, we obtain
\[\frac{1}{\epsilon}\int_{\mathbb{R}_{+}}\sum_{j,k,l=1,j\neq s}^{n }\varphi(v)\big{(}\beta_{jk}^{sl}f_{j}^{(0)}(\tau,v)-\beta_{sk}^{jl}f_{s}^{(0) }(\tau,v)\big{)}\rho_{k}^{(0)}(\tau)dv=0, \tag{56}\]
so that
\[f_{s}^{(0)}(\tau,v)=\frac{\sum_{j,k,l=1,j\neq s}^{n}\beta_{jk}^{sl}f_{j}^{(0)} (\tau,v)\rho_{k}^{(0)}(\tau)}{\sum_{j,k,l=1,j\neq s}^{n}\beta_{sk}^{jl}\rho_{k }^{(0)}(\tau)}, \tag{57}\]
while, at the first order
\[\frac{d}{d\tau}\int_{\mathbb{R}_{+}}\varphi(v)f_{s}^{(0)}(\tau,v )\,dv=\] \[=\int_{\mathbb{R}_{+}^{2}}\sum_{j,k,l=1}^{n}\beta_{jk}^{sl}\big{(} \varphi^{\prime}(v)(I_{jk}(v,w)-v)+\frac{1}{2}((I_{jk}(v,w)-v)^{2}+D_{jk}^{2} )\varphi^{\prime\prime}(v)\big{)}f_{j}^{(0)}(\tau,v)f_{k}^{(0)}(\tau,w)\,dvdw\] \[+\int_{\mathbb{R}_{+}}\sum_{j,k,l=1,j\neq s}^{n}\varphi(v)\big{(} \beta_{jk}^{sl}f_{j}^{(0)}(\tau,v)-\beta_{sk}^{jl}f_{s}^{(0)}(\tau,v)\big{)} \rho_{k}^{(1)}(\tau)dv\] \[+\int_{\mathbb{R}_{+}}\sum_{j,k,l=1,j\neq s}^{n}\varphi(v)\big{(} \beta_{jk}^{sl}f_{j}^{(1)}(\tau,v)-\beta_{sk}^{jl}f_{s}^{(1)}(\tau,v)\big{)} \rho_{k}^{(0)}(\tau)dv,\]
from which, resorting to the strong form by means of integration by parts, we obtain a Fokker-Planck-type equation with a reaction term for each \(f_{s}^{(0)}\), namely
\[\partial_{\tau}f_{s}^{(0)}(\tau,v) =\] \[-\partial_{v}\sum_{j,k,l=1}^{n}\beta_{jk}^{sl}\int_{\mathbb{R}_{+}} (I_{jk}(v,w)-v)f_{k}^{(0)}(\tau,w)\,dwf_{j}^{(0)}(\tau,v)\] \[+\partial_{vv}^{2}\frac{1}{2}\sum_{j,k,l=1}^{n}\beta_{jk}^{sl} \int_{\mathbb{R}_{+}}((I_{jk}(v,w)-v)^{2}+D_{jk}^{2})f_{k}^{(0)}(\tau,w)\,dwf_{ j}^{(0)}(\tau,v)\] \[+\sum_{j,k,l=1,j\neq s}^{n}\big{(}\beta_{jk}^{sl}f_{j}^{(0)}(\tau,v)-\beta_{sk}^{jl}f_{s}^{(0)}(\tau,v)\big{)}\rho_{k}^{(1)}(\tau)\] \[+\sum_{j,k,l=1,j\neq s}^{n}\big{(}\beta_{jk}^{sl}f_{j}^{(1)}(\tau,v)-\beta_{sk}^{jl}f_{s}^{(1)}(\tau,v)\big{)}\rho_{k}^{(0)}(\tau). \tag{59}\]
The latter reaction terms also involve the first order corrections \(f_{i}^{(1)},\,i=1,...,n\). In order to determine uniquely the solutions \(f_{s}^{(0)},f_{s}^{(1)}\) to (57) and (59) satisfying (55), we need a number of conditions (to be sought, for example, among conserved quantities) equal to the number of degrees of freedom.
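Before moving on, the expansion of \(\langle\varphi(\cdot)\rangle\) used above can be checked numerically. In the following Python sketch (the test function \(\varphi(v)=v^{3}\) and the values of \(I_{jk}\), \(D_{jk}\) are our own illustrative assumptions, and \(\eta\) is taken standard Gaussian), the expectation over \(\eta\) is computed by Gauss-Hermite quadrature and compared with the second-order expansion; the discrepancy decays like \(\mathcal{O}(\epsilon^{2})\), consistently with the remainder.

```python
import numpy as np

def phi(v):                 # test observable (our choice)
    return v ** 3

def dphi(v):
    return 3 * v ** 2

def d2phi(v):
    return 6 * v

v, I_jk, D_jk = 1.0, 1.8, 0.3          # illustrative values (assumptions)
a = I_jk - v

# Gauss-Hermite rule: E[f(eta)] = (1/sqrt(pi)) * sum_i w_i f(sqrt(2) x_i) for eta ~ N(0,1)
x, w = np.polynomial.hermite.hermgauss(40)

for eps in (1e-1, 1e-2, 1e-3):
    s = np.sqrt(eps * (1 - eps) * a ** 2 + eps * D_jk ** 2)
    exact = np.sum(w * phi(v + eps * a + s * np.sqrt(2) * x)) / np.sqrt(np.pi)
    expansion = phi(v) + eps * a * dphi(v) \
        + 0.5 * eps * ((1 - eps) * a ** 2 + D_jk ** 2) * d2phi(v)
    print(f"eps={eps:7.0e}   |exact - expansion| = {abs(exact - expansion):.3e}")
```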
## 4 Kinetic model for international trade allowing transfer of individuals
In this section, we are going to rephrase within the current framework the kinetic model for international trade allowing transfer of individuals investigated in [4], where the author presents a model of interacting individuals divided into two subpopulations and allowed, by means of binary interactions, to exchange wealth and to migrate to the other subgroup. Here, then, we have that \(n=2\), the physical quantity \(v\in\mathbb{R}_{+}\) is the wealth, while the label \(x\in\mathcal{I}_{2}\) denotes the subgroup. Note that, in the present work, we are only going to consider binary interactions giving rise to both exchanges of wealth and transfers simultaneously.
### From the microscopic to the macroscopic model
Concerning the exchange of the physical quantity \(v\), we are going to consider simple linear microscopic rules (10) with
\[I_{xy}(v,w)=(1-\omega_{x})v+\omega_{y}w,\qquad D_{xy}=\zeta_{xy}v \tag{60}\]
where \(\omega_{i}\in[0,1]\), \(i\in\mathcal{I}_{2}\) and we are dealing with symmetric interactions. We consider possible transfers given by
\[\text{(a)}\qquad 1+1\to 1+2,\qquad\text{(b)}\qquad 2+2\to 1+2,\] \[\text{(c)}\qquad 1+2\to 1+1,\qquad\text{(d)}\qquad 1+2\to 2+2, \tag{61}\]
therefore only one of the two interacting agents moves to the other subgroup. The latter implies that the only non-vanishing values of \(P_{ij}^{kl}\) correspond to the quadruples
\[(i,j,k,l)\in\{(1,1,1,2),(1,1,2,1),(2,2,1,2),(2,2,2,1),(1,2,1,1),(1,2,2,2),(2,1,1,1),(2,1,2,2)\}. \tag{62}\]
The kinetic equation describing this microscopic dynamics is (29) with (30), where \(P_{ij}^{kl}\) is defined according to (62) and the microscopic exchange dynamics by (60). The evolution of the mass of each subpopulation is given by setting \(\varphi=1\) in (29), with the prescribed dynamics, for \(s=1,2\) and it results in
\[\partial_{t}\rho_{1} =(\beta_{22}^{12}\rho_{2}+\beta_{12}^{11}\rho_{1})\rho_{2}-(\beta _{11}^{12}\rho_{1}+\beta_{12}^{22}\rho_{2})\rho_{1}\] \[\partial_{t}\rho_{2} =(\beta_{11}^{12}\rho_{1}+\beta_{12}^{22}\rho_{2})\rho_{1}-(\beta _{22}^{12}\rho_{2}+\beta_{12}^{11}\rho_{1})\rho_{2} \tag{63}\]
while setting \(\varphi=v\) in (29) gives the evolution of the first moments for \(s=1,2\)
\[\partial_{t}M_{1} =\left(\beta_{12}^{11}\rho_{1}+\beta_{22}^{12}\rho_{2}\right)M_{ 2}-\left(\beta_{12}^{22}\rho_{2}+\beta_{11}^{12}\rho_{1}\right)M_{1},\] \[\partial_{t}M_{2} =\left(\beta_{12}^{22}\rho_{2}+\beta_{11}^{12}\rho_{1}\right)M_{ 1}-\left(\beta_{11}^{12}\rho_{1}+\beta_{22}^{12}\rho_{2}\right)M_{2}. \tag{64}\]
As a consequence, the averages of the wealth of population 1 and 2 evolve as
\[\partial_{t}m_{1} =\frac{\rho_{2}}{\rho_{1}}\left(\beta_{22}^{12}\rho_{2}+\beta_{12 }^{11}\rho_{1}\right)(m_{2}-m_{1}),\] \[\partial_{t}m_{2} =\frac{\rho_{1}}{\rho_{2}}\left(\beta_{11}^{12}\rho_{1}+\beta_{12 }^{22}\rho_{2}\right)(m_{1}-m_{2}). \tag{65}\]
We remark that we have assumed symmetry in the interaction rates, i.e.
\[\beta_{ii}^{12}=\beta_{ii}^{21},\quad\beta_{12}^{ii}=\beta_{21}^{ii},\quad \forall i=1,2.\]
We observe that the total mass and average
\[\bar{\rho}:=\rho_{1}+\rho_{2}=1,\qquad\bar{M}:=M_{1}+M_{2}\]
are conserved in time. Regarding the stationary states of the masses, we have that
\[\rho_{2}^{\infty}=\alpha\rho_{1}^{\infty}\]
where
\[\alpha=\frac{-(\beta_{12}^{11}-\beta_{12}^{22})+\sqrt{(\beta_{12}^{11}-\beta _{12}^{22})^{2}+4\beta_{11}^{12}\beta_{22}^{12}}}{2\beta_{22}^{12}},\]
and, taking into account that the sum of the two densities is constant, we have that
\[\rho_{1}^{\infty}=\frac{\bar{\rho}}{1+\alpha},\qquad\rho_{2}^{\infty}=\frac{ \alpha\bar{\rho}}{1+\alpha}.\]
Therefore
\[\rho_{1}^{\infty}=\rho_{2}^{\infty}=\frac{\bar{\rho}}{2}\qquad\text{if and only if}\qquad\alpha=1.\]
Bearing in mind that \(\beta_{ij}^{kl}=P_{ij}^{kl}\lambda_{ij}\), the latter condition is satisfied if \(P_{12}^{22}=P_{12}^{11}=0.5\) and \(\lambda_{11}=\lambda_{22}\), which means that, in an interaction between agents with different labels, the two subgroups are equally likely destinations, and that the frequency of interaction among agents of the same subgroup is the same for all subgroups. In this regard, we observe that, as the stationary state only depends on \(\alpha\), there may be a switch in the population sizes \(((\rho_{2}^{\infty}-\rho_{1}^{\infty})(\rho_{2}(0)-\rho_{1}(0))<0)\) if
\[(\rho_{2}(0)-\rho_{1}(0))(\alpha-1)<0. \tag{66}\]
Concerning the averages, the necessary and sufficient condition to be met at the stationary state is
\[m_{1}^{\infty}=m_{2}^{\infty}=:m^{\infty} \tag{67}\]
for every choice of the parameters. The latter implies that \(M_{2}^{\infty}>M_{1}^{\infty}\) if and only if \(\alpha>1\). Moreover, because of conservation of mass and total momentum, we have that
\[m^{\infty}=\rho_{1}(0)m_{1}(0)+\rho_{2}(0)m_{2}(0) \tag{68}\]
which implies that the final average wealth is closer to the initial average wealth of the subgroup that was more populated at \(t=0\).
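For the reader's convenience, the macroscopic system (63)-(64) can be integrated with a few lines of Python; the sketch below (rates and initial data are illustrative assumptions) uses an explicit Euler scheme and checks the stationary relations \(\rho_{2}^{\infty}=\alpha\rho_{1}^{\infty}\) and \(m_{1}^{\infty}=m_{2}^{\infty}=m^{\infty}\), with \(m^{\infty}\) given by (68).

```python
import numpy as np

# illustrative rates beta_{ij}^{kl} = P_{ij}^{kl} * lambda_{ij} (assumptions)
b11_12, b22_12, b12_11, b12_22 = 0.5, 0.4, 0.3, 0.6

def rhs(rho1, rho2, M1, M2):
    # right-hand sides of (63)-(64); total mass and total momentum are conserved
    drho1 = (b22_12 * rho2 + b12_11 * rho1) * rho2 - (b11_12 * rho1 + b12_22 * rho2) * rho1
    dM1 = (b12_11 * rho1 + b22_12 * rho2) * M2 - (b12_22 * rho2 + b11_12 * rho1) * M1
    return drho1, -drho1, dM1, -dM1

rho1, rho2 = 0.9, 0.1
m1_0, m2_0 = 0.5, 10.0
M1, M2 = rho1 * m1_0, rho2 * m2_0
dt = 1e-2
for _ in range(200_000):                       # explicit Euler up to t = 2000
    d1, d2, dM1, dM2 = rhs(rho1, rho2, M1, M2)
    rho1, rho2, M1, M2 = rho1 + dt * d1, rho2 + dt * d2, M1 + dt * dM1, M2 + dt * dM2

alpha = (-(b12_11 - b12_22) + np.sqrt((b12_11 - b12_22) ** 2 + 4 * b11_12 * b22_12)) / (2 * b22_12)
print("rho2/rho1 at large t :", rho2 / rho1, "  alpha:", alpha)
print("m1, m2 at large t    :", M1 / rho1, M2 / rho2)
print("m_infty from (68)    :", 0.9 * m1_0 + 0.1 * m2_0)
```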
### Quasi-invariant limit
If we assume, for simplicity of notation, that the probability of transfer towards the \(i\)-th subgroup is independent of the countries of the interacting agents, i.e.
\[\beta_{1}^{2}:=\beta_{11}^{12}=\beta_{12}^{22},\qquad\beta_{2}^{1}:=\beta_{22 }^{12}=\beta_{12}^{11} \tag{69}\]
we have that
\[\rho_{2}^{\infty}=\alpha\rho_{1}^{\infty},\qquad M_{2}^{\infty}=\alpha M_{1} ^{\infty}, \tag{70}\]
where \(\alpha=\beta_{1}^{2}/\beta_{2}^{1}\), and, as \(\bar{M}\) and \(\bar{\rho}=1\) are conserved quantities, we have that
\[\rho_{1}^{\infty}=\frac{\bar{\rho}}{1+\alpha},\qquad M_{1}^{\infty}=\frac{ \bar{M}}{1+\alpha}.\]
Let us now consider the quasi-invariant regime defined by the transition probability
\[T_{\epsilon}((x^{\prime},v^{\prime})|(x,v),(y,w))=P_{xy}^{x^{\prime}y^{\prime }}\delta\Big{(}v^{\prime}-(v+\epsilon(I_{xy}(v,w)-v)+\sqrt{\epsilon D_{xy}(v,w )^{2}}\eta)\Big{)}. \tag{71}\]
The latter, even though it satisfies the requirements F1, F2, F3 prescribed in Section 3.2, unlike (51) does not guarantee the same evolution of the energy in the quasi-invariant regime. By expanding the distribution functions in powers of \(\epsilon\), and imposing that the globally invariant quantities (zero-th and first order moments) remain unexpanded, we get that the constraints (55) are
\[\rho_{1}^{(1)}+\rho_{2}^{(1)}=0,\qquad M_{1}^{(1)}+M_{2}^{(1)}=0. \tag{72}\]
Therefore, we have two degrees of freedom and we can determine the values of \(\rho_{1}^{(1)},M_{1}^{(1)}\) as we have two conserved quantities. Following the same procedure as before, we find that (57) now is
\[f_{1}^{(0)}=\frac{1}{\alpha}f_{2}^{(0)}\,. \tag{73}\]
Then we have that \(\rho_{2}^{(0)}=\alpha\rho_{1}^{(0)}\) and \(M_{2}^{(0)}=\alpha M_{1}^{(0)}\), i.e. \(\rho_{i}^{(0)}=\rho_{i}^{\infty},M_{i}^{(0)}=M_{i}^{\infty}\), \(i=1,2\), which means that the masses and averages of order zero correspond to the equilibrium ones. Therefore
\[\rho_{1}^{(0)}=\frac{\bar{\rho}}{1+\alpha},\qquad M_{1}^{(0)}=\frac{\bar{M}} {1+\alpha}\,. \tag{74}\]
At the first order (59) for \(s=1\) (for \(s=2\) an analogous result applies) specialises into
\[\partial_{\tau}f_{1}^{(0)}=-\partial_{v}\Big{(}\beta_{1}^{2}( \omega_{1}M_{1}^{(0)}-\omega_{2}v\rho_{1}^{(0)})f_{1}^{(0)}+\beta_{1}^{2}( \omega_{2}M_{2}^{(0)}-\omega_{2}v\rho_{2}^{(0)})f_{1}^{(0)}\] \[\qquad\qquad+\beta_{1}^{2}(\omega_{1}M_{1}^{(0)}-\omega_{1}v\rho _{1}^{(0)})f_{1}^{(0)}+\beta_{2}^{1}(\omega_{2}M_{2}^{(0)}-\omega_{1}v\rho_{2 }^{(0)})f_{1}^{(0)}\Big{)}\] \[\qquad\qquad+\beta_{1}^{2}\left[\zeta_{11}^{2}v^{2}\right]f_{1}^ {(0)}\rho_{1}^{(0)}+\beta_{2}^{1}\left[\zeta_{12}^{2}v^{2}\right]f_{1}^{(0)} \rho_{2}^{(0)}\Big{)}\] \[\qquad\qquad+\left(f_{2}^{(1)}\beta_{2}^{1}-f_{1}^{(1)}\beta_{1}^ {2}\right)\bar{\rho} \tag{75}\]
where use of (73) has been made. Integrating (75) over \(\mathbb{R}_{+}\), and (75) multiplied by \(v\) over \(\mathbb{R}_{+}\), along with the conditions \(f_{1}(0)=0\) and \(\lim_{v\to+\infty}f_{1}(v)=0\), and recalling (72), we find that both \(\rho_{1}^{(1)}=\rho_{2}^{(1)}=0\) and \(M_{1}^{(1)}=M_{2}^{(1)}=0\). This implies that both the masses and the averages of \(f_{1}\) and \(f_{2}\) are at equilibrium even at \(\mathcal{O}(\epsilon)\) accuracy. Using relations (73)-(74) in (75), we obtain
\[\begin{split}\partial_{\tau}f_{1}^{(0)}&=-\frac{ \beta_{1}^{2}}{1+\alpha}\partial_{v}\left[\Big{(}(\omega_{1}\bar{M}-\omega_{2} v\bar{\rho})+\alpha(\omega_{2}\bar{M}-\omega_{2}v\bar{\rho})+(\omega_{1}\bar{M}- \omega_{1}v\bar{\rho})+(\omega_{2}\bar{M}-\omega_{1}v\bar{\rho})\Big{)}f_{1}^ {(0)}\right]\\ &\quad+\frac{\zeta^{2}\beta_{1}^{2}\bar{\rho}}{2(1+\alpha)} \partial_{v^{2}}^{2}\Big{(}v^{2}(3+\alpha)f_{1}^{(0)}\Big{)}\\ &\quad+\left(f_{2}^{(1)}\beta_{2}^{1}-f_{1}^{(1)}\beta_{1}^{2} \right)\bar{\rho}\end{split} \tag{76}\]
where we have also assumed that the stochastic fluctuations are the same in each kind of interaction, i.e.
\[\zeta_{ij}=\zeta,\qquad\forall i,j\in\mathcal{I}_{2}.\]
Equation (76) is a Fokker-Planck equation with reaction term, where the advection-diffusion part (first and second line) only involves \(f_{1}^{(0)}\) and its (known) mass and average (74), while the reaction term (third line in (76)) only depends on \(f_{1}^{(1)},f_{2}^{(1)}\). As \(\rho_{1}^{(1)}=\rho_{2}^{(1)}=0\) and \(M_{1}^{(1)}=M_{2}^{(1)}=0\), the reaction term does not influence the mass and average of \(f_{1}^{(0)}\). It is therefore reasonable to look for the stationary solution to the Fokker-Planck equation without reaction term, i.e.
\[\begin{split}\partial_{\tau}\tilde{f}_{1}^{(0)}&=- \frac{\beta_{1}^{2}}{1+\alpha}\partial_{v}\left[\Big{(}(\omega_{1}\bar{M}- \omega_{2}v\bar{\rho})+\alpha(\omega_{2}\bar{M}-\omega_{2}v\bar{\rho})+( \omega_{1}\bar{M}-\omega_{1}v\bar{\rho})+(\omega_{2}\bar{M}-\omega_{1}v\bar{ \rho})\Big{)}\tilde{f}_{1}^{(0)}\right]\\ &\quad+\frac{\zeta^{2}\beta_{1}^{2}\bar{\rho}}{2(1+\alpha)} \partial_{v^{2}}^{2}\Big{(}v^{2}(3+\alpha)\tilde{f}_{1}^{(0)}\Big{)}\end{split} \tag{77}\]
that is, (76) where we neglect the third term on the right-hand side, as this does not contribute to a variation of the mass and average of \(f_{1}^{(0)}\). Therefore we obtain
\[\tilde{f}_{1}^{(0)}=\frac{\bar{\rho}}{1+\alpha}v^{-2\Big{(}1+\frac{\gamma}{2}\Big{)}}\exp\left(-\frac{\bar{M}}{\bar{\rho}}\frac{\gamma}{v}\right),\qquad\gamma=\frac{B}{D},\quad B=2\omega_{1}+\omega_{2}(1+\alpha),\quad D=\zeta^{2}\frac{3+\alpha}{2}. \tag{78}\]
The mass and average can be verified to be \(\bar{\rho}/(1+\alpha)\) and \(\frac{\bar{M}}{1+\alpha}\) respectively, while the energy is \(\frac{\bar{M}}{(1+\alpha)(\gamma-1)}\). Moreover, we can determine the _Pareto index_ of the first population that is (approximated by)
\[PI_{1}=\gamma+1=\frac{2\omega_{1}+\omega_{2}(1+\alpha)}{\frac{\zeta^{2}}{2}(3+ \alpha)}+1 \tag{79}\]
which depends on the trading propensities of both populations \(\omega_{1},\omega_{2}\), on the ratio \(\alpha\), which involves the rates \(\beta_{i}^{j}\), and on the stochasticity \(\zeta^{2}\). Since, according to (73), \(\tilde{f}_{2}^{(0)}=\alpha\,\tilde{f}_{1}^{(0)}\), both populations have the same (approximate) Pareto index.
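A short Python sketch (the parameter values are illustrative assumptions) evaluating \(\gamma\), the Pareto index (79) and the profile (78); as a consistency check, the mean wealth of (78) is computed numerically and compared with \(m^{\infty}=\bar{M}/\bar{\rho}\), in agreement with the mass and average reported above.

```python
import numpy as np

# illustrative parameters (assumptions)
omega1, omega2, zeta = 0.3, 0.5, 0.4
beta_1_2, beta_2_1 = 0.6, 0.3            # beta_1^2 and beta_2^1, cf. (69)
alpha = beta_1_2 / beta_2_1              # cf. (70)
rho_bar, M_bar = 1.0, 1.0

B = 2 * omega1 + omega2 * (1 + alpha)
D = zeta ** 2 * (3 + alpha) / 2
gamma = B / D
print("gamma =", gamma, "  Pareto index PI_1 =", gamma + 1)      # cf. (79)

def f1_steady(v):
    # approximate steady state (78) of the first population
    return (rho_bar / (1 + alpha)) * v ** (-2 * (1 + gamma / 2)) * np.exp(-(M_bar / rho_bar) * gamma / v)

v = np.linspace(1e-3, 60.0, 200_000)     # uniform grid; the spacing cancels in the ratio below
fv = f1_steady(v)
print("mean wealth of (78) :", np.sum(v * fv) / np.sum(fv),
      "   expected m_infty =", M_bar / rho_bar)
```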
We remark that considering only one population corresponds to setting \(\omega_{1}=\omega_{2}=\omega\) and \(\beta_{1}^{2}=\beta_{2}^{1}\). If \(\beta_{2}^{1}=\beta_{1}^{2}\), then there is no reaction term in (76), so that the stationary state (78) is exact, and \(\alpha=1\), which implies \(f_{1}^{(0)}=f_{2}^{(0)}\). Moreover, the Pareto index (now exact) is
\[\frac{2\omega}{\zeta^{2}}+1,\]
which coincides with the one commonly obtained from a kinetic model for a single population [13]. Note that, even keeping \(\alpha\neq 1\), in the case \(\omega_{1}=\omega_{2}\) the Pareto index of each group coincides with that of a single population.
## 5 Numerical tests
In this section we present some numerical tests that illustrate the dynamics of the model introduced in the previous section. We integrate the kinetic equation (31) numerically using a modified version of the Nanbu-Babovski Monte Carlo algorithm (see Algorithm 1 in the Appendix). The latter is based on a direct implementation of the time-discrete stochastic microscopic process (5)-(6)-(8) with (30), as illustrated in Section 2.4, for \(N\) agents, which in the limit \(\Delta t\to 0^{+}\) produces the kinetic equation (31). In particular, we shall consider the microscopic rules (60)-(62), with \(n=2\). We compute empirical statistics of the \(N\) simulated agents and define the distribution functions \(f_{1}^{MC},f_{2}^{MC}\), their masses \(\rho_{1}^{MC},\rho_{2}^{MC}\) and first moments \(M_{1}^{MC},M_{2}^{MC}\).
### Interacting particles with label switch and exchange of physical quantities
In all numerical tests we consider \(\rho_{1}(0)=0.9\) and \(\rho_{2}(0)=0.1\), i.e. the subgroup labelled with \(x=1\) is initially more populated. We remark that the assumed symmetry in the process (62) implies \(P_{11}^{11}=P_{11}^{22}=P_{22}^{22}=P_{22}^{11}=0\) and then \(P_{11}^{12}=P_{11}^{21}=0.5\), since \(P_{ij}^{kl}\) is a conditional probability. In the following numerical tests, we shall always consider \(\omega_{1}=\omega_{2}=0.5\), as the values of \(\omega_{1}\) and \(\omega_{2}\) do not affect the averages' evolution and stationary state (65). Moreover, we fix
\[\lambda_{11}=1\]
and we vary the other parameters.
In the first set of simulations (**Test 1** in the following), we choose \(\lambda_{22}=10,\lambda_{12}=1\), \(P_{12}^{11}=0.5,P_{12}^{22}=0.5\) and we consider two different initial conditions for the distributions \(f_{1}\) and \(f_{2}\). In **Case A** we have that the first population is poorer than the second population at time \(t=0\), i.e.
\[f_{1}(v,0)=\rho_{1}(0)\mathbb{1}_{[0,1]}(v),\qquad f_{2}(v,0)=\rho_{2}(0) \frac{1}{10}\mathbb{1}_{[5,15]}(v) \tag{80}\]
while in **Case B** we invert the initial wealths
\[f_{1}(v,0)=\rho_{1}(0)\frac{1}{10}\mathbb{1}_{[5,15]}(v),\qquad f_{2}(v,0)= \rho_{2}(0)\mathbb{1}_{[0,1]}(v),\]
i.e. the second population is poorer than the first population at time \(t=0\). We report the results in Figure 1. First of all, we observe that this choice of parameters prescribes \(\alpha<1\), which, as shown by the macroscopic equations (63), implies \(\rho_{1}^{\infty}>\rho_{2}^{\infty}\) (see Figure 1(a)). As forecast by the theoretical results, the final average wealth \(m^{\infty}=m_{1}=m_{2}\) is closer to the initial wealth of the initially more populated subgroup: hence \(m^{\infty}\) is smaller in case A) and larger in case B) (see Figure 1 (b)). This implies a different behaviour of the first moment of both \(f_{1}\) and \(f_{2}\) (see Figure 1 (c)): while in case B) the first population remains the richer one, as it is the one that is initially more populated, in scenario A) the ordering of the first moments is inverted as the first population becomes richer. In each case we compare the evolution of the macroscopic quantities \(\rho_{1}^{MC},\rho_{2}^{MC},M_{1}^{MC},M_{2}^{MC},m_{1}^{MC},m_{2}^{MC}\) as prescribed by the microscopic model (5)-(6)-(8)-(30) with (60)-(62) and the ones whose evolution is given by the derived equations (63)-(64)-(65) for the macroscopic quantities \(\rho_{1},\rho_{2},M_{1},M_{2},m_{1},m_{2}\). With \(\Delta t=10^{-2}\) and \(N=10^{6}\),
we observe a very good agreement between the solution of the microscopic model and that of the macroscopic model. The integration of the kinetic equations also allows us to approximate numerically the distribution functions \(f_{1}\) and \(f_{2}\), which we report at equilibrium in Figure 1(d) in both cases A) and B). In Figure 1 (e)-(f) we report the time evolution of \(f_{1}^{MC}\) and \(f_{2}^{MC}\) in case A) with initial condition (80).
In the second set of simulations (**Test 2**) we choose \(\lambda_{11}=1,\lambda_{22}=1,\lambda_{12}=10\), i.e. intra-group interactions have the same frequency in the two populations, while the inter-group interactions have a higher frequency. Moreover, we consider three cases for the inter-group interactions: in case i) \(P_{12}^{11}=0.5,P_{12}^{22}=0.5\), i.e. given an inter-group interaction, the two subgroups are equally likely destinations; in case ii) \(P_{12}^{11}=0.2,P_{12}^{22}=0.8\), i.e. the probability of transferring to subgroup 2 is higher; and in case iii) \(P_{12}^{11}=0.8,P_{12}^{22}=0.2\), i.e. the probability of transferring to subgroup 1 is higher. The initial condition is set as in (80). We observe that in the three cases i), ii) and iii) we have respectively \(\alpha=1,\alpha>1,\alpha<1\), which imply, see (63), \(\rho_{1}^{\infty}=\rho_{2}^{\infty},\rho_{1}^{\infty}<\rho_{2}^{\infty},\rho_ {2}^{\infty}<\rho_{1}^{\infty}\), respectively. In particular, because of (66), in case ii) we have a switch in the trend of the populations, as population 1 becomes the less populated (see Fig. 2 (a)). This also implies a different (non-monotone) trend of the first moments, as reported in Fig. 2 (b).
In **Test 3**, we consider a switching probability depending on the microscopic wealth. In this case, it is in general not immediate to derive equations for the macroscopic quantities, except in special cases, for example \(P_{ij}^{kl}\) having a linear dependence on \(v\), or by imposing a monokinetic closure [32]. In Figure 3 we have \(\lambda_{11}=1,\lambda_{22}=0.1,\lambda_{12}=10\) and \(P_{12}^{11}=0.2\frac{1}{4}\left[1-\exp^{-v}+1-\exp^{-w}\right],P_{12}^{22}=0. 8\frac{1}{4}\left[1-\exp^{-v}+1-\exp^{-w}\right]\). We also present a comparison with the solution of the macroscopic equations (63)-(65)-(64) (straight lines), where we consider constant switching probabilities \(P_{12}^{11}=0.2,P_{12}^{22}=0.8.\) We observe that both the microscopic model with \(v\)-dependent switching probabilities and the macroscopic model with constant switching probabilities forecast a similar behaviour of the macroscopic quantities in the long run. The microscopic model with a \(v\)-dependent switching probability forecasts the same behaviour but with a delay, and this is due to the fact that the switching probabilities are smaller than the constant ones. In magenta and green we also present the results of the simulation of the microscopic model in case \(P_{12}^{11}=0.2\frac{1}{2}\left[\exp^{-v}+\exp^{-w}\right],P_{12}^{22}=0.8 \frac{1}{2}\left[\exp^{-v}+\exp^{-w}\right]\). In this case we can observe that the convergence is even slower. This is due to the fact that for large values of the wealth \(v\), the switching probability is very small.
### Quasi-invariant regime and Fokker-Planck equation
In this section, we consider the quasi-invariant regime (51), i.e. we analyse the dynamics on a long time scale by considering small exchanges of wealth, while the switching probability is not rescaled. In this framework, we have seen that it is possible to approximate the leading order of the stationary solution \(f_{1}^{(0)}\) through (78). Here, we compare the stationary state (78) with the solution \(f_{1}^{MC}\) obtained by the numerical integration of the microscopic process (5)-(6)-(8)-(71) with the microscopic rules (60)-(62), where in the quasi-invariant regime (71) we have chosen \(\epsilon=10^{-3}\). In Figure 4 we represent the analytical \(\tilde{f}_{1}^{(0)}\) as given in (78) and the approximation of \(f_{1}^{MC}\). We can remark that, despite the fact that \(f_{1}^{MC}\) is obtained through a Monte Carlo simulation and \(\tilde{f}_{1}^{(0)}\) is an approximation, the agreement is quite good. In the right panel, we also represent the numerical approximation of the distribution function \(g_{1}^{MC}\) of the first population, in case we consider a quasi-invariant transition probability defined by (51). With this choice it is guaranteed that the evolution of both the average and the energy in the quasi-invariant regime
Figure 1: **Test 1**: \(\lambda_{11}=1,\lambda_{22}=10,\lambda_{12}=1\), \(P_{12}^{11}=0.5,P_{12}^{22}=0.5\). In all figures we report the time evolution of the macroscopic quantities as prescribed by the kinetic model and by the derived macroscopic equations: masses (a), averages (b), first moments (c). Straight lines correspond to the solution to the macroscopic equations (63)-(65)-(64), while circles correspond to the solution of the kinetic equation that we obtain by simulating with the Monte Carlo algorithm 1 the microscopic model (5)-(6)-(8)-(30) as illustrated in Sec. 2.4 with the microscopic rules (60)-(62). In all figures we compare the results of the solutions given the two different initial conditions A and B. In figure (d) we show the steady states of the distributions \(f_{1}\), \(f_{2}\) in both test cases A) and B), while in figures (e), (f) we report the time evolution of the distribution functions in test case A).
\(\epsilon\ll 1\) is the same as in the standard regime defined by \(\epsilon=1\). In this case it was not possible to determine the stationary state explicitly, as in the quasi-invariant regime leading to (78), which, on the other hand, only guarantees that the average is the same for each \(\epsilon\).
## 6 Conclusions
In this paper we have presented a general framework for modeling systems of interacting particles with multiple microscopic states changing simultaneously according to a given dynamics. In particular, the microscopic description relies on Markovian processes described by transition probabilities, as they depend on the pre-interaction states, and the interaction frequency depends on the microscopic states. Starting from the microscopic stochastic process allows us to describe the dynamics in more detail, by including observable parameters and quantities related to the phenomenon under study. The derivation, through kinetic equations, of macroscopic equations allows us to retain also at the aggregate level a higher level of detail, inherited from the underlying microscopic dynamics.
Under some assumptions, general results concerning well-posedness, existence and uniqueness of a solution for the Cauchy problem associated to our kinetic equation have been shown. We have also rephrased the concept of quasi-invariant limit in the present framework, leading to evolution equations of Fokker-Planck type.
We have applied the present modeling framework in order to describe systems of binarily interacting agents characterized by a physical quantity \(v\) (representing wealth, or opinion, or viral load, etc.) and by a label \(x\) denoting membership of a given subgroup. The physical quantity changes according to binary interaction rules, while the label changes, simultaneously with the physical quantity, through a switch process caused by the same binary interaction. In this context, we have seen that the description of the microscopic process by means of transition probabilities allows us to remove the reversibility assumption on the interaction rule, thus also modeling stochasticity in the binary encounters giving rise to transfers (not present in the paper [4]
Figure 2: **Test 2. Solutions to the macroscopic model (63)-(64). In (a) we report the masses \(\rho_{1},\rho_{2}\) and in (b) the first moments \(M_{1},M_{2}\). Population 1 is in red,while population 2 is in blue. The parameters are \(\lambda_{11}=1,\lambda_{22}=1,\lambda_{12}=10\) and we show three cases: i) (straight lines) \(P_{12}^{11}=0.5,P_{12}^{22}=0.5\), ii) (* marker) \(P_{12}^{11}=0.2,P_{12}^{22}=0.8\) and iii) (o marker) \(P_{12}^{11}=0.8,P_{12}^{22}=0.2\).**
Figure 3: **Test 3.** Solution to the microscopic model (5)-(6)-(8)-(30) as illustrated in Sec. 2.4 with the microscopic rules (60)-(62) with the Monte Carlo algorithm 1 (circles). Here \(\lambda_{11}=1,\lambda_{22}=.1,\lambda_{12}=10\) and \(P_{12}^{11}=0.2\dfrac{1}{2}\left[1-\exp^{-v}+1-\exp^{-w}\right],P_{12}^{22}=0.8 \dfrac{1}{2}\left[1-\exp^{-v}+1-\exp^{-w}\right]\). We also present a comparison with the solution of macroscopic equations (63)-(65)-(64) (straight lines) where we consider constant switching probabilities \(P_{12}^{11}=0.2,P_{12}^{22}=0.8.\) In magenta and green we also present the results of the simulation of the microscopic model in case \(P_{12}^{11}=0.2\dfrac{1}{2}\left[\exp^{-v}+\exp^{-w}\right],P_{12}^{22}=0.8 \dfrac{1}{2}\left[\exp^{-v}+\exp^{-w}\right]\).
using classical Boltzmann operators analogous to the reactive ones). Moreover, in our framework it is easier to consider a non-constant switching probability, i.e. depending on the microscopic physical quantity. We have analyzed and discussed various quasi-invariant regimes and performed some numerical tests showing a very good agreement between the microscopic Monte Carlo simulations and the derived macroscopic equations.
The modelling framework investigated in this paper is worth applying and generalizing to many other problems. First, the model for international trade with transfers presented in Section 4 could be extended by adding an extra independent microscopic process for \(v\), describing the exchange of goods without transfers; this would make the model even more similar to the kinetic description of gaseous mixtures, where elastic collisions (which do not change the nature of the particles) coexist with chemical reactions (changing the species of the reacting particles). Epidemic models based on a kinetic approach could be improved owing to our stochastic framework with multiple states, as it allows one to start from a microscopic description and to consider independent or simultaneous microscopic stochastic dynamics for the different variables of the microscopic state. For example, the so-called "non-conservative" interactions giving rise to the passage from one compartment to another could be made more realistic by taking into account also the simultaneous change of viral load (of individuals), as done in [15, 16], or internal activity (of cells). Moreover, the relation with kinetic models with label switching and gradient descent could be established [10]. Finally, applications to situations with many internal states are the ultimate aim of our framework. It could provide for instance a physically reasonable description of mixtures of polyatomic gases, with each molecule characterized by its species label \(i\), its velocity \(\mathbf{v}\in\mathbb{R}^{3}\), and its internal energy, which could also be separated into the vibrational part (typically described by a discrete variable) and the rotational part (typically approximated by a continuous variable) [26, 9]. Even in econophysics, the possible influence of the personal knowledge of the market on the strategy adopted in the trades (as sketched in [33] for a single population) could be described by considering the individual knowledge as an additional microscopic state, besides the population label and the individual amount of wealth. Suitable quasi-invariant limits and properties of steady states of such non-standard kinetic descriptions of various interacting populations are completely open problems worth investigating in future research.
Figure 4: Solution of the Fokker-Planck equation (76). Left: comparison between the approximated stationary state \(\tilde{f}_{1}^{(0)}\) given by (78) and the solution \(f_{1}^{MC}\) of the microscopic model (5)-(6)-(8)-(71) with the microscopic rules (60)-(62), with \(\epsilon=10^{-3}\) in the quasi-invariant regime (71) obtained with \(N=10^{6},\Delta t=10^{-3}\). Right: comparison between \(f_{1}^{MC}\) and \(g_{1}^{MC}\) obtained integrating the same, with the quasi-invariant regime (51).
**Acknowledgments** This research was initiated during the post-doc contract of N.L. at the Department of Mathematical, Physical and Computer Sciences of Parma University, funded by the Italian National Research Project "Multiscale phenomena in Continuum Mechanics: singular limits, off-equilibrium and transitions" (Prin 2017YBKNCE). The authors also thank the support by University of Parma, by Politecnico di Torino, by the Italian National Group of Mathematical Physics (GNFM-INdAM), and by the Italian PRIN Research Project "Integrated Mathematical Approaches to Socio-Epidemiological Dynamics" (Prin 2020JLWP23, CUP: E15F21005420006).
## Appendix: Nanbu-Babovski algorithm
```
Data:
  * \(N\in\mathbb{N}\): total number of agents of the system;
  * \(N_{1}^{n}\), \(N_{2}^{n}\in\mathbb{N}\): numbers of agents in \(x=1\), \(x=2\), respectively, at time \(t^{n}:=n\Delta t\), and \(v_{1}^{n}\), \(v_{2}^{n}\): the microscopic states of the agents in \(x=1\), \(x=2\), respectively, at time \(t^{n}\).

 1. Fix \(\Delta t\leq 1/\max_{i,j}\lambda_{ij}\);
 2. for \(n=0,1,2,\dots\) do
 3.    Compute \(\rho_{1}^{MC,n}=\frac{N_{1}^{n}}{N}\), \(\rho_{2}^{MC,n}=\frac{N_{2}^{n}}{N}\), \(m_{1}^{MC,n}=\frac{1}{N_{1}^{n}}\sum_{k=1}^{N_{1}^{n}}v_{k}^{n}\), \(m_{2}^{MC,n}=\frac{1}{N_{2}^{n}}\sum_{k=1}^{N_{2}^{n}}v_{k}^{n}\);
 4.    repeat
 5.       Pick randomly two agents \((x_{i}^{n},v_{i}^{n})\), \((x_{j}^{n},v_{j}^{n})\) with \(i\neq j\);
 6.       for \(h=i,j\) do
 7.          Sample \(\Theta\sim\text{Bernoulli}(\lambda_{x_{i}^{n}x_{j}^{n}}\Delta t)\);
 8.          if \(\Theta=1\) then
 9.             for \((x_{i}^{\prime},x_{j}^{\prime})\in\mathcal{I}_{n}^{2}\) do
10.                Sample \(J\in\{0,1\}\) with law \(\text{Prob}(J=1)=P_{x_{i}^{n}x_{j}^{n}}^{x_{i}^{\prime}x_{j}^{\prime}}\), \(\text{Prob}(J=0)=1-P_{x_{i}^{n}x_{j}^{n}}^{x_{i}^{\prime}x_{j}^{\prime}}\);
11.                if \(J=1\) then
12.                   Set \((x_{i}^{n+1},x_{j}^{n+1})=(x_{i}^{\prime},x_{j}^{\prime})\);
13.                   Set \((v_{i}^{n+1},v_{j}^{n+1})=(v_{i}^{\prime},v_{j}^{\prime})\), where \((v_{i}^{\prime},v_{j}^{\prime})\) is given by (10)-(60), and break;
14.          else
15.             Set \(x_{h}^{n+1}=x_{h}^{n}\), \(v_{h}^{n+1}=v_{h}^{n}\);
16.    until no unused pairs of agents are left;
```
**Algorithm 1** Nanbu-Babovski algorithm with mass transfer for model (5)-(6)-(8)-(62)
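For completeness, the following compact Python sketch implements the structure of Algorithm 1 under our own simplifying assumptions (two labels, the transfer process (61)-(62) with equal probabilities, a Gaussian noise \(\eta\) with the resulting wealth clipped at zero, and illustrative parameter values); it is meant only to illustrate the Monte Carlo step, not to reproduce the exact implementation used for the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
n_labels = 2

# illustrative parameters (assumptions)
lam = np.array([[1.0, 1.0], [1.0, 10.0]])               # interaction frequencies lambda_ij
P = np.zeros((n_labels, n_labels, n_labels, n_labels))  # switching probabilities P_ij^kl, cf. (62)
P[0, 0, 0, 1] = P[0, 0, 1, 0] = 0.5
P[1, 1, 0, 1] = P[1, 1, 1, 0] = 0.5
P[0, 1, 0, 0] = P[0, 1, 1, 1] = 0.5
P[1, 0, 0, 0] = P[1, 0, 1, 1] = 0.5
omega = np.array([0.5, 0.5])                             # trading propensities
zeta = 0.1                                               # strength of the stochastic fluctuation

def interact(v, w, x, y):
    # symmetric exchange rule (10)-(60); Gaussian eta, wealth clipped at zero to stay non-negative
    eta, eta_s = rng.normal(0.0, 1.0, 2)
    v_new = (1 - omega[x]) * v + omega[y] * w + zeta * v * eta
    w_new = (1 - omega[y]) * w + omega[x] * v + zeta * w * eta_s
    return max(v_new, 0.0), max(w_new, 0.0)

def nanbu_step(x, v, dt):
    # one time step: pair all agents at random, each pair interacts with probability lambda*dt
    perm = rng.permutation(len(x))
    for a, b in zip(perm[0::2], perm[1::2]):
        if rng.random() < lam[x[a], x[b]] * dt:            # Theta ~ Bernoulli(lambda*dt)
            v[a], v[b] = interact(v[a], v[b], x[a], x[b])  # exchange with pre-interaction labels
            k = rng.choice(n_labels * n_labels, p=P[x[a], x[b]].ravel())
            x[a], x[b] = divmod(k, n_labels)               # sample post-interaction labels
    return x, v

N, dt = 5_000, 1e-2                                        # dt <= 1/max(lambda_ij)
x = (rng.random(N) < 0.1).astype(int)                      # rho_1(0)=0.9, rho_2(0)=0.1
v = np.where(x == 0, rng.random(N), rng.uniform(5.0, 15.0, N))   # initial condition as in (80)
for _ in range(400):
    x, v = nanbu_step(x, v, dt)
print("rho_1, rho_2 =", np.mean(x == 0), np.mean(x == 1))
print("m_1, m_2     =", v[x == 0].mean(), v[x == 1].mean())
```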
|
2308.04670 | TRTM: Template-based Reconstruction and Target-oriented Manipulation of
Crumpled Cloths | Precise reconstruction and manipulation of the crumpled cloths is challenging
due to the high dimensionality of cloth models, as well as the limited
observation at self-occluded regions. We leverage the recent progress in the
field of single-view human reconstruction to template-based reconstruct
crumpled cloths from their top-view depth observations only, with our proposed
sim-real registration protocols. In contrast to previous implicit cloth
representations, our reconstruction mesh explicitly describes the positions and
visibilities of the entire cloth mesh vertices, enabling more efficient
dual-arm and single-arm target-oriented manipulations. Experiments demonstrate
that our TRTM system can be applied to daily cloths that have similar
topologies as our template mesh, but with different shapes, sizes, patterns,
and physical properties. Videos, datasets, pre-trained models, and code can be
downloaded from our project website: https://wenbwa.github.io/TRTM/ . | Wenbo Wang, Gen Li, Miguel Zamora, Stelian Coros | 2023-08-09T02:33:56Z | http://arxiv.org/abs/2308.04670v3 | # TRTM: Template-based Reconstruction and Target-oriented Manipulation of Crumpled Cloths
###### Abstract
Precise reconstruction and manipulation of the crumpled cloths is challenging due to the high dimensionality of cloth models, as well as the limited observation at self-occluded regions. We leverage the recent progress in the field of single-view human reconstruction to template-based reconstruct crumpled cloths from their top-view depth observations only, with our proposed sim-real registration protocols. In contrast to previous implicit cloth representations, our reconstruction mesh explicitly describes the positions and visibilities of the entire cloth mesh vertices, enabling more efficient dual-arm and single-arm target-oriented manipulations. Experiments demonstrate that our TRTM system can be applied to daily cloths that have similar topologies as our template mesh, but with different shapes, sizes, patterns, and physical properties. Videos, datasets, pre-trained models, and code can be downloaded from our project website: [https://wenbwa.github.io/TRTM/](https://wenbwa.github.io/TRTM/).
## I Introduction
Cloth products have been a crucial part of our daily life, and considerable human effort is repeatedly spent on cloth arranging tasks. For this reason, several studies have been performed to identify [1], perceive [2], and organize [3, 4] different cloth items using both computer vision and robotic approaches.
However, manipulating while perceiving the entire state of one crumpled cloth is challenging due to the complex cloth model and the limited observation at self-occluded regions. In previous research, crumpled cloths are represented as either visible pixel values [5, 6, 7], sampled surface points [8], sparse feature groups [9, 10], or encoded latent vectors [11], as shown in Figure 1. They train or optimize their implicit manipulation policies with those implicit and simplified cloth representations by either reinforcement learning or dynamics learning mostly within the simulation environment. None of them fully and precisely understand the entire crumpled cloth configuration, not to mention explicitly locating and manipulating some visible cloth mesh vertices.
Different from the previous studies, we employ the template-based graph neural networks (GNNs) to explicitly reconstruct the entire meshes of crumpled cloths, using their top-view depth observations only. We demonstrate that, with our sim-real registration protocols, the distribution of cloth configurations in the real world can be properly described using adequate simulated cloth meshes, among which the ground truth mesh of each depth observation is known. Benefiting from our explicit mesh representation, the task of manipulating crumpled cloths to some target configurations can be more efficiently performed with fewer operation episodes. Our work contributes to the following:
1) a novel template-based reconstruction method that can explicitly predict the positions and visibilities of the entire cloth mesh vertices from its top-view depth observation only.
2) a synthetic dataset with 120k+ simulated cloth meshes and rendered top-view RGBD images, together with one real-world dataset consisting of 3k+ collected cloth configurations and keypoint-labeled top-view RGBD images.
3) a robot system that can manipulate crumpled cloths to some target or near-target configurations by querying and selecting corresponding visible mesh vertices.
## II Related Work
Research on cloth perception and manipulation is extensive. Earlier approaches typically employ handcrafted or learning-based methods to identify specific cloth features, such as corners, edges, and wrinkles, within the top-view color or depth images [9, 10, 12]. Their manipulation policies are mostly generated from those independently detected image features [13, 14], which may be noisy, sparse, sensitive, and ambiguous. As shown in Figure 1, it is intrinsically difficult to pixel-wise distinguish all those visible cloth corners, edges, and wrinkles.
Other studies try to simplify the infinite configuration space by first hanging the crumpled cloth in midair, from which some feature detection [15], mesh matching [16, 17], and dual-arm manipulation [18] can be more easily performed. In contrast to these approaches, we focus on directly reconstructing and manipulating the initially crumpled cloths, using their top-view observations only.
Fig. 1: Different state representations of crumpled cloths. From left to right: top-view color images; top-view depth images or point clouds; sparse visible features or encoded latent vectors; our template-based reconstruction mesh and clustered mesh group for robot manipulation. From top to bottom: configurations of the randomly one-time dragged rectangle cloth, two-times folded template square cloth, and one-time dropped larger square cloth.
Recently, data-driven methods have been proposed to achieve more sophisticated perception and manipulation of crumpled cloths. In these studies, some parameterized single-arm or dual-arm actions, together with some folding and unfolding policies, are trained by means of reinforcement learning with image-based rewards [19, 20], deep imitation learning from predefined policies [21, 22], or value function learning at pixel observations [7, 23], during which the cloth deformations are not explicitly perceived. In addition to those model-free learning methods, some other studies directly optimize the random-shooting actions by dynamics learning within the simulation environment [24, 25, 8], where the synthetic cloth may look and perform differently from the real-world cloths that have various textures and physical properties. In summary, none of the above work fully understands the crumpled cloth configuration, not to mention explicitly locating the visible cloth vertices and manipulating them with some target configurations.
Inspired by the above work and the recent progress in single-view human reconstruction [26], we aim to achieve precise mesh reconstruction of the randomly dragged, folded, and dropped cloths from their top-view depth observations only. Compared with the previous implicit and simplified cloth representations, our reconstruction mesh explicitly indicates the entire cloth vertices' positions and visibilities, as shown in Figure 1. Experiments demonstrate that our explicit mesh representation promotes more explicit dual-arm and single-arm target-oriented manipulations, which are more efficient and closer to human decision-making, i.e., in front of a real-world cloth, we mostly generate actions from its 3D mesh embedding, instead of the 2D pixel observations, latent feature vectors, or random shooting optimizations used by the previous work.
## III Methodology
### _Sim-Real Registration_
Different from the previous studies that mostly train their policies within simulation and either apply sim2real transfer directly [27, 8] or rely on fine-tuning [23, 24], we perform several sim-real registrations to directly shrink the gap between the simulation and the real world.
_Cloth model registration_. Concretely, we register one \(0.3m\times 0.3m\) real-world cloth used in VCD [8] to one mass-spring cloth mesh within Blender [28], a physics engine that provides powerful simulation tools. The registered synthetic cloth mesh has \(21\times 21\) vertices, and its simulation parameters, like bending stiffness and thickness, are manually tuned from the real-world depth observations of the folded wrinkle size and thickness difference, as shown in Figure 2.
_Depth observation registration_. Both in simulation and the real world, we centralize each cloth mesh around its image center and normalize the depth observation with a constant scale. Without loss of generality, the cloth image region is one-time scaled according to the longest canonical edge length with a constant ratio of \(l_{cloth}:l_{image}=2:3\), which generalizes our reconstruction model to other cloths with different shapes and sizes, as shown in the Ablation Study. Finally, we introduce Gaussian noises to imitate camera noise and non-smooth cloth surfaces observed in the real world. These sim-real registrations only need to be performed once when generating the synthetic dataset and training the GNN.
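The following numpy sketch illustrates this registration step (the array names, the nearest-neighbour splatting, and the default constants are our own assumptions used only for illustration): the cloth pixels are re-centred on the image centre, rescaled once so that the longest canonical cloth edge spans \(2/3\) of the image side, normalized in depth by a constant scale, and perturbed with Gaussian noise.

```python
import numpy as np

def register_depth(depth, mask, canonical_edge_m, pixels_per_m, depth_scale,
                   out_size=256, noise_std=0.002, rng=None):
    """Toy registration of one top-view depth observation.

    depth: HxW depth map in metres; mask: HxW boolean cloth mask.
    The cloth pixels are re-centred on the image centre and rescaled so that the
    longest canonical cloth edge spans 2/3 of the output image side; depth values
    are normalized by a constant scale, and Gaussian noise imitates camera noise.
    """
    rng = rng or np.random.default_rng(0)
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                          # cloth centroid in pixels
    # one-time scale from the canonical edge length: l_cloth : l_image = 2 : 3
    scale = (2.0 / 3.0) * out_size / (canonical_edge_m * pixels_per_m)
    out = np.zeros((out_size, out_size), dtype=np.float32)
    u = np.clip(np.round((xs - cx) * scale + out_size / 2).astype(int), 0, out_size - 1)
    w = np.clip(np.round((ys - cy) * scale + out_size / 2).astype(int), 0, out_size - 1)
    out[w, u] = depth[ys, xs] / depth_scale                # nearest-neighbour splat + depth normalization
    out += rng.normal(0.0, noise_std, out.shape).astype(np.float32)
    return out
```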
### _Single-view Reconstruction_
We adapt the template-based GNN from the single-view human body reconstruction work [26] to our crumpled cloth setting. We design our mass-spring cloth GNN as an encoder, updater, and decoder, as described below.
Fig. 2: **System Overview.** a) Sim-real registration of one real-world cloth to a synthetic mass-spring cloth mesh with imitated top-view depth observations. b) Single-view template-based reconstruction of a crumpled cloth from its top-view depth observation only, using our template-based cloth GNN. c) Querying the best visible vertex pairs within the reconstructed and clustered mesh group, according to different target configurations: flat, triangle, and rectangle. d) Dual-arm manipulation using one ABB YuMi Robot at the selected cloth vertex pair with optimized grasp-hang-and-flip trajectories.
**Mass-spring Cloth Model.** As shown in Figure 3, one cloth can be represented as a mass-spring mesh \(M=(V,\,E)\)[29] which contains a vertex group \(V=\{v_{i}\}\) and a bidirectional edge group \(E=\left\{e_{ij}\right\}\), where \(i,\,j=1:N^{v}\). Each mesh vertex \(v_{i}\) here contains a 3D position vector \(p_{i}\) and a visible flag \(f_{i}\) inferred from its surrounding vertices. Each mesh edge \(e_{ij}\) here connects two neighboring vertices and consists of their relative position \(p_{j}-p_{i}\) and length \(||e_{ij}||_{2}=||p_{j}-p_{i}||_{2}\).
**Template Graph Encoding.** Our GNN encoder includes the image feature encoding and the template graph encoding. One ResNet is used here as a backbone feature extractor to encode the depth observation \(I\) into one image feature \(I_{f}\). This image feature, together with the vertices \(v_{i}\) and edges \(e_{i}\) of our template mesh \(M\), are encoded into a template graph \(G=(\tilde{V},\,\tilde{E})\) which contains one vertex feature group \(\tilde{V}=\{\tilde{v}_{i}\}_{i=1:N^{v}}\) and one edge feature group \(\tilde{E}=\left\{\tilde{e}_{ij}\right\}_{i,j=1:N^{v}}\):
\[\tilde{v}_{i}=MLP_{V}([p_{i},\,I_{f}]),\,I_{f}=ResNet(I) \tag{1}\]
\[\tilde{e}_{ij}=MLP_{E}([p_{j}-p_{i},\,\big{\|}p_{j}-p_{i}\big{\|}_{2}]) \tag{2}\]
The encoded template graph \(G^{(0)}=(\tilde{V}^{(0)},\,\tilde{E}^{(0)})\) will be iteratively updated through the attention message flow and finally decoded into the predicted cloth mesh \(\tilde{M}=(\tilde{V},\,\tilde{E})\).
**Graph Attention Updating.** In the vanilla GNN, edge features \(\tilde{e}_{ij}^{(l)}\) and vertex features \(\tilde{v}_{i}^{(l)}\) are updated iteratively by averaging their neighboring features. To further improve this message updating efficiency, we introduce the attention mechanism to update vertex features by pooling neighboring edge features with learnable attention weights, following the GAT work [30]. During the multi-step feature regression, the neural network will learn on its own which edge connection is more informative, with the weight \(\tilde{w}_{ij}^{(l+1)}\) shown below:
\[\tilde{e}_{ij}^{(l+1)}=\phi_{E}^{(t)}([\tilde{e}_{ij}^{(l)},\,\tilde{v}_{i}^{( l)},\,\tilde{v}_{j}^{(l)}]) \tag{3}\]
\[\tilde{w}_{ij}^{(l+1)}=\frac{exp(\phi_{A}^{(t)}([\tilde{e}_{ij}^{(l+1)}]))}{ \sum exp(\phi_{A}^{(t)}([\tilde{e}_{ik}^{(l+1)}]))},\,k\in Neighbor(i) \tag{4}\]
\[\tilde{v}_{i}^{(l+1)}=\phi_{V}^{(t)}([\tilde{v}_{i}^{(l)},\,\sum\tilde{w}_{ik }^{(l+1)}\,\tilde{e}_{ik}^{(l+1)}]),\,k\in Neighbor(i) \tag{5}\]
In our work, the edge \(\phi_{E}^{(t)}\), attention \(\phi_{A}^{(t)}\), and vertex \(\phi_{V}^{(t)}\) updaters at each iteration are parameterized using MLPs. The learnable attention weights contribute around an 11% reduction in vertex-wise loss, as shown in the Ablation Study.
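A minimal PyTorch sketch of one updating step (Eqs. 3-5) is shown below: edge features are refreshed from their endpoint vertices, the attention logits are softmax-normalized over each vertex's incident edges, and vertices pool their weighted edges. It is only an illustrative reading of the equations; the feature dimension and the assumption that every vertex has at least one edge are ours.

```python
import torch
import torch.nn as nn

class AttentionGraphUpdate(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.phi_a = nn.Linear(dim, 1)
        self.phi_v = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, v, e, edges):
        # v: (Nv, d); e: (Ne, d); edges: (Ne, 2) long pairs (i, k); edge (i, k) is pooled at vertex i
        i, k = edges[:, 0], edges[:, 1]
        e_new = self.phi_e(torch.cat([e, v[i], v[k]], dim=-1))             # Eq. (3)
        logits = self.phi_a(e_new).squeeze(-1)
        num = torch.exp(logits - logits.max())                             # stabilized exponentials
        denom = torch.zeros(v.shape[0], device=v.device).scatter_add_(0, i, num)
        w = num / denom[i]                                                 # Eq. (4): softmax over Neighbor(i)
        pooled = torch.zeros_like(v).scatter_add_(
            0, i.unsqueeze(-1).expand(-1, v.shape[1]), w.unsqueeze(-1) * e_new)
        v_new = self.phi_v(torch.cat([v, pooled], dim=-1))                 # Eq. (5)
        return v_new, e_new
```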
**Cloth Mesh Decoding.** After \(L=15\) iterations of the above graph updating, we decode each vertex feature \(\tilde{v}_{i}^{(L)}\) into the predicted vertex position \(\tilde{p}_{i}\), which can be supervised within our synthetic dataset. The visible flag \(\tilde{f}_{i}\) at each vertex is determined by checking whether the vertex lies on the top layer of the cloth mesh within a vertical cylinder voxel \(Voxel(\tilde{p}_{i})\).
\[\tilde{p}_{i}=MLP_{D}([\tilde{v}_{i}^{(L)}]) \tag{6}\]
\[\tilde{f}_{i}=TOP_{L}(\tilde{p}_{i},\,\tilde{p}_{k}),\,\tilde{p}_{k}\in Voxel (\tilde{p}_{i}) \tag{7}\]
In our work, we visualize visible vertices in blue and hidden vertices in red, with the goal of explicitly locating and manipulating different visible mesh vertices.
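A minimal sketch of Eqs. (6)-(7) is given below: each final vertex feature is decoded to a 3D position, and a vertex is flagged visible if it is the highest predicted vertex inside its own vertical cylinder. The decoder width, the cylinder radius, and the small height tolerance are illustrative assumptions rather than the paper's exact values.

```python
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 3))  # MLP_D in Eq. (6)

def decode_and_flag(vertex_feats, radius=0.02):
    pos = decoder(vertex_feats)                                  # Eq. (6): predicted positions (Nv, 3)
    dist = torch.cdist(pos[:, :2], pos[:, :2])                   # horizontal distances between vertices
    in_cyl = dist <= radius                                      # Voxel(p_i): vertices sharing the cylinder
    heights = pos[:, 2].unsqueeze(0).expand_as(dist)             # row i holds all candidate heights
    top = torch.where(in_cyl, heights, torch.full_like(heights, -1e9)).max(dim=1).values
    visible = pos[:, 2] >= top - 1e-6                            # Eq. (7): vertex lies on the top layer
    return pos, visible

if __name__ == "__main__":
    pos, vis = decode_and_flag(torch.randn(25, 128))
    print(pos.shape, int(vis.sum()))
```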
**Implementation Details.** The above template-based cloth GNN is supervised by the randomly dragged, folded, and dropped cloth meshes simulated within Blender, together with their rendered top-view depth images. The synthetic training loss consists of five terms, as described below:
Fig. 3: **Template-based GNN.** a) Synthetic training with simulated cloth meshes and depth images. From left to right: synthetic cloth dataset and template mesh, image feature encoding and template graph encoding, graph feature updating with attention message flow, mesh decoding and supervising. \(\mathrm{b}\) Real-world tuning with collected cloth configurations and depth observations. From left to right: real-world cloth dataset, point cloud observation, pixel-wise tuned result from the GNN prediction. In our work, we observe small improvements during this tuning process, as shown in the Ablation Study.
\[L_{train}=L_{vx,p}+\lambda_{k}L_{key,p}+\lambda_{s}L_{sil}+\lambda_{c}L_{chan}+ \lambda_{r}L_{rgsu} \tag{8}\]
The first loss term \(L_{vx,p}\) is the L1 distance between the predicted \(\tilde{p}_{i}\) and the ground truth vertex positions \(\hat{p}_{i}\). The second loss term \(L_{key,p}\) is an additional L1 loss at nine cloth keypoints: four corners, four edge midpoints, and one center. These two vertex-wise losses work together to establish the vertex correspondence between the predicted \(\tilde{M}\) and the ground truth cloth mesh \(\hat{M}\), as shown below:
\[L_{vx,p}=\frac{1}{N^{v}}\sum||\tilde{p}_{i}-\hat{p}_{i}\;||_{1},\;i=1:N^{v} \tag{9}\]
\[L_{key,p}=\frac{1}{9}\sum||\tilde{p}_{i}-\hat{p}_{i}\;||_{1},\;i\in Keypoints \tag{10}\]
The third loss term \(L_{sil}\) is the image difference between the ground truth \(\hat{S}\) and the predicted silhouette \(\tilde{S}\) rendered from a differentiable renderer Pytorch3D [31]. The fourth term \(L_{chan}\) is the unidirectional chamfer loss from the observed depth point cloud \(\hat{D}\) to the predicted vertex positions \(\tilde{P}\)[24]. These two self-supervised pixel-wise losses work together to refine the boundary and surface similarities between the predicted \(\tilde{M}\) and the ground truth mesh \(\hat{M}\). The last term \(L_{rgsu}\) is a regularization loss for edge lengths.
\[L_{sil}=\frac{1}{||\hat{S}||_{2}^{2}}\sum_{p\in\tilde{S}}||\tilde{S}_{p}-\hat {S}_{p}||_{2}^{2} \tag{11}\]
\[L_{chan}=\frac{1}{|\hat{D}|}\sum_{\hat{d}_{i}\in\hat{D}}\min_{\tilde{p}_{k}\in\tilde{P}}||\tilde{p}_{k}-\hat{d}_{i}||_{2}^{2} \tag{12}\]
\[L_{rgsu}=\frac{1}{N^{e}}\sum\left|\,||\tilde{e}_{ij}||_{2}-||\hat{e}_{ij}||_{2}\,\right|,\;i,j=1:N^{v} \tag{13}\]
We set the above loss ratios as \(\lambda_{k}=1\), \(\lambda_{s}=0.5\), \(\lambda_{c}=0.5\), and \(\lambda_{r}=1\), respectively. During synthetic training, we augment the dataset by introducing Gaussian noise and rotating by random angles. Inspired by the previous study [23], we augment each test observation by rotating it eight times, predicting a cloth mesh for each rotation, and selecting the best mesh prediction using the self-supervised pixel-wise losses. The contributions of these augmentation steps are demonstrated in the Ablation Study.
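For reference, the sketch below assembles the supervised loss terms described above, excluding the silhouette term \(L_{sil}\), which requires a differentiable renderer such as PyTorch3D. The loss weights are the ones stated in the text; the tensor shapes, the reference used for the edge-length regularizer (ground-truth edge lengths), and all function names are our own assumptions.

```python
import torch

def vertex_l1(pred, gt):                                   # Eq. (9): L1 over all vertices
    return (pred - gt).abs().sum(-1).mean()

def keypoint_l1(pred, gt, key_idx):                        # Eq. (10): L1 at the nine keypoints
    return (pred[key_idx] - gt[key_idx]).abs().sum(-1).mean()

def chamfer_one_way(depth_pts, pred_pts):                  # Eq. (12): observation -> prediction
    d = torch.cdist(depth_pts, pred_pts)
    return (d.min(dim=1).values ** 2).mean()

def edge_length_reg(pred_pos, ref_pos, edges):             # Eq. (13), referenced to ground-truth edge lengths
    pred_len = (pred_pos[edges[:, 1]] - pred_pos[edges[:, 0]]).norm(dim=-1)
    ref_len = (ref_pos[edges[:, 1]] - ref_pos[edges[:, 0]]).norm(dim=-1)
    return (pred_len - ref_len).abs().mean()

def training_loss(pred_pos, gt_pos, depth_pts, edges, key_idx,
                  lam_k=1.0, lam_c=0.5, lam_r=1.0):        # Eq. (8); silhouette term (lam_s=0.5) omitted
    return (vertex_l1(pred_pos, gt_pos)
            + lam_k * keypoint_l1(pred_pos, gt_pos, key_idx)
            + lam_c * chamfer_one_way(depth_pts, pred_pos)
            + lam_r * edge_length_reg(pred_pos, gt_pos, edges))
```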
### _Target-oriented Querying Policy_
Unlike the previous implicit manipulation work [7, 23], we use our template-based reconstruction mesh to generate our dual-arm grasp-hang-and-flip actions. In principle, each visible vertex within the reconstruction mesh can be queried and selected for real-world robot manipulation. However, due to the non-negligible gripper size in practice and the prediction uncertainty at individual vertices, we cluster the reconstruction mesh \(\tilde{M}=(\tilde{V},\,\tilde{E})\) into a lower-dimensional mesh group \(\tilde{M}^{s}=(\tilde{V}^{s},\,\tilde{E}^{s})\) to make our querying policy more efficient and more robust, as shown in Figures 1 and 4.
The clustered mesh group explicitly indicates the positions and visibilities of different cloth regions, from which we can explicitly dual-arm flip a crumpled cloth to a target or near-target configuration. To do so, we consider dual-arm flipping the real-world cloth at each pair of group vertices. For each target configuration, such as flat, triangle, and rectangle, we rank the group vertex pairs by evaluating the silhouette difference between the flipped and the target cloth configurations, generating a hierarchical querying list. At each operation, the manipulation agent searches through this querying list within the clustered mesh group and reports the best visible group vertex pair \((\tilde{p}_{L}^{Rval},\,\tilde{p}_{R}^{Rval})\) for real-world robot manipulation.
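The querying step itself can be summarized by the small sketch below: given a ranking of group-vertex pairs already scored against a target configuration (e.g. by silhouette difference), the agent returns the best pair whose two vertices are both currently visible. The ranking is assumed to be precomputed; all names are illustrative.

```python
def query_best_visible_pair(ranked_pairs, visible):
    """ranked_pairs: list of (score, i, j) sorted best-first; visible: per-group-vertex booleans."""
    for score, i, j in ranked_pairs:
        if visible[i] and visible[j]:
            return (i, j), score            # best fully visible pair for the dual-arm flip
    return None, None                       # no visible pair; a recovery action would be needed

if __name__ == "__main__":
    ranked = [(0.95, 0, 3), (0.91, 1, 2), (0.88, 0, 2)]
    print(query_best_visible_pair(ranked, [True, False, True, False]))   # -> ((0, 2), 0.88)
```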
### _Dual-arm Robotic Manipulation_
We formulate our dual-arm grasp-hang-and-flip actions following the FlingBot work [23], as shown in Figure 2. One ABB YuMi robot is used to perform the manipulation task, for which the joint trajectories are optimized from the gripping trajectories using Newton's method [32]. We invite interested readers to check the supplementary video and our project website for manipulation demos.
## IV Experiments
We design and execute a series of synthetic and real-world experiments to both qualitatively and quantitatively evaluate our TRTM system with different cloths and tasks.
### _Single-view Cloth Reconstruction Results_
Three tiers of cloth configurations are generated within Blender by dragging once, folding twice, and dropping once the synthetic template mesh \(T_{simu}\), providing 120\(k\) cloth meshes and depth images for training.
In the real world, we randomly generate the above three cloth tiers using our marked template cloth \(T_{real}\), with a total of 600 cloth configurations. Since it is impractical to label the full ground truth mesh, we only label the marked keypoints, i.e., the positions and visibilities of the four corners, four edge midpoints, and one center. We evaluate our reconstruction results using the supervised vertex-wise and pixel-wise losses, as well as the prediction error of the visible flags \(L_{vx,f}/L_{key,f}\). The total losses \(T_{loss}\) are calculated using Equation (8) without the full-mesh loss term \(L_{vx,p}\) and the regularization term \(L_{rgsu}\), as shown in Table I.
The above reconstruction experiments demonstrate that our synthetically trained GNN can explicitly and precisely reconstruct our template cloth both in simulation and the real world, with average vertex-wise losses of 1.22 cm and 1.73 cm, respectively. In addition, the direct reconstruction results of four other real-world cloths: one smaller square cloth \(Sq_{s}\), one larger square cloth \(Sq_{l}\), one rectangle cloth \(Rect\), and one shirt \(\mathit{Shirt}\), are also reported in Table I and will be further analyzed in the Ablation Study.
### _Dual-Arm Cloth Manipulation Results_
We dual-arm grasp-hang-and-flip the randomly dragged, folded, and dropped template cloths both in simulation (\(3\times 500\times 3\)) and the real world (\(3\times 50\times 3\)), with three target configurations: flat, triangle, and rectangle, as shown in Figure 4 and Figure 5. In our work, the number of dual-arm manipulation episodes is set to two for the flat target and one for the triangle and rectangle targets. We evaluate the dual-arm flattened configurations using their top-view coverage values. For the triangle and rectangle targets, we evaluate the top-view silhouette similarity between the flipped and the target configurations: \(1-L_{all}(S_{flipped},\,S_{targeted})\).
We compare our real-world dual-arm flattening results with the FlingBot work [23], where the flipping points are selected from top-view color images using a value network trained in simulation with coverage rewards. For a fair comparison, we let the FlingBot agent flip \(3\times 50\) randomly dragged, folded, and dropped template cloths with two operations; the results are shown in Figure 5 in brown.
Experimentally, using our explicit mesh representation, our dual-arm flipping agent can flip most of the randomly dragged, folded, and dropped cloths to flat (97.6% coverage) within two operation episodes, outperforming the implicit FlingBot agent (78.3% coverage). For the triangle and rectangle targets, our dual-arm agent achieves on average 84.5% (Real-RE) / 90.8% (Simu-GT) and 82.3% (Real-RE) / 89.3% (Simu-GT) top-view similarities within only one operation episode. Among the three cloth tiers, the twice-folded cloths are the most difficult to reconstruct and manipulate.
### _Single-arm Cloth Manipulation Results_
We adapt a single-arm flattening strategy [21] to our mesh group setting, where a canonical group target is assigned around the cloth image center, as shown in Figure 4. At each operation episode, our single-arm agent drags the visible group vertex that is farthest from its target position toward that target. In the real world, we single-arm flatten \(3\times 50\) randomly dragged, folded, and dropped cloth configurations four times each, during which we report the increased top-view coverage values, as shown in Figure 5.
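A small sketch of this selection rule, using top-view 2D coordinates, is given below; it is not the authors' code, and the array shapes and names are assumptions.

```python
import numpy as np

def select_drag_action(group_pos, target_pos, visible):
    """group_pos, target_pos: (Ng, 2) top-view positions; visible: (Ng,) booleans."""
    dist = np.linalg.norm(group_pos - target_pos, axis=1)   # distance of each group vertex to its target
    dist[~np.asarray(visible)] = -np.inf                    # never grasp a hidden vertex
    k = int(np.argmax(dist))                                # farthest visible group vertex
    return group_pos[k], target_pos[k]                      # (pick point, place point)

if __name__ == "__main__":
    cur = np.array([[0.0, 0.0], [0.1, 0.1], [0.3, 0.0]])
    tgt = np.array([[0.0, 0.0], [0.2, 0.2], [0.1, 0.0]])
    print(select_drag_action(cur, tgt, [True, True, True]))
```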
We compare our single-arm flattening results with the VCD work [8], where random-shooting actions are optimized by a dynamics model trained within the simulation environment. For a fair comparison, we let the VCD agent flatten \(3\times 50\) randomly dragged, folded, and dropped template cloths; the results are shown in Figure 5 in gray.
Fig. 4: **Qualitative Evaluation of our Target-oriented Manipulation.** a) Dual-arm flipping by querying visible group vertex pairs according to different target configurations: flat, triangle, and rectangle. b) Single-arm flattening by sequentially dragging the visible group vertex to its canonical target position.
Fig. 5: **Quantitative Evaluation of our Target-oriented Manipulation.** a) Dual-arm flipping experiments with flat (red), triangle (blue), and rectangle (green) targets. We demonstrate the flipping results with the simulated ground truth meshes (Simu-GT), with the real-world reconstruction meshes (Real-GT), and value networks (Real-VA, flatten only). b) Real-world single-arm dragging for flattening experiments: ours (orange) and VCD (gray).
Experimentally, for boundary-dragged cloth configurations, our single-arm flattening agent can directly recover the dragging action from the mesh distribution, and can thus unfold the cloth within two operation episodes. However, compared with the above dual-arm flipping agent, the single-arm dragging agent requires more operations to flatten a crumpled cloth, and often gets stuck in configurations where corners and edges are folded inside, as in the last row of Figure 4. To flatten these states, reveal-and-drag actions could be introduced in future work.
### _Ablation Studies_
**Template-based GNN.** In this section, we first examine several training and testing designs of our reconstruction model on the real-world template cloth \(T_{real}\), as shown in Table II. The loss numbers represent the average reconstruction losses over all dragged, folded, and dropped real-world template cloth configurations.
We also apply several tuning strategies [23, 24] to our cloth reconstruction setting. Specifically, we additionally tune the synthetic GNN with part of the real-world depth observations (400) for another 50 epochs, supervised with the pixel-wise losses only. After this, we further optimize the entire reconstruction mesh with the pixel-wise losses only. The small improvements indicate that our reconstruction model learns little from small amounts of real-world data with only pixel-wise supervision.
**Sim-real Registration.** In this section, we demonstrate that our square-template GNN can be directly and reasonably applied to other daily cloths with a similar topology but different shapes, sizes, textures, and physical properties. To do so, we select another four real-world cloths \((Sq_{s},Sq_{l},\mathit{Rect},\mathit{Shirt})\) and randomly generate their dragged, folded, and dropped cloth configurations, with 300 configurations per cloth. We both qualitatively and quantitatively evaluate the direct reconstruction performance of our square-template GNN on the above real-world cloths, as shown in Table I and Figure 6. More synthetic and real-world cloth data can be downloaded from our project website.
These experiments demonstrate that our synthetically trained square-template GNN can be directly applied to daily square and rectangle cloths with different sizes, textures, and physical properties. It achieves nearly the same pixel-wise reconstruction results and is only slightly less accurate vertex-wise. However, for two-layered human garments whose topologies differ substantially from our template mesh, our square-template GNN can still fit a reasonable crumpled square mesh to their top-view depth observations, but with nearly double the vertex-wise losses, especially when the two garment layers are randomly detached and expanded. In future work, a synthetic shirt dataset could be simulated, from which a shirt GNN could be trained to better reconstruct crumpled shirts.
## V Conclusion and Future Work
In this paper, we propose a TRTM system that can precisely reconstruct and explicitly manipulate randomly dragged, folded, and dropped cloths from their top-view observations only. Compared with previous implicit and simplified cloth representations, our template-based reconstruction mesh explicitly indicates the positions and visibilities of all cloth vertices. Experiments demonstrate that our explicit mesh representation enables more explicit dual-arm and single-arm target-oriented manipulations, which significantly outperform the previous implicit agents.
Regarding future work, instead of fitting a TRTM system for each daily cloth with various template embeddings, we believe the fixed template used in our work can be further parameterized like the human body model [33]. From that stage, auto-template-registration [34, 35] can be introduced to improve reconstruction robustness for different cloths with various canonical properties.
Fig. 6: **Qualitative Evaluation of our Template-based Reconstruction.** Direct reconstruction results of our synthetically trained square-template GNN on different real-world cloths. From top to bottom: a) our sim-real registered \(0.3m\times 0.3m\) template cloth \(T_{real}\); b) another \(0.25m\times 0.25m\) stiffer square cloth \(Sq_{s}\); c) another \(0.4m\times 0.4m\) softer square cloth \(Sq_{l}\); d) another \(0.2m\times 0.3m\) rectangle cloth \(\mathit{Rect}\); e) another \(0.3m\times 0.4m\) softer \(\mathit{Shirt}\) with two layers of rectangle bodies and sleeves. From left to right: canonical configurations; randomly dragged, folded, and dropped cloth configurations.
2303.06315 | DETA: Denoised Task Adaptation for Few-Shot Learning | Test-time task adaptation in few-shot learning aims to adapt a pre-trained
task-agnostic model for capturing task-specific knowledge of the test task, relying
only on few-labeled support samples. Previous approaches generally focus on
developing advanced algorithms to achieve the goal, while neglecting the
inherent problems of the given support samples. In fact, with only a handful of
samples available, the adverse effect of either the image noise (a.k.a.
X-noise) or the label noise (a.k.a. Y-noise) from support samples can be
severely amplified. To address this challenge, in this work we propose DEnoised
Task Adaptation (DETA), a first, unified image- and label-denoising framework
orthogonal to existing task adaptation approaches. Without extra supervision,
DETA filters out task-irrelevant, noisy representations by taking advantage of
both global visual information and local region details of support samples. On
the challenging Meta-Dataset, DETA consistently improves the performance of a
broad spectrum of baseline methods applied on various pre-trained models.
Notably, by tackling the overlooked image noise in Meta-Dataset, DETA
establishes new state-of-the-art results. Code is released at
https://github.com/JimZAI/DETA. | Ji Zhang, Lianli Gao, Xu Luo, Hengtao Shen, Jingkuan Song | 2023-03-11T05:23:20Z | http://arxiv.org/abs/2303.06315v3 | # DETA: Denoised Task Adaptation for Few-Shot Learning
###### Abstract
Test-time task adaptation in few-shot learning aims to adapt a pre-trained task-agnostic model for capturing task-specific knowledge of the test task, relying only on few-labeled support samples. Previous approaches generally focus on developing advanced algorithms to achieve the goal, while neglecting the inherent problems of the given support samples. In fact, with only a handful of samples available, the adverse effect of either the image noise (a.k.a. X-noise) or the label noise (a.k.a. Y-noise) from support samples can be severely amplified. To address this challenge, in this work we propose **DE**noised **T**ask **A**daptation (**DETA**), a first, unified image- and label-denoising framework orthogonal to existing task adaptation approaches. Without extra supervision, DETA filters out task-irrelevant, noisy representations by taking advantage of both global visual information and local region details of support samples. On the challenging Meta-Dataset, DETA consistently improves the performance of a broad spectrum of baseline methods applied on various pre-trained models. Notably, by tackling the overlooked image noise in Meta-Dataset, DETA establishes new state-of-the-art results. Code is released at [https://github.com/nobody-1617/DETA](https://github.com/nobody-1617/DETA).
## 1 Introduction
Few-Shot Learning (FSL) refers to rapidly deriving new knowledge from a limited number of samples, a central capability that humans naturally possess, but "data-hungry" machines still lack. Over the past years, a community-wide enthusiasm has been ignited to narrow this gap, especially in fields such as computer vision [15, 47, 26], machine translation [5, 31, 54] and reinforcement learning [11, 39, 17].
The general formulation of FSL involves two stages: **1)**_training-time_ task-agnostic knowledge accumulation, and **2)**_test-time_ task-specific knowledge acquisition, a.k.a. task adaptation. In particular, the former stage seeks to pre-train a task-agnostic model on large amounts of training samples collected from a set of _base_ classes, while the latter targets adapting the pre-trained model to capture task-specific knowledge of the few-shot (or test) task with _novel_ classes, given a tiny set of labeled support samples. Early progress in FSL has been predominantly achieved using the idea of meta-learning, which aligns the learning objectives of the two stages to better generalize the accumulated knowledge towards few-shot tasks [47, 39, 54]. Nevertheless, recent studies [13, 42, 25, 36] revealed that a good test-time task adaptation approach applied to any pre-trained model - no matter what training paradigm it was learned by - can be more effective than sophisticated meta-learning algorithms. Furthermore, with the recent success in model pre-training techniques [14, 34, 16], designing efficient adapter
Figure 1: Dual noises in the support samples of a few-shot task. **Image noise** (a.k.a. X-noise): the target object regions are often obscured by interfering factors such as cluttered backgrounds, image corruption, etc. **Label noise** (a.k.a. Y-noise): mislabeled samples. The goal of this work is to develop a first, unified image- and label-denoising framework for reliable task adaptation.
based [24, 25, 55] or finetuning-based [8, 19, 45] task adaptation algorithms that can flexibly borrow "free" knowledge from a wide range of pre-trained models is therefore of great practical value, and has made remarkable progress in FSL.
Despite the encouraging progress, existing approaches mostly focus on developing advanced algorithms to mine task-specific knowledge for few-shot tasks, while neglecting the inherent problems of the given support samples. Unfortunately, the set of support samples collected from the open world, no matter how small, can be unavoidably polluted by noises. As illustrated in Figure 1, either the image noise (a.k.a. X-noise) or the label noise (a.k.a. Y-noise) could arise at possibly every phase of the task lifecycle1. It has been well recognized that a tiny portion of image-noisy [21, 35] or label-noisy [32, 48] samples can compromise the model performance to a large extent. When it comes to test-time task adaptation, the adverse effects of the dual noises can be remarkably magnified owing to the _scarcity_ of support samples, as quantitatively shown in Figure 2. Despite being harmful and inevitable, as far as we know, both image noise and label noise have received considerably less attention in test-time task adaptation. **This begs the following questions: 1)** Is it possible to design a method to tackle the two issues in a unified framework? **2)** Can the designed method be made orthogonal to existing task adaptation approaches, so as to achieve robust FSL?
Footnote 1: In more challenging FSL scenarios, some or even all of the few examples are collected by an agent from a dynamic environment rather than by humans; in this context, the dual noises become more common.
In this work, we answer the above questions by proposing **DE**noised **T**ask **A**daptation (**DETA**), a first, unified image- and label-denoising framework for FSL. The key idea of DETA is to simultaneously filter out task-irrelevant (i.e. noisy) local region representations of _image-noisy_ samples, as well as global image representations of _label-noisy_ samples, relying only on the interrelation among the given support samples of few-shot tasks. To this end, a parameter-free _contrastive relevance aggregation (CoRA)_ module is first designed to determine the weights of regions and images in support samples, based on which two losses are proposed for noise-robust (or reliable) task adaptation: a _local compactness_ loss \(\mathcal{L}_{l}\) that promotes the intra-class compactness of _clean_ regions, along with a _global dispersion_ loss \(\mathcal{L}_{g}\) that encourages the inter-class dispersion of _clean_, image-level class prototypes. The two losses complement each other to take advantage of both global visual information and local region details of support samples to softly ignore the dual noises during the optimization. An overview of our DETA framework is shown in Figure 3.
**Flexibility and Strong Performance.** The proposed DETA is orthogonal to existing _adapter_-based task adaptation (A-TA) and _finetuning_-based task adaptation (F-TA) paradigms, and can therefore be plugged into any of these approaches to improve model robustness under the joint (image, label)-noise. On average, by performing image-denoising on the vanilla Meta-Dataset (MD) [51], DETA improves the classification accuracy of A-TA, F-TA baselines by **1.8%**\(\sim\)**1.9%**, **2.2%**\(\sim\)**4.1%**, respectively (Table 1). In particular, by tackling the overlooked image noise in the vanilla MD, DETA further boosts the state-of-the-art TSA [25] by **1.8%**\(\sim\)**2.1%** (Table 5). Also, by conducting label-denoising on the label-corrupted MD, DETA outperforms A-TA, F-TA baselines by **1.8%**\(\sim\)**4.2%**, **2.8%**\(\sim\)**6.1%**, respectively (Table 2).
**Contributions.** To summarize, our contributions are three-fold. **1)** We propose DETA, a first, unified image- and label-denoising framework for FSL. **2)** Our DETA can be flexibly plugged into both adapter-based and finetuning-based task adaptation paradigms. **3)** Extensive experiments on Meta-Dataset show the effectiveness and flexibility of DETA.
## 2 Related Work
**Few-shot Learning.** Generalizing from a limited number of samples has proven challenging for most existing deep learning models. Prevalent FSL approaches learn new concepts under scarce supervision in a meta-learning setting [7, 13, 18, 27, 29, 40, 57, 58]. In **Sup. Mat. (E.1)**, we present a review of the literature on FSL approaches.
**Test-time Task Adaptation in FSL.** Recent progress has revealed that when there exists severe _category/domain shift_ between base classes and few-shot tasks, the generalization of any pre-trained model degrades markedly without test-time task adaptation [4, 36, 42]. Various attempts have been made to adapt pre-trained models to few-shot tasks by devising model-specific adapters, _e.g_., the residual adapter TSA [25] for ResNets [15] and the self-attention adapter eTT [55] for ViTs [9]. A survey of
Figure 2: Quantitative evidence that image- or label-noisy support samples degrades test-time task adaptation performance. The results are averaged over 100 5-way 10-shot tasks sampled from the five classes in Figure 1. _Image-noisy_ samples here are manually selected from all samples of the five classes. _Label-noisy_ samples for each class are generated by uniformly changing the label to that of other four classes. The baseline scheme TSA [25] is applied to a RN-18 pre-trained on ImNet-MD [51] for task adaptation. As seen, the dual noises negatively impact task adaptation performance, and our method consistently improves the baseline under various ratios of image- or label-noisy support samples.
test-time task adaptation is presented in **Sup. Mat. (E.2)**.
**Data-denoising for FSL.** Training data collected from the open world are unavoidably polluted by image noise or label noise, which may compromise the performance of the learned models [2, 20, 48]. Only a few works in FSL have considered the influence of image noise [35, 56] or label noise [28, 37] on model generalization, and they mainly focus on dealing with noise in base classes rather than in the few-shot task. In particular, Liang et al. [28] were the first to explore the label noise problem in FSL. The differences between that work [28] and ours are threefold. **1)** We aim to address both the image and label noises in support samples, where every sample is of great value in characterizing the few-shot task. **2)** We take advantage of both global visual information and local region details to achieve the goal. **3)** Our method is orthogonal to both adapter-based and finetuning-based task adaptation methods. Even so, Liang et al. [28] provide considerable inspiration for our method.
**Cross-image Alignment for Representation Learning.** A plethora of cross-image alignment based FSL methods have recently been developed to extract more discriminative representations [1, 58, 52, 23, 18, 23]. These methods highlight important local regions by aligning local features between the support and query samples of few-shot tasks. Despite their impressive performance, these _non-adaptation_ methods are unable to capture task-specific representations when there exists severe _category shift_ or _domain shift_ between base classes and few-shot tasks, as has been fully demonstrated in [19, 36]. Moreover, an often-overlooked fact is that, owing to the small sample size of few-shot tasks, modeling the relationships among the given support samples requires negligible computational cost.
## 3 Methodology
In this section, we elaborate on our proposed DETA. Before that, we introduce some preliminary concepts.
### Preliminary
Assume we have a pre-trained task-agnostic model \(f_{\theta}\) parameterized by \(\theta\), which serves as a feature backbone to output a \(d\)-dimensional representation for each input image. Test-time task adaptation seeks to adapt \(f_{\theta}\) to the test task \(T=\{S,Q\}\), by deriving task-specific knowledge on the few-labeled support samples \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N_{s}}\), consisting of \(N_{s}\)_image-label_ pairs from \(C\) novel classes, _i.e._, \(y_{i}\in\{1,...,C\}\). It is expected that the adapted model can correctly partition the \(N_{q}\) query samples \(Q=\{\mathbf{x}_{i}\}_{i=1}^{N_{q}}\) into the \(C\) classes in the _representation_ space. If there are exactly \(K\) support samples in each of these \(C\) classes, the task is also called a \(C\)-way \(K\)-shot task.
**Adapter-based Task Adaptation (A-TA)**. The goal of A-TA is to capture the knowledge of the test task by attaching a model-specific adapter \(\mathcal{A}_{\alpha}\) parameterized by \(\alpha\) to the pre-trained model \(f_{\theta}\). During task adaptation, the parameters of \(f_{\theta}\), \(\theta\), are frozen and only the parameters \(\alpha\) are optimized
Figure 3: An overview of the proposed **DETA** (in a 2-way 3-shot example). During each iteration of task adaptation, the images together with a set of randomly cropped local regions of support samples are first fed into a pre-trained model \(f_{\theta}\) (w/ or w/o a model-specific adapter \(\mathcal{A}_{\alpha}\)) to extract image and region representations. Next, a **Contrastive Relevance Aggregation (CoRA)** module takes the region representations as input to determine the weight of each region, based on which we can refine the image weights by a momentum accumulator. Finally, a **Local Compactness** loss \(\mathcal{L}_{l}\), along with a **Global Dispersion** loss \(\mathcal{L}_{g}\), are devised in a weighted embedding space for promoting the mining of task-specific (or clean) representations. At inference, we only retain the adapted model \(f_{\theta^{*}}\) (or \(f_{[\theta,\alpha^{*}]}\)) to produce image representations of support samples, on which we can build a noise-robust classifier guided by the refined image weights in the accumulator.
from scratch using the support samples:
\[\alpha:=\alpha-\gamma\nabla_{\alpha}\mathcal{L}^{S}\big{(}[\theta;\alpha]\big{)}, \tag{1}\]
where \(\gamma\) is the learning rate, and
\[\mathcal{L}^{S}([\theta;\alpha])=\frac{1}{N_{s}}\sum_{(\mathbf{x},y)\in S}\ell\big{(} h(f_{[\theta;\alpha]}(\mathbf{x});S),y\big{)}, \tag{2}\]
where \(\ell\) is cross-entropy loss, \(f_{[\theta;\alpha]}\) indicates the feature backbone appended with the adapter, \(h\) is a non-parametric classifier head capable of producing a softmax probability vector whose dimensionality equals \(C\). Notably, the recent A-TA scheme TSA [25] achieved state-of-the-art results on Meta-Dataset [51], by integrating a residual-adapter into the pre-trained URL [24] model (w/ RN-18), and setting \(h\) to the nonparametric Nearest Centroid Classifier (NCC) [38].
**Finetuning-based Task Adaptation (F-TA).** A-TA requires model-specific adapters to adapt different pre-trained models, e.g., the residual adapter TSA [25] for ResNets [15], the self-attention adapter eTT [55] for ViTs [9]. In contrast, F-TA, originated from transfer learning literature [22] and introduced into FSL by MAML [11], directly finetunes the parameters \(\theta\) of any pre-trained model \(f_{\theta}\) at test time, i.e., \(\theta:=\theta-\gamma\nabla\mathcal{L}^{S}(\theta)\), and is thus model-agnostic.
### Overview
Our framework DETA is illustrated in Figure 3 and mainly consists of the following steps, executed in each iteration. **Step-1.** A feature backbone \(f\) takes the images and a set of randomly cropped image regions of the support samples as input to obtain image and region representations.
**Step-2.** A _contrastive relevance aggregation_ (CoRA) module takes the region representations as input to calculate the weights of different regions, based on which we can determine the image weights by a momentum accumulator.
**Step-3.** A projection head maps the high-dimensional image and region representations to a lower dimensional embedding space, where a _local compactness_ loss \(\mathcal{L}_{l}\) and a _global dispersion_ loss \(\mathcal{L}_{g}\) are developed on the weighted region and image embeddings to promote the mining of task-specific knowledge from support samples.
**Step-4.** The calculated \(\mathcal{L}_{l}\) and \(\mathcal{L}_{g}\) are jointly used to update the parameters of the projection head and the feature backbone \(f\), i.e., \(\alpha\) in \(f_{[\theta;\alpha]}\) for A-TA, \(\theta\) in \(f_{\theta}\) for F-TA.
### Contrastive Relevance Aggregation
The motivation of CoRA is that a region showing higher relevance (or similarity) to _in-class_ regions and lower relevance to _out-of-class_ regions is more likely to be an object region and should therefore be assigned a larger weight.
Given the support samples \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N_{s}}\) of a test-time task, we first randomly crop \(k\) local regions of size \(M\times M\) for every image \(\mathbf{x}_{i}\). Next, the original image together with all cropped regions of each sample are fed into \(f\) to generate image representation \(\mathbf{z}_{i}\) and region representations \(Z_{i}=\{\mathbf{z}_{ij}\}_{j=1}^{k}\). Let \(Z^{(c)}=\bigcup_{i=1}^{N_{c}}Z_{i}\) denote the collection of representations of cropped regions in class \(c\), \(\mathbf{Z}=\bigcup_{c=1}^{C}Z^{(c)}\) the set of all representations of cropped regions, where \(N_{c}\) is the number of images in class \(c\). For each region representation \(\mathbf{z}_{ij}\) in \(Z_{i}\), we construct its _in-class_ and _out-of-class_ region representation sets as \(I(\mathbf{z}_{ij})=Z^{(c)}\setminus Z_{i}\) and \(O(\mathbf{z}_{ij})=\mathbf{Z}\setminus Z^{(c)}\), respectively. Note that in \(I(\mathbf{z}_{ij})\), the other \(k-1\) intra-image representations are dropped to alleviate their dominating impacts. CoRA calculates the weight of each region based on the global statistics of in-class and out-of-class relevance scores, respectively formulated as
\[\phi(\mathbf{z}_{ij})=\frac{1}{|I(\mathbf{z}_{ij})|}\sum_{\mathbf{z}^{\prime}\in I(\mathbf{z}_ {ij})}\zeta(\mathbf{z}_{ij},\mathbf{z}^{\prime}), \tag{3}\]
\[\psi(\mathbf{z}_{ij})=\frac{1}{|O(\mathbf{z}_{ij})|}\sum_{\mathbf{z}^{\prime}\in O(\mathbf{z}_ {ij})}\zeta(\mathbf{z}_{ij},\mathbf{z}^{\prime}), \tag{4}\]
where \(\zeta(\cdot)\) indicates cosine similarity. These scores are then normalized inside each class:
\[\widetilde{\phi}(\mathbf{z}_{ij})\!=\!\frac{e^{\phi(\mathbf{z}_{ij})}}{\sum_{\mathbf{z}^{ \prime}\in Z^{(c)}}\!e^{\phi(\mathbf{z}^{\prime})}},\,\widetilde{\psi}(\mathbf{z}_{ ij})\!=\!\frac{e^{\psi(\mathbf{z}_{ij})}}{\sum_{\mathbf{z}^{\prime}\in Z^{(c)}}\!e^{ \psi(\mathbf{z}^{\prime})}}. \tag{5}\]
Therefore, the final region weight for \(\mathbf{z}_{ij}\) is defined as \(\lambda_{ij}=\widetilde{\phi}(\mathbf{z}_{ij})/\widetilde{\psi}(\mathbf{z}_{ij})\in\mathbb{R}\). A PyTorch-like pseudocode for CoRA is illustrated in Figure 3.
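Since that pseudocode is not reproduced here, the sketch below gives one possible PyTorch-style reading of Eqs. (3)-(5): cosine similarities between all region features are masked into in-class and out-of-class averages, softmax-normalized within each class, and divided to obtain the region weights. Variable names and the masking details are our assumptions.

```python
import torch
import torch.nn.functional as F

def cora_region_weights(reg_feats, img_idx, cls_idx):
    """reg_feats: (R, d) region features; img_idx, cls_idx: (R,) long tensors. Returns (R,) weights."""
    z = F.normalize(reg_feats, dim=-1)
    sim = z @ z.t()                                              # pairwise cosine similarities
    same_img = img_idx.unsqueeze(0) == img_idx.unsqueeze(1)
    same_cls = cls_idx.unsqueeze(0) == cls_idx.unsqueeze(1)
    in_mask = (same_cls & ~same_img).float()                     # I(z_ij): in-class, other images
    out_mask = (~same_cls).float()                               # O(z_ij): out-of-class
    phi = (sim * in_mask).sum(1) / in_mask.sum(1).clamp(min=1)   # Eq. (3)
    psi = (sim * out_mask).sum(1) / out_mask.sum(1).clamp(min=1) # Eq. (4)
    weights = torch.empty_like(phi)
    for c in cls_idx.unique():                                   # per-class normalization, Eq. (5)
        m = cls_idx == c
        weights[m] = torch.softmax(phi[m], 0) / torch.softmax(psi[m], 0)
    return weights                                               # lambda_ij for every cropped region
```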
**A Momentum Accumulator for Image-weighting.** Aside from weighting the local regions, we also need to assess the _quality_ of the images themselves for filtering out label-noisy samples. Intuitively, the most direct way to determine the weight of an image \(\mathbf{x}_{i}\), \(\omega_{i}\), is to average the weights of all \(k\) cropped regions belonging to it, i.e., \(\omega_{i}=\frac{1}{k}\sum_{j=1}^{k}\lambda_{ij}\).
However, the randomly cropped regions in different task adaptation iterations may vary considerably, making the calculated image weights unstable. A momentum accumulator is thus developed to cope with this issue:
\[\omega_{i}^{t}=\begin{cases}\frac{1}{k}\sum_{j=1}^{k}\lambda_{ij},&\text{if }t=1\\ \gamma\omega_{i}^{t-1}+\frac{1-\gamma}{k}\sum_{j=1}^{k}\lambda_{ij},&\text{if }t>1 \end{cases} \tag{6}\]
where \(\omega_{i}^{t}\) denotes the accumulated image weight of \(\mathbf{x}_{i}\) in the \(t\)-th iteration of task adaptation, \(\gamma\) is the momentum hyper-parameter, and we set it to 0.7 in our method. For brevity, we omit the superscript \(t\) in the following sections.
### Noise-robust Task Adaptation
DETA performs image- and label-denoising in a unified framework to achieve noise-robust task adaptation. To this end, DETA simultaneously **(1)** promotes the intra-class compactness of _clean_ regions - to filter out noisy local representations (e.g. cluttered backgrounds of image-noisy samples), and **(2)** encourages the inter-class dispersion of _clean_, image-level class prototypes - to filter out noisy global representations (i.e. images of label-noisy samples). To formalize our idea, we first map each image representation \(\mathbf{z}_{i}\) and its region representations \(Z_{i}=\{\mathbf{z}_{ij}\}_{j=1}^{k}\) to a low-dimensional embedding space by a projection head. The \(l_{2}\) normalized image embedding and \(k\) region embeddings are denoted as \(\mathbf{e}_{i}\) and \(E_{i}=\{\mathbf{e}_{ij}\}_{j=1}^{k}\), respectively; a generic region embedding from \(E_{i}\) is written \(\mathbf{r}\) below. Define \(E^{(c)}\) and \(\mathbf{E}\) similar to \(Z^{(c)}\) and \(\mathbf{Z}\).
**To achieve (1)**, we softly pull together (resp. push away) _clean_ regions from the same class (resp. different classes), guided by the region weights calculated by CoRA. For every pair of region embeddings \(\mathbf{r}_{i}\) and \(\mathbf{r}_{j}\) from the same class, with region weights \(\lambda_{i}\) and \(\lambda_{j}\), the loss function is
\[l(\mathbf{r}_{i},\mathbf{r}_{j})=-\log\frac{\exp(\lambda_{i}\mathbf{r}_{i} \cdot\lambda_{j}\mathbf{r}_{j}/\tau)}{\sum_{\mathbf{r}_{v}\in\mathbf{E}\backslash\mathbf{r}_ {i}}\exp(\lambda_{i}\mathbf{r}_{i}\cdot\lambda_{v}\mathbf{r}_{v}/\tau)}, \tag{7}\]
where \(\tau\) is a temperature parameter. The objective function is equivalent to minimizing the following loss:
\[\mathcal{L}_{l}=\frac{1}{\sum_{c=1}^{C}\frac{kN_{c}\times(kN_{c}-1)}{2}}\sum_{c=1}^{C}\sum_{\mathbf{r}_{i},\mathbf{r}_{j}\in E^{(c)},\,i<j}l(\mathbf{r}_{i},\mathbf{r}_{j}) \tag{8}\]
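A naive sketch of this weighted objective, built directly from Eq. (7), is shown below; it loops over pairs for clarity rather than efficiency, and the temperature and input conventions are illustrative assumptions.

```python
import torch

def local_compactness_loss(emb, weights, labels, tau=0.5):
    """emb: (R, d) l2-normalized region embeddings; weights: CoRA weights; labels: (R,) class ids."""
    w_emb = weights.unsqueeze(-1) * emb                    # lambda * r for every region
    sim = w_emb @ w_emb.t() / tau                          # scaled weighted similarities
    losses, n = [], emb.shape[0]
    for i in range(n):
        # denominator of Eq. (7): all embeddings except r_i itself
        denom = torch.logsumexp(torch.cat([sim[i, :i], sim[i, i + 1:]]), dim=0)
        for j in range(n):
            if j != i and labels[j] == labels[i]:
                losses.append(denom - sim[i, j])           # -log softmax for the positive pair (i, j)
    return torch.stack(losses).mean()                      # average over same-class pairs
```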
It is worth mentioning that in the standard task-sampling protocol for MD, the generated test tasks are way/shot imbalanced, _a.k.a. varied-way varied-shot_. To avoid cases where the number of support samples in a class is less than 10, we adopt a unified task sampling protocol for the two MD versions by fixing the shot of every inner-task class to 10, _i.e._, _varied-way 10-shot_. However, when comparing with state-of-the-art methods, we still employ the standard varied-way varied-shot protocol for fairness.
**Baseline Methods.** We verify the effectiveness and flexibility of DETA by applying it to a broad spectrum of baseline methods built on diverse pre-trained models. For A-TA, we consider the two strong baselines TSA [25] and eTT [55]. Both integrate a model-specific adapter into the pre-trained model: TSA integrates a residual adapter into the single-domain URL (w/ RN-18 pre-trained on \(84\times 84\) ImN-MD) [24], and eTT attaches a self-attention adapter to DINO (ViT-S) [6]. As for F-TA, motivated by [25]2, we use the NCC head instead of the linear classifier that is common in the transfer learning literature. We denote this F-TA scheme F-NCC, and use it for adapting different pre-trained models including MOCO (w/ RN-50) [16], CLIP (w/ RN-50) [41], DeiT (w/ ViT-S) [49] and Swin Transformer (Tiny) [34]. All models are trained on ImageNet-1k, except for CLIP, which is trained on large-scale image captions. For all baseline methods, we match the image size between model pre-training and task adaptation, _i.e._, the image size is set to \(84\times 84\) for _TSA_[25] and \(224\times 224\) for the other methods.
Footnote 2: The nonparametric NCC has been proven in [25] to be more effective for adapter-based or finetuning-based task adaptation than other competitors such as logistic regression, support vector machine and Mahal. Dist.
**Implementation Details.** Following [25, 55], we perform task adaptation by updating the pre-trained model (or the appended task adapter) for 40 iterations on each few-shot task. During each iteration of our DETA, 4 and 2 image regions are cropped from every support sample for TSA and other methods, respectively. The projection head in our network is a two-layer MLP, and the embedding dimension is 128. The two temperatures \(\tau\) and \(\pi\), are set to 0.5 and 0.07, respectively. The hyperparameter \(\beta\) is set to 0.1. More detailed settings are provided in **Sup. Mat. (B)**.
**Evaluation Metric.** We evaluate our method on 600 randomly sampled test tasks for each MD dataset, and report average accuracy (in %) and 95% confidence intervals.
### Experimental Results
In this part, we seek to answer the following questions.
**Q1.** Can DETA consistently enhance task adaptation results for any types of baselines by performing image-denoising on support samples?
**Q2.** Can DETA perform robustly in the presence of various ratios of label-noisy support samples?
**Q3.** Can DETA boost the current state-of-the-art, after tackling the overlooked image noise in the MD benchmark?
**Image-denoising**. To validate the effectiveness of DETA on image-denoising, we conduct experiments on the vanilla MD with the six baseline approaches introduced above. The quantitative results of the baseline methods w/ or w/o DETA are reported in Table 1. We can observe from the results: **1)** DETA consistently improves adapter-based and finetuning-based task adaptation methods, which confirms that DETA is orthogonal to those methods and able to improve model robustness to image noise for them. **2)** DETA achieves significant performance gains on both TSA (for \(84\times 84\)-size input images) and other methods (for \(224\times 224\)-size images), suggesting DETA is agnostic to image size. **3)** DETA can tackle both types of image noise: _background clutter_ (in ImgN-MD, etc.) and _image corruption_ (in Omglot and QkDraw); qualitative results are shown in Section 4.3.
**Label-denoising.** We further demonstrate the effectiveness of DETA on label-denoising on the label-corrupted MD. Concretely, we manually corrupt the labels of different ratios (10%\(\sim\)70%) of support samples for each task, by uniformly changing the correct image labels to the other \(C-1\) classes. Table 2 reports the average accuracy of different baseline methods w/ or w/o our DETA on the ten MD datasets, under different ratios of corrupted support samples. We have the following observations. **1)** The few-shot classification performance gradually decreases as the ratio of label-noisy support samples increases. **2)** DETA consistently improves the baseline methods by a large margin in all settings, demonstrating its effectiveness in improving model robustness to label noise. **3)** Compared with the image-denoising results in Table 1, the performance gains of DETA w.r.t. label-denoising are more significant. Possible reasons are twofold. **i)** The negative impact of label noise on performance is more significant than that of image noise, as the label-noisy samples contain almost no valuable object features associated with the correct classes. **ii)** When one class contains samples from other classes, our designed CoRA can identify the harmful regions more precisely by taking advantage of out-of-class relevance information.
**State-of-the-art Comparison.** So far, we can see that our DETA can be flexibly plugged into both adapter-based and finetuning-based task adaptation methods to improve model robustness to the dual noises. It is interesting to investigate whether DETA can further boost the current state-of-the-art after tackling the image-noisy samples in the vanilla MD. Hence, we apply our DETA to the state-of-the-art scheme TSA [25] and conduct experiments on MD with a group of competitors, _e.g._, FLUTE [50], URL [24], eTT [55]. In Table 5, we can observe DETA considerably improves the strong baseline TSA and establishes new state-of-the-art results on nearly all ten MD datasets, which further confirm
the effectiveness and flexibility of our DETA. More importantly, the achieved results also uncover the long-overlooked image noise problem of the MD benchmark. More qualitative evidence for this problem is discussed in Section 4.3 and demonstrated in Figure 4.
### Ablation Studies
In this section, we conduct ablative analysis to investigate the designed components of DETA in Table 3. We also study the impact of data augmentation caused by cropped regions on model performance in Table 4. Unless stated otherwise, the baseline is MoCo (w/ RN-50), the ratio of label-noisy support samples is 30%, and the average results over the ten MD datasets are reported.
**Effectiveness of the Designed Components.** DETA contains three key components: a CoRA module, a _local compactness_ loss \(\mathcal{L}_{l}\) and a _global dispersion_ loss \(\mathcal{L}_{g}\). We conduct a component-wise analysis by removing one of them at a time to understand the influence of each component in Table 3 (\(\mathbb{A}\)-\(\mathbb{D}\)). In the table, \(\mathbb{A}\) denotes the full model, and \(\mathbb{B}\) indicates that the weights of all images and cropped regions are set to 1. As can be observed, every devised component of DETA contributes to the final performance. In particular, \(\mathcal{L}_{g}\) achieves superior performance over \(\mathcal{L}_{l}\), reflecting that \(\mathcal{L}_{g}\) also complements \(\mathcal{L}_{l}\) in improving the intra-class compactness of clean regions. From \(\mathbb{B}\), we observe the remarkable contribution of our CoRA module, verifying its importance for guiding the two losses. In \(\mathbb{E}\) and \(\mathbb{F}\), we investigate other variants of our method. The results in \(\mathbb{E}\) and \(\mathbb{F}\) respectively verify the effectiveness of the out-of-class relevance information for CoRA and of the designed momentum accumulator for building a noise-robust classifier.
**Influence of Data Augmentation.** DETA leverages both the images and cropped regions of support samples to perform test-time task adaptation. It is important to answer the question: are the performance improvements mostly attributable to data augmentation? To this end, we remove all the designed components of DETA, and jointly use the images and cropped regions for task adaptation. The results are reported in Table 4. Not surprisingly, without filtering out task-irrelevant, noisy representations, the joint utilization of images and regions for task adaptation does not result in significant performance gains.
**Analysis of the Number of Regions, Region Size, \(\beta\), \(\zeta(\cdot)\).** We show that **1)** an overly large number of regions or an overly small region size does not yield significant performance gains, and **2)** DETA is in general not sensitive to \(\beta\) or the choice of \(\zeta(\cdot)\) within a certain range, in **Sup. Mat. (C)**.
\begin{table}
\begin{tabular}{c|c|c c c c c c c c c c} \hline Model & Method & ImN-MD & Omglot & Acraft & CUB & DTD & QkDraw & Fungi & Flower & COCO & Sign & _Avg_ \\ \hline URL & _TSA_[25] & \(58.3\pm 0.9\) & \(80.7\pm 0.2\) & \(61.1\pm 0.7\) & \(83.2\pm 0.5\) & \(72.5\pm 0.6\) & \(78.9\pm 0.6\) & \(64.7\pm 0.8\) & \(92.3\pm 0.3\) & \(75.1\pm 0.7\) & \(87.7\pm 0.4\) & \(75.5\) \\ (RN-18) & \(\mathbf{+DETA}\) & \(\mathbf{58.7\pm 0.9}\) & \(\mathbf{82.7\pm 0.2}\) & \(\mathbf{63.1\pm 0.7}\) & \(\mathbf{85.0\pm 0.5}\) & \(\mathbf{72.7\pm 0.6}\) & \(\mathbf{80.4\pm 0.6}\) & \(\mathbf{66.7\pm 0.8}\) & \(\mathbf{93.8\pm 0.3}\) & \(\mathbf{76.3\pm 0.7}\) & \(\mathbf{92.1\pm 0.4}\) & \(\mathbf{77.3\pm (1.8)}\) \\ \hline DINO & _eTT_[55] & \(73.2\pm 0.8\) & \(93.0\pm 0.4\) & \(\mathbf{68.1\pm 0.7}\) & \(89.6\pm 0.3\) & \(74.9\pm 0.5\) & \(79.3\pm 0.7\) & \(76.2\pm 0.5\) & \(96.0\pm 0.2\) & \(72.7\pm 0.6\) & \(86.3\pm 0.7\) & \(80.9\) \\ (ViT-S) & \(\mathbf{+DETA}\) & \(\mathbf{75.6\pm 0.8}\) & \(\mathbf{93.6\pm 0.4}\) & \(67.7\pm 0.8\) & \(\mathbf{91.8\pm 0.3}\) & \(\mathbf{76.0\pm 0.5}\) & \(\mathbf{81.9\pm 0.7}\) & \(\mathbf{77.2\pm 0.5}\) & \(\mathbf{96.9\pm 0.3}\) & \(\mathbf{78.5\pm 0.6}\) & \(\mathbf{88.5\pm 0.7}\) & \(\mathbf{82.8}\) (+1.9) \\ \hline MoCo & _F-NCC_ & \(70.7\pm 1.0\) & \(82.5\pm 0.4\) & \(55.1\pm 0.8\) & \(67.0\pm 0.8\) & \(\mathbf{81.3\pm 0.5}\) & \(73.8\pm 0.7\) & \(54.8\pm 0.9\) & \(89.2\pm 0.5\) & \(76.8\pm 0.7\) & \(79.6\pm 0.6\) & \(73.0\) \\ (RN-50) & \(\mathbf{+DETA}\) & \(\mathbf{73.6\pm 1.0}\) & \(\mathbf{83.9\pm 0.4}\) & \(\mathbf{59.1\pm 0.8}\) & \(\mathbf{73.9\pm 0.8}\) & \(80.9\pm 0.5\) & \(\mathbf{76.1\pm 0.7}\) & \(\mathbf{60.7\pm 0.9}\) & \(\mathbf{92.3\pm 0.5}\) & \(\mathbf{79.0\pm 0.7}\) & \(\mathbf{84.2\pm 0.6}\) & \(\mathbf{76.4\pm (3.4)}\) \\ \hline CLIP & _F-NCC_ & \(67.0\pm 1.0\) & \(89.2\pm 0.5\) & \(61.2\pm 0.8\) & \(84.0\pm 0.7\) & \(74.5\pm 0.6\) & \(75.5\pm 0.7\) & \(57.6\pm 0.9\) & \(92.1\pm 0.4\) & \(72.1\pm 0.8\) & \(79.8\pm 0.7\) & \(75.3\) \\ (RN-50) & \(\mathbf{+DETA}\) & \(\mathbf{69.6\pm 0.9}\) & \(\mathbf{92.2\pm 0.5}\) & \(\mathbf{97.0\pm 0.8}\) & \(\mathbf{85.7\pm 0.7}\) & \(\mathbf{76.2\pm 0.6}\) & \(\mathbf{77.2\pm 0.7}\) & \(\mathbf{64.5\pm 0.9}\) & \(\mathbf{94.5\pm 0.3}\) & \(\mathbf{72.6\pm 0.8}\) & \(\mathbf{80.7\pm 0.7}\) & \(\mathbf{77.6\pm (2.3)}\) \\ \hline DeiT & _F-NCC_ & \(90.0\pm 0.6\) & \(92.5\pm 0.2\) & \(65.3\pm 0.7\) & \(89.8\pm 0.4\) & \(73.9\pm 0.6\) & \(83.3\pm 0.5\) & \(70.3\pm 0.5\) & \(92.2\pm 0.4\) & \(83.0\pm 0.6\) & \(85.0\pm 0.6\) & \(82.5\) \\ (ViT-S) & \(\mathbf{+DETA}\) & \(\mathbf{90.8\pm 0.6}\) & \(\mathbf{93.3\pm 0.2}\) & \(\mathbf{71.6\pm 0.7}\) & \(\mathbf{92.4\pm 0.4}\) & \(\mathbf{78.0\pm 0.6}\) & \(\mathbf{84.1\pm 0.6}\) & \(\mathbf{75.2\pm 0.8}\) & \(\mathbf{84.4\pm 0.4}\) & \(\mathbf{95.5\pm 0.6}\) & \(\mathbf{90.0\pm 0.6}\) & \(\mathbf{85.2}\) (+2.7) \\ \hline Vanilla & _F-NCC_ & \(90.8\pm 0.8\) & \(91.2\pm 0.3\) & \(57.6\pm 1.0\) & \(88.3\pm 0.5\) & \(76.4\pm 0.6\) & \(81.9\pm 0.8\) & \(67.8\pm 0.9\) & \(92.3\pm 0.4\) & \(82.5\pm 0.6\) & \(83.9\pm 0.8\) & \(81.3\) \\ SwinT & \(\mathbf{+DETA}\) & \(\mathbf{81.8\pm 0.9}\) & \(\mathbf{92.5\pm 0.3}\) & \(\mathbf{68.9\pm 0.9}\) & \(\mathbf{92.7\pm 0.5}\) & \(\mathbf{79.5\pm 0.7}\) & \(\mathbf{82.8\pm 0.6}\) & \(\mathbf{76.6\pm 0.8}\) & \(\mathbf{96.4\pm 0.4}\) & \(\mathbf{82.9\pm 0.4}\) & \(\mathbf{89.9\pm 0.7}\) & \(\mathbf{85.4}\) (+4.1) \\ \hline \end{tabular}
\end{table}
Table 1: Few-shot classification
### Qualitative Results
Here, we provide some visualization results to qualitatively see how our method works. In Figure 4, we present the visualization of the cropped regions and the calculated weights of CoRA for few-shot tasks from MD. As observed, CoRA successfully assigns larger (resp. smaller) weights to task-specific clean (resp. task-irrelevant noisy) regions for each task. In Figure 5, we show the CAM [44] visualization of the activation maps for two tasks from the representative ImgN-MD and CUB. As shown, our method helps the baseline method accurately locate the task-specific discriminative regions in label-clean but image-noisy samples.
## 5 Conclusions
This work proposes DETA, a first, unified and plug-and-play framework to tackle the joint (image, label)-noise issue
Figure 4: Visualizations of the cropped regions and calculated weights for ten 5-way 10-shot tasks sampled from the 10 MD datasets. We record the region weights after the last iteration. To facilitate comparison, the weights in each class are divided by their maximum value.
Figure 5: CAM visualizations on two 5-way 10-shot tasks sampled from ImgN-MD and CUB, respectively. Two images per class are listed for each task. _Please zoom in for details._
\begin{table}
[Table: Meta-Dataset test accuracy (%) with 95% confidence intervals on the in-domain ImageNet-MD split and nine out-of-domain datasets, together with the average over all datasets. The upper block compares ResNet-18-based methods (ProtoNet, ALFA+fo-Proto-MAML, BOHB, FLUTE, eTT, and URL with its +Beta, +TSA, and +TSA+DETA variants); adding DETA on top of URL+TSA yields the best average accuracy of 72.8. The lower block compares methods built on stronger backbones (CNAPS, Simple CNAPS, Transductive CNAPS, SUR, URT, FLUTE, Tri-M, ...); the remaining rows of this block are truncated.]
in test-time task adaptation. We evaluate DETA on the challenging Meta-Dataset and demonstrate that it consistently improves the performance of a wide range of baseline methods applied to various pre-trained models. We also uncover the overlooked image noise in Meta-Dataset; by tackling this issue, DETA establishes new state-of-the-art results.
**Limitations.** In this work, we only consider noise in support samples; however, query samples from test tasks may also be noisy or even out-of-distribution in reality. Thus, it is of great practical value to further boost model robustness to noisy or out-of-distribution query samples in future work. In addition, DETA may fail when the overwhelming majority (e.g. \(>\)90%) of support samples are noisy, due to confirmation-bias issues.
|
2303.11669 | Universal Smoothed Score Functions for Generative Modeling | We consider the problem of generative modeling based on smoothing an unknown
density of interest in $\mathbb{R}^d$ using factorial kernels with $M$
independent Gaussian channels with equal noise levels introduced by Saremi and
Srivastava (2022). First, we fully characterize the time complexity of learning
the resulting smoothed density in $\mathbb{R}^{Md}$, called M-density, by
deriving a universal form for its parametrization in which the score function
is by construction permutation equivariant. Next, we study the time complexity
of sampling an M-density by analyzing its condition number for Gaussian
distributions. This spectral analysis gives a geometric insight on the "shape"
of M-densities as one increases $M$. Finally, we present results on the sample
quality in this class of generative models on the CIFAR-10 dataset where we
report Fr\'echet inception distances (14.15), notably obtained with a single
noise level on long-run fast-mixing MCMC chains. | Saeed Saremi, Rupesh Kumar Srivastava, Francis Bach | 2023-03-21T08:23:37Z | http://arxiv.org/abs/2303.11669v1 | ###### Abstract
We consider the problem of generative modeling based on smoothing an unknown density of interest in \(\mathbb{R}^{d}\) using factorial kernels with \(M\) independent Gaussian channels with equal noise levels introduced by Saremi and Srivastava (2022). First, we fully characterize the time complexity of learning the resulting smoothed density in \(\mathbb{R}^{Md}\), called M-density, by deriving a universal form for its parametrization in which the score function is by construction permutation equivariant. Next, we study the time complexity of sampling an M-density by analyzing its condition number for Gaussian distributions. This spectral analysis gives a geometric insight on the "shape" of M-densities as one increases \(M\). Finally, we present results on the sample quality in this class of generative models on the CIFAR-10 dataset where we report Frechet inception distances (14.15), notably obtained with a single noise level on long-run fast-mixing MCMC chains.
**Universal Smoothed Score Functions for Generative Modeling**
Saeed Saremi\({}^{1,\ 2}\) Rupesh Kumar Srivastava\({}^{3}\) Francis Bach\({}^{4}\)
\({}^{1}\)UC Berkeley \({}^{2}\)Prescient Design, Genentech, Roche \({}^{3}\)NNAISENSE
\({}^{4}\)Inria, Ecole Normale Superieure, PSL Research University
## 1 Introduction
Smoothing a density with a kernel is a technique in nonparametric density estimation that goes back to Parzen (1962) at the birth of modern statistics. There has been a recent interest in estimating smoothed densities that are obtained by Gaussian convolution (Saremi and Hyvarinen, 2019; Goldfeld et al., 2020), where instead of the random variable \(X\) the problem is to model the random variable \(Y=X+\mathcal{N}(0,\sigma^{2}I_{d})\) based on a finite number of independent samples \(\{x_{i}\}_{i=1}^{n}\) drawn from \(p_{X}\). There is a subtle difference between this problem and the problem addressed by Parzen (1962). In the original problem (estimating \(p_{X}\)), the bandwidth of the Gaussian kernel \(\sigma\) is adjusted depending on the number of samples (typically tending to zero when the sample size goes to infinity). Here the kernel bandwidth \(\sigma\) is _fixed_. As one might expect, learning \(p_{Y}\) is simpler than learning \(p_{X}\), but the problem is quite rich with deep connections to empirical Bayes and score matching as we highlight next.
Empirical Bayes formulated by Robbins (1956) is concerned with the problem of estimating the random variable \(X\) given a single noisy observation \(Y=y\) assuming the noise model \(p_{Y|X}\) is known. A classical result states that the least-squares estimator of \(X\) is the Bayes estimator:
\[\widehat{x}(y)=\frac{\int xp(y|x)p(x)dx}{\int p(y|x)p(x)dx}.\]
But the remarkable result obtained in Robbins' seminal paper is that the Bayes estimator can be written in closed form purely in terms of \(p_{Y}\) for a variety of noise models: the explicit knowledge of \(p_{X}\) is not required in this estimation problem. In addition, this dependency is only in terms of the _unnormalized_ \(p_{Y}\) for all known empirical Bayes estimators, although this fact was not highlighted by Robbins (1956). For isotropic Gaussian noise the estimator takes the form1
Footnote 1: This result for Gaussian noise models was first derived by Miyasawa (1961) but has a rich history of its own; see Raphan and Simoncelli (2011) for a survey.
\[\widehat{x}(y)=y+\sigma^{2}g(y),\]
where \(g(y)=\nabla\log p(y)\) is known in the literature as the score function (Hyvarinen, 2005).
This is the starting point in neural empirical Bayes (NEB) (Saremi and Hyvarinen, 2019), where the score function is parametrized using a neural network, arriving at the following learning objective
\[\mathcal{L}(\theta)=\mathbb{E}_{(x,y)\sim p(y|x)p(x)}\|x-\widehat{x}_{\theta}(y )\|^{2}.\]
Algorithmically, NEB is attractive on two fronts: (i) the learning/estimation problem is reduced to the optimization of the least-squares denoising objective, where MCMC sampling is not required during learning, (ii) generative modeling is reduced to sampling \(p_{Y}\) which is better conditioned than sampling \(p_{X}\) (see Sec. 4) combined with the estimation of \(X\) which is a deterministic computation itself, referred to as walk-jump sampling (WJS).
The main problem with NEB, from the perspective of generative modeling (sampling \(p_{X}\)), is that we cannot sample from \(p(x|y)\), and we do not have a control over how concentrated \(p(x|y)\) is around its mean \(\widehat{x}(y)=\mathbb{E}[X|Y=y]\). A solution to this problem was formulated by Saremi and Srivastava (2022), where the noise model in NEB was replaced with a multimeasurement noise model (MNM):
\[p(\mathbf{y}|x)=\prod_{m=1}^{M}p(y_{m}|x),\]
where the bold-faced \(\mathbf{y}\) denotes the multimeasurement random variable \(\mathbf{y}=(y_{1},\ldots,y_{M})\). As we review in Sec. 2, the algorithmic attractions of NEB carry over to Gaussian MNMs with the added benefits that by simply increasing the number of measurements \(M\), the posterior \(p(x|\mathbf{y})\) automatically concentrates around its mean. Of particular interest is the case where the \(M\) noise levels are identical, therefore the M-density \(p(\mathbf{y})\) is permutation invariant--this class of models is denoted by \((\sigma,M)\) which is our focus in this paper.
### Contributions
Our theoretical contributions are concerned with answering the following two questions:
* _What is the time complexity of learning M-densities?_ We show that the M-densities associated with \((\sigma,M)\) and \((\sigma^{\prime},M^{\prime})\) can be mapped to each other if \(\sigma/\sqrt{M}=\sigma^{\prime}/\sqrt{M^{\prime}}\). The permutation-invariant Gaussian M-densities are therefore grouped into universality classes \([\sigma_{\mathsf{eff}}]\) where \(\sigma_{\mathsf{eff}}\coloneqq\sigma/\sqrt{M}\). We arrive at a parametrization scheme for the score function associated with M-densities, called \(\mathsf{GPS}\), that is by construction permutation equivariant with respect to the permutation of the measurement indices. As a side effect of the \(\mathsf{GPS}\) parametrization, we derive a single estimator of \(X\) given \(\mathbf{Y}=\mathbf{y}\) instead of \(M\) (approximately equal) estimators by Saremi and Srivastava (2022).
* _What is the time complexity for sampling M-densities?_ Knowing that M-densities are grouped into universality classes, the more subtle question is: which member has better mixing time properties? This is an important question to answer in understanding the generative modeling properties of M-densities. This question is a difficult one in its full generality, but to shed light on it we assume the original density \(p_{X}\) is a non-isotropic Gaussian, and we study the full spectrum of the corresponding M-density. The calculation gives insight on the "geometry" of M-densities as one increases \(M\). See Fig. 1 for a schematic.
Experiments are focused on the generative modeling problem on the CIFAR-10 dataset (Krizhevsky et al., 2009). This dataset has proved to be challenging for generative models due to the diversity of image classes present. The performance of the generative models are measured in FID score (Heusel et al., 2017) (the lower score is better). As an example, a sophisticated model like BigGAN (Brock et al., 2019), in the generative adversarial networks (Goodfellow et al., 2014) family, achieves the FID score of 14.73. Our framework is based on a simple denoising objective with a single noise level, yet despite its simple structure we can achieve the FID score of **14.15**, which is remarkable in this class of models. Our experimental results question the current perception in the field that denoising models with a single noise level cannot be good generative models.
### Related Work
The work directly related to this paper is by Saremi and Srivastava (2022) who introduced a generative modeling framework based on smoothing an unknown density of interest with factorial kernels with \(M\) channels. Our focus here is on Gaussian kernels and our main contribution is to show any single measurement model \((\sigma,1)\) can be mapped to a multimeasurement one \((\sigma\sqrt{M},M)\); no additional learning is required. In particular, in our work the neural network inputs are in \(\mathbb{R}^{d}\) as opposed to \(\mathbb{R}^{Md}\) in the earlier work. In addition, there were open questions regarding the role \(M\) in the sampling complexity in the earlier work that this work aims to address.
At a broader level, this work is related to the research on denoising density models, a class of probabilistic models that grew from the literature on score matching and denoising autoencoders (Hyvarinen, 2005; Vincent, 2011; Alain and Bengio, 2014; Saremi et al., 2018). These models were not successful in the past on challenging generative modeling tasks, e.g. on the CIFAR-10 dataset, which in turn led to research on denoising objectives with multiple noise levels (Song and Ermon, 2019). In our experiments, we revisit denoising density models with a single noise level. In particular, our experimental results question the current perception in the field around the topic of single vs. multiple noise scales. As an example, we point out that the FID score in this paper (14.15) is significantly lower than the one reported by Song and Ermon (2019) (25.32) which was obtained using annealed Langevin MCMC with multiple noise scales. Also see Jain and Poole (2022) for a recent work on score-based generative modeling with a single noise level.
Figure 1: (_The geometry of M-densities_) (a) Schematic of a complex density in \(\mathbb{R}^{d}\) (\(d=2\)). (b) The plot represents the manifold associated with the corresponding permutation-invariant M-density in \(\mathbb{R}^{Md}\). The schematic is meant to capture the fact that the M-density is symmetric and it is smoother than the original density.
## 2 Background
In this section, we review smoothing with factorial kernels and their use for generative modeling. We refer to Saremi and Srivastava (2022) for more details and references.
Factorial Kernels.Smoothing a density with a (Gaussian) kernel is a well-known technique in nonparametric density estimation that goes back to Parzen (1962); see Hastie et al. (2009) for a general introduction to kernels. Given a density \(p_{X}\) in \(\mathbb{R}^{d}\) one can construct a smoother density \(p_{Y}\) in \(\mathbb{R}^{d}\) by convolving it with a positive-definite kernel \(k:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\):
\[p(y)=\int k(x,y)p(x)dx.\]
In this paper we only consider (translation-invariant) isotropic Gaussian kernels, where one can also have a dual perspective on the kernel as the conditional density \(k(x,y)=p(y|x)\), where
\[p(y|x)=\frac{1}{Z(\sigma)}\exp\left(-\frac{\|y-x\|^{2}}{2\sigma^{2}}\right)=: \mathcal{N}(y;x,\sigma^{2}I_{d}).\]
Here \(Z(\sigma)\) is the partition function associated with the isotropic Gaussian. From this angle, smoothing a density with an isotropic Gaussian kernel can also be expressed in terms of random variables as follows:
\[Y=X+\mathcal{N}(0,\sigma^{2}I_{d}).\]
A factorial kernel is a generalization of the above where the kernel \(k(x,y)\) takes the following factorial form with \(M\) kernel components:
\[k(x,\mathbf{y})=\prod_{m=1}^{M}k(x,y_{m}),\,\text{where}\,\,\mathbf{y}=(y_{1}, \ldots,y_{M}).\]
For isotropic Gaussian kernels this is equivalent to \(Y_{m}=X+\mathcal{N}(0,\sigma^{2}I_{d})\) for \(m\in[M]\) (with independent Gaussians). The kernel \(k(x,\mathbf{y})\) is referred to as multimeasurement noise model (MNM).2 The result of the convolution with a Gaussian MNM is referred to as Gaussian M-density associated with the random variable \(\mathbf{Y}=(Y_{1},\ldots,Y_{M})\) which takes values in \(\mathbb{R}^{Md}\). Given \(\{x_{i}\}_{i=1}^{n}\), independent draws from the density \(p_{X}\), we are interested in estimating the associated M-density for a given fixed noise level \(\sigma\).
Footnote 2: One can view smoothing with a factorial kernel in the context of a communication system (Shannon, 1948) with \(M\) independent noise channels, i.e., given \(X=x\) samples \((y_{1},\ldots,y_{m})\) are obtained by adding \(M\) independent isotropic Gaussian noise to \(x\).
Multimeasurement Bayes Estimators.How can we go about estimating the M-density? One approach is to formulate it as a learning problem (Vapnik, 1999) by parametrizing the M-density (say with a neural network) and devising an appropriate learning objective. For \(M=1\), the learning objective is given by:
\[\mathcal{L}(\theta)=\mathbb{E}_{(x,y)\sim p(x)p(y|x)}\|x-\widehat{x}_{\theta}( y)\|^{2}, \tag{1}\]
where \(\widehat{x}_{\theta}(y)\) is a parametrization of the Bayes estimator of \(X\) given \(Y=y\) in terms of \(p_{\theta}(y)\). This approach heavily relies on the fact that the Bayes estimator \(\widehat{x}(y)\) can indeed be expressed in closed form in terms of \(p(y)\), which is a key result at the heart of empirical Bayes (Robbins, 1956). In addition, for Gaussian kernels, the Bayes estimator can be expressed in terms of the score function
\(\nabla\log p(y)\), a result that goes back to Miyasawa (1961), therefore for learning \(p(y)\) one can ignore its partition function (Saremi and Hyvarinen, 2019). This key result in the empirical Bayes literature was extended by Saremi and Srivastava (2022) to Poisson and Gaussian MNMs, where for Gaussian MNMs, \(\widehat{x}(\mathbf{y})=\mathbb{E}[X|\mathbf{Y}=\mathbf{y}]\) takes the following form:
\[\widehat{x}(\mathbf{y})=y_{m}+\sigma^{2}\nabla_{m}\log p(\mathbf{y}), \tag{2}\]
where \(m\in[M]\) is an arbitrary measurement index (the result is invariant to this choice). Note that the Bayes estimator \(\widehat{x}(\mathbf{y})\) only depends on the score function associated with the M-density, therefore one can use Eq. 1 as the objective for learning the energy/score function associated with the M-density by simply replacing \(y\) with \(\mathbf{y}=(y_{1},\ldots,y_{M})\).
Walk-Jump Sampling.Following learning the score function \(\nabla\log p(\mathbf{y})\), one can use Langevin MCMC to draw exact samples from \(p(\mathbf{y})\). For \(M=1\) the density \(p(y)\) is smoother than \(p(x)\) and MCMC is assured to mix faster. What about drawing samples from \(p(x)\)? The idea behind walk-jump sampling (WJS) is that one can indeed use the score function \(\nabla\log p(y)\) to estimate \(X\) thus arriving at approximate samples from \(p(x)\)(Saremi and Hyvarinen, 2019). There is clearly a trade-off here: by decreasing \(\sigma\) the estimate of \(X\) becomes more and more accurate (WJS becomes more and more exact) but this comes at the cost of sampling a less smooth \(p(y)\) where MCMC has a harder time. On the surface, what is intriguing about M-density is that one can keep \(\sigma\) to be "large" and still have a control on generating exact samples from \(p(x)\) by simply increasing \(M\). The full picture on the effects of increasing \(M\) is more complex which we discuss after our analysis in Sec. 4.
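A minimal sketch of this walk-jump loop is given below for the \(M=1\) case, assuming a generic `score` callable for \(\nabla\log p(y)\). It uses a plain overdamped Langevin discretization for the walk step (the experiments reported later use an underdamped sampler), so it should be read only as an illustration of the walk/jump split, not as the exact algorithm used in the paper:

```python
import numpy as np

def walk_jump_sample(score, y0, sigma, n_steps=1000, step=1e-2, rng=None):
    """Toy walk-jump sampler for M = 1: an overdamped Langevin 'walk' on p(y),
    followed by the empirical Bayes 'jump' x_hat = y + sigma^2 * score(y)."""
    rng = rng or np.random.default_rng()
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        y = y + step * score(y) + np.sqrt(2.0 * step) * rng.normal(size=y.shape)  # walk
    return y + sigma**2 * score(y)                                                # jump

# Example with a known smoothed density: X ~ N(0, I_2) and sigma = 0.5, so that
# p(y) = N(0, (1 + sigma^2) I_2) and score(y) = -y / (1 + sigma^2).
sigma = 0.5
x = walk_jump_sample(lambda y: -y / (1.0 + sigma**2), y0=np.zeros(2), sigma=sigma)
print(x)
```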
## 3 Universal M-densities
In this section we derive a general expression for the M-density \(p(\mathbf{y})\) and the score function \(\nabla\log p(\mathbf{y})\) for Gaussian MNMs with equal noise levels \(\sigma\). For equal noise levels, the M-density (resp. score function) is permutation invariant (resp. equivariant) under the permutation of measurement indices (Saremi and Srivastava, 2022). However, it is not clear a priori how this invariance/equivariance should be reflected in the parametrization. The calculation below clarifies this issue, where we arrive at a general permutation invariant (resp. equivariant) parametrization for the M-density (resp. score function) in which the empirical mean of the \(M\) measurements
\[\overline{y}=M^{-1}\sum_{m=1}^{M}y_{m}\]
plays a central role. We start with a rewriting of the log p.d.f. of the factorial kernel:
\[-2\sigma^{2}\log p(\mathbf{y}|x) =\sum_{m=1}^{M}\|y_{m}-x\|^{2}+C \tag{3}\] \[=M\|x\|^{2}-2\langle\sum_{m=1}^{M}y_{m},x\rangle+\sum_{m=1}^{M}\|y_{m}\|^{2}+C\] \[=M\left(\|x\|^{2}-2\langle\overline{y},x\rangle+\overline{\|y\|^{2}}\right)+C\] \[=M\left(\|x-\overline{y}\|^{2}+\overline{\|y\|^{2}}-\|\overline{y}\|^{2}\right)+C,\]
where \(C=2\sigma^{2}M\log Z(\sigma)\) and \(\overline{\|y\|^{2}}\) is short for
\[\overline{\|y\|^{2}}=M^{-1}\sum_{m=1}^{M}\big{\|}y_{m}\big{\|}^{2}.\]
Now, we view the smoothing kernel \(p(\mathbf{y}|x)\) as a Gaussian distribution over \(X\) centered at \(\overline{y}\):
\[\log p(\mathbf{y})=\log\int p(\mathbf{y}|x)p(x)dx \tag{4}\] \[=\log\int\mathcal{N}(x;\overline{y},\sigma_{\mathsf{eff}}^{2}I_{ d})p(x)dx+\frac{\|\overline{y}\|^{2}-\overline{\|y\|^{2}}}{2\sigma_{\mathsf{ eff}}^{2}}+C^{\prime}\] \[=\log\mathbb{E}_{X\sim N(\overline{y},\sigma_{\mathsf{eff}}^{2}I _{d})}[p(X)]+\frac{\|\overline{y}\|^{2}-\overline{\|y\|^{2}}}{2\sigma_{\mathsf{ eff}}^{2}}+C^{\prime},\]
where \(\sigma_{\mathsf{eff}}:=\sigma/\sqrt{M}\), and \(C^{\prime}=\log Z(\sigma_{\mathsf{eff}})-M\log Z(\sigma)\). Note that the first term above is a function of \(\overline{y}\) for any distribution \(p_{X}\), therefore the energy function \(f(\mathbf{y})\) takes the following form:
\[f(\mathbf{y})=\frac{1}{2\sigma_{\mathsf{eff}}^{2}}\left(\overline{\|y\|^{2}}- \|\overline{y}\|^{2}\right)+\varphi(\overline{y}). \tag{5}\]
Next we consider the functional form for the score function \(\mathbf{g}(\mathbf{y})=-\nabla f(\mathbf{y})\). Taking gradients leads to:
\[g_{m}(\mathbf{y})=\frac{1}{2\sigma_{\mathsf{eff}}^{2}}\left(2\overline{y}/M-2y _{m}/M\right)-\nu(\overline{y})/M,\]
where \(\nu=\nabla\varphi.\) The expression above is written more compactly (to be used below) as
\[\sigma^{2}g_{m}(\mathbf{y})=(\overline{y}-y_{m})-\sigma_{\mathsf{eff}}^{2} \cdot\nu(\overline{y}). \tag{6}\]
Finally, the expression for the Bayes estimator \(\widehat{x}(\mathbf{y})\) is derived (combine Eq. 2 and Eq. 6):
\[\widehat{x}(\mathbf{y})=\overline{y}-\sigma_{\mathsf{eff}}^{2}\cdot\nu( \overline{y}). \tag{7}\]
### Gps Parametrization
The results above lead to the following parametrization for the score function:
**Definition 1** (Gps).: _The GPS parametrization of the score function associated with the M-density for \((\sigma,M)\) models is given by (replace \(\nu\) with \(\nu_{\theta}\) in Eq. 6)_
\[\sigma^{2}g_{m}(\mathbf{y};\theta)=(\overline{y}-y_{m})-\sigma_{\mathsf{eff}} ^{2}\cdot\nu_{\theta}(\overline{y}), \tag{8}\]
_where \(\nu_{\theta}:\mathbb{R}^{d}\to\mathbb{R}^{d}\) is either parametrized directly/explicitly or indirectly/implicitly by parametrizing the function \(\varphi_{\theta}:\mathbb{R}^{d}\to\mathbb{R}\). In the latter case, \(\nu_{\theta}\) is derived as follows:_
\[\nu_{\theta}=\nabla\varphi_{\theta}.\]
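A minimal sketch of this parametrization is given below, assuming an explicit vector-valued map `nu` standing in for \(\nu_{\theta}\) (in practice a neural network). It implements Eq. (8) and the estimator of Eq. (10); the permutation equivariance of Proposition 1 below holds by construction, since only \(\overline{y}\) enters `nu`:

```python
import numpy as np

def gps_score(nu, Y, sigma):
    """GPS score (Eq. 8). Y has shape (M, d); nu maps R^d -> R^d.
    Returns g of shape (M, d) with sigma^2 g_m = (y_bar - y_m) - sigma_eff^2 nu(y_bar)."""
    M, d = Y.shape
    sigma_eff2 = sigma**2 / M
    y_bar = Y.mean(axis=0)
    return ((y_bar - Y) - sigma_eff2 * nu(y_bar)) / sigma**2

def gps_x_hat(nu, Y, sigma):
    """Bayes estimator in GPS form (Eq. 10): x_hat = y_bar - sigma_eff^2 nu(y_bar)."""
    M, d = Y.shape
    y_bar = Y.mean(axis=0)
    return y_bar - (sigma**2 / M) * nu(y_bar)

# Tiny example with a placeholder nu (hypothetical stand-in for nu_theta).
rng = np.random.default_rng(0)
Y = rng.normal(size=(16, 4))                 # M = 16 measurements in R^4
nu = lambda v: 0.1 * v
print(gps_score(nu, Y, sigma=1.0).shape, gps_x_hat(nu, Y, sigma=1.0))
```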
The GPS parametrization has two important properties captured by the following propositions:
**Proposition 1** (Permutation equivariance property of Gps).: _The score function parametrized in Gps is permutation equivariant:_
\[\mathbf{g}_{\theta}(\pi(\mathbf{y}))=\pi(\mathbf{g}_{\theta}(\mathbf{y})), \tag{9}\]
_where \(\pi:[M]\to[M]\) is a permutation of the noise/measurement channels whose action on \(\mathbf{y}=(y_{1},\dots,y_{M})\) and \(\mathbf{g}=(g_{1},\dots,g_{M})\) is to permute the measurement channels:_
\[\pi((y_{1},\dots,y_{M})) =(y_{\pi(1)},\dots,y_{\pi(M)}).\] \[\pi((g_{1},\dots,g_{M})) =(g_{\pi(1)},\dots,g_{\pi(M)}).\]
Proof.: The proof is straightforward since \(\overline{y}\) in Gps is permutation invariant.
The naming Gps has been derived from the statement of Proposition 1: \(\mathbf{g}_{\theta}\) is a permutation-equivariant score function. Informally (alluding to GPS as the "global positioning system"), in the Gps parametrization, the "coordinates" of the M-density manifold in high dimensions is validly encoded in the sense of respecting its permutation invariance. This important symmetry is broken in the MDAE parametrization studied in (Saremi and Srivastava, 2022).
In addition, in Gps, the Bayes estimator (Eq. 7) takes the following parametric form:
\[\widehat{x}_{\theta}(\mathbf{y})=\overline{y}-\sigma_{\mathsf{eff}}^{2}\cdot \nu_{\theta}(\overline{y}), \tag{10}\]
where the measurement index \(m\) does not appear in the final expression--this is in contrast to the parametrization studied by Saremi and Srivastava (2022).
### Gps is Universal
Before formalizing the universality of the Gps parametrization in Proposition 2 below, we define the notion of universality classes associated with M-densities:
**Definition 2** (M-density Universality Classes).: _We define the universality class \([\sigma_{\mathsf{eff}}]\) as the set of all \((\sigma,M)\) models_
\[[\sigma_{\mathsf{eff}}]:=\{(\sigma_{1},M_{1}),(\sigma_{2},M_{2}),\dots\},\]
_such that for all \((\sigma_{i},M_{i})\in[\sigma_{\mathsf{eff}}]\):_
\[\frac{\sigma_{i}}{\sqrt{M_{i}}}=\sigma_{\mathsf{eff}}.\]
_In particular, the models \(\{(\sigma_{\mathsf{eff}}\sqrt{M},M):M\in\mathbb{N}\}\) belong to the universality class \([\sigma_{\mathsf{eff}}]\)._
The universality property of Gps is captured by the following proposition:
**Proposition 2** (The universal property of Gps).: _For any parameter \(\theta\), all \((\sigma,M)\) models that belong to the same universality class and parametrized by Gps are identical in the sense that they incur the same loss_
\[\mathcal{L}_{\sigma,M}(\theta)=\mathcal{L}_{\sigma^{\prime},M^{\prime}}(\theta) \text{ if }\frac{\sigma}{\sqrt{M}}=\frac{\sigma^{\prime}}{\sqrt{M^{\prime}}}, \tag{11}\]
_where_
\[\mathcal{L}_{\sigma,M}(\theta)=\mathbb{E}_{(x,\mathbf{y})\sim p(x)p(\mathbf{y }|x)}\big{\|}x-\widehat{x}_{\theta}(\mathbf{y})\big{\|}^{2}.\]
Proof.: Using Eq.10, we have
\[\mathcal{L}_{\sigma,M}(\theta)=\mathbb{E}_{(x,\mathbf{y})\sim p(x)p(\mathbf{y}|x )}\big{\|}x-\overline{y}+\sigma_{\mathsf{eff}}^{2}\cdot\nu_{\theta}(\overline{ y})\big{\|}^{2}. \tag{12}\]
Note that \(x-\overline{y}\) has the same law as \(\mathcal{N}(0,\sigma_{\mathsf{eff}}^{2}I_{d})\), where \(\sigma_{\mathsf{eff}}=\sigma/\sqrt{M}\). Therefore, the learning objective has the interpretation that \(\nu_{\theta}\) makes predictions on the residual noise left in the empirical mean of the noisy measurements since \(\overline{y}=x+\bar{\gamma}\), where \(\bar{\gamma}=M^{-1}\sum_{m=1}^{M}\gamma_{m}\) and \(\gamma_{m}\) are independent samples from \(\mathcal{N}(0,\sigma^{2}I_{d})\). This observation can be made explicit by rewriting \(\mathcal{L}_{\sigma,M}(\theta)\) as:
\[\mathbb{E}_{x\sim p(x),\{\gamma_{m}\sim\mathcal{N}(0,\sigma^{2}I_{d})\}_{m=1} ^{M}}\big{\|}\overline{\gamma}-\sigma_{\mathsf{eff}}^{2}\cdot\nu_{\theta}(x+ \overline{\gamma})\big{\|}^{2}.\]
The statement of the proposition follows since \(\bar{\gamma}\) has the same law as \(\mathcal{N}(0,\sigma_{\mathsf{eff}}^{2}I_{d})\) for \((\sigma,M)\) and \((\sigma^{\prime},M^{\prime})\) models since they are both in the universality class \([\sigma_{\mathsf{eff}}]\).
**Corollary 1**.: _The laws of \(\widehat{x}_{\theta}(\mathbf{Y})\) are identical for all \((\sigma,M)\) models in the same universality class and for all \(\theta\) in the \(\mathsf{GPS}\) parametrization._
Proof.: In the \(\mathsf{GPS}\) parametrization \(\widehat{x}_{\theta}(\mathbf{Y})=\overline{Y}-\sigma_{\mathsf{eff}}^{2}\cdot \nu_{\theta}(\overline{Y})\) (Eq.10). The proof then follows from the proof in the above proposition since \(\overline{Y}\) has the same law as \(X+\mathcal{N}(0,\sigma_{\mathsf{eff}}^{2}I_{d})\) for all models in \([\sigma_{\mathsf{eff}}]\).
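The universality property can also be checked numerically. The sketch below (illustrative only; the toy data, the placeholder \(\nu_{\theta}\), and all constants are arbitrary choices made here) estimates \(\mathcal{L}_{\sigma,M}(\theta)\) by Monte Carlo for two models in the class \([\sigma_{\mathsf{eff}}=0.25]\) and shows that the two estimates agree up to sampling error:

```python
import numpy as np

def gps_loss(nu, X, sigma, M, n_rep=200, rng=None):
    """Monte Carlo estimate of L_{sigma,M}(theta) = E ||x - x_hat_theta(y)||^2
    with x_hat(y) = y_bar - sigma_eff^2 * nu(y_bar) (Eq. 10)."""
    rng = rng or np.random.default_rng(0)
    sigma_eff2 = sigma**2 / M
    total = 0.0
    for _ in range(n_rep):
        noise = rng.normal(scale=sigma, size=(M,) + X.shape)   # M measurements per x
        y_bar = X + noise.mean(axis=0)                         # empirical mean of the y_m
        x_hat = y_bar - sigma_eff2 * nu(y_bar)
        total += np.mean(np.sum((X - x_hat) ** 2, axis=-1))
    return total / n_rep

rng = np.random.default_rng(1)
X = rng.normal(size=(512, 4))          # toy "data"
nu = lambda v: 0.2 * v                 # fixed placeholder for nu_theta

# (0.25, 1) and (1.0, 16) both belong to the universality class [sigma_eff = 0.25].
print(gps_loss(nu, X, sigma=0.25, M=1), gps_loss(nu, X, sigma=1.0, M=16))
```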
## 4 On the shape of M-densities
In the previous section we established that \((\sigma,M)\) models in the same universality class \([\sigma_{\mathsf{eff}}]\) (Definition2) are equivalent in the sense formalized in Proposition2 and Corollary1. In this section, we switch our focus to how difficult it is to sample universal M-densities. In particular, we formalize the intuition that \((\sigma_{\mathsf{eff}}\sqrt{M},M)\in[\sigma_{\mathsf{eff}}]\) become more spherical by increasing \(M\). The analysis also sheds some light on the geometry of M-densities as one increases \(M\) (for a fixed \(\sigma\)).
An important parameter characterizing the shape of log-concave densities is the condition number denoted by \(\kappa\) which measures how elongated the density is (Cheng et al., 2018, Section1.4.1). The condition number appears in the mixing time analysis of log-concave densities, e.g. in the form \(\kappa^{2}\) in (Cheng et al., 2018). Intuitively, for poorly conditioned densities one has to use a small step size and that will lead to long mixing times.
For Gaussian densities
\[p(x)=\mathcal{N}(x;\mu,\Sigma),\]
the condition number denoted by \(\kappa\) is given by
\[\kappa=\lambda_{\max}(F)/\lambda_{\min}(F), \tag{13}\]
where \(\lambda(F)\) denotes the spectrum of the inverse covariance matrix \(F\coloneqq\Sigma^{-1}\).
Next, we study the full spectrum of the corresponding \(M\)-density and give an expression for the condition number of \((\sigma,M)\) models. We assume without loss of generality a basis in \(\mathbb{R}^{d}\) where the density is centered at \(\mu=0\) and the covariance matrix is diagonalized: \(\Sigma_{ij}=\tau_{i}^{2}\delta_{ij}\). Therefore,
\[p(x)=\prod_{i=1}^{d}\frac{1}{Z(\tau_{i})}\exp\left(-\frac{x_{i}^{2}}{2\tau_{i} ^{2}}\right),\]
and \(\kappa=\tau_{\max}^{2}/\tau_{\min}^{2}\). Next we study the condition number for \((\sigma,M)\) models. The case \(M=1\) is simple since
\[F=(\Sigma+\sigma^{2}I_{d})^{-1},\]
therefore
\[\kappa(\sigma,1)=\frac{\tau_{\max}^{2}+\sigma^{2}}{\tau_{\min}^{2}+\sigma^{2}}. \tag{14}\]
We switch to studying the full spectrum of the covariance matrix for \((\sigma,M)\) models where \(M>1\). We start with the expression for \(p(\mathbf{y})\) given below up to a normalizing constant for the general M-density defined by the noise levels \((\sigma_{1},\sigma_{2},\ldots,\sigma_{M})\):
\[\begin{split} p(\mathbf{y})&\propto\int\prod_{m=1} ^{M}\mathcal{N}(x;y_{m},\sigma_{m}^{2}I_{d})\cdot\mathcal{N}(x;0,\Sigma)\;dx\\ &\propto\prod_{i=1}^{d}\int\exp{\left(-\sum_{m=1}^{M}\frac{(y_{ mi}-x_{i})^{2}}{2\sigma_{m}^{2}}-\frac{x_{i}^{2}}{2\tau_{i}^{2}}\right)}dx_{i}\\ &=\prod_{i=1}^{d}\int\exp{\left(-\frac{(x_{i}-\alpha_{i})^{2}}{2 \beta_{i}^{2}}-\gamma_{i}\right)}dx_{i}\\ &\propto\prod_{i=1}^{d}\exp(-\gamma_{i}).\end{split} \tag{15}\]
The expressions for \(\alpha_{i}\), \(\beta_{i}\), and \(\gamma_{i}\) are given next by completing the square via matching second, first and zeroth derivative (in that order) of the left and right hand sides below
\[-\sum_{m=1}^{M}\frac{\left(y_{mi}-x_{i}\right)^{2}}{2\sigma_{m}^{2}}-\frac{x_ {i}^{2}}{2\tau_{i}^{2}}=-\frac{(x_{i}-\alpha_{i})^{2}}{2\beta_{i}^{2}}-\gamma _{i} \tag{16}\]
evaluated at \(x_{i}=0\). The following three equations follow:
\[\frac{1}{\beta_{i}^{2}}=\sum_{m=1}^{M}\frac{1}{\sigma_{m}^{2}}+\frac{1}{\tau_{i}^{2}}, \tag{17}\]
\[\frac{\alpha_{i}}{\beta_{i}^{2}}=\sum_{m=1}^{M}\frac{y_{mi}}{\sigma_{m}^{2}} \Rightarrow\alpha_{i}=\omega_{i}^{2}\sum_{m=1}^{M}\frac{y_{mi}}{\sigma_{m}^{ 2}}, \tag{18}\]
\[-\frac{\alpha_{i}^{2}}{2\beta_{i}^{2}}-\gamma_{i}=-\sum_{m=1}^{M}\frac{y_{mi} ^{2}}{2\sigma_{m}^{2}}\Rightarrow\gamma_{i}=\sum_{m=1}^{M}\frac{y_{mi}^{2}}{2 \sigma_{m}^{2}}-\frac{1}{2}\omega_{i}^{2}\left(\sum_{m=1}^{M}\frac{y_{mi}}{ \sigma_{m}^{2}}\right)^{2}, \tag{19}\]
where
\[\omega_{i}^{2}\coloneqq\left(\sum_{m=1}^{M}\frac{1}{\sigma_{m}^{2}}+\frac{1}{ \tau_{i}^{2}}\right)^{-1}. \tag{20}\]
Therefore the energy function associated with the random variable \(\mathbf{y}\) is given by:
\[f(\mathbf{y})=\sum_{m}\frac{\|y_{m}\|^{2}}{2\sigma_{m}^{2}}-\frac{1}{2}\sum_{ i=1}^{d}\omega_{i}^{2}\left(\sum_{m=1}^{M}\frac{y_{mi}}{\sigma_{m}^{2}} \right)^{2}. \tag{21}\]
The energy function can be written more compactly by introducing the matrix \(\mathbf{F}\):
\[f(\mathbf{y}) =\frac{1}{2}\langle\mathbf{y},\mathbf{F}\mathbf{y}\rangle, \tag{22}\] \[\mathbf{F}_{mi,m^{\prime}i^{\prime}} =[\sigma_{m}^{-2}(1-\omega_{i}^{2}\sigma_{m}^{-2})\delta_{mm^{ \prime}}-\omega_{i}^{2}\sigma_{m}^{-2}\sigma_{m^{\prime}}^{-2}(1-\delta_{mm^{ \prime}})]\delta_{ii^{\prime}}.\]
In words, the \(Md\times Md\) dimensional matrix \(\mathbf{F}\) is block diagonal with \(d\) blocks of size \(M\times M\). The blocks themselves capture the interactions between different measurements indexed by \(m\) and \(m^{\prime}\). To study the spectrum of the covariance matrix, we next focus on \((\sigma,M)\) models, i.e., the permutation-invariant case where \(\sigma_{m}=\sigma\) for all \(m\in[M]\):
\[\mathbf{F}_{mi,m^{\prime}i^{\prime}} =[\sigma^{-2}(1-\omega_{i}^{2}\sigma^{-2})\delta_{mm^{\prime}}- \omega_{i}^{2}\sigma^{-4}(1-\delta_{mm^{\prime}})]\delta_{ii^{\prime}}, \tag{23}\] \[\omega_{i}^{2} =(M\sigma^{-2}+\tau_{i}^{-2})^{-1}.\]
The \(M\times M\) blocks of the matrix \(\mathbf{F}\) have the form:
\[\mathbf{F}_{i}=\begin{pmatrix}a_{i}&b_{i}&\ldots&b_{i}\\ b_{i}&a_{i}&\ldots&b_{i}\\ \vdots&&\ddots\\ b_{i}&b_{i}&\ldots&a_{i}\end{pmatrix},\]
where
\[a_{i} =\sigma^{-2}(1-\omega_{i}^{2}\sigma^{-2}), \tag{24}\] \[b_{i} =-\omega_{i}^{2}\sigma^{-4}. \tag{25}\]
It is straightforward to find the \(M\) eigenvalues of the matrix \(\mathbf{F}_{i}\):
* \(M-1\) degenerate eigenvalues equal to \(a_{i}-b_{i}\) corresponding to the eigenvectors \[\{(1,-1,0,\ldots,0)^{\top},(1,0,-1,\ldots,0)^{\top},\ldots,(1,0,0,\ldots,-1)^ {\top}\},\]
* one eigenvalue equal to \(a_{i}+(M-1)b_{i}\) corresponding to the eigenvector \((1,1,\ldots,1)^{\top}\).
Since \(a_{i}>0,\ b_{i}<0\) we arrive at:
\[\lambda_{\max}(\mathbf{F})=a_{i}-b_{i}=\sigma^{-2}\]
which is \((M-1)d\) degenerate on the full matrix \(\mathbf{F}\). The remaining \(d\) eigenvalues are given by
\[\lambda_{i}=a_{i}+(M-1)b_{i}=\sigma^{-2}(1-M\omega_{i}^{2}\sigma^{-2}), \tag{26}\]
the smallest of which is given by
\[\lambda_{\min}(\mathbf{F})=\sigma^{-2}(1-M\sigma^{-2}\omega_{\max}^{2})= \sigma^{-2}\left(1-M\sigma^{-2}\frac{1}{M\sigma^{-2}+\tau_{\max}^{-2}}\right)= \frac{\sigma^{-2}}{1+M\sigma^{-2}\tau_{\max}^{2}}. \tag{27}\]
It follows:
\[\kappa(\sigma,M)=\lambda_{\max}(\mathbf{F})/\lambda_{\min}(\mathbf{F})=1+M \sigma^{-2}\tau_{\max}^{2}. \tag{28}\]
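This spectrum is easy to verify numerically. The sketch below (with an arbitrary toy covariance chosen here) assembles the block-diagonal precision matrix of Eq. (23) and compares its extreme eigenvalues with \(\lambda_{\max}=\sigma^{-2}\) and the condition number of Eq. (28):

```python
import numpy as np

def precision_matrix(taus, sigma, M):
    """Block-diagonal precision matrix F of the (sigma, M) M-density (Eq. 23)
    for a diagonal data covariance diag(taus**2)."""
    d = len(taus)
    F = np.zeros((M * d, M * d))
    for i, tau in enumerate(taus):
        w2 = 1.0 / (M / sigma**2 + 1.0 / tau**2)          # omega_i^2
        a = (1.0 - w2 / sigma**2) / sigma**2              # diagonal entry a_i
        b = -w2 / sigma**4                                # off-diagonal entry b_i
        block = np.full((M, M), b) + (a - b) * np.eye(M)
        idx = np.arange(M) * d + i                        # coordinate (m, i) flattened as m*d + i
        F[np.ix_(idx, idx)] = block
    return F

taus, sigma, M = np.array([0.5, 1.0, 2.0]), 1.0, 4
eigs = np.linalg.eigvalsh(precision_matrix(taus, sigma, M))
print(eigs.max(), 1.0 / sigma**2)                                    # lambda_max = sigma^-2
print(eigs.max() / eigs.min(), 1.0 + M * taus.max()**2 / sigma**2)   # kappa from Eq. (28)
```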
## 5 Experiments
We conducted experiments on the CIFAR-10 dataset (Krizhevsky et al., 2009) of \(32\times 32\) color images from 10 classes. The goal of these experiments is to empirically study the results of sampling from GPS models in the same universality class as implied by our theoretical analysis in Sec. 3.
Training.\(\nu_{\theta}\) (from Eq. 10) was parameterized using the "U-Net" used in recent work on generative modeling on this dataset (Dhariwal and Nichol, 2021). We set \(\sigma_{\mathsf{eff}}=0.25\), by choosing \(\sigma=1\) and \(M=16\) for training the network. Similar to the MDAE parametrization from Saremi and Srivastava (2022), learning essentially involves training a denoising autoencoder with a mean squared loss. For optimization, the Adagrad optimizer (Duchi et al., 2011) was used with a batch size of 128 and a maximum of 400 epochs of training. The learning rate was initialized at \(1\times 10^{-6}\) and scheduled to linearly increase to 1.5 over \(1\times 10^{6}\) updates (though training terminated earlier). During training, the FID score (Heusel et al., 2017) computed using samples from 125 parallel MCMC chains (400 samples each, resulting in 50,000 samples total) was monitored at regular intervals, and the model with the lowest FID score was selected as a form of early stopping.
Sampling Results.Our sampling algorithm is based on the walk-jump sampling (Saremi and Hyvarinen, 2019) that samples noisy data using the learned score function with Langevin MCMC (walk) together with the Bayes estimator of clean data (jump). For Langevin MCMC we considered three different algorithms (Sachs et al., 2017; Cheng et al., 2018; Shen and Lee, 2019). We settled on the algorithm by Sachs et al. (2017) early on as it was more reliable in the small-scale experiments that we performed (see Fig. 3(e) for a visual comparison to the randomized midpoint method by Shen and Lee (2019)). We set the step size \(\delta=\sigma/2\) for all \((\sigma,M)\) models and did extensive experiments on tuning the friction parameter. The results are shown in Fig. 2 where the FID score is obtained averaged over 5 random seeds. In the algorithm by Sachs et al. (2017) the friction parameter \(\gamma\) only shows up in the form \(\gamma_{\mathsf{eff}}=\gamma\cdot\delta\) which we call effective friction. This is especially important in our model since step sizes vary greatly between different \((\sigma,M)\) models. The best results were obtained for \((0.25,1)\) model with the FID of **14.15**. In addition, our results in Fig. 3 are remarkable in qualitatively demonstrating fast mixing in long-run MCMC chains, where diverse classes are visited in a single chain: such fast-mixing MCMC chains on CIFAR-10 have not been reported in the literature.
Figure 2: CIFAR-10 FID scores obtained when tuning the value of \(\gamma_{\mathsf{eff}}\) for various values of \(M\), for a model trained with \(\sigma_{\mathsf{eff}}=0.25\).
Figure 3: Examples of long-run MCMC chains on CIFAR-10 dataset for \((\sigma,M)\) models in the \([\sigma_{\mathsf{eff}}=0.25]\) universality class. **Each panel represents a single MCMC chain.** Only 20 steps are taken per image, starting from noise, a total of \(10\,\mathrm{K}\) steps (viewed left-to-right, top-to-bottom). We set \(\delta=\sigma/2\), and we set the friction \(\gamma\) from Sachs et al. (2017), panels (a)-(d), such that \(\gamma_{\mathsf{eff}}\coloneqq\gamma\delta=1\). In the bottom panel we show the performance of randomized midpoint method (Shen and Lee, 2019) used for Langevin MCMC in the walk-jump sampling. The random seed is fixed across runs. Best seen zoomed on a computer screen.
## 6 Conclusion
This work was primarily concerned with the theoretical understanding of smoothing methods using factorial kernels proposed by Saremi and Srivastava (2022), where in particular we focused on the permutation-invariant case in which the model is defined by a single noise scale. We showed such models are grouped into universality classes in which the densities can be easily mapped to each other, and we introduced the GPS parametrization to utilize that. Theoretically, the models that belong to the same universality class can have very different sampling properties, and we presented an analysis of that here, focused on studying the condition number.
Our experimental results on CIFAR-10 were surprising on two fronts: (i) We achieved low FID scores, which have been argued to be infeasible for denoising models with a single noise scale. In fact, the research on generative models based on denoising autoencoders (DAE) came to a halt around 2014 with the invention of GANs (Goodfellow et al., 2014), and we were ourselves surprised by a simple model such as \((0.25,1)\) outperforming BigGAN (Brock et al., 2019) on the CIFAR-10 challenge. Note that \((0.25,1)\) is essentially a DAE with an empirical Bayes interpretation (Saremi and Hyvarinen, 2019). (ii) We also found it surprising that our experimental results did not show any benefit for larger \(M\) models, but that needs more investigation in future research. In particular, there might exist more "clever" samplers, yet to be invented, that exploit the structure of \((\sigma,M)\) models.
## Acknowledgement
We would like to thank Ruoqi Shen for communication regarding their randomized midpoint method.
|
2310.00884 | Equivalence between definitions of the gravitational deflection angle of
light for a stationary spacetime | The Gibbons-Werner-Ono-Ishihara-Asada method for gravitational lensing in a
stationary spacetime has been recently reexamined [Huang and Cao,
arXiv:2306.04145], in which the gravitational deflection angle of light based
on the Gauss-Bonnet theorem can be rewritten as a line integral of two
functions $H$ and $T$. The present paper proves that the Huang-Cao line
integral definition and the Ono-Ishihara-Asada one [Phys. Rev. D 96, 104037
(2017)] are equivalent to each other, whatever asymptotic regions are. A remark
is also made concerning the direction of a light ray in a practical use of
these definitions. | Kaisei Takahashi, Ryuya Kudo, Keita Takizawa, Hideki Asada | 2023-10-02T04:00:27Z | http://arxiv.org/abs/2310.00884v2 | Equivalence between definitions of the gravitational deflection angle of light for a stationary spacetime
###### Abstract
The Gibbons-Werner-Ono-Ishihara-Asada method for gravitational lensing in a stationary spacetime has been recently reexamined [Huang and Cao, arXiv:2306.04145], in which the gravitational deflection angle of light based on the Gauss-Bonnet theorem can be rewritten as a line integral of two functions \(H\) and \(T\). The present paper proves that the Huang-Cao line integral definition and the Ono-Ishihara-Asada one [Phys. Rev. D 96, 104037 (2017)] are equivalent to each other, whatever asymptotic regions are. A remark is also made concerning the direction of a light ray in a practical use of these definitions.
pacs: 04.40.-b, 95.30.Sf, 98.62.Sb
## I Introduction
The gravitational deflection of light plays a crucial role in modern cosmology and gravitational physics [1; 2; 3; 4; 5], where a conventional formulation of the gravitational deflection of light assumes the weak deflection of light in a quasi-Newtonian region that can be treated as a perturbation around Minkowski background.
Although the conventional formulation is practically useful in many situations [1; 2; 3; 4], it is limited. In order to discuss a more geometrical aspect of the gravitational deflection of light, Gibbons and Werner (GW) [6] proposed a use of the Gauss-Bonnet theorem (GBT) [7; 8]. The GW method was initially applied to a static and spherically symmetric (SSS) spacetime [6], for which the deflection angle of light can be defined as a surface integral of the Gaussian curvature of the equatorial plane in the optical geometry. Later, Ishihara et al. generalized the GW idea for a case that an observer and source are located at a finite distance from a lens object [9]. It was extended also to the strong deflection limit [10]. Without assuming the asymptotic flatness, eventually, Takizawa et al. proved the equivalence between the two definitions by GW and Ishihara et al. for SSS spacetimes [11].
The GW method was extended by Werner to a stationary axisymmetric (SAS) case [12]. This still employs asymptotically flat regions, at which the angle can be defined in a Euclid space. Furthermore, Ono, Ishihara and Asada (OIA) developed a formulation for a non-asymptotic observer and source in SAS spacetimes [13]. These works assumed asymptotically flat regions. In the OIA approach, an alternative definition of the deflection angle of light was proposed in terms of a linear combination of three functions.
It was proven [13] that the deflection angle of light in the OIA approach is equivalent to the GW-type definition as a two-dimensional integral of the Gaussian curvature, if the SAS spacetime has asymptotically flat regions. See e.g. Eqs. (29) and (30) in [13].
Very recently, Huang and Cao (HC) have reexamined the Gibbons-Werner-Ono-Ishihara-Asada (GWOIA) method for SAS spacetimes [14]. They have found that the GW definition as a two-dimensional integral can be simplified as a line integral of two functions \(H\) and \(T\). See Eq. (44) in [14].
Can the OIA definition be related with the HC line-integral definition without assuming the asymptotic flatness? The main purpose of the present paper is to prove that the two definitions are equivalent to each other for SAS spacetimes, whatever asymptotic regions are.
This paper is organized as follows. For its simplicity, first we consider a SSS spacetime to prove the equivalence in Section II. Section III extends the equivalence to SAS cases. Section IV summarizes this paper. Throughout this paper, we use the unit of \(G=c=1\).
## II Static and spherically symmetric case
This section focuses on a SSS spacetime. The line element can be written as [10]
\[ds^{2}=-A(r)dt^{2}+B(r)dr^{2}+C(r)(d\theta^{2}+\sin^{2}\theta d \phi^{2}). \tag{1}\]
In the rest of this section, we assume \(A(r),B(r),C(r)>0\). If the spacetime represents a black hole, we study the outside of a black hole horizon.
Without loss of generality, a photon orbit can be chosen as the equatorial plane (\(\theta=\pi/2\)) because of the spherical symmetry. From the null condition, we obtain [10]
\[dt^{2} =\gamma_{ij}dx^{i}dx^{j}\] \[=\frac{B(r)}{A(r)}dr^{2}+\frac{C(r)}{A(r)}d\phi^{2}, \tag{2}\]
which defines the optical metric on the equatorial plane [6; 9; 15]. We examine a light ray with an impact parameter \(b\), which is related to the specific energy \(E\) and specific angular momentum \(L\) of a photon as \(b\equiv L/E\). The null condition of a photon orbit becomes [10]
\[\left(\frac{dr}{d\phi}\right)^{2}+\frac{C(r)}{B(r)}=\frac{C(r)^{2}}{b^{2}A(r)B (r)}. \tag{3}\]
As a solution to Eq. (3), \(r\) along the light ray is a function of \(\phi\).
In the optical geometry, the angle from the radial direction to the light ray tangent is denoted as \(\Psi\), which is expressed in terms of the metric components as [9]
\[\cos\Psi = \frac{b\sqrt{A(r)B(r)}}{C(r)}\frac{dr}{d\phi}, \tag{4}\] \[\sin\Psi = \frac{b\sqrt{A(r)}}{\sqrt{C(r)}}. \tag{5}\]
See also Figure 1.
By using \(\Psi\) and \(\phi\), Ishihara et al. [9; 10] defines the deflection angle of light \(\alpha_{I}\) as
\[\alpha_{I}\equiv\Psi_{O}-\Psi_{S}+\phi_{OS}, \tag{6}\]
where \(\Psi_{O}\) and \(\Psi_{S}\) are \(\Psi\) at the observer (O) and source (S), respectively, \(\phi_{OS}=\int_{S}^{O}d\phi\) is the longitude from S to O, and \(\Psi_{O}\) equals \(\Psi_{R}\) in the notation of [9; 10]. For later convenience, this definition is rewritten as
\[\alpha_{I}=\int_{S}^{O}d\phi\left(\frac{d\Psi}{d\phi}+1\right). \tag{7}\]
By differentiating Eq. (5) with respect to \(\phi\), we obtain
\[\frac{d\Psi}{d\phi}=\frac{C(r)}{\sqrt{A(r)B(r)}}\frac{d}{dr}\sqrt{\frac{A(r)} {C(r)}}, \tag{8}\]
where we use Eq. (4) and \(|dr/d\phi|<+\infty\) for a non-radial photon orbit.
See Eq. (4.25) in Reference [14] for the HC definition of the deflection angle of light. The HC definition is
\[\alpha_{HC}=\int_{S}^{O}d\phi[1+H+T]. \tag{9}\]
For the SSS case, \(H\) and \(T\) simply become
\[H \equiv -\frac{1}{2\sqrt{\gamma}}\frac{d(\gamma_{\phi\phi})}{dr} \tag{10}\] \[= -\frac{A(r)}{2\sqrt{B(r)C(r)}}\frac{d}{dr}\left(\frac{C(r)}{A(r) }\right),\] \[T = 0, \tag{11}\]
for \(\gamma\equiv\det(\gamma_{ij})\).
The HC definition is thus reduced to
\[\alpha_{HC}\equiv\int_{S}^{O}d\phi\left(1+H\right). \tag{12}\]
By direct calculations for Eqs. (8) and (10), we find
\[H = \frac{C(r)A^{\prime}(r)-C^{\prime}(r)A(r)}{2A(r)\sqrt{B(r)C(r)}} \tag{13}\] \[= \frac{d\Psi}{d\phi},\]
where the prime denotes differentiation with respect to \(r\). Therefore, Eq. (7) equals Eq. (12). In the SSS case, the two definitions are thus equivalent to each other.
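The equivalence can also be illustrated numerically for the Schwarzschild case \(A=1-2M/r\), \(B=1/A\), \(C=r^{2}\). The sketch below is an illustration added here, not taken from [9; 10; 14]; the lens mass, impact parameter, and observer/source distance are arbitrary choices. It evaluates Eq. (6) for a distant observer and source and recovers the familiar weak-field value \(4M/b\) plus the expected higher-order correction:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Schwarzschild: A = 1 - 2M/r, B = 1/A, C = r^2 (units G = c = 1).
M, b, R = 1.0, 50.0, 1.0e6            # lens mass, impact parameter, observer/source radius

# Turning point r0 of the light ray: largest root of W(r) = r^4/b^2 - r^2 + 2 M r,
# which is (dr/dphi)^2 from Eq. (3) with A B = 1.
W = lambda r: r**4 / b**2 - r**2 + 2.0 * M * r
r0 = brentq(W, b / np.sqrt(3.0), b)

# phi_OS = 2 int_{r0}^{R} dr / sqrt(W(r)); the substitution r = r0 + t^2 removes the
# square-root turning-point singularity (W(r0) is subtracted to cancel round-off residue).
integrand = lambda t: 2.0 * t / np.sqrt(W(r0 + t * t) - W(r0))
phi_OS = 2.0 * quad(integrand, 1e-9, np.sqrt(R - r0), limit=200)[0]

# Eq. (5): sin(Psi) = b sqrt(A)/sqrt(C); on the source side the angle is pi - Psi_O here.
Psi_O = np.arcsin(b * np.sqrt(1.0 - 2.0 * M / R) / R)
alpha = 2.0 * Psi_O - np.pi + phi_OS   # Eq. (6) with Psi_S = pi - Psi_O

print(alpha, 4.0 * M / b)              # ~0.0847 vs the leading weak-field value 0.08
```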
## III Stationary and axisymmetric case
In this section, we consider a SAS spacetime. The line element can be written as [13]
\[ds^{2}= -A(r,\theta)dt^{2}+B(r,\theta)dr^{2}+C(r,\theta)d\theta^{2}+D(r, \theta)d\phi^{2}\] \[-2W(r,\theta)dtd\phi. \tag{14}\]
The null condition is rewritten in a form as [13]
\[dt=\sqrt{\gamma_{ij}dx^{i}dx^{j}}+\beta_{i}dx^{i}, \tag{15}\]
where
\[\gamma_{ij}dx^{i}dx^{j}= \frac{B(r,\theta)}{A(r,\theta)}dr^{2}+\frac{C(r,\theta)}{A(r, \theta)}d\theta^{2}\] \[+\frac{A(r,\theta)D(r,\theta)+[W(r,\theta)]^{2}}{[A(r,\theta)]^{2 }}d\phi^{2}, \tag{16}\] \[\beta_{i}dx^{i}= -\frac{W(r,\theta)}{A(r,\theta)}d\phi. \tag{17}\]
In the rest of this section, we focus on the equatorial plane (\(\theta=\pi/2\)) for a photon orbit, where we assume \(A(r,\pi/2),B(r,\pi/2),D(r,\pi/2)>0\) and a
Figure 1: \(\Psi\) as the angle from the radial direction to the light ray tangent. \(\Psi_{O}\) and \(\Psi_{S}\) are \(\Psi\) at the observer and source, respectively. |
2309.03236 | Analysis of accretion disk around the Euler-Heisenberg Anti-de Sitter
Black Hole | We demonstrate the investigation of thin accretion disc surrounding the
Einstein- Euler-Heisenberg-Anti-de Sitter black hole. Additionally, we analyze
the black hole event horizons and compute their effective potential and
equations of motion. The specific energy, specific angular momentum, and
specific angular velocity of the particles that move in circular orbits above
the thin accretion disk are obtained. Also, the effects of parameters of the
Einstein- Euler-Heisenberg-Anti de sitter black hole on specific angular
velocity, specific heat energy, and specific angular momentum, have been
discussed in detail. We display the positions of the innermost stable circular
orbits and illustrate the effective potential. Furthermore, the circular orbits
of black hole are obtained numerically and locates the position of the
innermost stable circular orbit and event horizon. Also, we study the Gamma ray
burst emitted from the Einstein Euler-Heisenberg Anti de sitter black hole. | G. Abbas, H. Rehman | 2023-09-06T02:08:22Z | http://arxiv.org/abs/2309.03236v1 | # Analysis of accretion disk around the Euler-Heisenberg Anti-de Sitter Black Hole
###### Abstract
We demonstrate the investigation of thin accretion disc surrounding the Einstein- Euler-Heisenberg-Anti-de Sitter black hole. Additionally, we analyze the black hole event horizons and compute their effective potential and equations of motion. The specific energy, specific angular momentum, and specific angular velocity of the particles that move in circular orbits above the thin accretion disk are obtained. Also, the effects of parameters of the Einstein- Euler-Heisenberg-Anti de sitter black hole on specific angular velocity, specific heat energy, and specific angular momentum, have been discussed in detail. We display the positions of the innermost stable circular orbits and illustrate the effective potential. Furthermore, the circular orbits of black hole are obtained numerically and locates the position of the innermost stable circular orbit and event horizon. Also, we study the Gamma ray burst emitted from the Einstein Euler-Heisenberg Anti de sitter black hole.
**Keywords:** General Relativity; Accretion disk; Black Hole; Non Linear Electrodynamics.
## I Introduction
Numerous astronomical studies have confirmed that the cosmos is expanding at an accelerated rate, and General Relativity (GR) helps us to understand this phenomenon. Einstein's theory of GR has many interesting predictions, but one of the most surprising is the existence of BHs. Black holes are among the most fascinating objects in our universe and also the most mysterious ones. A BH is a region in space where the force of gravity is so strong that not even light (the fastest known entity in our universe) can escape from it. When something crosses the event horizon, it collapses into the BH singularity, and the laws of physics no longer apply. Normally, BHs are invisible to the naked eye, but they can be detected in a binary system or at the center of a galaxy using several approaches. The most realistic method of detecting BHs is through their influence on nearby matter, a process known as accretion. Over the last four decades, the fundamental aspects of spherical accretion onto BHs have been extensively studied in [1]. In 1934, Born and Infeld presented nonlinear electrodynamics (NLED) to ensure that the self-energy of a point-like charge is finite [3; 45]. It was not until the 1980s that the effective action for an open string ending on D-branes was found to take precisely this nonlinear form [4]-[6]. The Born-Infeld BH is studied in [7], as are the Einstein equations associated with Born-Infeld NLED. Euler and Heisenberg suggested a novel method to interpret the electromagnetic field based on Dirac's positron theory [8], accounting for the one-loop corrections to quantum electrodynamics (QED) and investigating the vacuum polarization in QED. This novel approach enables NLED models to interpret the expansion of the cosmos at its beginning [9]. In [10], the Euler-Heisenberg (EH) BH solution was derived by examining the one-loop Lagrangian density coupled to the Einstein field equations. Xiao-Xiong Zeng in [11] looked into how the accretion flow models and QED affect the optical appearance of the EH BH. The thermodynamics of the EH-AdS BH was studied in [12].
In astrophysics, accretion is the accumulation of particles onto a central object by gravitational attraction. It should be noted that among the most realistic scenarios for understanding the tremendously bright active galactic nuclei (AGNs) and quasars is the accretion of matter onto BHs. In [13], Bondi published an important work more than 40 years ago; it was the first study of spherical accretion onto compact objects. In GR, Michel [14] analyzed the steady-state spherically symmetric flow of gas onto the
Schwarzschild BH. He demonstrated that the accretion onto the BH must be transonic. Equations of state other than the polytropic one have also been used to study spherical accretion and winds in the framework of GR. Accretion has been examined in a variety of scenarios, including the Reissner-Nordstr\(\ddot{o}\)m (RN) BH [15], the string cloud background [16], charged BHs [17] and the Kerr-Newman BH [18]-[20]. We are now interested in the study of the accretion disk around the Euler-Heisenberg BH, because the study of accretion disks surrounding a cosmic body is one of the conceivable approaches for demonstrating the difference between GR and alternative theories.
The accretion disk is a flattened, circular or elliptical structure of energetic material orbiting a massive object such as a star. It can occur around protostars, BHs, neutron stars and white dwarfs. The mass of a BH can be increased by an accretion disk, as surrounding matter falls onto it [21]. Page [22] analyzed disk accretion onto a rotating BH. The relativistic and radiation characteristics of thin accretion disks have been discussed in detail by Thorne [23]. The physical characteristics of the matter that forms thin accretion disks have been analyzed in a spherically symmetric wormhole space-time [24]. The characteristics of electromagnetic radiation released by the Kerr BH were examined in [25]. The optical properties of a thin accretion disk surrounding a cosmic object were investigated using Einstein-Gauss-Bonnet gravity [26]. The theory of non-relativistic accretion was extended by using the equatorial approximation to the axisymmetric and stationary space-time of a rotating BH [27]. The Schwarzschild BH accretion disk image was investigated by Luminet [28]. Numerous fixed values of the parameters were used to produce the image of the Kerr BH with the accretion disk [29; 30]. Further investigations of the accretion disk were discussed in Refs. [31; 32]. The enormous energy released in gamma ray bursts (GRBs), which are explosions of around 1% of the solar mass, lasts just a few seconds [33; 34]. Most experts agree that the central engine is an accretion disk surrounding a newly generated BH. A pair of highly relativistic jets powered by the BH-accretion disk system produces the GRB if a jet is oriented in the direction of the observer. The highly efficient release of the gravitational energy of the progenitor star, along with the relativistic jet, is needed to power such a large amount of energy over these timescales. Furthermore, it is possible to verify the parameters of the EEH theory using the successful launch of the two GRB jets.
The objective of this article is to examine the accretion disk around the EEH BH. The
following is the structure of this article. We give a brief summary of the EEH BH solution and examine the horizons of the BH in Section II. We determine the effective potential and equations of motion in Section III. All the components of the thin accretion disk surrounding the EEH BH are investigated in Section IV. In Section V, we describe the numerical analysis that involves the identification of the innermost stable circular orbits (ISCO). Furthermore, we explain the emission of GRBs from the BH accretion disk. Finally, we draw a conclusion in Section VI.
## II Space-time
This section provides a brief summary of the EH theory in conjunction with gravity. In GR, the \(4D\) action with \(\Lambda\) (cosmological constant) associated with NLED [35; 36] has the following form
\[S=\frac{1}{4\pi}\int_{M^{4}}d^{4}x\sqrt{-g}\Big{(}\frac{1}{4}(R-2\Lambda)- \mathcal{L}(F,G)\Big{)}, \tag{1}\]
where \(R\) stands for the Ricci scalar, \(g\) denotes the metric determinant and \(\mathcal{L}\) is the Lagrangian that depends on the electromagnetic invariants \(F=\frac{1}{4}F_{ab}F^{ab}\) and \(G=\frac{1}{4}F_{ab}\,^{*}F^{ab}\), where \(F_{ab}\) represents the electromagnetic field strength and its dual is \(\,{}^{*}F^{ab}=\frac{\epsilon_{abcd}F^{cd}}{2\sqrt{-g}}\); the antisymmetric tensor \(\epsilon_{abcd}\) fulfils \(\epsilon_{abcd}\epsilon^{abcd}=-4\). The Lagrangian density for the Euler-Heisenberg NLED is [8]
\[\mathcal{L}(F,G)=-F+\frac{a}{2}F^{2}+\frac{7a}{8}G^{2}, \tag{2}\]
where \(a=\frac{8\alpha^{2}}{45m^{4}}\) is the EH parameter, \(m\) is the mass of the electron and \(\alpha\) is the fine structure constant; we recover Maxwell electrodynamics \(\mathcal{L}(F)=-F\) for \(a=0\). In this perspective, besides the electromagnetic field tensor \(F^{ab}\), the possible frameworks for NLED are \(F\) and \(P\). Furthermore, there is the \(P\) framework, which has a tensor \(P_{ab}\) defined by
\[P_{ab}=-\left(\mathcal{L}_{F}F_{ab}+\,^{*}F_{ab}\mathcal{L}_{G}\right), \tag{3}\]
here \(\mathcal{L}_{F}=\frac{\partial\mathcal{L}}{\partial F}\) and \(\mathcal{L}_{G}=\frac{\partial\mathcal{L}}{\partial G}\). For the EH theory, \(P_{ab}\) has the following form
\[P_{ab}=(1-aF)F_{ab}-\,^{*}F_{ab}\frac{7a}{4}G. \tag{4}\]
In the EH NLED, the magnetic field \(H\) and electric induction \(D\) are both represented by this tensor, and the fundamental connections among these two fields (as well as the magnetic
intensity \(B\) and electric field \(E\)) are given by Eq. (17). The \(P\) and \(O\) are two different invariants that are related to the \(P\) framework defined by
\[P=\frac{-1}{4}P_{ab}P^{ab},\ \ O=\frac{-1}{4}P_{ab}\,^{*}P^{ab}, \tag{5}\]
with \(\,{}^{*}P^{ab}=\frac{\epsilon_{abcd}P^{cd}}{2\sqrt{-g}}\). The Hamiltonian is defined by the Legendre transformation of \(\mathcal{L}\) as follows
\[\mathcal{H}(P,O)=\frac{-1}{2}F_{ab}P^{ab}-\mathcal{L}, \tag{6}\]
by omitting second and higher order terms in \(a\), the Hamiltonian for the EH theory takes the form [37]
\[\mathcal{H}(P,O)=P-\frac{a}{2}P^{2}-\frac{7a}{8}O^{2}. \tag{7}\]
In Ref. [38] the field equations are given by
\[\nabla_{a}P^{ab}=0,\ \ G_{ab}+\Lambda g_{ab}=8\pi T_{ab}. \tag{8}\]
In the framework \(P\), the energy momentum tensor for EH theory is defined by
\[T_{ab}=\frac{1}{4\pi}\Big{(}(1-aP)P_{a}{}^{\alpha}P_{b\alpha}+g_{ab}\big{(}P-\frac{a}{2}P^{2}-\frac{7a}{8}O^{2}\big{)}\Big{)}. \tag{9}\]
We now solve the field equations (8) for a static, spherically symmetric space-time of the form
\[ds^{2}=-f(r)dt^{2}+f^{-1}(r)dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}), \tag{10}\]
with \(f(r)=1-\frac{2m(r)}{r}\). The symmetry of the space-time restricts the non-vanishing components of the electromagnetic field to those generated by an electric charge \(Q\), so we have
\[P_{ab}=\frac{Q}{r^{2}}\delta^{0}_{[a}\delta^{1}_{b]}, \tag{11}\]
hence, the electromagnetic invariants are
\[P=\frac{Q^{2}}{2r^{4}},\ \ \ \ O=0, \tag{12}\]
inserting Eq. (12) into the \((t,t)\) component of Eq. (8), we get
\[\frac{dm}{dr}=\frac{Q^{2}}{2r^{2}}-\frac{aQ^{4}}{8r^{6}}+\frac{\Lambda r^{2}} {2}, \tag{13}\]
the metric function is obtained by integrating the above equation and takes the following form
\[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda r^{2}}{3}-\frac{aQ^{4}} {20r^{6}}, \tag{14}\]
where \(Q\) and \(M\) stand for the BH charge and mass, \(a\) denotes the EEH parameter, and \(\Lambda\) represents the cosmological constant, which may be either negative or positive. For \(a=0\), Eq. (14) coincides with the RN-AdS BH solution, i.e., the solution of the coupled Maxwell and gravitational field equations with electromagnetic Lagrangian \(\mathcal{L}(F)=-F\). Near the origin the system behaves more like a Schwarzschild BH than like \(RN-\Lambda\): \(r=0\) is a curvature singularity that is much stronger than in \(RN-\Lambda\) and comes with an opposite sign [39; 40]. To examine the accretion disk around the EEH-AdS BH, we determine the BH horizons by setting \(f(r)=0\), so
\[f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda r^{2}}{3}-\frac{aQ^{4}}{20r^{6}}=0. \tag{15}\]
In **Fig. 1**, we can read off how many horizons exist for the considered BH by counting the points where each curve crosses the \(r\)-axis: the number of intersections tells us how many zeros \(f(r)\) has. The red curve corresponds to the RN-AdS BH (\(a=0\)), which has two horizons. The green and blue curves represent EEH-AdS BHs with three event horizons, while the magenta, cyan and black curves correspond to a single event horizon.
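As a cross-check of this horizon counting, the roots of Eq. (15) can be located numerically. The following is a minimal Python sketch (not part of the original analysis; the parameter values, and in particular \(M=1\), are illustrative assumptions) that scans \(f(r)\) for sign changes and refines each bracketed root.

```python
# Minimal sketch: count and locate the horizons of Eq. (15).
# M = 1 and the remaining defaults are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

def f(r, M=1.0, Q=0.8, Lam=-3.0, a=0.02):
    """Metric function of Eq. (14)."""
    return 1 - 2*M/r + Q**2/r**2 - Lam*r**2/3 - a*Q**4/(20*r**6)

def horizons(r_min=1e-2, r_max=5.0, n=200_000, **params):
    """Positive real roots of f(r) = 0, i.e. the horizon radii."""
    r = np.linspace(r_min, r_max, n)
    fr = f(r, **params)
    roots = []
    for k in range(n - 1):
        if fr[k] * fr[k + 1] < 0:                 # a sign change brackets a root
            roots.append(brentq(lambda x: f(x, **params), r[k], r[k + 1]))
    return roots

print(horizons())   # three roots for these illustrative parameters
```

With these assumed parameters, the three returned roots land close to the inner, middle and outer horizon radii quoted in Tables 1 and 2 for \(a=0.02\).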
## III Equation of motion
Here, we determine the effective potential and the equations of motion in order to describe the dynamical behavior of the system. Thus, we consider the Lagrangian \(\mathcal{L}\) of a test particle moving in the space-time of the EEH-AdS BH presented in Eq. (10). The Lagrangian is given
by
\[{\cal L}=\frac{1}{2}g_{\mu\nu}\frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds}=\frac{1}{2}\epsilon, \tag{16}\]
\[{\cal L}=\frac{1}{2}\Big{(}g_{tt}\Big{(}\frac{dt}{ds}\Big{)}^{2}+g_{rr}\Big{(}\frac{dr}{ds}\Big{)}^{2}+g_{\theta\theta}\Big{(}\frac{d\theta}{ds}\Big{)}^{2}+g_{\phi\phi}\Big{(}\frac{d\phi}{ds}\Big{)}^{2}\Big{)}=\frac{1}{2}\epsilon. \tag{17}\]
It is worth mentioning that \(\epsilon=1\) corresponds to a massive particle, while \(\epsilon=0\) corresponds to a photon. The conserved energy \(E\) and angular momentum \(L\) for motion in the equatorial plane are
\[E=-g_{tt}\Big{(}\frac{dt}{ds}\Big{)}=\Big{(}1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda r^{2}}{3}-\frac{aQ^{4}}{20r^{6}}\Big{)}\frac{dt}{ds}, \tag{18}\]
\[L=g_{\phi\phi}\frac{d\phi}{ds}=r^{2}\frac{d\phi}{ds}. \tag{19}\]
For massive particles, the geodesic equation of motion is given by
\[\Big{(}\frac{dr}{ds}\Big{)}^{2}=E^{2}-(1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}- \frac{\Lambda r^{2}}{3}-\frac{aQ^{4}}{20r^{6}})(1+\frac{L^{2}}{r^{2}}), \tag{20}\]
\[\Big{(}\frac{dr}{d\phi}\Big{)}^{2}=\frac{r^{4}}{L^{2}}\Big{(}E^{2}-\Big{(}1- \frac{2M}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda r^{2}}{3}-\frac{aQ^{4}}{20r^{6 }}\Big{)}\Big{(}1+\frac{L^{2}}{r^{2}}\Big{)}\Big{)}, \tag{21}\]
\[\Big{(}\frac{dr}{dt}\Big{)}^{2}=\frac{1}{E^{2}}\Big{(}1-\frac{2M}{r}+\frac{Q^ {2}}{r^{2}}-\frac{\Lambda r^{2}}{3}-\frac{aQ^{4}}{20r^{6}}\Big{)}^{2}\Big{(}E ^{2}-\Big{(}1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda r^{2}}{3}-\frac{ aQ^{4}}{20r^{6}}\Big{)}\Big{(}1+\frac{L^{2}}{r^{2}}\Big{)}\Big{)}. \tag{22}\]
It must be noted that the complete dynamics of the system is described by Eqs. (20)-(22). In addition, we may read off the effective potential from Eq. (20) as follows
\[V_{eff}=\Big{(}1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}-\frac{\Lambda r^{2}}{3}- \frac{aQ^{4}}{20r^{6}}\Big{)}\Big{(}1+\frac{L^{2}}{r^{2}}\Big{)}. \tag{23}\]
In terms of the dimensionless variables defined in Eq. (25), this becomes
\[V_{eff}=\Big{(}1-\frac{2}{\tilde{r}}+\frac{\tilde{Q^{2}}}{\tilde{r}^{2}}-\tilde {\Lambda}\tilde{r}^{2}-\frac{\tilde{a}\tilde{Q}^{4}}{20\tilde{r}^{6}}\Big{)} \Big{(}1+\frac{\tilde{L}^{2}}{\tilde{r}^{2}}\Big{)}, \tag{24}\]
where
\[\tilde{r}=\frac{r}{M},\quad\tilde{Q}=\frac{Q}{M},\quad\tilde{L}=\frac{L}{M}, \tilde{a}=\frac{a}{M^{2}},\quad\tilde{\Lambda}=\frac{\Lambda M^{2}}{3}, \tag{25}\]
these are dimensionless variables.
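For later convenience, the dimensionless effective potential of Eq. (24) is simple to evaluate numerically. The short Python sketch below (not part of the original text; the default parameter values are illustrative assumptions) can be used to reproduce effective-potential profiles of the kind analysed in Section V.

```python
# Minimal sketch of the dimensionless effective potential, Eq. (24).
# The defaults for L, Q, a and Lam are illustrative assumptions.
import numpy as np

def V_eff(r, L=1.0, Q=0.8, a=0.02, Lam=-3.0):
    """Effective potential of a massive particle in dimensionless variables."""
    f = 1 - 2/r + Q**2/r**2 - Lam*r**2 - a*Q**4/(20*r**6)
    return f * (1 + L**2/r**2)

r = np.linspace(0.8, 6.0, 1000)
V = V_eff(r)   # one effective-potential profile
```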
## IV Accretion disk around EEH-AdS black hole
In the present section, we discuss the physical characteristics of the accretion disk surrounding the EEH-AdS BH and determine all of its parameters. Since the particles of the disk move on circular orbits, we can compute their specific angular velocity \(\Omega\), specific energy \(E\), specific angular momentum \(L\), and the flux of radiant energy \(F\). It should be noted that the physical characteristics of the accretion disk obey structural equations that express the conservation of mass, energy and angular momentum. These kinematic quantities depend on the radius of the orbit and follow from general formulas. Using the dimensionless variables of Eq. (25), we evaluate all of the parameters as follows [41; 42]
\[\Omega=\sqrt{\frac{1}{\tilde{r}^{3}}-\tilde{\Lambda}-\frac{\tilde{Q}^{2}}{ \tilde{r}^{4}}+\frac{3\tilde{a}\tilde{Q}^{4}}{20\tilde{r}^{8}}}, \tag{26}\]
\[\tilde{E}=\frac{\Big{(}1-\frac{2}{\tilde{r}}+\frac{\tilde{Q}^{2}}{\tilde{r}^{2 }}-\tilde{\Lambda}\tilde{r}^{2}-\frac{\tilde{a}\tilde{Q}^{4}}{20\tilde{r}^{6}} \Big{)}}{\sqrt{1-\frac{3}{\tilde{r}}+\frac{2\tilde{Q}^{2}}{\tilde{r}^{2}}- \frac{4\tilde{a}\tilde{Q}^{4}}{20\tilde{r}^{6}}}}, \tag{27}\]
\[\tilde{L}=\frac{\tilde{r}^{2}\sqrt{\frac{1}{\tilde{r}^{3}}-\tilde{\Lambda}- \frac{\tilde{Q}^{2}}{\tilde{r}^{4}}+\frac{3\tilde{a}\tilde{Q}^{4}}{20\tilde{r }^{8}}}}{\sqrt{1-\frac{3}{\tilde{r}}+\frac{2\tilde{Q}^{2}}{\tilde{r}^{2}}- \frac{4\tilde{a}\tilde{Q}^{4}}{20\tilde{r}^{6}}}}. \tag{28}\]
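The circular-orbit profiles discussed below follow directly from Eqs. (26)-(28). A minimal Python sketch (again an illustration rather than the original numerical code; the default parameter values are assumptions) is:

```python
# Minimal sketch of the circular-orbit kinematics, Eqs. (26)-(28),
# in the dimensionless variables of Eq. (25). Defaults are illustrative.
import numpy as np

def kinematics(r, Q=0.8, a=0.02, Lam=-3.0):
    """Angular velocity, specific energy and specific angular momentum."""
    Omega = np.sqrt(1/r**3 - Lam - Q**2/r**4 + 3*a*Q**4/(20*r**8))   # Eq. (26)
    f     = 1 - 2/r + Q**2/r**2 - Lam*r**2 - a*Q**4/(20*r**6)
    denom = np.sqrt(1 - 3/r + 2*Q**2/r**2 - 4*a*Q**4/(20*r**6))
    return Omega, f/denom, r**2*Omega/denom                          # Eqs. (27)-(28)

r = np.linspace(3.0, 15.0, 500)
Omega, E, L = kinematics(r)   # profiles of the kind shown in Figs. 3-5
```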
From Eq. (24), we get
\[\frac{d^{2}V_{eff}}{dr^{2}}=\frac{1}{40\tilde{r}^{8}(5\tilde{r}^{ 4}(2\tilde{Q}^{2}+(\tilde{r}-3)\tilde{r})-\tilde{a}\tilde{Q}^{4})^{3}}\Big{(} 21\tilde{a}^{4}\tilde{Q}^{16}+5\tilde{a}^{3}\tilde{Q}^{12}\tilde{r}^{4}( \tilde{r}(8\tilde{\Lambda}\tilde{r}^{3}-63\tilde{r}+202)-144\tilde{Q}^{2})\] \[+25\tilde{a}^{2}\tilde{Q}^{8}\tilde{r}^{8}(-6\tilde{Q}^{2}\tilde {r}(48\tilde{\Lambda}\tilde{r}^{3}-63\tilde{r}+176)+3\tilde{r}^{2}(\tilde{r} (2\tilde{r}(4\tilde{\Lambda}\tilde{r}(30\tilde{\Lambda}\tilde{r}^{3}-29\tilde {r}+38)+33)-239)+278)+364\tilde{Q}^{4})\] \[-1000\tilde{a}\tilde{Q}^{4}\tilde{r}^{12}(-4\tilde{Q}^{4}\tilde {r}(3\tilde{\Lambda}\tilde{r}^{3}-4\tilde{r}+27)+\tilde{Q}^{2}\tilde{r}^{2}( \tilde{r}(2\tilde{r}(\tilde{\Lambda}\tilde{r}(88\tilde{\Lambda}\tilde{r}^{3}- 39\tilde{r}+28)-9)+39)+56)+\] \[3\tilde{r}^{3}(\tilde{r}(\tilde{r}(\tilde{r}(2\tilde{r}(5\tilde {\Lambda}(\tilde{r}-6)\tilde{r}+3)-1)+6)-7)+33)-51)+18)+36\tilde{Q}^{6})\] \[+10000\tilde{r}^{16}(-2\tilde{Q}^{6}\tilde{r}(4\tilde{\Lambda} \tilde{r}^{3}-9\tilde{r}+32)+\tilde{Q}^{4}\tilde{r}^{2}(\tilde{r}(2\tilde{r}( 6\tilde{\Lambda}\tilde{r}(5\tilde{\Lambda}\tilde{r}^{3}-3\tilde{r}+4)+5)-69)+ 126)+2\tilde{Q}^{2}\tilde{r}^{3}\] \[(\tilde{r}(\tilde{r}(\tilde{\Lambda}\tilde{r}(\tilde{r}(\tilde{r} (\tilde{\Lambda}\tilde{r}(17\tilde{r}-72)-6)+41)-36)-9)+41)-54)\] \[+\tilde{r}^{4}(\tilde{r}(\tilde{r}(\tilde{r}(\tilde{\Lambda}( \tilde{r}(\tilde{r}(2\tilde{r}-15)+30)-2)+18)-54)+36)-1)+12)-36)+36)+12\tilde{Q}^{8} )\Big{)}. \tag{29}\]
We impose restrictions on the circular motion of the particles, namely \(V_{eff}=0\), \(\frac{dV_{eff}}{dr}=0\) and \(\frac{d^{2}V_{eff}}{dr^{2}}=0\). We are interested in the ISCO of the EEH BH, which lies at the boundary between stable and unstable circular orbits, and these conditions determine its location. Since they cannot be solved analytically, we calculate the ISCO numerically; further details are given in Section V. It should be noted that the Stefan-Boltzmann law [43] is used to determine the flux of radiation as follows
\[F(r)=\frac{-\dot{M}_{0}}{4\pi\sqrt{-g}}\frac{\Omega_{,r}}{(\tilde{E}-\Omega \tilde{L})^{2}}\int_{r_{isco}}^{r}(\tilde{E}-\Omega\tilde{L})\tilde{L_{,r}}dr, \tag{30}\]
where \(\dot{M}_{0}\) denotes the mass accretion rate. It is important to note that the conservation laws can be used to obtain the radiant energy emitted above the disk between the ISCO and a given radial distance. **Figure 2** illustrates the energy flux of a thin accretion disk around the EEH BH for \(\tilde{Q}=0.8\), \(\tilde{\Lambda}=3\) and various values of \(\tilde{a}\).
In **Fig. 3**, the angular velocity is plotted against the dimensionless variable \(\tilde{r}\) for different values of the dimensionless quantities \(\tilde{Q}\), \(\tilde{a}\) and \(\tilde{\Lambda}\). In plots **3a** and **3b**, the angular velocity increases slightly as \(\tilde{Q}\) and \(\tilde{a}\) increase, while in plot **3c** the angular velocity decreases as \(\tilde{\Lambda}\) increases. In plot **3d**, we recover the behavior of the angular velocity of the Schwarzschild BH by assuming \(\tilde{Q}=0\) and \(\tilde{a}=0\), while the angular velocity of the RN BH is restored by taking \(\tilde{a}=0\) with various values of \(\tilde{Q}\).
The specific energy \(\tilde{E}\) versus \(\tilde{r}\) for different values of \(\tilde{a}\), \(\tilde{Q}\) and \(\tilde{\Lambda}\) is represented in **Fig. 4**. From **Figs. 4a** and **4b**, it is clear that in the thin accretion disk, the specific energy
Figure 2: The energy flux \(F(r)\) of an accretion disk surrounding the EEH BH for Q = 0.8 and \(\dot{M}_{0}\) = 1
Figure 3: The plot displays the angular velocity \(\Omega\) versus \(\tilde{r}\) of the thin accretion disk around the BH. (**a**) for \(\tilde{a}=0.5\), \(\tilde{\Lambda}=-3\) and distinct values of \(\tilde{Q}\). (**b**) for \(\tilde{Q}=0.8\), \(\tilde{\Lambda}=-3\) and altered values of \(\tilde{a}\). (**c**) for \(\tilde{a}=0.5\), \(\tilde{Q}=0.8\) and various values of \(\tilde{\Lambda}\). (**d**) for \(\tilde{a}=0\), \(\tilde{\Lambda}=0\) and various values of \(\tilde{Q}\)
\(\tilde{E}\) of the particle rapidly decreases, attains a minimum value and then continuously increases. Also, from illustration **4c**, we observe that the specific energy of the particle initially decreases to a minimum for \(\tilde{\Lambda}=-3\), \(\tilde{\Lambda}=-2\) and \(\tilde{\Lambda}=-1\), and then increases gradually, while for \(\tilde{\Lambda}=0\) the energy of the particle decreases and becomes constant. It is worth noticing that, by imposing the restrictions \(\tilde{a}=0,\ \ \tilde{\Lambda}=-3\) on the EEH BH, we recover the specific energy profiles \(\tilde{E}(\tilde{r})\) of the thin accretion disks of the Schwarzschild and RN BHs, as shown in illustration **4d**. In plot **4d**, the black dotted line represents the specific energy of the thin accretion disk around the Schwarzschild BH, while the remaining lines represent the specific energy of the thin accretion disk around the RN BH.
In **Fig. 5**, we have represented the specific angular momentum \(\tilde{L}\) as a function of \(\tilde{r}\) for several values of \(\tilde{Q}\), \(\tilde{a}\) and \(\tilde{\Lambda}\). From illustrations **5a** and **5b**, we see that for distinct values of \(\tilde{Q}\) and \(\tilde{a}\), the angular momentum \(\tilde{L}\) decreases down to a small value of \(\tilde{r}\), around \(\tilde{r}=3\), and then increases. In illustration **5c**, we see that for \(\tilde{\Lambda}=-3\), \(\tilde{\Lambda}=-2\) and \(\tilde{\Lambda}=-1\), the angular momentum decreases at small \(\tilde{r}\) and then increases, but for \(\tilde{\Lambda}=0\) the angular momentum decreases and becomes constant. Furthermore, we also analyzed the behavior of the specific angular momentum of the thin accretion disk around the Schwarzschild and RN
BHs, recovered from the EEH BH by assuming \(\tilde{a}=0\), \(\tilde{\Lambda}=0\) for different values of \(\tilde{Q}\), which is shown in **Fig. 5d**. In **Fig. 5d**, the black dotted line represents the specific angular momentum of the thin accretion disk of the Schwarzschild BH, while the remaining lines indicate that of the thin accretion disk around the RN BH.
## V Numerical analysis
In this section, we investigate the effective potential and determine the position of the ISCO of the EEH BH. In GR, the smallest marginally stable circular orbit around a BH is called the ISCO. It must be emphasized that the ISCO plays a vital role because it marks the inner edge of the accretion disk [44]. It should be noted that for \(a=0\), \(Q=0\) and \(\Lambda=0\) the EEH BH reduces to the Schwarzschild BH with \(r_{isco}=6M\); for \(a=0\), \(\Lambda=0\) we recover the RN BH, whose ISCO for \(0<Q<1\) lies in the interval \((4M,6M)\). The locations of the horizons and of the ISCO are displayed in Tables **I**, **II** and **III**.
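The ISCO values listed in the tables can be cross-checked with a simple numerical procedure. A standard equivalent characterisation of marginal stability is that the specific angular momentum of circular orbits, Eq. (28), attains its minimum at the ISCO. The sketch below (an illustration under the stated assumptions, not the authors' code; scipy is assumed, and the default parameters follow Table 1) exploits this:

```python
# Minimal sketch: locate the ISCO as the minimiser of the circular-orbit
# specific angular momentum, Eq. (28), in dimensionless variables.
import numpy as np
from scipy.optimize import minimize_scalar

def L_circ(r, Q=0.8, a=0.02, Lam=-3.0):
    num = r**2 * np.sqrt(1/r**3 - Lam - Q**2/r**4 + 3*a*Q**4/(20*r**8))
    den = np.sqrt(1 - 3/r + 2*Q**2/r**2 - 4*a*Q**4/(20*r**6))
    return num / den

res = minimize_scalar(L_circ, bounds=(2.0, 10.0), method="bounded")
print("r_isco ~", res.x)
```

For \(\tilde{Q}=0.8\), \(\tilde{\Lambda}=-3\) and \(\tilde{a}=0.02\), this minimiser falls close to the \(r_{isco}\approx 3.15\) reported in Tables 1 and 2.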
The profile of the effective potential versus \(r\) is shown in **Fig. 6** for \(L=1\), \(Q=0.8\), \(\Lambda=-3\), and various values of the EEH BH parameter \(a\). From the graph it is clear that as the value of \(a\) increases, the effective potential decreases. Also, for \(a=0\) we recover the behavior of the effective potential of the RN-AdS BH, indicated by the green curve.
In addition, we display the position of the ISCO in **Fig. 7** for \(L=1\), \(Q=0.8\), \(\Lambda=-3\) and different values of \(a\). In **Figs. 7a**, **7b** and **7c**, the ISCOs with \(r_{isco}=3.1496\),
\begin{table}
\begin{tabular}{l c c c c} \hline a & Inner horizon & Middle horizon & Outer horizon & ISCO \\ \hline
0.0 & - & 0.438 & 0.7328 & 3.14996 \\
0.02 & 0.18 & 0.42 & 0.73425 & 3.14998 \\
0.04 & 0.225 & 0.38 & 0.73515 & 3.14999 \\
0.06 & - & - & 0.73675 & (0.3241, 0.3470, 3.15001) \\
0.08 & - & - & 0.732 & (0.3331, 0.4203, 3.15003) \\
0.1 & - & - & 0.741 & (0.3499, 0.4481, 3.15004) \\ \hline \end{tabular}
\end{table}
Table 1: For \(Q=0.8\), \(\Lambda=-3\), and different values of EEH BH parameter \(a\), the position of the horizons and the ISCO are shown.
\(r_{isco}=3.1498\) and \(r_{isco}=3.1499\), respectively, are located outside the outer event horizons.
Figure 8 shows the effective potential as a function of \(r\) for \(L=1\), \(a=0.02\), \(Q=0.8\) and various values of \(\Lambda\). The plot clearly shows that as the value of \(\Lambda\) grows, the effective potential drops. The ISCO is located at \(r_{isco}=3.14998\) for \(\Lambda=-3\), \(r_{isco}=3.15572\) for \(\Lambda=-2\), and \(r_{isco}=3.17239\) for \(\Lambda=-1\). We also mark the ISCO positions, which are indicated by dark circles. **Figure 9** depicts the profile of the effective potential vs. \(r\) for \(L=1\), \(a=0.005\), \(\Lambda=-3\) and several values of \(Q\). From the plot, it is clear that as the value of \(Q\) increases, so does the effective potential. Furthermore, by taking \(Q=0\), we recover the behavior of the effective potential of the Schwarzschild-AdS BH, as shown by the cyan curve.
Furthermore, we locate the position of ISCO in **Fig. 10** by assuming \(L=1\), \(a=0.005\), \(\Lambda=-3\) and numerous values of \(Q\).
\begin{table}
\begin{tabular}{c c c c c} \hline \(\Lambda\) & Inner horizon & Middle horizon & Outer horizon & ISCO \\ \hline
-3 & 0.193 & 0.41 & 0.721 & 3.14998 \\
-2 & 0.1939 & 0.402 & 0.84 & 3.15572 \\
-1 & 0.19391 & 0.392 & 1.02 & 3.17239 \\
0 & 0.194 & 0.384 & 1.596 & 4.89078 \\
1 & 0.195 & 0.37 & - & (0.910447, 3.10018) \\
2 & 0.194 & 0.36 & - & (0.739501, 3.11973) \\ \hline \end{tabular}
\end{table}
Table 2: The positions of the horizons and the ISCO for \(a=0.02\), \(Q=0.8\), and various values of \(\Lambda\) are given.
\begin{table}
\begin{tabular}{c c c c c} \hline \(Q\) & Inner horizon & Middle horizon & Outer horizon & ISCO \\ \hline
0.0 & - & - & 0.998 & 3.76053 \\
0.2 & - & - & 0.98 & 3.72831 \\
0.4 & - & - & 0.95 & (0.102336,0.144972,3.62808) \\
0.6 & 0.128 & 0.182 & 0.88 & 3.44678 \\
0.8 & 0.126 & 0.42 & 0.72 & 3.14996 \\
1.0 & 0.1351 & - & - & (0.593385,1.14954,2.60655) \\ \hline \end{tabular}
\end{table}
Table 3: For \(a=0.005\), and \(\Lambda=-3\), the position of the horizons and the ISCO for various values of \(Q\) are represented around the EEH BH.
Figure 8: The profile of the effective potential for \(L=1\), \(a\)=0.02, \(Q\)=0.8, with different values of \(\Lambda\); the location of the ISCO is indicated by solid circles.
Figure 6: The behavior of effective potential for \(L=1\), \(Q\)=0.8, \(\Lambda\)=-3 and altered values of \(a\).
Figure 7: The position of ISCO represented by dots for \(L=1\), \(Q\)=0.8, \(\Lambda\)=-3. (a) For \(a=0.0\), ISCO is located outside the outer event horizon at \(r_{isco}=3.1496\). (b) For \(a=0.02\), ISCO is at \(r_{isco}=3.1498\), which is outside the outer event horizon. (c) The ISCO for \(a=0.04\), we have \(r_{isco}=3.1499\), which lies far from the outer event horizon.
### Gamma ray Burst
In this subsection, we investigate the GRB emitted from the BH accretion disk. It is important to note that the outer event horizon is usually smaller than the outer stable circular orbit. By assuming a certain efficiency \(\eta\) for converting gravitational energy into gamma-ray energy, we can constrain the parameters
Figure 10: The solid circle denotes the location of the ISCO for \(L=1\), \(a\)=0.005, \(\Lambda\)=-3, and altered values of charge Q. (a) ISCO is located outside the outer horizon at \(r_{isco}=3.76053\) for \(Q=0\). (b) For \(Q=0.2\), the ISCO is \(r_{isco}=3.72831\), which is outside the outer horizon. (c) The ISCO for \(Q=0.4\) is \(r_{isco}=3.62808\), which is located beyond the outer event horizon. (d) ISCO is located outside the outer event horizon at \(r_{isco}=3.44678\) for \(Q=0.6\).
Figure 9: The profile of effective potential for \(L=1\), \(a\)=0.005, \(\Lambda\)=-3 and various values of \(Q\).
of the EEH BH. The typical mass of the progenitor is \(M\sim 10M_{\odot}\) and the total energy of a typical gamma-ray burst is \(\sim 10^{52}\) erg; following Ref. [45], the gamma-ray energy is related to the outer stable circular orbit \(x\) by
\[E_{\gamma}=\eta\frac{M}{x}. \tag{31}\]
Consider \(\eta<0.1\), which gives
\[x<\eta\frac{M}{E_{\gamma}}\sim 18\eta_{-1}M_{1}E_{\gamma,52}^{-1}, \tag{32}\]
where
\[Q=10^{k}\times Q_{k},\ \ \ M_{1}=\frac{M}{10M_{\odot}}. \tag{33}\]
Here \(M_{1}\) denotes the progenitor mass in units of \(10M_{\odot}\).
## VI Conclusions
Accretion disks around supermassive BHs are the main source of information about systems with intense gravity, and they also reveal how space-time is arranged around them. The authors of [46]-[47] have examined the luminosity of accretion disks in the space-time of a static BH immersed in dark matter and of BHs surrounded by dark matter with isotropic or anisotropic pressure, respectively. In the present study, we have investigated the thin accretion disk around the static and spherically symmetric EEH BH. Furthermore, we have examined the event horizons, the effective potential and the equations of motion of the EEH BH. We have determined the specific angular velocity, specific energy and specific angular momentum of particles moving on circular orbits around the EEH BH, and we have computed the flux of radiant energy above the disk. The energy flux of the thin accretion disk around the EEH BH for various values of \(\tilde{a}\) is shown in **Fig. 2**. In addition, we have investigated how the specific energy, the specific angular momentum, the angular velocity and the ISCO of particles orbiting the BH vary with the parameters of the EEH BH, as shown in **Figs. 3**, **4** and **5**. We have also plotted the effective potential, indicated the location of the ISCO, and listed the locations of the ISCO and of the event horizons in Tables **I**, **II** and **III**. Moreover, we recover the Schwarzschild BH with \(\tilde{r}_{isco}=6M\) and the RN BH with \(\tilde{r}_{isco}\in(4M,6M)\) by imposing suitable conditions on the EEH parameters. Finally, we analyzed the GRB constraint with a more realistic efficiency \(\eta\), obtained from simulations and specific extreme events.
## Acknowledgements
The work of G. Abbas has been partially supported by the National Natural Science Foundation of China under project No. 11988101. He is grateful to the Compact Object and Diffused Medium Research Group at NAOC, led by Prof. JinLin Han, for the excellent hospitality and friendly environment. G. Abbas is also thankful to The Islamia University of Bahawalpur, Pakistan, for the grant of six months of study leave.
|
2302.10886 | Some Fundamental Aspects about Lipschitz Continuity of Neural Networks | Lipschitz continuity is a crucial functional property of any predictive
model, that naturally governs its robustness, generalisation, as well as
adversarial vulnerability. Contrary to other works that focus on obtaining
tighter bounds and developing different practical strategies to enforce certain
Lipschitz properties, we aim to thoroughly examine and characterise the
Lipschitz behaviour of Neural Networks. Thus, we carry out an empirical
investigation in a range of different settings (namely, architectures,
datasets, label noise, and more) by exhausting the limits of the simplest and
the most general lower and upper bounds. As a highlight of this investigation,
we showcase a remarkable fidelity of the lower Lipschitz bound, identify a
striking Double Descent trend in both upper and lower bounds to the Lipschitz
and explain the intriguing effects of label noise on function smoothness and
generalisation. | Grigory Khromov, Sidak Pal Singh | 2023-02-21T18:59:40Z | http://arxiv.org/abs/2302.10886v4 | # Some Fundamental Aspects about Lipschitz Continuity
###### Abstract
Lipschitz continuity is a simple yet pivotal functional property of any predictive model that lies at the core of its robustness, generalisation, and adversarial vulnerability. Our aim is to thoroughly investigate and characterise the Lipschitz behaviour of the functions learned via neural networks. Despite the significant tightening of the bounds in recent years, precisely estimating the Lipschitz constant continues to be a practical challenge and tight theoretical analyses, similarly, remain intractable. Therefore, _we shift our perspective and instead attempt to uncover insights about the nature of the Lipschitz constant of neural network functions_ -- by relying on the simplest and most general upper and lower bounds. We carry out an empirical investigation in a range of different settings (architectures, losses, optimisers, label noise, etc.), which reveals several fundamental and intriguing traits of the Lipschitz continuity of neural network functions. In particular, we identify _a remarkable double descent trend in both upper and lower bounds to the Lipschitz constant_, which tightly aligns with the typical double descent trend in the test loss.
## 1 Introduction & Related Work
Lipschitz continuity of a function is a key quantity that reflects its smoothness as the input is perturbed (or more accurately, the maximum absolute change in the function value for a unit norm change in the input). Typically, in machine learning, our hope is that the learned predictive function is not overly sensitive to (non-adversarial) input changes. Otherwise, if the function changes too drastically for minuscule changes in the input, we can hardly hope that the learned function would generalise to unseen data drawn from the underlying distribution. On the flip side, when the value of the Lipschitz constant is extremely small (for the sake of the argument, think close to zero), this would imply a large bias [1] and essentially a useless function.
From these arguments sketched above, it is not hard to see why the Lipschitz constant plays a crucial role in various topics such as generalisation [2], robustness [3], vulnerability to adversarial examples [4], and more. As a result, there is also a significant body of literature in both theoretical [2; 5; 6; 7; 8; 9] and applied [10; 11; 12; 13; 14] directions, as discussed below in more detail.
The focus of our current study is on uncovering the fundamental aspects of Lipschitz continuity of neural network functions -- and thus contribute towards a better understanding of modern over-parameterised neural networks. Despite much progress achieved over the years -- owing to which we have a better estimate of the true Lipschitz constant [5] -- insights about their behaviour and characteristics for neural networks have been hard to come by. For instance, a plethora of fundamental questions, in the context of Lipschitz continuity of neural functions, remain to be understood:
* How does the Lipschitz constant behave for a wider versus narrow network? Shallow versus deeper networks? More broadly, how does it vary across different network architectures?
* Does the Lipschitz constant change significantly over the course of training or is it largely governed by the value at initialisation?
* How does its behaviour vary across different loss functions? Since, of course, we don't want to be misled by the nature of a particular loss function, and rather obtain 'functional' properties.
* Are there any notable differences between the Lipschitz continuity for different choices of optimisers?
* How does the nature of the learning task (i.e., in terms of the amount of signal or noise present) affect the Lipschitz continuity?
Our objective is precisely to contribute towards this end, with an emphasis on over-parameterised deep neural networks as trained in practice.
Approach.Part of the reason why important insights have been hard to come by through recent studies on the Lipschitz constant is that the newer methods, which produce tighter estimates of the true Lipschitz, are, more often than not, rather expensive computationally -- limiting their use. Or when simple bounds are utilised, there is an element of uncertainty whether the findings apply to the true Lipschitz constant or just that particular bound. We propose a simple way around this problem: track and investigate both upper and lower bounds to the Lipschitz constant. We will show this embarrassingly simple fix nevertheless lets us showcase intriguing traits about the Lipschitz constant of neural network functions in a multitude of settings. A key highlight is that we discover a double descent trend in both upper and lower bounds to the Lipschitz constant, which bears an intimate relationship with double descent trend in test loss [15].
Contributions:
* We begin by investigating the evolution of upper and lower bounds to the Lipschitz constant during the course of training for fully-connected and convolutional networks. In this setting, we also showcase the fidelity of the lower bounds to represent the true Lipschitz constant by evaluating them in a much wider vicinity of the training samples.
* We then study the effect of increasing network widths on the Lipschitz constant, within an extensive range of widths spanning about \(14\) orders of magnitude. Interestingly, we observe that double descent trend in the test loss is also mirrored through the upper and lower Lipschitz bounds.
* We also present a theoretical argument based on the bias-variance tradeoff and show that the variance is upper bounded by the squared Lipschitz constant (something which we also establish empirically).
* Next, we probe the effect of different loss functions and optimisers on the behaviour of Lipschitz continuity and hypothesise the reasons for the same. We find that with Cross-Entropy (CE) loss, the Lipschitz bounds are higher as compared to that when using Mean-Squared Error (MSE) loss. On the other hand, training with adaptive methods like Adam [16] results in higher values of the Lipschitz constant as compared to just using stochastic gradient descent (SGD).
* Furthermore, we also do another set of studies that illustrates the effect of depth, training set size, and label noise on the Lipschitz continuity of neural networks. This reveals an interesting trend where Lipschitz constant increases with more signal in the dataset (more clean training samples) and decreases in the presence of more noise.
* Lastly, we find that the distance of the network from the initialisation can capture many of the interesting trends in the Lipschitz constant.
Related Work.Theoretical interest in the Lipschitz constant of neural networks has been revived since Bartlett et al. [2] described how margin-based generalisation bounds are linearly dependent on the Lipschitz constant. Since generalisation bounds and robustness guarantees [3] are expressed in terms of the upper bound estimates of the true Lipschitz constant, extensive research has been done in
the field of its efficient and accurate estimation [5; 6; 8; 17]. As opposed to this direction of finding the tightest possible bounds, our aim in this work is rather to identify and uncover intriguing aspects of Lipschitz continuity in neural networks. But, in the future, we also aim to explore these tighter bounds in the context of neural networks.
On the practical side, the Lipschitz constant has also been useful as a regulariser while training regular neural networks [10; 11], but also for Generative Adversarial Networks (GANs) [13; 18], as well as for addressing robustness to adversarial examples [12] and to perturbations in the input [14]. Most of these regularisation techniques enforce constraints on the parameters or the architecture in order to constrain the Lipschitz constant or its estimate.
Very recently, concurrent to our work, Gamba et al. [19] have also noted the connection between Lipschitz constant and the phenomenon of double descent [15]. In contrast to their work, which only tracks a reasonable yet heuristic estimate of the Lipschitz constant, we painstakingly track both lower and upper Lipschitz bounds. Besides, our focus is also much broader and covers many other settings where we explore the behaviour of Lipschitz constant.
## 2 Lipschitz constant of Neural Network functions
Before we start investigating the Lipschitz constant for neural networks, let us first recall the definition of Lipschitz-continuous functions.
**Definition 2.1** (Lipschitz continuous function).: _For function \(f:\mathbb{R}^{n}\mapsto\mathbb{R}^{m}\), defined on some domain \(dom(f)\subseteq\mathbb{R}^{n}\), \(f\) is called \(C\)-Lipschitz continuous, \(C>0\), w.r.t some \(\alpha\) norm if:_
\[\forall\mathbf{x},\mathbf{y}\in dom(f):\|f(\mathbf{x})-f(\mathbf{y})\|_{ \alpha}\leq C\|\mathbf{x}-\mathbf{y}\|_{\alpha}\]
Note that we are usually interested in the smallest \(C\), such that the above condition holds. This is what we call the Lipschitz constant of the function \(f\).
Unfortunately, the exact value of the Lipschitz constant is proven to be NP-hard to compute [17]. Therefore we focus on the upper and the lower bounds of the true Lipschitz constant. To lower bound the Lipschitz constant, we use another definition of the Lipschitz constant (see theorem 1 in [20] or proceed with [21]).
**Theorem 1** (Alternative definition of the Lipschitz constant).: _Let function \(f:\mathbb{R}^{n}\mapsto\mathbb{R}^{m}\), be defined on some domain \(dom(f)\subseteq\mathbb{R}^{n}\). Let \(f\) also be differentiable and \(C\)-Lipschitz continuous. Then the Lipschitz constant \(C\) is given by:_
\[C=\sup_{\mathbf{x}\in dom(f)}\|\nabla_{\mathbf{x}}f\|_{\alpha^{*}}\]
_where \(\nabla_{\mathbf{x}}f\) is the Jacobian of \(f\) w.r.t. to input \(\mathbf{x}\) and \(\alpha^{*}\) is the dual norm of \(\alpha\)._
For simplicity, we will focus on computing the 2-norm for the rest of the paper. Note that the dual norm for the 2-norm is also 2-norm. Hereafter, all the norms will denote the 2-norm, and thus we will omit it in the expressions.
In a typical machine learning setting, we would like to train a model that generalises well to samples drawn from the underlying distribution \(\mathcal{D}\). Thus, for our model \(f_{\boldsymbol{\theta}}\), we are interested in finding,
\[C=\sup_{\mathbf{x}\in\mathcal{D}}\|\nabla_{\mathbf{x}}f_{\boldsymbol{\theta}} \|_{2}\leq\sup_{\mathbf{x}\in dom(f)}\|\nabla_{\mathbf{x}}f_{\boldsymbol{ \theta}}\|_{2}\,,\]
where in the last step we consider the supremum over the entire domain of the function \(f\), i.e., even outside of the true data distribution \(\mathcal{D}\), which is usually unknown.
However, we can find a lower bound for our Lipschitz constant by limiting the supremum computation to the training set \(S\subseteq\mathcal{D}\):
\[C_{\text{lower}}=\sup_{\mathbf{x}\in S}\|\nabla_{\mathbf{x}}f_{\boldsymbol{ \theta}}\|_{2}\leq\sup_{\mathbf{x}\in\mathcal{D}}\|\nabla_{\mathbf{x}}f_{ \boldsymbol{\theta}}\|_{2}=C \tag{1}\]
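A concrete illustration of this computation is given below as a minimal PyTorch sketch (an assumption of this sketch is that inputs are flat vectors, as in MNIST1D; it is not necessarily the exact implementation used for the experiments): for every input we form the input-Jacobian and keep the largest spectral norm.

```python
# Minimal sketch of the lower bound in Eq. (1): the largest spectral norm of
# the input-Jacobian over a set of (flat) inputs of shape (n_samples, d_in).
import torch
from torch.autograd.functional import jacobian

def lipschitz_lower(model, inputs):
    model.eval()
    best = 0.0
    for x in inputs:
        # Jacobian of the network output w.r.t. a single input, shape (d_out, d_in)
        J = jacobian(lambda z: model(z.unsqueeze(0)).squeeze(0), x)
        best = max(best, torch.linalg.matrix_norm(J, ord=2).item())
    return best
```

Replacing the maximum with a mean over \(S\) gives the quantity \(C_{\text{avg\_norm}}\) reported alongside the bounds below.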
Tight upper bounds to the Lipschitz constant, however, are far from trivial to compute. A lot of recent research has focused on producing different methods of upper bound estimation (as it has been discussed in the Section 1, under related work), each aiming to improve the tightness of the bound
and its computation speed. While this is a fascinating direction, in this paper, our aim is slightly different. Instead of getting encumbered by the demands of tighter bounds to the Lipschitz constant, _we shift the perspective towards understanding the nature and behaviour of the Lipschitz constant_ in a host of different settings. We do this by'sandwiching' the true Lipschitz by suitable lower and upper bounds to it, and tracking them both. Rather remarkably, we find that just focusing on the simplest possible bounds delivers intriguing insights which we will elaborate through the coming sections.
We start with describing our approach to compute the upper bound of the Lipschitz constant, inspired by the AutoLip algorithm introduced by [17]. To give a simple overview, let us analyse the case of sequential neural networks where each layer has a zero bias. Let our model \(f_{\mathbf{\theta}}\) with \(L\) layers be defined as \(f_{\mathbf{\theta}}:=f_{L}\circ f_{L-1}\circ\cdots\circ f_{1}\), which is a sequence of linear and non-linear transformations. For this case, the Lipschitz constant upper bound would be the product of Lipschitz constants of each individual layer:
\[C\leq\prod_{i=1}^{L}C_{f_{i}}=\prod_{i=1}^{L}\sup_{t_{i}\in dom(f_{i})}\| \nabla_{t_{i}}f_{i}\|_{2}\leq\prod_{i=1}^{L}\sup\|\nabla_{t_{i}}f_{i}\|_{2}=C _{\text{upper}} \tag{2}\]
More pedantically, one can see this by using Theorem 1 and chain rule as follows:
\[C =\sup_{\mathbf{x}\in\mathbf{D}}\|\nabla_{\mathbf{x}}f_{\mathbf{ \theta}}\|_{2}=\sup_{\mathbf{x}\in\mathcal{D}}\|\nabla_{f_{L-1}}f_{L}\cdot \nabla_{f_{L-2}}f_{L-1}\cdot\ \ldots\ \cdot\nabla_{f_{1}}f_{2}\cdot\nabla_{\mathbf{x}}f_{1}\|_{2}\] \[\leq\sup_{f_{L-1}(\mathbf{x})}\|\nabla_{f_{L-1}}f_{L}\|_{2}\cdot \sup_{f_{L-2}(\mathbf{x})}\|\nabla_{f_{L-2}}f_{L-1}\|_{2}\cdot\ \ldots\ \cdot\sup_{f_{1}(\mathbf{x})}\|\nabla_{f_{1}}f_{2}\|_{2}\cdot \sup_{\mathbf{x}\in\mathcal{D}}\|\nabla_{\mathbf{x}}f_{1}\|_{2} \tag{3}\] \[\leq\sup\|\nabla_{f_{L-1}}f_{L}\|_{2}\cdot\sup\|\nabla_{f_{L-2}}f _{L-1}\|_{2}\cdot\ \ldots\ \cdot\sup\|\nabla_{f_{1}}f_{2}\|_{2}\cdot\sup\|\nabla_{ \mathbf{x}}f_{1}\|_{2}=:C_{\text{upper}}\, \tag{4}\]
where in the last line we consider the unconstrained supremum. For each linear layer \(f_{i}(\mathbf{x})=\mathbf{W}_{i}\,\mathbf{x}\), we know that the upper bound to the Lipschitz constant is equal to \(\|\mathbf{W}_{i}\|_{2}\), since the Jacobian matrix for \(f_{i}\) is equal to the weight matrix. Note that convolution layers can be represented as suitable linear transformations [22] using doubly block Toeplitz matrices. As a result, the upper bound to their Lipschitz constant can be recovered as the 2-norm of this equivalent matrix that is only dependent on kernel weights. Meanwhile, activation functions (like ReLU) and pooling layers (such as max-pooling) are just \(1\)-Lipschitz. Therefore, for such sequential neural networks, where each layer has a \(1\)-Lipschitz activation function, the upper bound is just the product of the norms of the corresponding linear transformation matrices.
Deep CNN layers have rather large equivalent linear transformation matrices due to the high number of input and output channels, which complicates the calculation of the matrix 2-norm using standard tools. To speed up the computation, we have therefore implemented the power method, which, in some cases, resulted in up to 10 times faster calculations.
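A minimal sketch of this upper-bound computation (assuming PyTorch and a purely sequential fully-connected model with 1-Lipschitz activations; it is not necessarily the exact implementation used for the experiments) is:

```python
# Minimal sketch of the upper bound in Eq. (2): the product of per-layer
# spectral norms, each obtained by power iteration on W^T W.
import torch

def spectral_norm_power(W, n_iter=100):
    v = torch.randn(W.shape[1])
    v = v / v.norm()
    for _ in range(n_iter):
        v = W.t() @ (W @ v)        # one step of power iteration on W^T W
        v = v / v.norm()
    return (W @ v).norm().item()   # approximates the largest singular value

def lipschitz_upper(model):
    bound = 1.0
    for layer in model.modules():
        if isinstance(layer, torch.nn.Linear):
            bound *= spectral_norm_power(layer.weight.detach())
    return bound
```

For convolutional layers, the same iteration can be run with the convolution and its transpose in place of the matrix-vector products, which avoids ever forming the doubly block Toeplitz matrix explicitly.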
Empirical evaluation of the evolution of Lipschitz constant.Having introduced the main concepts, let us begin by first exploring how the Lipschitz constant behaves during training. In other words, we would like to investigate the evolution of the Lipschitz constant during training for a given neural network. In Figure 1, we present the results of training a feed-forward neural network with ReLU activations (we will call such network FF ReLU later) and 1 hidden layer of width 256. The model was trained on MNIST1D dataset [23] using the SGD optimiser and Cross-Entropy loss until convergence and zero training loss.
We include the line for \(C_{\text{avg\_norm}}=\mathbb{E}_{\mathbf{x}\in\mathcal{S}}\|\nabla_{\mathbf{x} }f_{\mathbf{\theta}}\|_{2}\) to indicate that the supremum bound is relatively close to the expected value of the Jacobian norm, which indicates that the supremum computation is not heavily affected by potential outliers.
From Figure 1 we can see that the \(C_{\text{lower}}\) and the \(C_{\text{upper}}\) are increasing with epochs, diverging rapidly from each other. Similar Lipschitz evolution trends are depicted by networks in other settings as well. In later sections, we give more details on how the Lipschitz constant evolution changes with the choice of width (section 3) or depth of the network (section 6), as well as the loss and the optimiser (section 5). Similar trends seem to hold even for other network architecture choices, such as Convolutional Neural Networks (CNNs), as illustrated in Figure 4.
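A minimal sketch of how such per-epoch curves can be produced (not the actual training script; it reuses the `lipschitz_lower` and `lipschitz_upper` sketches above, and the hyperparameters are illustrative) is:

```python
# Minimal sketch: train with SGD + Cross-Entropy and log both bounds per epoch.
import torch

def train_and_track(model, loader, X_train, epochs=200, lr=0.005):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    history = []
    for epoch in range(epochs):
        model.train()
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
        history.append((epoch, lipschitz_lower(model, X_train), lipschitz_upper(model)))
    return history
```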
Fidelity of the lower bound to the Lipschitz.As the true Lipschitz constant lies somewhere between the upper and the lower bounds, one may ask: how loose are those bounds and where does the true Lipschitz constant lie?
To give more insight into this, we compute the bound in Eq. (1) on a larger set of examples \(S^{*}\). This set includes, apart from the previously presented training set \(S\), (a) the test1 set \(S^{\prime}\), and (b) 100,000 random convex combinations of samples from the train set and from the test set. The set of convex combinations (\(\lambda\mathbf{x}_{i}+(1-\lambda)\mathbf{x}_{j}\)) is computed for each \(\lambda\in\{0.1,0.2,0.3,0.4,0.5\}\), making it 500,000 samples for the train set and 500,000 samples for the test set. In the end, \(S^{*}\) contains 1,005,000 samples. Results are presented in Figures 2 and 3. We denote by \(C_{S^{*}}\) the Lipschitz constant estimate computed on the set \(S^{*}\).
Footnote 1: MNIST1D train dataset contains 4000 samples, test dataset contains 1000 samples.
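A minimal sketch of how the convex combinations entering \(S^{*}\) can be generated (not necessarily the exact sampling code used here) is shown below; it is applied separately to the train and test sets and the result is concatenated with \(S\) and \(S^{\prime}\) before re-evaluating the bound of Eq. (1).

```python
# Minimal sketch: random convex combinations lam*x_i + (1-lam)*x_j of rows of X.
import torch

def convex_mixtures(X, n_pairs=100_000, lams=(0.1, 0.2, 0.3, 0.4, 0.5), seed=0):
    g = torch.Generator().manual_seed(seed)
    i = torch.randint(len(X), (n_pairs,), generator=g)
    j = torch.randint(len(X), (n_pairs,), generator=g)
    return torch.cat([lam * X[i] + (1 - lam) * X[j] for lam in lams])
```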
It is evident from the above Figure 2 that the bound for the Lipschitz constant computed on set \(S^{*}\) lies closer to the lower bound \(C_{\text{lower}}\). This indicates that even though the upper bound \(C_{\text{upper}}\) gives a loose Lipschitz constant bound for the model for the whole functional domain \(dom(\mathbf{f}_{\mathbf{\theta}})\), _the true Lipschitz constant lies relatively closer to the lower bound \(C_{\text{lower}}\)_.
This statement is important, as the gap between the upper and the lower bound can become extremely large in some cases, leading to a significant overestimation of the Lipschitz constant of the network function. For instance, this may be observed in the case of CNNs. Figure 4 shows the Lipschitz constant trend for the CNN model with width (i.e., number of channels) 7, trained on CIFAR-10 [24] using Cross-Entropy loss and SGD optimiser until convergence and zero training loss.
Figure 1: Plot of Lipschitz constant bounds by training epoch for **FF ReLU network** with 1 hidden layer with 256 neurons. The model was trained on MNIST1D using Cross-Entropy loss and SGD optimiser. Results are averaged over 4 runs. More details in appendix S1.
Figure 2: Plot of Lipschitz constant bounds on the **train set \(S\) and set \(S^{*}\)** by training epoch for **FF ReLU network** with 1 hidden layer with 256 neurons. The model was trained on MNIST1D using Cross-Entropy loss and SGD optimiser. Results are averaged over 4 runs. More details in appendix sections S1.
We have presented both linear and log scales to show that even though the upper bound is growing much faster than the lower and the average norm bounds, all three bounds still follow the same trend as it was first shown in Figure 1.
For the rest of the paper, we will present both upper and lower Lipschitz constant bounds. However, we would like to mention that since the lower bound \(C_{\text{lower}}\) is closer to representing the Lipschitz constant over the distribution \(\mathcal{D}\), it would serve the reader to pay more attention to it compared to the other bounds. This observation is also consistent with some of the works that propose tighter upper bounds to the Lipschitz constant [9].
Remark.As can be seen, we have evaluated the fidelity of our lower bound on samples in the input space that are outside of the data distribution (but are still somewhat meaningful). It would also be interesting to consider the case of adversarially generated samples [4] and see how much can the lower bounds be pushed up. We leave this for a future investigation.
## 3 Double Descent in Lipschitz constant: Effect of Width
To study how the Lipschitz constant changes with the width of the hidden layers of the model, we conduct an experiment where we trained 16 FF ReLU networks, with 1 hidden layer, of increasing width. All the networks were trained on MNIST1D using Cross-Entropy loss and SGD optimiser.
This experimental setting, where networks with increasing number of parameters are compared to one another, should be reminiscent of the Double Descent phenomenon [15]. In this study, we replicated Double Descent for the setting above, which requires us to train all models until convergence. In order to keep the hyperparameters consistent across all settings while still being able to handle networks
Figure 4: Plots of Lipschitz constant bounds by training epoch for **CNN network** with width 7. The model was trained on CIFAR-10 using Cross-Entropy loss and SGD optimiser. Results are averaged over 4 runs. More details in appendix S1.
Figure 3: Plot of lower Lipschitz constant bounds **on different dataset combinations** by training epoch for **FF ReLU network** with 1 hidden layer with 256 neurons. The model was trained on MNIST1D using Cross-Entropy loss and SGD optimiser. Results are averaged over 4 runs. More details in appendix sections S1.
with such varying widths, we utilize an initial learning rate warmup to the base learning rate (LR) of 0.005 (referred to as Warmup20000Step25 LR Scheduler) for our runs. More details on models and the training strategy are listed in the appendix S1.
Figure 5 clearly shows that all three Lipschitz constant bounds display a shape similar to Double Descent. The interpolation threshold for both Lipschitz bounds and the test loss is at width 80, which is also the theoretical threshold, as an FF ReLU network with no bias and 1 hidden layer of width 80, 40 values on input and 10 values on output, has \(40\cdot 80+80\cdot 10=4000\) parameters, which is exactly the number of training samples in the MNIST1D dataset.
Similar trends in the Lipschitz constant could be seen for FF ReLU networks trained using MSE, as well as other models like CNNs. In Figures 6 and 7, we present the Lipchitz constant bounds of the models at their last epoch for these two settings. We also include additional experiments on top of previously observed double descent in test loss in the literature [25], where we simply plot the Lipschitz bounds and even observe a double descent trend for them. These figures are located in appendix S2.1. Besides, a few remarks are in order:
Location of Interpolation threshold.It should be noted that, in both of these cases, the empirical interpolation thresholds appear slightly before the theoretical ones. For MSE loss, the interpolation threshold should theoretically occur at \(p=K\cdot n\) where \(p\) denotes the number of parameters, \(n\) is the number of training samples, and \(K\) is the output dimension. This corresponds to a width of \(800\) in the case of a one-hidden layer fully connected network, while empirically the interpolation threshold already seems near width \(~{}256\). Likewise, for the CNN model trained on CIFAR10 with 50000 samples, theoretically, the interpolation threshold should occur between width 11 (with \(p=46915\)) and width 12 (with \(p=55716\)). However, empirically, we see that it occurs at width \(10\). But besides
Figure 5: Log-Log plots of Lipschitz constant bounds and train and test losses by width for **FF ReLU networks** with 1 hidden layer with varying number of hidden neurons. Models were trained using **Cross-Entropy loss** and SGD optimiser with learning rate 0.005 and Warmup20000Step25 learning rate scheduler. Results are presented for the last epoch for each individual model and are averaged over 4 runs. More details in appendix S1.
this slight difference in the location of the interpolation threshold, we can clearly see that both the test loss and Lipschitz constant (as seen through the upper and lower bounds) show a double descent trend. As a matter of fact, such a behaviour of the interpolation threshold is also implicit in past works [26].
Trend of the entire generalization bounds.The fact that Lipschitz constant shows the same trend as the test loss is in a way expected, given that several generalization bounds include it as a term [2]. It would also be interesting to investigate if such proposed bounds also capture double descent in their entirety, and not just when the trend of the Lipschitz constant is noted in isolation.
Potential indication of Triple Descent.Lastly, unlike the test loss, the upper Lipschitz for FF ReLU networks (and even the lower Lipschitz sometimes, although more faintly relative to the upper bound) seems to continue to increase after the second descent, which could be a connection to the Triple Descent phenomenon [27]. We leave this observation as a future research question.
## 4 Bias-Variance Tradeoff
### A textbook theoretical analysis
To give some theoretical ground for the observations in the previous section, we present how bias-variance tradeoff connects with the Lipschitz constant of the network. Let us denote the neural network function as \(f_{\mathbf{\theta}}(\mathbf{x},S,\zeta)\) and the ground-truth function as \(\mathbf{y}^{\star}(\mathbf{x})\). Here, \(S\) denotes the training set and \(\zeta\) indicates the noise in the function due to the choice of random initialization and the noise
Figure 6: Log-Log plots of Lipschitz constant bounds and train and test losses by width for **FF ReLU networks** with 1 hidden layer with varying number of hidden neurons. Models were trained using **MSE loss** and SGD optimiser with learning rate 0.005 and Warmup20000Step25 learning rate scheduler. Results are presented for the last epoch for each individual model and are averaged over 4 runs. More details can be found in appendix S1.
Figure 7: Log-Log plots of Lipschitz constant bounds and train and test losses by width for **CNN networks** with varying width parameter. Models were trained using **Cross-Entropy loss** and SGD optimiser with learning rate 0.01 and Cont100 learning rate scheduler. Results are presented for the last epoch for each individual model and are averaged over 4 runs. More details can be found in appendix S1.
introduced by a stochastic optimizer, like stochastic gradient descent (SGD). Or said differently, one can take \(\zeta\) as denoting the random seed used in practice. Then let us assume we have the square loss, i.e., \(\ell(\mathbf{x};\mathbf{f}_{\mathbf{\theta}})=\|y(\mathbf{x})-f_{\mathbf{\theta}}(\mathbf{x },S,\zeta)\|^{2}\). We can write the loss evaluated on a test set, \(S^{\prime}\), i.e., the test loss, as follows:
\[\mathcal{L}(\mathbf{\theta},S^{\prime},\zeta)=\mathbb{E}_{\,\mathbf{x}\sim S^{ \prime}}\left[\|y(\mathbf{x})-f_{\mathbf{\theta}}(\mathbf{x},S,\zeta)\|^{2}\right] \tag{5}\]
In practice, we typically average the test loss over several random seeds, hence inherently involving an expectation over the noise \(\zeta\). We derive a bias-variance tradeoff [1, 28] that rests upon this as the noise source. Also, we consider the fixed-design variant of the bias-variance tradeoff and as a result, we will not average over the choice of the training set sampled from the distribution. In any case, for a suitably large training set size, this is expected not to introduce a lot of fluctuations and in particular, for the phenomenon at hand, i.e. double descent the training set is generally considered to be fixed. Hereafter, for convenience, we will suppress the dependence of the network function on the training set.
Now we do the usual trick of adding and subtracting the expected neural network function over the noise source. Hence, we can rewrite the above as:
\[\mathcal{L}(\mathbf{\theta},S^{\prime},\zeta) =\mathbb{E}_{\,\mathbf{x}\sim S^{\prime}}\left[\|y(\mathbf{x})- \mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)]+\mathbb{E}_{\,\zeta}[ f_{\mathbf{\theta}}(\mathbf{x},\zeta)]-f_{\mathbf{\theta}}(\mathbf{x},\zeta)\|^{2}\right]\] \[=\mathbb{E}_{\,\mathbf{x}\sim S^{\prime}}\left[\|y(\mathbf{x})- \mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)]\|^{2}\right]+\mathbb{ E}_{\,\mathbf{x}\sim S^{\prime}}\left[\|\mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}( \mathbf{x},\zeta)]-f_{\mathbf{\theta}}(\mathbf{x},\zeta)\|^{2}\right]\] \[+2\,\mathbb{E}_{\,\mathbf{x}\sim S^{\prime}}\left[\left(y( \mathbf{x})-\mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)]\right)^{ \top}\,(\mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)]-f_{\mathbf{\theta }}(\mathbf{x},\zeta))\right)\right]\]
Next, we take the expectation of the above test loss with respect to the noise source \(\zeta\) -- mirroring the empirical practice of reporting results averaged over multiple seeds. It is easy to see that when taking expectation, the cross-term vanishes and we are left with the following expression:
\[\mathbb{E}_{\,\zeta}\,\mathcal{L}(\mathbf{\theta},S^{\prime},\zeta) =\mathbb{E}_{\,\mathbf{x}\sim S^{\prime}}\left[\|y(\mathbf{x})- \mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)]\|^{2}\right]+\mathbb{ E}_{\,\zeta}\,\mathbb{E}_{\,\mathbf{x}\sim S^{\prime}}\left[\|\mathbb{E}_{\, \zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)]-f_{\mathbf{\theta}}(\mathbf{x},\zeta)\|^ {2}\right] \tag{6}\] \[=\mathbb{E}_{\,\mathbf{x}\sim S^{\prime}}\left[\|y(\mathbf{x})- \mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)]\|^{2}\right]+\mathbb{ E}_{\,\mathbf{x}\sim S^{\prime}}\mathbb{E}_{\,\zeta}\left[\|\mathbb{E}_{\,\zeta}[f_{ \mathbf{\theta}}(\mathbf{x},\zeta)]-f_{\mathbf{\theta}}(\mathbf{x},\zeta)\|^{2}\right]\] (7) \[=\mathbb{E}_{\,\mathbf{x}\sim S^{\prime}}\left[\|y(\mathbf{x})- \mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)]\|^{2}\right]+\mathbb{ E}_{\,\mathbf{x}\sim S^{\prime}}\,\mathrm{Var}_{\zeta}(f_{\mathbf{\theta}}(\mathbf{x}, \zeta)) \tag{8}\]
As a shorthand, we denote the above expected test loss as \(\overline{\mathcal{L}}(\mathbf{\theta},S^{\prime}):=\mathbb{E}_{\,\zeta}\,\mathcal{ L}(\mathbf{\theta},S^{\prime},\zeta)\). Overall, this results in the bias-variance tradeoff under our setting.
Upper-bounding the Variance term.Now, we want to do a finer analysis of the variance term by involving the Lipschitz constant of the network function. For the moment, let us focus only on the part \(\mathrm{Var}_{\zeta}(f_{\mathbf{\theta}}(\mathbf{x},\zeta))\).
\[\mathrm{Var}_{\zeta}(f_{\mathbf{\theta}}(\mathbf{x},\zeta))=\mathbb{ E}_{\,\zeta}\left[\|\mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)]-f_{\mathbf{ \theta}}(\mathbf{x},\zeta)\|^{2}\right] \tag{9}\] \[=\mathbb{E}_{\,\zeta}\left[\|\underbrace{\mathbb{E}_{\,\zeta}[f_ {\mathbf{\theta}}(\mathbf{x},\zeta)]-\mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x }^{\prime},\zeta)]}_{a}+\underbrace{\mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}( \mathbf{x}^{\prime},\zeta)]-f_{\mathbf{\theta}}(\mathbf{x}^{\prime},\zeta)}_{b}+ \underbrace{f_{\mathbf{\theta}}(\mathbf{x}^{\prime},\zeta)-f_{\mathbf{\theta}}(\mathbf{x },\zeta)}_{c}\|^{2}\right] \tag{10}\]
where, we have considered some auxiliary point \(\mathbf{x}^{\prime}\), and added and subtracted some terms. Further recalling that for \(n\) vectors, \(\mathbf{x}_{1},\cdots,\mathbf{x}_{n}\), we can utilize the simple inequality:
\[\|\mathbf{x}_{1}+\cdots+\mathbf{x}_{n}\|^{2}\leq n\sum_{i=1}^{n}\|\mathbf{x}_{i }\|^{2}\]
which follows from \(n\) applications of the Cauchy-Schwarz inequality. Hence, the variance above can be upper-bounded as:
\[\mathrm{Var}_{\zeta}(f_{\mathbf{\theta}}(\mathbf{x},\zeta)) \leq 3\,\|\mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)]- \mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x}^{\prime},\zeta)]\|^{2}+3\, \mathbb{E}_{\,\zeta}\,\|\mathbb{E}_{\,\zeta}[f_{\mathbf{\theta}}(\mathbf{x}^{ \prime},\zeta)]-f_{\mathbf{\theta}}(\mathbf{x}^{\prime},\zeta)\|^{2}\] \[+3\,\mathbb{E}_{\,\zeta}\,\|f_{\mathbf{\theta}}(\mathbf{x}^{\prime}, \zeta)-f_{\mathbf{\theta}}(\mathbf{x},\zeta)\|^{2}\]
We can think of \(\mathbb{E}\,_{\zeta}f_{\mathbf{\theta}}(\mathbf{x},\zeta)\) as the ensembled function mapping, and denote it by say \(\overline{f_{\mathbf{\theta}}}(\mathbf{x}):=\mathbb{E}\,_{\zeta}f_{\mathbf{\theta}}( \mathbf{x},\zeta)\), and let's assume that it is \(\overline{C}\)-Lipschitz. On the other hand, let's say that each individual function \(f_{\mathbf{\theta}}(\mathbf{x},\zeta)\) has Lipschitz constant \(C_{\zeta}\). Hence we can further reduce the upper bound to
\[\operatorname{Var}_{\zeta}(f_{\mathbf{\theta}}(\mathbf{x},\zeta))\leq 3\, \overline{C}^{2}\,\|\mathbf{x}-\mathbf{x}^{\prime}\|^{2}+3\,\operatorname{ Var}_{\zeta}(f_{\mathbf{\theta}}(\mathbf{x}^{\prime},\zeta))+3\,\mathbb{E}\,_{ \zeta}\,C_{\zeta}^{2}\|\mathbf{x}-\mathbf{x}^{\prime}\|^{2}\,. \tag{11}\]
Now, we bring back the outer expectation with respect to samples from the test set, i.e., \(\mathbf{x}\sim S^{\prime}\):
\[\mathbb{E}\,_{\mathbf{x}\sim S^{\prime}}\operatorname{Var}_{\zeta}(f_{\mathbf{ \theta}}(\mathbf{x},\zeta))\leq 3\,\mathbb{E}\,_{\mathbf{x}\sim S^{\prime}} \operatorname{\overline{C}^{2}}\,\|\mathbf{x}-\mathbf{x}^{\prime}\|^{2}+3\, \mathbb{E}\,_{\mathbf{x}\sim S^{\prime}}\,\operatorname{Var}_{\zeta}(f_{\mathbf{ \theta}}(\mathbf{x}^{\prime},\zeta))+3\,\mathbb{E}\,_{\mathbf{x}\sim S^{ \prime}}\,\mathbb{E}\,_{\zeta}\,C_{\zeta}^{2}\|\mathbf{x}-\mathbf{x}^{\prime}\| ^{2}\]
Notice that while the Lipschitz constant of the neural network function does depend on the training data, the above expectation is with respect to samples from the test set. Hence, we can take the Lipschitz constants that appear above outside of the expectation. Besides, the middle term on the right-hand side has no dependency on the test sample \(\mathbf{x}\sim S^{\prime}\) and so the expectation goes away. Overall, this yields,
\[\mathbb{E}\,_{\mathbf{x}\sim S^{\prime}}\operatorname{Var}_{\zeta}(f_{\mathbf{ \theta}}(\mathbf{x},\zeta))\leq 3\,(\overline{C}^{2}+\overline{C_{\zeta}^{ \prime}}^{2})\,\mathbb{E}\,_{\mathbf{x}\sim S^{\prime}}\,\|\mathbf{x}-\mathbf{ x}^{\prime}\|^{2}+3\,\operatorname{Var}_{\zeta}(f_{\mathbf{\theta}}(\mathbf{x}^{ \prime},\zeta)) \tag{12}\]
where, for simplicity, we have denoted the Lipschitz constant \(C_{\zeta}\) averaged over the random seeds \(\zeta\), as \(\overline{C_{\zeta}}\). We can simplify the above upper bounds by taking \(\mathbf{x}^{\prime}=\mathbf{0}\) as the vector of all zeros, resulting in:
\[\mathbb{E}\,_{\mathbf{x}\sim S^{\prime}}\operatorname{Var}_{\zeta}(f_{\mathbf{\theta}}(\mathbf{x},\zeta))\leq 3\,(\overline{C}^{2}+\overline{C_{\zeta}}^{2})\,\mathbb{E}\,_{\mathbf{x}\sim S^{\prime}}\,\|\mathbf{x}\|^{2}+3\,\operatorname{Var}_{\zeta}(f_{\mathbf{\theta}}(\mathbf{0},\zeta)) \tag{13}\]
which now contains the variance of the network function at the input \(\mathbf{0}\), computed over the various seeds. As a shorthand, we denote \(\mathbb{E}\,_{\mathbf{x}\sim S^{\prime}}\,\|\mathbf{x}\|^{2}\) by \(r_{S^{\prime}}^{2}\).
At the cost of loosening the above upper bound, we can simplify it by using the fact that \(\overline{C}\leq\overline{C_{\zeta}}\). This is because (using Jensen's inequality):
\[\|\overline{f_{\mathbf{\theta}}}(\mathbf{x})-\overline{f_{\mathbf{\theta }}}(\mathbf{x}^{\prime})\| =\|\mathbb{E}\,_{\zeta}[f_{\mathbf{\theta}}(\mathbf{x},\zeta)-f_{\mathbf{ \theta}}(\mathbf{x}^{\prime},\zeta)]\|\] \[\leq\mathbb{E}\,_{\zeta}\|f_{\mathbf{\theta}}(\mathbf{x},\zeta)-f_{ \mathbf{\theta}}(\mathbf{x}^{\prime},\zeta)\|\] \[\leq\mathbb{E}\,_{\zeta}C_{\zeta}\|\mathbf{x}-\mathbf{x}^{\prime} \|=\overline{C_{\zeta}}\|\mathbf{x}-\mathbf{x}^{\prime}\|\,.\]
Hence, we have:
\[\mathbb{E}\,_{\mathbf{x}\sim S^{\prime}}\operatorname{Var}_{\zeta}(f_{\mathbf{ \theta}}(\mathbf{x},\zeta))\leq 6\,r_{S^{\prime}}^{2}\,\overline{C_{\zeta} }^{2}+3\,\operatorname{Var}_{\zeta}(f_{\mathbf{\theta}}(\mathbf{0},\zeta)) \tag{14}\]
As \(r_{S^{\prime}}^{2}\) is independent of the network size (and, e.g., is simply unity when the data lie on the unit sphere), the variance term in the bias-variance tradeoff will be largely dictated by the average Lipschitz constant of the function over the random seeds. Moreover, close to interpolation and beyond, the bias term will be rather small. Hence, it is the Lipschitz constant that controls the upper bound on the generalization error.
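As a sanity check of this derivation, the bound of Equation 14 can be probed numerically. The following minimal Python sketch (a toy illustration, not the evaluation code used for the figures below) builds a small ensemble of randomly initialised one-hidden-layer ReLU networks, estimates the left-hand side directly, and replaces each \(C_{\zeta}\) by the spectral-norm product of the weight matrices, which upper-bounds the true Lipschitz constant and therefore can only loosen the inequality.

```python
# Toy numerical check of the variance bound in Eq. (14); the spectral-norm
# product ||W2|| * ||W1|| is used as a stand-in upper bound for C_zeta.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n_seeds, n_test = 20, 64, 10, 8, 256
X_test = rng.normal(size=(n_test, d_in)) / np.sqrt(d_in)   # stand-in test set S'

def forward(W1, W2, X):
    return np.maximum(X @ W1.T, 0.0) @ W2.T                 # bias-free ReLU MLP

outs, outs_at_zero, C_zeta = [], [], []
for seed in range(n_seeds):
    r = np.random.default_rng(seed)
    W1 = r.normal(size=(d_hidden, d_in)) / np.sqrt(d_in)
    W2 = r.normal(size=(d_out, d_hidden)) / np.sqrt(d_hidden)
    outs.append(forward(W1, W2, X_test))
    outs_at_zero.append(forward(W1, W2, np.zeros((1, d_in))))
    C_zeta.append(np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2))

outs = np.stack(outs)                                        # (seed, test, out)
lhs = np.mean(np.sum(np.var(outs, axis=0), axis=-1))         # E_x Var_zeta f(x)
r2 = np.mean(np.sum(X_test**2, axis=-1))                     # r_{S'}^2
var_at_zero = np.sum(np.var(np.stack(outs_at_zero), axis=0))
rhs = 6 * r2 * np.mean(C_zeta) ** 2 + 3 * var_at_zero        # Eq. (14)
print(f"LHS = {lhs:.3f}  <=  RHS = {rhs:.3f}")
```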
### Empirical Evaluation
Bias-Variance tradeoff. In Figure 8, we present empirical results relating the variance and the test loss. We use the same sweep of FF ReLU networks with varying width on the MNIST1D dataset, trained with MSE loss, as in the Double Descent section 3.
Variance upper bounds. We also show empirical calculations for the bounds in Equations 13 and 14, presented in Figure 9. The empirical data were collected from the same models introduced in the previous paragraph.
## 5 Lipschitz Constant and the Optimisation procedure
In this section we explore how the choice of the optimisation strategy affects the Lipschitz constant of the network. By optimisation strategy we mean the choice of the loss function and of the optimisation algorithm. We observe the effect of each choice separately.
Figure 8: Log-log plots of Variance and test loss against model width. Results are presented from training FF ReLU networks with varying width on MNIST1D using MSE loss and SGD optimiser. The expectations over random initialisations were computed as an average of 4 seeds. More details in appendix S1.
Figure 10: Plot of Lipschitz constant bounds for FF ReLU network with 1 hidden layer with 256 neurons, trained using **Cross-Entropy loss and MSE loss**. Both models were trained on MNIST1D with SGD using the same LR and LR scheduler. Results are averaged over 4 runs. More details in appendix S1.
Figure 9: Log-log plot of Variance and its upper bounds against model width. Results are presented from training FF ReLU networks with varying width on MNIST1D using MSE loss and SGD optimiser. The expectations over random initialisations were computed as an average of 4 seeds. More details in appendix sections S1 and S2.2.
### Effect of the loss function: CE vs MSE
Figure 10 shows the evolution of the Lipschitz constant for the FF ReLU models, trained on MNIST1D with SGD using Cross-Entropy and MSE loss.
The plots show that the bounds for MSE loss are an order of magnitude smaller than the ones for Cross-Entropy. Our hypothesis is that this occurs due to the magnitude of the possible values at the output. Since for the classification task with MSE loss our model ideally has to output a one-hot vector, the range of the norm of possible output values is smaller than in the case of Cross-Entropy, where the outputs are further passed through a Softmax. Note that we compute the Lipschitz constant with respect to the output of the model without Softmax, since classification can still be done by taking the argmax of the output.
Lipschitz evaluation including Softmax. To support our argument, we also compute the value of the lower Lipschitz constant bound for the FF ReLU 256 network trained with Cross-Entropy, with an additional Softmax layer applied. The results in Figure 11 empirically verify that the Softmax pushes the lower bound of the Cross-Entropy-trained network closer to the range of values of the MSE-trained network.
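For reference, one common way to obtain such an empirical lower bound is to take the largest input-Jacobian spectral norm over a batch of points; the sketch below (an illustration with a stand-in FF ReLU model, not the exact evaluation pipeline used for the figures) computes it for the raw logits and for the Softmax-composed output.

```python
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(40, 256), torch.nn.ReLU(),
                          torch.nn.Linear(256, 10))

def lower_bound(f, xs):
    # max_x ||J_f(x)||_2 is a valid lower bound on the Lipschitz constant of f
    best = 0.0
    for x in xs:
        J = torch.autograd.functional.jacobian(f, x.unsqueeze(0)).reshape(10, -1)
        best = max(best, torch.linalg.matrix_norm(J, ord=2).item())
    return best

xs = torch.randn(64, 40)
print("lower bound, logits :", lower_bound(lambda x: net(x), xs))
print("lower bound, softmax:", lower_bound(lambda x: torch.softmax(net(x), -1), xs))
```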
### Effect of the optimisation algorithm: SGD vs Adam
In Figure 12, we show the evolution of the Lipschitz constant for the FF ReLU models trained on MNIST1D using Cross-Entropy with the SGD and Adam optimisers.
Figure 11: Plot of lower Lipschitz constant bounds for FF ReLU network with 1 hidden layer with 256 neurons, trained using **Cross-Entropy loss and MSE loss**. A variant of **FF ReLU trained on Cross-Entropy with Softmax applied** is included as well. All models were trained on MNIST1D with SGD using the same LR and LR scheduler. Results are averaged over 4 runs. More details in appendix S1.
Figure 12: Plot of Lipschitz constant bounds for FF ReLU network with 1 hidden layer with 256 neurons, trained using **SGD and Adam**. Both models were trained on MNIST1D with Cross-Entropy, using the same LR and LR scheduler. Results are averaged over 4 runs. More details in appendix S1.
Figure 12 displays how Adam significantly increases all Lipschitz bounds, and we hypothesize that this might be linked to the nature of Adam optimisation. More concretely, we find that when training with Adam the final model parameters travel further from initialisation than with SGD. This is illustrated in Figure 13, where we plot the norm of the parameter-vector difference between the current epoch and initialisation, \(\|\mathbf{\theta}_{t}-\mathbf{\theta}_{0}\|_{2}\), over the epochs. We leave a thorough exploration of this facet for future work.
We note that this trend also holds even if we train the network with SGD for more epochs; the respective plots are in appendix S2.
An interesting by-product of the above finding is that \(\|\mathbf{\theta}_{t}-\mathbf{\theta}_{0}\|_{2}\) itself also exhibits Double Descent with increasing network width. Results of the sweep are shown in Figure 14.
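For completeness, the distance-from-initialisation curve used in Figures 13 and 14 can be logged with a few lines; the sketch below uses a stand-in model, random data, and an assumed training loop rather than our actual experimental setup.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(40, 256), torch.nn.ReLU(),
                            torch.nn.Linear(256, 10))
theta0 = torch.nn.utils.parameters_to_vector(model.parameters()).detach().clone()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
x, y = torch.randn(512, 40), torch.randint(0, 10, (512,))   # stand-in data

for epoch in range(20):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    theta_t = torch.nn.utils.parameters_to_vector(model.parameters())
    print(epoch, (theta_t - theta0).norm().item())           # ||theta_t - theta_0||_2
```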
## 6 Miscellaneous Experiments
In this section we present further experiments that probe dependencies between the Lipschitz constant of the network and other learning parameters. In particular, we show how the Lipschitz constant depends on depth, on the number of training samples, and on the amount of label noise.
### Lipschitz constant and the effect of Depth
To study the effects of depth we train a series of FF ReLU models with a varying number of hidden layers, from 1 to 5. A comparison of the evolution of the Lipschitz constant bounds is presented in Figure 15, and a summary plot is displayed in Figure 16.
Figure 14: Log-Log plots of Norms of the parameter difference and train and test losses by width for **FF ReLU networks** with 1 hidden layer with varying number of hidden neurons. Models were trained using **Cross-Entropy loss** and SGD optimiser with learning rate 0.005 and Warmup20000Step25 learning rate scheduler. Results are presented for the last epoch for each individual model and are averaged over 4 runs. More details in appendix S1.
Figure 13: Plot of the norm of the parameter difference \(\|\mathbf{\theta}_{t}-\mathbf{\theta}_{0}\|_{2}\) over epochs for FF ReLU network with 1 hidden layer with 256 neurons, trained using **SGD and Adam**. Both models were trained on MNIST1D with Cross-Entropy, using the same LR and LR scheduler. Results are averaged over 4 runs. More details in appendix S1.
#### 6.1.1 At Convergence
#### 6.1.2 At Initialization
According to the plots, all Lipschitz bounds increase multiplicatively with each subsequent layer. Note that the increase is not immediate: deeper models start rapidly increasing their Lipschitz constant later than shallower models, especially in the case of the lower bound. An interesting fact is that
Figure 16: Summary plot of Lipschitz constant bounds against the **number of hidden layers** of FF ReLU network **at the last epoch**. All models are trained on MNIST1D using Cross-Entropy loss and SGD. More details in appendix S1.
Figure 17: Summary plot of Lipschitz constant bounds against the **number of hidden layers** of FF ReLU network **at initialisation**. All models are trained on MNIST1D using Cross-Entropy loss and SGD. More details in appendix S1.
Figure 15: Plots of Lipschitz constant bounds against epochs for **various number of hidden layers** of FF ReLU network. All models are trained on MNIST1D using Cross-Entropy loss and SGD. More details in appendix S1.
the aforementioned trend does not hold for networks at initialisation, both in the case of increasing depth (Figure 17) and of increasing width (Figure 18). Consequently, we also see how the effect of feature learning is manifested in the bounds of the Lipschitz constant, and that looking solely at initialisation (as in the style of the lazy regime [29]) would be insufficient.
### Lipschitz constant and the effect of the number of training samples
In this section we present Lipschitz constant bounds for the FF ReLU model with width 256 trained on various sizes of MNIST1D: 4000, 1000, 500, and 100 training samples. Results are presented in Figure 19.
Figure 19 shows that increasing the number of samples in the training dataset increases the Lipschitz constant. This suggests that the network needs to become "less smooth" as the complexity of the fitted function increases in order to fit a larger number of points. It would be interesting to precisely tease out this behaviour in terms of relevant theoretical quantities, but we leave that for future work.
### Lipschitz constant and the effect of Label Noise
Here we train an FF ReLU model with width 256 on modified versions of MNIST1D in which the labels are shuffled. We use datasets where 0%, 10%, 15%, 20%, 25%, 50%, 75%, and 100% of the labels are shuffled. Results are presented in Figure 21.
Figure 21 shows that increasing the amount of shuffling reduces the lower and average-norm Lipschitz constant bounds. These observations show that increasing label noise makes the learned function overly smooth, which interferes with the ability of the network to generalize.
Figure 19: Plots of Lipschitz constant bounds against epochs for **various number of samples in the training dataset** for FF ReLU 256 network. All models are trained on MNIST1D using Cross-Entropy loss and SGD. Results are averaged over 4 runs. More details in appendix S1.
Figure 18: Log-log plot of Lipschitz constant bounds against the **width** of FF ReLU network **at initialisation**. All models are trained on MNIST1D using Cross-Entropy loss and SGD. More details in appendix S1.
## 7 Conclusion
To conclude, we have presented a wide-ranging study of the trends of the Lipschitz constant in various learning settings. First, we have shown evolution trends of various Lipschitz constant bounds for different networks, displaying the tightness of the simple lower bound with respect to the true Lipschitz constant. Second, we looked at the behaviour of the Lipschitz constant in the Double Descent setting and have shown that both the lower and upper Lipschitz bounds exhibit trends similar to the test loss. We next presented theoretical statements connecting the Lipschitz constant with the variance term in the generalisation bound, and provided experimental support for them. In the subsequent sections, we discussed the effect of the choice of loss (Cross-Entropy versus MSE), of the optimisation algorithm (SGD versus Adam), of the depth of the network, and of training-set modifications such as varying dataset size and label noise.
Further research directions. We hope that this work inspires further research on uncovering and understanding the characteristics of the Lipschitz constant. One potential avenue for investigation is to explore more complex model classes, such as ResNets or different types of Transformers [30]. Additionally, it would be of great interest to compare tighter Lipschitz constant bounds proposed in the literature with the results presented in this study. Another promising area for future research is to examine how various forms of input noise affect the Lipschitz constant.
Figure 21: Plots of Lipschitz constant bounds against epochs for **various amounts of label shuffling of the training dataset** for FF ReLU 256 network. Results are averaged over 4 runs. All models are trained on MNIST1D using Cross-Entropy loss and SGD. More details in appendix S1.
Figure 20: Summary log-log plot of Lipschitz constant bounds against the **number of training samples** in MNIST1D. Plotted for the FF ReLU 256 networks at their last epoch. All models are trained using Cross-Entropy loss and SGD. Results are averaged over 4 runs. More details in appendix S1.
## Acknowledgements
We would like to thank Thomas Hofmann, Bernhard Schölkopf, and Aurelien Lucchi for their useful comments and suggestions. We are also grateful to the members of the DALab for their support. Sidak Pal Singh would like to acknowledge the financial support from Max Planck ETH Center for Learning Systems.
|
2307.13589 | Properties of binary systems in a one-dimensional approximation | Evolutionary calculations for stars in close binary systems are in high
demand to obtain better constraints on gravitational wave source progenitors,
understand transient events from stellar interactions, and more. Modern
one-dimensional stellar codes make use of the Roche lobe radius $R_{\rm L}$
concept in order to treat stars in binary systems. If the stellar companion is
approaching its $R_{\rm L}$, mass transfer treatment is initiated. However, the
effective acceleration also affects the evolution of a star in a close binary
system. This is different from the gravity inside a single star, whether that
single star is rotating or not. Here, we present numerically obtained tables of
properties of stars in a binary system as a function of the effective
potential: volume-equivalent radii of the equipotential surfaces, effective
accelerations and the inverse effective accelerations averaged over the same
equipotential surfaces, and the properties of the L1 plane cross-sections. The
tables are obtained for binaries where the ratios of the primary star mass to
the companion star mass are from $10^{-6}$ to $10^5$ and include equipotential
surfaces up to the star's outer Lagrangian point. We describe the numerical
methods used to obtain these quantities and report how we verified the
numerical results. We also describe and verify the method to obtain the
effective acceleration for non-point mass distributions. We supply a sample
code showing how to use our tables to get the average effective accelerations
in one-dimensional stellar codes. | Ali Pourmand, Natalia Ivanova | 2023-07-25T15:47:29Z | http://arxiv.org/abs/2307.13589v1 | # Properties of binary systems in a one-dimensional approximation
###### Abstract
Evolutionary calculations for stars in close binary systems are in high demand to obtain better constraints on gravitational wave source progenitors, understand transient events from stellar interactions, and more. Modern one-dimensional stellar codes make use of the Roche lobe radius \(R_{\rm L}\) concept in order to treat stars in binary systems. If the stellar companion is approaching its \(R_{\rm L}\), mass transfer treatment is initiated. However, the effective acceleration also affects the evolution of a star in a close binary system. This is different from the gravity inside a single star, whether that single star is rotating or not. Here, we present numerically obtained tables of properties of stars in a binary system as a function of the effective potential: volume-equivalent radii of the equipotential surfaces, effective accelerations and the inverse effective accelerations averaged over the same equipotential surfaces, and the properties of the \(L_{1}\) plane cross-sections. The tables are obtained for binaries where the ratios of the primary star mass to the companion star mass are from \(10^{-6}\) to \(10^{5}\) and include equipotential surfaces up to the star's outer Lagrangian point. We describe the numerical methods used to obtain these quantities and report how we verified the numerical results. We also describe and verify the method to obtain the effective acceleration for non-point mass distributions. We supply a sample code showing how to use our tables to get the average effective accelerations in one-dimensional stellar codes.
Multiple star evolution -- Binary stars -- Roche Lobe -- Lagrange points
## 1 Introduction
Binary stars are stellar systems consisting of two stars that are gravitationally bound together and orbiting around each other. If the radius of one of the stars during the course of its evolution becomes comparable to the orbital separation, the binary is termed a close binary. In a close binary, such phenomena as tidal spin-up, stable mass transfer (MT), or unstable MT (common envelope, CE, evolution) can occur. The evolution of each star in a binary system can be significantly altered from the evolution of a similar but evolved-in-isolation star. Substantial attention is now given in modern one-dimensional (1D) stellar codes to treating the evolution of stars on their way toward the start of MT. The structure of the donor star, specifically of its envelope, at the start of MT - when the volume of the donor star is approaching the volume of its Roche lobe - plays a crucial role in determining whether the MT proceeds stably or unstably.
Upon approaching contact, each star in a binary system is affected strongly by the gravitational field of its companion and by the binary's orbital motion. However, the primary effect that is considered by 1D stellar evolutionary codes is the size of the Roche lobe of the donor star. Recently, it has become appreciated that the donor star may remain in a state of substantial Roche lobe overflow (RLOF)1 for an evolutionarily noticeable time while keeping MT in a stable regime, appearing, for example, as Ultra-Luminous X-ray sources (Pavlovskii et al., 2017). Furthermore, the binding energy of the donor's envelope at the onset of CE, following the MT while the donor significantly overfills its Roche lobe, plays an essential role in the initial conditions of three-dimensional (3D) simulations of common
envelope events (Ivanova et al., 2020). When a star is close to its Roche lobe radius, or overfills it, the outer parts of the stellar envelope are strongly affected by the effective acceleration in a binary system, and this effective acceleration is different from the one that a single star experiences. While the knowledge of what is happening to the star in this regime has been around for about 60 years (Kopal, 1959), this physics has not yet been included in detail in 1D stellar calculations.
The main goal of this paper is to provide the community with the database we have constructed. That database contains the various properties of binary stellar systems and the code that allows anyone to easily use binary physics in their future binary or MT numerical studies using 1D codes. We review in § 2 the physics that becomes relevant when simulating the gravitational field of a binary star. In § 3 we discuss the assumptions made to approximate binary stars in a 1D scheme, and the numerical methods employed to obtain volume-equivalent radii and average gravitational accelerations for our tables; we also obtain some analytical expressions which determine how this gravitational acceleration behaves close to the center of the donor star and allow verification of the numerical method in this regime. In § 4 we verify our results by comparing them to various published results and present self-convergence checks. In § 5 we discuss the effects of a non-point mass on the binary potentials, and how to use our tables in the case of non-point masses. This Paper I is devoted to the numerical methods only. The application of the tables that we obtain for binary evolution, and the scientific outcomes, are described in the follow-up Paper II.
## 2 Binary in a corotating frame
Here we review the theory of the binary's effective potential and introduce the specific quantities that we obtain numerically and present in the database.
### Effective binary potential.
We consider a coordinate system in corotation with the binary, with the origin at the donor star's center. The \(xy\) plane is the plane of the orbit, and the corotating frame rotates around the axis that passes through the center of mass of the binary system. The effective potential \(\Psi\) is then
\[\Psi(X,Y,Z) = -\frac{GM_{1}}{|R_{1}|}-\frac{GM_{2}}{|R_{2}|} \tag{1}\] \[-\frac{1}{2}\Omega^{2}\left[(X-a\frac{M_{2}}{M_{1}+M_{2}})^{2}+Y ^{2}\right]\,\]
where \(M_{1}\) and \(M_{2}\) are the donor and companion star masses, \(a\) is the orbital separation, \(\Omega=\sqrt{G(M_{1}+M_{2})/a^{3}}\) is the binary system's orbital angular velocity, and \(R_{1}\) and \(R_{2}\) are the distances of each element to the donor star's and companion star's centers,
\[|R_{1}| = \sqrt{X^{2}+Y^{2}+Z^{2}}\,\] \[|R_{2}| = \sqrt{(X-a)^{2}+Y^{2}+Z^{2}}. \tag{2}\]
We construct a unitless (or scaled) potential \(\xi\) following the convention of Mochnacki (1984),
\[\Psi\equiv-\frac{G(M_{1}+M_{2})}{2a}\ \xi. \tag{3}\]
We introduce the mass ratio \(q\)
\[q\equiv\frac{M_{1}}{M_{2}}\, \tag{4}\]
and unitless distances
\[x\equiv\frac{X}{a},\ y\equiv\frac{Y}{a},\ z\equiv\frac{Z}{a},\ |r_{1}|\equiv \frac{|R_{1}|}{a},\ |r_{2}|\equiv\frac{|R_{2}|}{a}. \tag{5}\]
Then \(\xi\) can be written as
\[\xi(x,y,z) = \frac{q}{(1+q)}\ \frac{2}{|r_{1}|}+\frac{1}{1+q}\ \frac{2}{|r_{2}|} \tag{6}\] \[+\left[(x-\frac{1}{1+q})^{2}+y^{2}\right].\]
Example contour plots of the unitless equipotentials for several mass ratios (two-dimensional slices for \(xy\) plane and \(xz\) plane) are shown in Figure 1. In the case of a binary considered in a corotating frame, five equilibrium points exist that correspond to local extrema of the effective potential. The most important for studies of binary interactions are the positions of unstable equilibrium \(L_{1}\), \(L_{2}\), and \(L_{3}\), located on the line through the centers of the two large bodies (see examples shown in Figure 1). In this paper, we will refer to \(L_{1}\) as the inner Lagrangian point and \(L_{2}\) and \(L_{3}\) as the outer Lagrangian points. The outer Lagrangian point closest to the donor star is \(L_{2}\) for \(q<1\), and is \(L_{3}\) for \(q>1\).
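For illustration, the scaled potential of Equation 6 and the location of \(L_{1}\) on the line joining the two centers can be evaluated with a few lines of Python; the sketch below (a simplified stand-alone example, not the production code described in this paper) finds \(x_{L_{1}}\) by a bracketed root search of \(d\xi/dx\) on the \(x\)-axis.

```python
# Point-mass binary, coordinates in units of the orbital separation.
import numpy as np
from scipy.optimize import brentq

def xi(x, y, z, q):
    r1 = np.sqrt(x**2 + y**2 + z**2)
    r2 = np.sqrt((x - 1.0)**2 + y**2 + z**2)
    return 2*q/((1 + q)*r1) + 2/((1 + q)*r2) + (x - 1/(1 + q))**2 + y**2

def dxi_dx_on_axis(x, q):
    # derivative of xi along the line joining the two centres (y = z = 0, 0 < x < 1)
    return -2*q/((1 + q)*x**2) + 2/((1 + q)*(1 - x)**2) + 2*(x - 1/(1 + q))

q = 2.0
x_L1 = brentq(lambda x: dxi_dx_on_axis(x, q), 1e-6, 1 - 1e-6)
print("x_L1 =", x_L1, " xi_L1 =", xi(x_L1, 0.0, 0.0, q))
```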
### Effective acceleration.
We are interested in finding the unitless effective acceleration \(\eta\), acting in the direction normal to the equipotential surfaces. This is the gradient of the potential, which is perpendicular to the equipotential surface at each point,
\[\eta=-\nabla\xi. \tag{7}\]
Figure 1: Two-dimensional slices showing the scaled potential \(\xi\) in a binary corotating frame for three mass ratios: \(q=0.2,2\), and \(5\), from top to bottom. Shown are slices in the \(xy-\)plane (left panels) and \(xz-\)plane (right panels). The color bar represents the chosen range of the scaled potential \(\xi\) at each point. We plot a narrow range of equipotentials for a better resolution in the region of interest; the white areas are where \(\xi\) is outside the chosen ranges. The coordinates are in units of the orbital separation, and the Lagrange points have been denoted by \(Li\). The black line passing through \(L_{1}\) is the \(L_{1}\)-plane (the plane through the first Lagrange point and perpendicular to the line attaching the two stars). The equipotential surfaces passing through \(L_{1}\), \(L_{2}\), and \(L_{3}\) are shown with black lines.
It is easiest to find this gradient of the potential by using its \(x,y,\) and \(z\) components, \(\eta_{x}\), \(\eta_{y}\), and \(\eta_{z}\), respectively,
\[\eta_{x} = \frac{2q}{1+q}\frac{x}{|r_{1}|^{3}}+\frac{2}{1+q}\frac{(x-1)}{|r_{2}|^{3}}-2\left[x-\frac{1}{1+q}\right]\, \tag{8}\] \[\eta_{y} = \frac{2q}{1+q}\frac{y}{|r_{1}|^{3}}+\frac{2}{1+q}\frac{y}{|r_{2}|^{3}}-2y\, \tag{9}\] \[\eta_{z} = \frac{2q}{1+q}\frac{z}{|r_{1}|^{3}}+\frac{2}{1+q}\frac{z}{|r_{2}|^{3}}. \tag{10}\]
A plot depicting \(\eta_{x}\), \(\eta_{y}\), and \(\eta_{z}\) can be seen in Figure 2. The unitless effective acceleration \(\eta\) is then given by
\[\eta=\sqrt{\eta_{x}^{2}+\eta_{y}^{2}+\eta_{z}^{2}}. \tag{11}\]
Contours of the effective acceleration are shown in Figure 3. The effective acceleration in physical units, \(g_{\rm eff}\), can then be recovered as:
\[g_{\rm eff}=\frac{G(M_{1}+M_{2})}{2a^{2}}\eta. \tag{12}\]
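A direct evaluation of Equations 8-12 is straightforward; the short sketch below (a stand-alone illustration in cgs units, not the code used to produce our tables) returns the physical effective acceleration at an arbitrary point for given \(M_{1}\), \(q\), and \(a\).

```python
import numpy as np

G = 6.674e-8  # cgs

def eta(x, y, z, q):
    # unitless effective acceleration, Eqs. (8)-(11); coordinates in units of a
    r1 = np.sqrt(x**2 + y**2 + z**2)
    r2 = np.sqrt((x - 1.0)**2 + y**2 + z**2)
    ex = 2*q/(1 + q) * x/r1**3 + 2/(1 + q) * (x - 1.0)/r2**3 - 2*(x - 1/(1 + q))
    ey = 2*q/(1 + q) * y/r1**3 + 2/(1 + q) * y/r2**3 - 2*y
    ez = 2*q/(1 + q) * z/r1**3 + 2/(1 + q) * z/r2**3
    return np.sqrt(ex**2 + ey**2 + ez**2)

def g_eff(x, y, z, q, M1, a):
    M2 = M1 / q
    return G * (M1 + M2) / (2.0 * a**2) * eta(x, y, z, q)   # Eq. (12)

# example: q = 2, M1 = 2 Msun, a = 10 Rsun, a point in the orbital plane
print(g_eff(0.3, 0.1, 0.0, 2.0, 2.0 * 1.989e33, 10.0 * 6.957e10))
```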
## 3 One-dimensional representation
As shown in Figures 1 and 3, for stars in a binary system there is spherical symmetry for neither the effective potential nor the effective acceleration. Close to the center of each star, a close-to-spherical symmetry can be observed, but the further away from the center of each star one goes, the more significant the influence of its companion and of the orbital motion becomes. The closer one gets to the \(L_{1}\) point, the closer the shape of the equipotentials gets to a teardrop shape. Outside of the \(L_{1}\) point, the equipotentials take on a shape similar to a peanut. The effective acceleration, on the other hand, vanishes entirely at \(L_{1}\).
Our goal here is to develop a method to approximate 3D binary stars' gravitational fields for 1D considerations. Our approach is based on the concept of equipotential shells (3D shells that enclose the center of the stars). We will only focus on the donor star for the remainder of the paper. A 3D depiction of a sample equipotential shell outside of the Roche lobe and limited by the \(L_{1}\)-plane can be seen in Figure 4. The \(L_{1}\)-plane is the plane that is parallel to the \(y-z\) plane, and passes through the \(L_{1}\) point.
The key physical property of the equipotential shells is that, in the case of hydrostatic equilibrium, both isobaric surfaces and surfaces of constant density coincide with equipotential surfaces. Therefore here we consider averaging the effective acceleration that affects the star
Figure 2: \(\eta_{x}\), \(\eta_{y}\), and \(\eta_{z}\) shown for mass ratio \(q=2\). The coordinates are in units of the orbital separation.
Figure 3: Two-dimensional slices showing the contours of the effective acceleration \(\eta\), in a binary corotating frame, for three mass ratios: \(q=0.2\), \(2\), and \(5\), from top to bottom. Shown are slices in the \(xy-\)plane (left panels) and \(xz-\)plane(right panels). The coordinates are in units of the binary separation, and the Lagrange points have been denoted by \(Li\).
over the equipotential surfaces. The effective accelerations are found as a function of the volume-equivalent radii of the same equipotential surfaces.
### Volume-equivalent radii
Each equipotential surface encloses some volume \(V_{\rm equip}\). This volume can be associated with a sphere of the same volume. This sphere has a so-called volume-equivalent radius
\[R_{\rm eq}=\sqrt[3]{3V_{\rm equip}\over 4\pi}. \tag{13}\]
The idea of the volume-equivalent radius was introduced in the past (see, for example, Kopal, 1959). The best known numerical fitting of 3D integrations for the unitless volume-equivalent Roche lobe radius is the famous equation of Eggleton (1983)
\[r_{L_{1}}={0.49q^{2/3}\over 0.6q^{2/3}+\ln(1+q^{1/3})}. \tag{14}\]
The physical volume-equivalent Roche lobe radius of the star \(R_{L_{1}}\) then is found by multiplying \(r_{L_{1}}\) by the orbital separation \(a\), \(R_{L_{1}}=r_{L_{1}}a\).
For volume integrations, we use spherical coordinates. We define \(i\) and \(j\) as the indices describing the directions in \(\varphi\) (azimuthal angle) and \(\vartheta\) (polar angle) for integrations, where \(\varphi\in[0,\pi]\) and \(\vartheta\in[0,\pi/2]\)2. We divide each of these domains into \(n_{\varphi}\) and \(n_{\vartheta}\) intervals, respectively (we adopt \(n_{\varphi}=2n_{\vartheta}\)). We define \(k\) to be the index describing the potential shell with \(\xi_{k}\). The \(r_{ijk}\) is the location of the equipotential shell \(\xi_{k}\) in each angular direction \(ij\), or, specifically, the distance to the origin \(r\) for the given angles \(\varphi\) and \(\vartheta\). \(r_{ijk}\) is found iteratively until the convergence condition
Footnote 2: The values of unitless acceleration, \(\eta\), and equipotentials, \(\xi\), in a binary system have the symmetry along \(z\)-direction with respect to the orbital plane \(xy\), as well as along \(y\)-direction with respect to \(xz\)-plane. Therefore, only integrations in \(\varphi\in[0,\pi]\) and \(\vartheta\in[0,\pi/2]\) are needed.
\[\left|{\xi(r_{ijk},\varphi_{i},\vartheta_{j})-\xi_{k}\over\xi_{k}}\right|\leq 10^{-12} \tag{15}\]
is satisfied. Here \(\xi(r,\varphi,\vartheta)\) is the potential at the location \((r,\varphi,\vartheta)\), while \(\xi_{k}\) is the potential for the equipotential surface for which we search.
The numerical integrator uses spherical volume elements in each angular direction \(\varphi\) and \(\vartheta\), and the volume between the \(k_{th}\) shell and \((k-1)_{th}\) shell in the potential is
\[\Delta V_{\rm sph,k}=r_{ijk}^{2}\sin\vartheta_{j}\Delta r_{ijk}\Delta\varphi \Delta\vartheta. \tag{16}\]
\(\Delta r_{ijk}\) is the step in the \(ij\) direction. This is not constant for the entire potential shell \(k\) but depends on the location of the currently sought potential \(\xi_{k}\) in \(ij\)-direction, and the location of the previous potential, \(\xi_{k-1}\). Using the symmetry of the equipotential in a binary system, we perform the volume integration in the angular domain as above and then multiply by 4.
The volume integrations are limited by the outer Lagrange point's equipotential shell of the donor star, and the \(L_{1}\) plane. These limitations affect volume integrations for shells that are located outside the \(L_{1}\) equipotential. The \(L_{1}\) plane is depicted as a black line in the 2D slice of Figure 1 and as a red plane in the 3D Figure 4.
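To make the procedure concrete, the sketch below (a low-resolution, simplified version restricted to equipotentials strictly inside the Roche lobe, where the \(L_{1}\)-plane truncation is not needed) finds the radius of the target equipotential along every \((\varphi,\vartheta)\) ray by a bracketed root search and sums the spherical volume elements to obtain \(R_{\rm eq}\) of Equation 13.

```python
import numpy as np
from scipy.optimize import brentq

def xi(x, y, z, q):
    r1 = np.sqrt(x**2 + y**2 + z**2)
    r2 = np.sqrt((x - 1.0)**2 + y**2 + z**2)
    return 2*q/((1 + q)*r1) + 2/((1 + q)*r2) + (x - 1/(1 + q))**2 + y**2

def xi_ray(r, phi, theta, q):
    s = np.sin(theta)
    return xi(r*s*np.cos(phi), r*s*np.sin(phi), r*np.cos(theta), q)

def find_x_L1(q):
    # root of d(xi)/dx on the axis between the two centres
    return brentq(lambda x: -2*q/((1 + q)*x**2) + 2/((1 + q)*(1 - x)**2)
                  + 2*(x - 1/(1 + q)), 1e-6, 1 - 1e-6)

def volume_equivalent_radius(xi_target, q, n_theta=100):
    n_phi = 2 * n_theta
    x_L1 = find_x_L1(q)                     # outermost extent of the Roche lobe
    dphi, dtheta = np.pi/n_phi, 0.5*np.pi/n_theta
    volume = 0.0
    for i in range(n_phi):
        phi = (i + 0.5) * dphi
        for j in range(n_theta):
            theta = (j + 0.5) * dtheta
            # radius where this ray crosses the target equipotential
            r = brentq(lambda rr: xi_ray(rr, phi, theta, q) - xi_target,
                       1e-8, x_L1)
            volume += (r**3 / 3.0) * np.sin(theta) * dphi * dtheta
    volume *= 4.0                           # symmetry in y and z
    return (3.0 * volume / (4.0 * np.pi))**(1.0/3.0)

# example: the shell with xi = 1.05 xi_L1 (inside the Roche lobe) for q = 2
q = 2.0
print(volume_equivalent_radius(1.05 * xi(find_x_L1(q), 0.0, 0.0, q), q))
```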
### Effective accelerations
The unitless effective accelerations at each location are found using equations 8, 9, and 10.
The weighted averages of the effective acceleration for each equipotential shell are obtained with the area of the triangular elements on that equipotential surface \(dA_{ij}\)3,
Footnote 3: We compared the performance of triangular mesh area elements and spherical area elements (as did Mochnacki, 1984) and found that the triangle method provides faster convergence to obtain surfaces on equipotential shells. This is especially important for the shells outside the Roche lobe, where element areas of the surfaces of equipotentials do not resemble spherical area elements.
\[\eta_{\rm avg}={\sum_{i}\sum_{j}\eta_{ij}dA_{ij}\over\sum_{i}\sum_{j}dA_{ij }}\, \tag{17}\]
Figure 4: A sample equipotential surface outside of the Roche lobe in a binary star, shown in blue color. The equipotential surface is limited by the \(L_{1}\)-plane, shown in red color. The coordinates are in units of the orbital separation.
where \(i\) and \(j\) are the indices of \(\varphi\) and \(\vartheta\) over which the triangles are summed. Here \(\eta_{ij}\) is the unitless effective acceleration averaged over the three vertices of the triangle.
Figure 5 shows a sample triangular mesh scheme for an equipotential shell.
### Integration zones
The potential is very sensitive to the distance from the center. To accommodate this sensitivity, we divide the integration zones into three regions:
\[\xi_{1} = \xi(0,0,0,q) \tag{18}\] \[\xi_{2} = \xi(0.05x_{L_{1}},0,0,q)\] \[\xi_{3} = \xi(x_{L_{1}},0,0,q)\] \[\xi_{4} = \xi(x_{L,\text{outer}},0,0,q)\]
An example of the three integration regions can be seen in Figure 6.
Near the center, the true binary potential and effective acceleration \(\eta\) are not very different from those in a spherically symmetric single star
\[\xi = \xi_{\text{ss}}+\Delta\xi,\quad\Delta\xi=\left(\frac{3+2q}{2q(1+q )}r+\frac{2q-1}{6q}r^{3}\right)\xi_{\text{ss}}\;,\] \[\eta = \eta_{\text{ss}}+\Delta\eta,\quad\Delta\eta=-\frac{2}{3}\frac{(1+ q)}{q}r^{3}\eta_{\text{ss}}\;. \tag{19}\]
Here \(\xi_{\text{ss}}\) is the unitless effective binary potential in a spherically symmetric star, \(\eta_{\text{ss}}\) is the unitless effective acceleration in a spherically symmetric star, and \(r\) is the unitless distance to the center of the first star (the distance in units of the binary separation). For the derivation, see Appendix C and Equation C7. The expected deviations near the center, \(\Delta\xi\) and \(\Delta\eta\), are very small.
The expected local truncation error due to the limited resolution during numerical integration for any quantity is of the order of \(2\pi^{2}/n_{\varphi}^{2}\) times that quantity. We have checked and verified numerically that the errors of our integrations for volume-equivalent radii and averaged effective accelerations are limited by the adopted angular resolution, as expected from this estimate. The limited numerical precision with which we can obtain solutions near the center does not allow us to resolve the behavior there correctly: for instance, the error in finding volume-equivalent radii is \(\delta_{\text{err}}(R_{\rm eq})\approx(2/3)(\pi^{2}/n_{\varphi}^{2})\) (assuming that \(R_{\text{eq}}\) is of the order of one). However, near the center this error might be comparable to the radius itself and hence leads to an even larger error in finding the effective gravity. We introduce \(r_{\text{an}}\) as the unitless distance to the origin within which the analytical solutions of Equation 19 have to be used, as opposed to the distances \(r>r_{\text{an}}\), where numerical integrations have to be used.
We have verified that for the mass ratios in the range of \([10^{-6},10^{5}]\), at \(r_{\text{an}}=0.05x_{L_{1}}\), the non-sphericity effect is \(\Delta\xi/\xi_{\text{ss}}\lesssim 10^{-4}\), and hence the higher terms do not contribute substantially. This makes Equation 19 directly applicable. The numerical precision to find the potential or the effective acceleration, with \(n_{\varphi}=4000\), is approximately \(10^{-6}\). With this resolution, the numerical "noise" due to truncation error contributes at less
Figure 5: A depiction of a part of the equipotential shell with triangular meshes on it shown in 3D; for a random shell in the case of the mass ratio \(q=0.5\). Only a part of the integration domain is shown, to make the triangular mesh visible.
Figure 6: The three zones of integration; this specific example is for a mass ratio of \(q=2\). The zones are defined as described in Equation 18; see § 3.3 for further details.
than a percent of the obtained values for various quantities for our smallest mass ratio \(q=10^{-6}\), and as little as \(0.01\%\) for all mass ratios \(q>1\). The numerical noise decreases with \(r\), albeit the role of terms neglected in Equation 19 grows. Therefore \(r_{\rm an}=0.05x_{L_{1}}\) is a compromise at which we can still trust the analytic solution but can start to trust the numerical solution for a wide range of mass ratios.
For \(r<r_{\rm an}\) we therefore consider that the potential and effective acceleration should be provided by Equation 19. For the intermediate zone, we start our numerical integrations at \(r=r_{\rm an}\).
In our runs, we use 500 shells between \(\xi_{2}\) and \(\xi_{3}\), equally spaced in distance between \(0.05x_{L1}\) and \(x_{L1}\) and 100 shells between \(\xi_{3}\) and \(\xi_{4}\), equally spaced in the logarithm of the potential between \(\xi_{L1}\) and \(\xi_{L\rm outer}\).
### Properties in the \(L_{1}\) neighborhood cross-section
We also compute several properties at the \(L_{1}\) plane to provide in our database. These properties are functions of unitless equipotential shells passing through the \(L_{1}\) plane. The intersections of equipotential shells with the \(L_{1}\) plane appear as ellipse-like curves on the \(L_{1}\) plane (see Figure 7). The tabulated properties are as follows4:
Footnote 4: Note that all \(L_{1}\)-plane properties are obtained for equipotential shells exceeding the Roche lobe. In the database we provide, all the columns corresponding to these properties are set to zero for all equipotential shells within the Roche lobe.
* The area of the cross-section of the \(L_{1}\) plane and the equipotential shell passing through it, as a function of \(\xi\). The area integration uses polar coordinates with an angular resolution of 4000 zones per quadrant.
* The locations of intersections of this cross-section with the \(xy\)-plane and \(xz\)-plane, as a function of \(\xi\).
* The effective acceleration averaged over the intersection between the equipotential and the \(L_{1}-\) plane.
* The effective acceleration averaged over the entire area of each of these elliptical cross-sections.
## 4 The Numerical Results
### Self-convergence
We have computed the properties of interest for 109 mass ratios in the range of \(\log_{10}q\in[-6,5]\). We have divided this range into three regimes based on the possible applications it could have:
\[\mathbf{A}\colon \log_{10}q\in[-6,-2],\Delta(\log q)=0.25\] \[\mathbf{B}\colon \log_{10}q\in(-2,2],\Delta(\log q)=0.05\] \[\mathbf{C}\colon \log_{10}q\in(2,5],\Delta(\log q)=0.25\]
The first range \(\mathbf{A}\) is relevant to the regime of planets or stars orbiting a supermassive black hole, range \(\mathbf{B}\) is a typical range of mass ratios for two stars in a binary
Figure 8: This plot shows the results of our highest resolution run for the volume equivalent radii passing the first three Lagrange points, as a function of the mass ratio. The horizontal axis is in the units of orbital separation. \(R_{L3}\) volume around the donor star exists only for mass ratios greater than 1. Note that the outer Lagrange point is \(L_{3}\) for mass ratios greater than 1, and is \(L_{2}\) for mass ratios less than 1.
Figure 7: The intersection of several equipotential surfaces with the \(L_{1}\) plane for \(q=0.2\). Axes are in units of the orbital separation.
system, and range \(\mathbf{C}\) could be used in studies where one wants to investigate the effects of the interactions of a star and a Jupiter-like planet, from the star's perspective. The list of all properties provided for each mass ratio can be found in Appendix A, Table 1.
We show the volume-equivalent radii of the equipotentials passing through the first three Lagrange points, as a function of the mass ratio, in Figure 8.
We investigate at which angular resolution our numerical solver self-converges. By a self-convergence test, we mean a numerical study in which the resolution of the numerical integrations is continuously increased, to see a) if the difference between the numerical outcomes at increasing resolutions continuously decreases, without new features appearing, and b) that said difference becomes smaller than the required numerical precision. While self-convergence is often termed simply convergence, it is important to remember that self-convergence does not verify that the numerical results have converged to a correct solution; for the latter, one has to do at least verification tests5. The angular resolutions we considered are \(n_{\varphi}=500,1000,2000\), and \(4000\) for \(\varphi\) in the range \(\varphi\in[0,\pi]\) and \(\vartheta\) in \(\vartheta\in[0,\pi/2]\).
Footnote 5: Validation – i.e., the measure of numerical model accuracy between model predictions and measurements of the real world – is not possible in our case, as for most astrophysical problems.
For a quantity \(A\), obtained with the angular resolution \(m\), we define the normalized error as
\[\frac{\Delta A_{m}}{A}=\frac{A_{m}-A_{4000}}{A_{4000}} \tag{20}\]
We use the highest angular resolution, \(n_{\phi}=4000\), to compare the other angular resolutions with it and see whether the deviation is decreasing with increasing angular resolution or not.
In Figures 9 and 10 we provide typical convergence results for the effective acceleration and the volume-equivalent radii as functions of the fill-out factor \(F\). \(F\) is a unitless function used to compare the potential distributions of different mass ratios up to the outer Lagrange points (which could be \(L_{2}\) or \(L_{3}\)), defined following Mochnacki (1984):
\[F = \frac{\xi_{L_{1}}}{\xi},\text{ for }\xi\geq\xi_{L_{1}}\,\] \[F = 1+\frac{(\xi_{L_{1}}-\xi)}{(\xi_{L_{1}}-\xi_{\text{Louter}})}, \text{ for }\xi\leq\xi_{L_{1}}. \tag{21}\]
\(F\) is defined in such a way that \(F\in[0,1]\) for potential shells inside and up to the \(L_{1}\) equipotential surface, and \(F\in[1,2]\) for shells outside of the \(L_{1}\) equipotential and inside the outer Lagrange point's equipotential shell. Our results show that self-convergence is obtained quickly. For all mass ratios, the absolute value of the normalized error for effective accelerations is less than \(10^{-4}\) (\(|\Delta A_{m}/A|<10^{-4}\)) for \(m=4000\) outside of the Roche lobe, and is below \(5\times 10^{-6}\) within the Roche lobe. Volume-equivalent radii and equipotential areas are obtained with the normalized error less than
Figure 10: Self-convergence test for the volume-equivalent radius for mass ratio \(q=0.2\). The vertical axis is the normalized error in volume equivalent radius at \(L_{1}\) for angular resolutions \(m=500,1000,2000\), and \(4000\) (Equation 20). The horizontal axis is the fill-out factor \(F\) (Equation 21).
Figure 9: Self-convergence test for the effective acceleration for mass ratios \(q=2\). The vertical axis is the normalized error in effective acceleration for angular resolutions \(m=500,1000,2000\), and \(4000\) (Equation 20). The horizontal axis is \(F\), the fill-out factor \(F\) (Equation 21). \(F>1\) means the equipotentials are outside of the Roche lobe.
\(10^{-7}\). We, therefore, consider our final results presented in the database and obtained with \(n_{\varphi}=4000\) to be self-converged.
### Verification for Volume-Equivalent Radii
We have verified our volume-equivalent radii against two numerical fits available in the literature: (i) for the \(L_{1}\) equipotential (Eggleton, 1983, see also Equation 14 in this paper), and (ii) for the outer Lagrange point's equipotential as provided by Marchant et al. (2021),
\[\frac{R_{\rm outer}}{R_{L_{1}}} = 1+\frac{2.74}{1+[(1.02-\ln q)/\sigma]^{2}}\frac{1}{7.13+q^{0.386} }\,\] \[\text{and}\] \[\sigma = \frac{49.4}{12.2+q^{-0.208}}. \tag{22}\]
In the above, \(R_{L_{1}}\) is the radius of the Roche lobe found by using the approximation of Eggleton (1983). Note that here we modified Equation (3) of Marchant et al. (2021) to match our definition of \(q\)(Marchant et al. (2021) define the mass ratio as the companion star's mass divided by the donor's).
The results are shown in Figure 11. The fitting equation for \(R_{L_{1}}\) of Eggleton (1983) has been reported (in the paper where the equation was presented) to be accurate to 1%. The fitting equation for \(R_{\rm outer}\) by Marchant et al. (2021) has been reported by them to be accurate to 0.15%. However, the equation for \(R_{\rm outer}\) also, by definition, includes the uncertainty in \(R_{L_{1}}\) from the formula of Eggleton (1983). We find that we have agreement within 1% for both fitting equations.
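For convenience, both fits are reproduced below in a short stand-alone snippet (radii in units of the orbital separation, with \(q=M_{1}/M_{2}\) as defined in this paper).

```python
import numpy as np

def r_L1_eggleton(q):
    # Eq. (14), Eggleton (1983)
    return 0.49*q**(2/3) / (0.6*q**(2/3) + np.log(1.0 + q**(1/3)))

def r_outer_marchant(q):
    # Eq. (22), adapted from Marchant et al. (2021)
    sigma = 49.4 / (12.2 + q**(-0.208))
    ratio = 1.0 + 2.74 / (1.0 + ((1.02 - np.log(q))/sigma)**2) / (7.13 + q**0.386)
    return ratio * r_L1_eggleton(q)

for q in (0.1, 1.0, 10.0):
    print(q, r_L1_eggleton(q), r_outer_marchant(q))
```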
### Verification for Effective Acceleration
Mochnacki (1984) has provided coarse tables with numerically obtained average effective acceleration on the equipotential as a function of the potential. We have compared our average effective acceleration passing the \(L_{1}\) shell and the \(L_{2}\) shell (obtained with \(n_{\varphi}=4000\)) with that of Mochnacki (1984). The deviations are within \(10^{-4}\) for the values at the outer Lagrangian equipotential (their \(F=2\)), and within the last significant digit of the values that Mochnacki (1984) provides for \(L_{1}\) equipotential (their \(F=1\)). These deviations are consistent with our test for self-convergence.
### Deviation of the binary effective acceleration from spherically symmetric gravitational acceleration
The final result of the effective acceleration is shown in Figure 12. The analytical solution at distances close to the center has also been plotted and, as one can see, the numerical results correspond to the analytical equation very closely.
### Deviation of the binary effective acceleration from the case of a solid body rotating star
If a star is a solid body rotator, it experiences an additional acceleration that opposes gravitational acceleration. If one assumes spherical symmetry, the effective acceleration, averaged over a spherical shell, can be expressed in the units of acceleration in a non-rotating star as
\[\frac{\eta_{\rm rot}}{\eta_{\rm ss}}=1-\frac{2}{3}\frac{1+q}{q}r^{3}=1+ \delta\eta_{\rm rot}. \tag{23}\]
Here we introduced the relative deviation that rotation causes, \(\delta\eta_{\rm rot}\). Analogously, one can consider that our numerically obtained and tabulated acceleration is
\[\frac{\eta}{\eta_{\rm ss}}=1+\delta\eta_{\rm tab}. \tag{24}\]
In Figure 13, we compare those two terms for binaries with mass ratios in the range of interest for stellar binaries. The comparison is provided for stars presumed to fill their Roche lobe - their radii are taken to be equal to their volume-equivalent Roche lobe radii. The effective binary potential always results in the average effective acceleration being smaller (in its absolute value) than in the case of a spherically symmetric star that rotates with an angular velocity equal to the binary angular velocity.
Figure 11: Deviation of \(R_{L_{1}}\) (\(L_{1}\) volume-equivalent radius) from Eggleton (1983) relation (the blue line). Deviation of \(R_{\rm outer}\) (the outer Lagrange equipotential volume-equivalent radius) from the one provided by the Equation 22 (the orange line). The deviations are shown as functions of mass ratio \(q\).
### Alternative treatment of the effective acceleration
It has been argued that, instead of the effective gravity \(g\) (averaged over the equipotential), the quantity that should be averaged over the equipotentials for inclusion in the stellar equations is \(g^{-1}\) (see Kippenhahn & Thomas, 1970; Endal & Sofia, 1976, 1978, for how to use it). We provide the unitless quantity \(\langle g^{-1}\rangle\) in our numerical tables. The quantity \(\langle g^{-1}\rangle^{-1}\) deviates from spherically symmetric gravity even more significantly than \(\langle g\rangle\), as can be seen from Figure 14, where we show the values of \(\langle g\rangle\times\langle g^{-1}\rangle=\langle\eta\rangle\times\langle\eta^{-1}\rangle\) on the \(L_{1}\) equipotential for a range of mass ratios.
### How to use the provided quantities
This work is devoted to the evolution of stars within the gravity field of a binary system. Hence, throughout this paper, we use the same definition of the effective acceleration as per Equation 11 - we always consider the magnitude of the effective acceleration vector component that is normal to the equipotential surface at
Figure 14: The effective gravity averaged over the equipotential multiplied by the inverse effective gravity averaged over the equipotential, as a function of the mass ratio. See § 4.6.
Figure 12: The deviation in effective acceleration from our tables for mass ratios \(q=0.2\) (the left panel) and \(q=2\) (the right panel), as a function of the volume-equivalent radius. The red star denotes the equipotential passing \(L_{1}\). The green curve in each figure is the analytical expression of Equation 19. We show the analytical solution for the distances within \(0.5\) of \(R_{L1}\), albeit we use it only for the distances within \(0.05\) of \(x_{L_{1}}\), as described in the text. One can see that the analytical solution matches the numerical solution for a large range of distances. The blue line shows the case when the effective acceleration is averaged over the whole contour surrounding the star, and the orange line shows the effective acceleration when the \(L_{1}\) plane is excluded from the averaging.
Figure 13: The deviation in effective acceleration between our tables and a solid-body rotating star, as a function of the mass ratio (blue circles). The deviation is provided for stars that fill their Roche lobe. With orange stars, we show the deviation when we used Eggleton's equation for the Roche lobe radius instead of our integrated quantities for the spherically symmetric acceleration and the rotating term. See § 4.5.
each considered location. The adopted definition of the effective acceleration holds for values obtained on the \(L_{1}\) plane, including the inverse values of the effective acceleration. In the main data table (see Appendix A for more detail), we provide:
1. values from 3D integrations made throughout the closed 3D surface (including the \(L_{1}\)-plane)
2. values from 2D integrations for the \(L_{1}\) plane only.
When the considered star overfills its Roche lobe, the values on the truncated (limited by the \(L_{1}\)-plane) equipotential layers of the star are to be reconstructed from the 3D values provided in item 1), excluding the 2D values provided in item 2). For example, the effective acceleration at the \(i\)-th equipotential shell inside the star is to be found using the 3D area \(A\), the averaged (over \(A\)) 3D effective acceleration \(\eta\), the 2D area of the \(L_{1}\)-plane \(A_{\rm Lpl}\), and the averaged (over the \(L_{1}\)-plane) effective acceleration \(\eta_{\rm Lpl}\):
\[\eta_{i}=\frac{\eta A-\eta_{\rm Lpl}A_{\rm Lpl}}{A-A_{\rm Lpl}}. \tag{25}\]
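In code, this reconstruction is a one-liner; the short sketch below takes the four tabulated quantities as inputs.

```python
def eta_without_L1_plane(eta_closed, area_closed, eta_L1_plane, area_L1_plane):
    """Average effective acceleration over the truncated equipotential, Eq. (25)."""
    return (eta_closed * area_closed - eta_L1_plane * area_L1_plane) \
        / (area_closed - area_L1_plane)
```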
We also provide the auxiliary data table with values from 3D integrations made throughout the truncated surface only, excluding the \(L_{1}\)-plane. This table can be used directly without reconstruction. Note that the angle-distributed mesh does not always fall on the cross-section of the 3D equipotential lines and the \(L_{1}\)-plane. Hence, the truncated 3D area will not have a continuous transition to the \(L_{1}\)-plane. This 3D integration is prone to carry errors due to an incomplete layer of triangular mesh between 3D equipotential and flat \(L_{1}\)-plane. While both methods provide essentially the same value, the second has a more significant numerical convergence error, and we strongly recommend using the reconstruction method.
For the \(L_{1}\)-plane, the quantities in the main data table are not related to the component of the effective acceleration that is perpendicular to the \(L_{1}\)-plane, \(\eta_{x,L1}\). If one needs values for the \(x\)-component of the acceleration only, they can easily be obtained by a 2D integration on the \(L_{1}\)-plane of \(\eta_{x}\) with \(x=x_{L1}\) (see Equation 8), and by then recovering the value in physical units through multiplication by \(-G(M_{1}+M_{2})/2a^{2}\). While this quantity is not the primary goal of this work, we supplement the averaged values for the \(\eta_{x}\)-component separately in another auxiliary data table (see Appendix A).
## 5 Effects of a non-point mass
A real star is not a point mass. One has to obtain a 3D stellar model to include the effect of a realistic 3D mass distribution on the effective acceleration. Here, we only consider a first-order effect. We start by defining the local mass fraction as a function of the distance to the center of the first star
\[q_{\rm loc}(r_{\rm eq})\equiv\frac{M_{\rm loc}(r_{\rm eq})}{M_{2}}\, \tag{26}\]
where \(q_{\rm loc}(r_{\rm eq})\) is the mass ratio computed with only the mass enclosed within the volume-equivalent radius, and \(M_{\rm loc}(r_{\rm eq})\) is the local mass coordinate inside the donor, \(M_{\rm loc}(r_{\rm eq})\leq M_{1}\).
We start by considering the deviation in the gravitational potential due to a non-point mass distribution inside the first star, keeping the second star's gravitational contribution, and the effect from being in the corotating frame, the same as previously
\[\psi(\vec{R}) = -G\int_{M1}\frac{dm}{|R_{1}|}-\frac{GM_{2}}{|R_{2}|} \tag{27}\] \[-\frac{1}{2}\Omega^{2}\left[\left(x-\frac{aM_{2}}{M_{1}+M_{2}} \right)^{2}+y^{2}\right]\]
Here, the first term is, in principle, a 3D integral that has to be taken through the donor star's interior. The effective acceleration is the gradient of the effective potential. We consider the changes in the first term of the effective potential due to the mass having a spherically symmetric but non-point mass distribution. This assumption allows us to use Newton's shell theorem, which states that the gravitational field due to the mass inside
Figure 15: The simplified effective acceleration and the integrated effective acceleration, as a function of \(R_{\rm eq}\), for polytropes with \(n=3\) (blue line) and \(n=1.5\) (orange line). Shown for the case \(q=1\).
a given radius acts as a point mass to elements at that radius, whereas the mass outside of that radius does not exert a net gravitational force on those elements. In the case of spherically-symmetric mass distributions, the unitless effective acceleration can be written then as
\[\eta_{x,\mathrm{loc}}(x,y,z) = \frac{2xq_{\mathrm{loc}}(r_{\mathrm{eq}})}{(1+q)|r_{1}|^{3}}+\frac {2(x-1)}{(1+q)|r_{2}|^{3}}\] \[- 2(x-\frac{1}{1+q})\,\] \[\eta_{y,\mathrm{loc}}(x,y,z) = \frac{2yq_{\mathrm{loc}}(r_{\mathrm{eq}})}{(1+q)|r_{1}|^{3}}+ \frac{2y}{(1+q)|r_{2}|^{3}}-2y\,\] \[\eta_{z,\mathrm{loc}}(x,y,z) = \frac{2zq_{\mathrm{loc}}(r_{\mathrm{eq}})}{(1+q)|r_{1}|^{3}}+ \frac{2z}{(1+q)|r_{2}|^{3}}. \tag{28}\]
The total acceleration at the given location can be found as
\[\eta_{\mathrm{loc}}(r)=\sqrt{\eta_{x,\mathrm{loc}}^{2}+\eta_{y,\mathrm{loc}}^ {2}+\eta_{z,\mathrm{loc}}^{2}}. \tag{29}\]
Based on the shape of the local potential, we take the correction term for the effective acceleration to be
\[\delta\eta\left(q_{\mathrm{loc}}(r_{\mathrm{eq}}),q,r_{\mathrm{eq}}\right)= \frac{2\left(q_{\mathrm{loc}}(r_{\mathrm{eq}})-q\right)}{(1+q)}\frac{1}{r_{1}^{ 2}}. \tag{30}\]
We also introduce a simple "modified" effective acceleration as
\[\eta_{\mathrm{mod}}(q_{\mathrm{loc}}(r_{\mathrm{eq}}),q,r_{\mathrm{eq}})= \eta(q,r_{\mathrm{eq}})+\delta\eta(q_{\mathrm{loc}}(r_{\mathrm{eq}}),q,r_{ \mathrm{eq}}). \tag{31}\]
Here, \(\eta(q,r_{\mathrm{eq}})\) is the same unitless effective acceleration as previously, for point-mass binaries, and its values for each \(q\) and \(r_{\mathrm{eq}}\) can be found from the tables we obtained and provided for the reader.
In order to test this approximation, we considered polytropes with \(n=1.5\) and \(n=3\). For the local mass ratio, we assumed that the mass within each volume-equivalent radius of the numerically obtained binary equipotential (each equipotential itself is a function of the local mass) is the same as the mass within the same volume in a spherically symmetric polytrope. As shown in Figure 15, the exact integration and the simplified method are in agreement with each other. We conclude that the simplified method, as described, can be used to obtain effective accelerations inside the star from our precalculated tables for point-mass cases and the correction term. It should be noted that, for regions close to the center, \(r_{1}\leq 0.05x_{L_{1}}\), we make use of the following analytical expression to obtain the effective acceleration:
\[\eta=\eta_{\mathrm{ss,loc}}+\Delta\eta_{\mathrm{loc}};\quad\Delta\eta_{ \mathrm{loc}}=-\frac{2}{3}\frac{(1+q)}{q_{\mathrm{loc}}(r_{1})}r_{1}^{3}\eta_{ \mathrm{ss,loc}}. \tag{32}\]
It is assumed that, in the vicinity of the center, \(r_{\mathrm{eq}}\simeq r_{1}\). In this equation, \(\eta_{\mathrm{ss,loc}}\) is the effective acceleration of a single, non-point mass star up to that zone:
\[\eta_{\mathrm{ss,loc}}=\frac{2q_{\mathrm{loc}}(r_{1})}{(1+q)}\frac{1}{r_{1}^{ 2}}. \tag{33}\]
The derivation of these equations is given in Appendix C.3.
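A minimal sketch of how these pieces combine in practice is given below; the table interpolator `eta_point_mass` is a hypothetical stand-in for a lookup in the supplied tables, and \(r_{\rm eq}\simeq r_{1}\) is assumed where the two appear interchangeably.

```python
def eta_modified(r_eq, q_loc, q, x_L1, eta_point_mass):
    """Unitless effective acceleration for an enclosed-mass ratio q_loc = M_loc/M2."""
    if r_eq <= 0.05 * x_L1:
        # near-centre analytic expressions, Eqs. (32)-(33)
        eta_ss_loc = 2.0 * q_loc / ((1.0 + q) * r_eq**2)
        return eta_ss_loc * (1.0 - (2.0/3.0) * (1.0 + q) / q_loc * r_eq**3)
    # tabulated point-mass value corrected with the enclosed-mass term, Eqs. (30)-(31)
    delta_eta = 2.0 * (q_loc - q) / ((1.0 + q) * r_eq**2)
    return eta_point_mass(q, r_eq) + delta_eta
```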
The subroutine that we provide (see the description in Appendix B) makes use of the described method.
## 6 Conclusions
We have described the method we use to numerically obtain effective accelerations for a star in a binary system as a function of either the effective potential or the volume-equivalent radius.
Our method is inspired by the fact that isobaric surfaces coincide with the equipotential shells when stars are in hydrostatic equilibrium. This makes the concepts of volume-equivalent radii and of the average effective acceleration of each shell a valuable tool to simulate binary stars in a 1D stellar evolution code.
Our method for obtaining the volume-equivalent radius of each equipotential shell is to integrate with spherical volume elements from the vicinity of the center of the donor star up to the point where the corresponding potential equals the sought value, with fixed precision. The average effective acceleration on each shell is obtained by dividing each shell into triangular mesh areas and then calculating the weighted average of the effective acceleration, with the weight being the area of the triangular mesh element. The integration area includes the \(L_{1}\)-plane. In addition, we obtained the effective accelerations averaged on the \(L_{1}\)-plane as a function of the cross-sectional area or the effective potential (the semi-major and semi-minor axes of these ellipse-like cross-sections are also tabulated). We also obtained the average inverse effective acceleration, averaged over the entire equipotential level and on the \(L_{1}\)-plane. Values on the equipotential shell only (excluding the \(L_{1}\)-plane) can be recovered from the two provided values, see § 4.7.
We provide the tables for a range of mass ratios, from \(10^{-6}\) to \(10^{5}\). We have verified the provided tables for self-convergence by repeating the runs at several angular resolutions. We have also validated the tables by comparing our results to the fit provided for the volume-equivalent radius of the shell passing through \(L_{1}\) by Eggleton (1983), the fit provided for the volume-equivalent radius of the shell passing through the outer Lagrange point by Marchant et al. (2021), and the average effective acceleration of shells passing through \(L_{1}\) and \(L_{2}\) tabulated by Mochnacki (1984) for certain mass ratios.
Further, we provide analytical expressions describing the effective acceleration's behavior in the vicinity of
the center of the donor star, in order to check whether our numerical tables work well close to the center. We also describe a rapid method to obtain effective accelerations for the non-point-mass profiles that occur in stars, using our precalculated tables for point-mass calculations. This rapid method has been tested and verified to agree with the results of integrating the polytropic mass distributions for \(n=1.5\) and \(n=3\), which encompass most of the mass distributions that are plausible in stars.
We have provided a subroutine for the community that can be used in any 1D stellar code to rapidly obtain effective accelerations (as a function of the local mass and radius) that are more appropriate for the case of binary stars. We hope that the method, the tables, and the subroutine will help with further progress in the understanding of binary star evolution.
N.I. acknowledges funding from NSERC Discovery under Grant No. NSERC RGPIN-2019-04277. This research was enabled in part by support provided by Compute Canada (www.computecanada.ca). We thank the referee for the valuable suggestions that helped to improve the manuscript.
_Facilities:_ Compute Canada
## Appendix A Description of the tables
We provide one file for each of the 109 calculated mass ratios \(q\) between \(10^{-6}\) and \(10^{5}\). The file's name contains the value of \(\log_{10}q\) for which it was obtained. Each file has 15 columns of data, as described in Table 1. There are 600 data points (rows) in each table, starting from \(\xi=\xi(0.05x_{\mathrm{L}_{1}},0,0)\) and ending at \(\xi=\xi_{\mathrm{L,outer}}\), the value at the outer Lagrange point. The equipotential shell passing through \(L_{1}\) is data point number 500.
Columns 3-8 provide values at the equipotential shells for all data points, as obtained from the 3D integrations. Hence, the equipotential shells are inclusive of the \(L_{1}\)-plane; the truncated equipotential areas and the values on them are to be reconstructed as described in § 4.7. Columns 9-14 provide data for the \(L_{1}\)-plane, but only for mass ratios \(\log_{10}q\leq 2.5\) (a regime where stellar interactions may lead to mass transfer with a non-negligible mass stream thickness). The \(L_{1}\)-plane properties are given only in rows 501-600, for the equipotentials beyond that of \(L_{1}\). In rows 1-500, columns 9-14 are filled with the value "0" as a placeholder. For \(\log_{10}q>2.5\), columns 9-14 of rows 501-600 are also filled with the value "0". For point-mass cases, one needs three quantities (the donor star mass \(M_{1}\), the mass ratio \(q\), and the binary separation \(a\)) to recover values in CGS units. The conversions can be found in Table 1.
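As an illustration, the following Python snippet sketches how one of the provided tables could be read with NumPy, assuming a plain whitespace-separated text layout with the columns listed in Table 1; the file name shown is hypothetical.

```python
import numpy as np

# Hypothetical file name; the actual names encode log10(q) as described above.
data = np.loadtxt("table_logq_+0.00.dat")

xi        = data[:, 3]   # column 4: unitless potential
r_eq      = data[:, 4]   # column 5: volume-equivalent radius (in units of a)
eta_shell = data[:, 5]   # column 6: average effective acceleration on the shell

# Row 500 (index 499) is the equipotential shell passing through L1.
print("R_eq(L1)/a =", r_eq[499], "  eta_shell(L1) =", eta_shell[499])
```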
Another set of files we provide on the website contains a compact version of the tables described above. In this version, the potential and effective acceleration are given in units of a single point-mass star's potential and acceleration, respectively. This version of the table and the relevant conversion units are tabulated in Table 2. This table is used by the code described in Appendix B to obtain effective accelerations.
In addition, a file has been provided with properties corresponding to the first three Lagrange points for all mass ratios computed, see Table 3. For mass ratios lower than 1, where the outer Lagrange point is \(L_{2}\), and \(L_{3}\) is on the opposite side, we have filled \(-1\) for properties corresponding to \(L_{3}\); the same has been applied for columns 5 and 6 of mass ratios with \(\log_{10}q>2.5\).
There are two more supplementary tables (see § 4.7). One of them is very similar to Table 1, but the 3D quantities were obtained while integrating over truncated equipotentials. The second supplementary table provides values of \(\eta_{x}\) integrated on the \(L_{1}\)-plane.
## Appendix B Subroutine for effective acceleration
The subroutine that we provide takes as input \(M_{1}\), \(r_{1}\) (distance from the center of the donor star), \(M_{1,\mathrm{loc}}\) (enclosed mass of the star from the center up to \(r_{1}\)), \(M_{2}\), and \(a\). Depending on the user's choice, it returns either the effective acceleration in CGS units or the effective acceleration as a fraction of the single star's effective acceleration. The values are for the truncated equipotentials. The subroutine, using these databases, gives the effective acceleration as a function of the masses of the two stars, the orbital separation, and the distance from the center of the donor star. Once the
| column | quantity | notation | conversion equation |
| --- | --- | --- | --- |
| 1 | equipotential shell number | | unitless |
| 2 | mass ratio | \(q\) | unitless |
| 3 | fill-out function | \(F\) | unitless |
| 4 | potential | \(\xi\) | \(\xi\times\frac{-GM_{1}(1+q)}{2qa}\) |
| 5 | volume-equivalent radius | \(R_{\rm eq}\) | \(R_{\rm eq}\times a\) |
| 6 | effective acceleration averaged on the equipotential shell | \(\eta_{\rm shell}\) | \(\eta_{\rm shell}\times\frac{GM_{1}(1+q)}{2qa^{2}}\) |
| 7 | inverse effective acceleration averaged on the equipotential shell | \(\langle\eta_{\rm shell}^{-1}\rangle\) | \(\langle\eta_{\rm shell}^{-1}\rangle\times\frac{2qa^{2}}{GM_{1}(1+q)}\) |
| 8 | area of the equipotential shell | \(A\) | \(A\times a^{2}\) |
| 9 | area of \(L_{1}\) cross-section | \(A_{\rm Lpl}\) | \(A_{\rm Lpl}\times a^{2}\) |
| 10 | y intersection of \(L_{1}\) cross-section | \(y_{\rm Lpl}\) | \(y_{\rm Lpl}\times a\) |
| 11 | z intersection of \(L_{1}\) cross-section | \(z_{\rm Lpl}\) | \(z_{\rm Lpl}\times a\) |
| 12 | effective acceleration averaged over the intersection with the \(L_{1}\)-plane | \(\eta_{\rm L}\) | \(\eta_{\rm L}\times\frac{GM_{1}(1+q)}{2qa^{2}}\) |
| 13 | effective acceleration averaged over the \(L_{1}\) cross-section area | \(\eta_{\rm Lpl}\) | \(\eta_{\rm Lpl}\times\frac{GM_{1}(1+q)}{2qa^{2}}\) |
| 14 | inverse effective acceleration averaged over the \(L_{1}\) cross-section area | \(\langle\eta_{\rm Lpl}^{-1}\rangle\) | \(\langle\eta_{\rm Lpl}^{-1}\rangle\times\frac{2qa^{2}}{GM_{1}(1+q)}\) |

Table 1: The properties computed and included in our database for each mass ratio.
| column | quantity | notation | conversion equation |
| --- | --- | --- | --- |
| 1 | volume-equivalent radius | \(R_{\rm eq}\) | \(R_{\rm eq}\times a\) |
| 2 | relative potential | \(\xi_{\rm rel}\) | \(\xi_{\rm rel}\times\frac{-GM_{1}}{r_{\rm eq}}\) |
| 3 | relative average effective acceleration on the equipotential shell | \(\eta_{\rm rel,shell}\) | \(\eta_{\rm rel,shell}\times\frac{GM_{1}}{r_{\rm eq}^{2}}\) |
| 4 | relative average effective acceleration on the \(L_{1}\) cross-section | \(\eta_{\rm rel,Lpl}\) | \(\eta_{\rm rel,Lpl}\times\frac{GM_{1}}{r_{\rm eq}^{2}}\) |
| 5 | area of \(L_{1}\) cross-section | \(A_{\rm Lpl}\) | \(A_{\rm Lpl}\times a^{2}\) |
| 6 | area of the equipotential shell | \(A\) | \(A\times a^{2}\) |

Table 2: The compact version of the database; it is used by the subroutine described in Appendix B.
| column | quantity | notation | conversion equation |
| --- | --- | --- | --- |
| 1 | mass ratio | \(q\) | unitless |
| 2 | volume-equivalent radius of the shell passing through \(L_{1}\) | \(R_{\rm eq,L1}\) | \(R_{\rm eq,L1}\times a\) |
| 3 | volume-equivalent radius of the shell passing through \(L_{2}\) | \(R_{\rm eq,L2}\) | \(R_{\rm eq,L2}\times a\) |
| 4 | volume-equivalent radius of the shell passing through \(L_{3}\) | \(R_{\rm eq,L3}\) | \(R_{\rm eq,L3}\times a\) |
| 5 | area of the \(L_{1}\) cross-section of the \(L_{2}\) shell | \(A_{\rm Lpl,L2}\) | \(A_{\rm Lpl,L2}\times a^{2}\) |
| 6 | area of the \(L_{1}\) cross-section of the \(L_{3}\) shell | \(A_{\rm Lpl,L3}\) | \(A_{\rm Lpl,L3}\times a^{2}\) |
| 7 | average effective acceleration on the L-plane of the \(L_{2}\) shell | \(\eta_{\rm Lpl,L2}\) | \(\eta_{\rm Lpl,L2}\times\frac{GM_{1}(1+q)}{2qa^{2}}\) |
| 8 | average effective acceleration on the L-plane of the \(L_{3}\) shell | \(\eta_{\rm Lpl,L3}\) | \(\eta_{\rm Lpl,L3}\times\frac{GM_{1}(1+q)}{2qa^{2}}\) |

Table 3: The properties computed and included in our database for each of the first three Lagrange points.
mentioned inputs are given to the subroutine, the code determines which two tabulated mass ratios the local mass ratio falls between. Then, for each of those two tables, it finds which two volume-equivalent radii the distance from the donor star's center falls between and interpolates for the effective acceleration. After doing this for both bracketing mass ratios, it interpolates again, this time between the two mass ratios and the two accelerations obtained for them, to produce the final effective acceleration. This acceleration can be given in CGS units, or as a unitless parameter obtained by dividing it by the local gravitational acceleration the star would have if it were single.
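The following Python snippet sketches the two-level interpolation described above; it is an illustration under simplified assumptions (linear interpolation, in-range inputs, and a hypothetical `tables` lookup keyed by tabulated mass ratio) rather than the distributed subroutine.

```python
import numpy as np

def effective_acceleration(q_loc, r1_over_a, tables, q_grid):
    """Sketch of the two-level interpolation: first in radius within the two
    bracketing tables, then between the two tabulated mass ratios.
    `tables[q]` is assumed to hold (r_eq, eta) arrays for tabulated mass ratio q;
    `q_grid` is the sorted array of tabulated mass ratios."""
    assert q_grid[0] <= q_loc <= q_grid[-1]

    # bracket the local mass ratio between two tabulated values
    j = np.searchsorted(q_grid, q_loc, side="right")
    j = min(max(j, 1), len(q_grid) - 1)
    q_lo, q_hi = q_grid[j - 1], q_grid[j]

    # interpolate in (volume-equivalent) radius within each bracketing table
    eta_lo = np.interp(r1_over_a, *tables[q_lo])
    eta_hi = np.interp(r1_over_a, *tables[q_hi])

    # second interpolation, between the two mass ratios
    w = (q_loc - q_lo) / (q_hi - q_lo)
    return (1.0 - w) * eta_lo + w * eta_hi
```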
If the mass ratio specified is lower than the minimum mass ratio in the database (\(10^{-6}\)), or larger than the maximum mass ratio (\(10^{+5}\)), the code will output an error. If the distance and orbital separation are specified such that \(r_{1}/a\leq 0.05\), the subroutine will instead use the analytical expression C7 to obtain the effective acceleration. The code as provided should not be used for cases where the mass ratio is above \(10^{+2.5}\) and there is RLOF. In the latter case, the user should re-derive this table using 3D quantities obtained while integrating over truncated equipotentials. The databases and sample subroutines are available at Zenodo (Pourmand & Ivanova, 2023), and updates will be available at [https://github.com/AliPourmand/1D_binary_star_properties](https://github.com/AliPourmand/1D_binary_star_properties).
## Appendix C Behavior close to the center of \(M_{1}\).
We consider a coordinate system in corotation with the binary, with the origin at the donor star's center. The \(xy\) plane is the plane of the orbit, and the corotating frame rotates around the axis that passes through the center of mass of the binary system. The effective potential \(\Psi\) in a spherical coordinate system that rotates with the fixed binary orbital angular velocity \(\Omega=\sqrt{G(M_{1}+M_{2})/a^{3}}\) is:
\[\begin{split}\Psi(R,\vartheta,\varphi)=&-\frac{GM_{1}}{R}-\frac{GM_{2}}{\sqrt{(R\cos\varphi\sin\vartheta-a)^{2}+R^{2}\sin^{2}\varphi\sin^{2}\vartheta+R^{2}\cos^{2}\vartheta}}\\ &-\frac{1}{2}\Omega^{2}\left[\left(R\cos\varphi\sin\vartheta-a\frac{M_{2}}{M_{1}+M_{2}}\right)^{2}+R^{2}\sin^{2}\varphi\sin^{2}\vartheta\right].\end{split}\] (C1)
Here \(M_{1}\) and \(M_{2}\) are the donor and companion star masses, \(a\) is the orbital separation, and \(R\) is the distance to the origin.
The first term \(\Psi_{\rm ss}\) is the potential of a spherically symmetric single star of mass \(M_{1}\). The other two terms represent the deviation of the effective potential \(\Delta\Psi=\Psi_{\rm c}+\Psi_{\rm b}\) due to having a companion of mass \(M_{2}\), \(\Psi_{\rm c}\), and due to being in a coordinate system that rotates with \(\Omega\), \(\Psi_{\rm b}\). Introducing \(r=R/a\), the terms are reduced to:
\[\Psi_{\rm ss}(r,\vartheta,\varphi) = -\frac{GM_{1}}{a}\frac{1}{r}\,\] \[\Psi_{\rm c}(r,\vartheta,\varphi) = -\frac{GM_{2}}{a}\frac{1}{\sqrt{r^{2}-2r\cos\varphi\sin\vartheta+ 1}}\,\] \[\Psi_{\rm b}(r,\vartheta,\varphi) = -\frac{1}{2}\frac{G(M_{1}+M_{2})}{a}\left[r^{2}\sin^{2}\vartheta- 2r\frac{M_{2}}{M_{1}+M_{2}}\cos\varphi\sin\vartheta+\frac{M_{2}^{2}}{(M_{1}+M _{2})^{2}}\right]\.\] (C2)
We want to investigate the behavior of the effective acceleration and potential at distances close to the center; thus we assume that equipotential surfaces there are almost spherical (we have verified this assumption by numerical integrations).
### Effective acceleration
As the equipotential surfaces are almost spherical, we assume that, for the effective acceleration, the only derivative of the effective potential that matters is the derivative with respect to \(r\):
\[\frac{\partial\Delta\Psi(r,\vartheta,\varphi)}{\partial r}=\frac{G}{a}\left[M _{2}\frac{(r-\cos\varphi\sin\vartheta)}{(r^{2}-2r\cos\varphi\sin\vartheta+1) ^{3/2}}-(M_{1}+M_{2})r\sin^{2}\vartheta+M_{2}\cos\varphi\sin\vartheta\right]\.\] (C3)
In the limit \(r\ll L_{1}\),
\[\left.\frac{\partial\Delta\Psi(r,\vartheta,\varphi)}{\partial r}\right|_{r\ll 1} =\frac{G}{a}\left[M_{2}(r-3r\cos^{2}\varphi\sin^{2}\vartheta)-(M_{1}+M_{2})r \sin^{2}\vartheta\right]\.\] (C4)
After the integration over all angles, the only contributing term that remains is from being in the co-rotating frame, while the contribution of the gravitational field of the companion has vanished:
\[\left\langle\left.\frac{\partial\Delta\Psi(r,\vartheta,\varphi)}{\partial r} \right|_{r\ll 1}\right\rangle=\left.\frac{1}{4\pi}\int_{4\pi}\left.\frac{ \partial\Delta\Psi(r,\vartheta,\varphi)}{\partial r}\right|_{r\ll 1}d \Omega=-\frac{2}{3}\frac{G(M_{1}+M_{2})}{a}r\.\] (C5)
Finally, the relative deviation of the gradient of the effective potential in a binary, from the case of the spherically symmetric one, near the center, is
\[\left\langle\left.\frac{\partial\Delta\Psi(r,\vartheta,\varphi)}{\partial r} \right|_{r\ll 1}\right\rangle/\frac{\partial\Psi_{\rm ss}(r)}{\partial r}=- \frac{2}{3}\frac{(M_{1}+M_{2})}{M_{1}}r^{3}\equiv-\frac{2}{3}\frac{(M_{1}+M_{ 2})}{M_{1}}\frac{R^{3}}{a^{3}}\.\] (C6)
Converting to the unitless acceleration used throughout this manuscript, and noting that the \(r\) used here is equivalent to \(r_{1}\), we have for the deviation in the average effective acceleration due to being in a binary:
\[\Delta\eta=-\frac{2}{3}\frac{1+q}{q}\ r_{1}^{3}\ \eta_{\rm ss}\.\] (C7)
Here \(\eta_{\rm ss}\) is the average effective acceleration of a spherically symmetric single star, and \(q=M_{1}/M_{2}\).
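As a worked example, for \(q=1\) and \(r_{1}=0.1\) (a tenth of the orbital separation), Equation C7 gives \(\Delta\eta=-(2/3)\cdot 2\cdot(0.1)^{3}\,\eta_{\rm ss}\approx-1.3\times 10^{-3}\,\eta_{\rm ss}\), i.e. the binary correction to the effective acceleration near the center is only of order 0.1 per cent.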
### Potential
In the limit \(r\ll 1\), the second term that provides the effect due to having a companion is:
\[\Psi_{\rm c}(r,\vartheta,\varphi)=-\frac{GM_{2}}{a}\left(1+r\cos\varphi\sin\vartheta-\frac{r^{2}}{2}\right)\.\] (C8)
After the integration over all angles, we have:
\[\left\langle\left.\Delta\Psi(r,\vartheta,\varphi)\right|_{r\ll 1}\right\rangle/ \Psi_{\rm ss}(r)=\frac{3M_{2}^{2}+2M_{1}M_{2}}{2M_{1}(M_{1}+M_{2})}\ r+\frac{2M_{1}+5M_{2}}{6M_{1}}r^{3},\] (C9)
and, for the unitless potential, with \(q=M_{1}/M_{2}\):
\[\Delta\xi=\left(\frac{3+2q}{2q(1+q)}r_{1}+\frac{2q+5}{6q}\ r_{1}^{3}\right) \xi_{\rm ss}\.\] (C10)
### Effective acceleration for non-point mass
The derivation for a non-point mass is similar to that described in C.1, with the difference that in Equation C1 the total mass \(M_{1}\) in the first term (\(\Psi_{ss}\)) is replaced by the local value of the mass,
\[\Psi_{\rm ss,loc}(r,\vartheta,\varphi)=-\frac{GM_{\rm loc}}{a}\frac{1}{r}\.\] (C11)
Here \(M_{\rm loc}\) is the local mass coordinate inside the donor star at a distance \(r\) from the star's center. Thus, the effective acceleration of a single, non-point mass star at radius \(r\) and with local mass \(M_{\rm loc}\) is:
\[\left.\frac{\partial\Psi_{\rm ss,loc}(r)}{\partial r}=\frac{GM_{\rm loc}}{a} \frac{1}{r^{2}}.\right.\] (C12)
\[\left\langle\left.\frac{\partial\Delta\Psi(r,\vartheta,\varphi)}{\partial r} \right|_{r\ll 1}\right\rangle/\frac{\partial\Psi_{\rm ss,loc}(r)}{\partial r}=- \frac{2}{3}\frac{(M_{1}+M_{2})}{M_{\rm loc}}r^{3}\equiv-\frac{2}{3}\frac{(M_ {1}+M_{2})}{M_{\rm loc}}\frac{R^{3}}{a^{3}}\.\] (C13)
The unitless form of this equation can be obtained by noting that the scaling factor for the effective acceleration (Equation 12) is the same for both the numerator and the denominator of the left-hand side; we can therefore write the left-hand side as:
\[\left\langle\left.\frac{\partial\Delta\Psi(r,\vartheta,\varphi)}{\partial r} \right|_{r\ll 1}\right\rangle/\frac{\partial\Psi_{\rm ss,loc}(r)}{\partial r}= \frac{\Delta\eta_{\rm loc}}{\eta_{\rm ss,loc}}\] (C14)
And, finally, using the definition of \(q_{\rm loc}\):
\[\Delta\eta_{\rm loc}=-\frac{2}{3}\frac{1+q}{q_{\rm loc}(r_{1})}\ r_{1}^{3}\ \eta_{\rm ss,loc}\.\] (C15)
|
2303.17218 | HARFLOW3D: A Latency-Oriented 3D-CNN Accelerator Toolflow for HAR on
FPGA Devices | For Human Action Recognition tasks (HAR), 3D Convolutional Neural Networks
have proven to be highly effective, achieving state-of-the-art results. This
study introduces a novel streaming architecture based toolflow for mapping such
models onto FPGAs considering the model's inherent characteristics and the
features of the targeted FPGA device. The HARFLOW3D toolflow takes as input a
3D CNN in ONNX format and a description of the FPGA characteristics, generating
a design that minimizes the latency of the computation. The toolflow is
comprised of a number of parts, including i) a 3D CNN parser, ii) a performance
and resource model, iii) a scheduling algorithm for executing 3D models on the
generated hardware, iv) a resource-aware optimization engine tailored for 3D
models, v) an automated mapping to synthesizable code for FPGAs. The ability of
the toolflow to support a broad range of models and devices is shown through a
number of experiments on various 3D CNN and FPGA system pairs. Furthermore, the
toolflow has produced high-performing results for 3D CNN models that have not
been mapped to FPGAs before, demonstrating the potential of FPGA-based systems
in this space. Overall, HARFLOW3D has demonstrated its ability to deliver
competitive latency compared to a range of state-of-the-art hand-tuned
approaches being able to achieve up to 5$\times$ better performance compared to
some of the existing works. | Petros Toupas, Alexander Montgomerie-Corcoran, Christos-Savvas Bouganis, Dimitrios Tzovaras | 2023-03-30T08:25:27Z | http://arxiv.org/abs/2303.17218v6 | # HARFLOW3D: A Latency-Oriented 3D-CNN Accelerator Toolflow for HAR on FPGA Devices
###### Abstract
For Human Action Recognition tasks (HAR), 3D Convolutional Neural Networks have proven to be highly effective, achieving state-of-the-art results. This study introduces a novel streaming architecture-based toolflow for mapping such models onto FPGAs considering the model's inherent characteristics and the features of the targeted FPGA device. The HARFLOW3D toolflow takes as input a 3D CNN in ONNX format and a description of the FPGA characteristics, generating a design that minimises the latency of the computation. The toolflow is comprised of a number of parts, including (i) a 3D CNN parser, (ii) a performance and resource model, (iii) a scheduling algorithm for executing 3D models on the generated hardware, (iv) a resource-aware optimisation engine tailored for 3D models, (v) an automated mapping to synthesizable code for FPGAs. The ability of the toolflow to support a broad range of models and devices is shown through a number of experiments on various 3D CNN and FPGA system pairs. Furthermore, the toolflow has produced high-performing results for 3D CNN models that have not been mapped to FPGAs before, demonstrating the potential of FPGA-based systems in this space. Overall, HARFLOW3D has demonstrated its ability to deliver competitive latency compared to a range of state-of-the-art hand-tuned approaches, being able to achieve up to \(5\times\) better performance compared to some of the existing works. The tool is available at [https://github.com/ICIdsl/harflow3d](https://github.com/ICIdsl/harflow3d).
FPGA, Toolflow, 3D CNNs, Human Action Recognition
## I Introduction
The growing focus on video-related applications such as video surveillance, autonomous driving, and patient monitoring has necessitated the development of algorithms that integrate and take into account the temporal domain. 3D CNNs, which are often employed to deal with video and volumetric data, augment their learning capacity by extracting input features related to this additional dimension. Due to the temporal dimension, 3D CNNs often have larger computational and memory requirements compared to 2D CNNs. Particularly, 3D CNNs have exhibited high performance in the task of HAR, enabling the interpretation of human motion across video frames and the detection of various activities without the need for specialised time domain approaches (e.g., LSTMs). Whilst vision transformers have recently attained state-of-the-art accuracy, their operation requires orders of magnitude more GFLOPs comparatively.
Devices such as GPUs, FPGAs, and ASICs have been utilised to address the high processing requirements of 3D CNNs and deliver high-performance systems. FPGAs are particularly attractive as an acceleration platform since they are more flexible than ASICs and more energy efficient than GPUs. The rapid growth and increasing complexity of 3D CNN model designs necessitate high-quality hardware designs that support short design cycles for new 3D CNN model specifications. The goal of this work is to provide an automated way for deploying 3D CNN models onto FPGA systems, with an emphasis on minimising the execution latency. The diversity of the supported models and devices makes it suitable for a number of applications and enables users to select the most appropriate 3D CNN model and device according to their unique demands and budget.
3D CNNs have been studied and developed for quite some time, and the topic of HAR is gaining attention year by year. A few studies have focused on mapping 3D CNNs to FPGAs, but the vast majority of them propose hand-tuned hardware architectures for specific 3D CNN models. While
Fig. 1: Pareto front on 3D CNNs: Latency over Accuracy. Designs produced by the proposed HARFLOW3D toolflow dominate the Pareto front.
FPGA toolflows for 2D CNNs are well-researched [1, 2], there is an absence of 3D CNN FPGA toolflows. The distinct characteristics of 3D CNNs, such as their large workloads and significantly increased memory and resource requirements, require such toolflows.
Figure 1 displays the Pareto front for both prior works and the proposed toolflow, showing their achieved accuracy and latency. HARFLOW3D designs account for most of the points on the Pareto front, demonstrating the toolflow's ability to generate Pareto-optimal designs for a variety of 3D CNN models on HAR. The Pareto-optimal relationship between accuracy and latency is an extremely desirable feature for a toolflow, as it allows a designer to trade off their model's accuracy for greater performance in a fine-grained manner.
The key contributions of this paper are the following:
* Introduction of HARFLOW3D, the first 3D CNN to FPGA toolflow, which supports a variety of models and devices, achieving competitive results compared to prior hand-tuned works.
* An optimisation strategy accompanied by a set of transformations, providing tailored designs based on the characteristics of each 3D CNN model layer.
* A set of highly-tuned parameterized building blocks that support runtime parameterization, allowing for significantly lower latency over the non-parameterized equivalent.
* A rich evaluation with experimental results across multiple devices and multiple models, including state-of-the-art 3D CNN HAR models that have not been addressed before, setting the landscape for FPGA-based HAR model computation.
## II Related Work
Whilst 3D CNNs have been around for a while, there have been few studies that focus on accelerating these networks on FPGAs. The majority of these studies have focused on older 3D CNNs such as the C3D [3] model, whose accuracy falls short of state-of-the-art models. Fan et al. [4, 5, 6] have released a series of publications about accelerating 3D CNNs for human action recognition using FPGAs. In their first work [4], they introduced the F-C3D hardware architecture for accelerating the C3D model, which is capable of supporting multiple 3D convolutional layers and includes solutions for resolving some of the challenges posed by 3D CNNs, such as higher processing and memory demands. Additionally, they demonstrated the portability of their design to other FPGA devices. In a following publication [5], they presented an analytical model and a tool for optimising the hardware architecture based on device specifications, accuracy requirements, and the usage of block floating point arithmetic precision to minimise accuracy loss. Their evaluation of the design was likewise performed on the C3D model. In their most recent publication [6], they proposed E3DNet, an effective 3D CNN based on the proposed 3D-1 bottleneck building block. The F-E3D hardware implementation of E3DNet achieves a real-time execution time of 35.3 milliseconds per frame and scores 85.1% accuracy on the UCF101 [7] benchmark.
Based on the similarities between the 2D and 3D convolution computing patterns, Liu et al. [8] presented a hardware design to accelerate both 2D and 3D CNNs. They sought to turn CNN convolutions into matrix multiplication operations and concentrated on minimising memory usage to overcome the challenges associated with replicating feature maps. Applying an analytical model, they configured the accelerators for maximum resource utilisation and evaluated their design using the C3D model. Shen et al. [9] developed a template-based architecture based on the Winograd transform [10] that is capable of handling both 2D and 3D CNNs. In addition, they developed an analytical method for quickly exploring the design space for mapping 2D and 3D CNNs onto FPGA accelerators and validated their design with the C3D model. Sun et al. [11] applied weight pruning to the C3D and R(2+1) [12] 3D CNN architectures using a blockwise approach. Their hardware architecture, which is based on the Alternating Direction Method of Multipliers (ADMM), enables the acceleration of 3D CNNs with minimal accuracy loss as compared to their unpruned counterparts.
Teng et al. [13] presented a design space exploration strategy for optimising memory access in 3D CNN models accelerated on FPGAs. The authors proposed a non-overlapping data tiling method for off-chip memory access and explored on-chip data reuse using different loop ordering strategies. They further proposed a hardware architecture that can support these strategies. Their experiments showed that the proposed approach on the C3D model achieved state-of-the-art performance compared to prior FPGA implementations at that time. Khan et al. [14] investigated various 3D CNN design parameters for resource-limited platforms, focusing on the I3D model, a 70-layer deep network for video action recognition. They adjusted the feature-map word lengths and weights in a pre-trained model, which reduced its complexity without affecting its accuracy. They proposed a data tiling technique that utilises all four dimensions of video data and improves memory bandwidth while reducing DRAM accesses. Based on these optimisations, the proposed FPGA accelerator achieves 684 GOPs/s for 32-bit floating point and 1.29 TOPs/s for 8-bit integer implementations with a 2% accuracy drop.
The majority of research has been largely focused on the C3D [3] model for HAR, which was introduced in 2013. The model's architecture is rather simple, consisting of only 8 convolutional layers, while it performs poorly in terms of accuracy when compared to recent SoA models in HAR (85.2% in UCF101 vs. the current SoA's 98.6%). In terms of design complexity, an analogy can be made to AlexNet [15], but in three-dimensional space. Since the aforementioned approaches are mostly focused on the design of the specific C3D model, it is not clear how they can be extended, evaluated, or applied to the more complicated networks of modern state-of-the-art HAR models. The proposed toolflow addresses this by supporting a broad variety of recent 3D CNN designs, such as X3D [16] and Slowonly [17] among others, which include
more complex ResNet3D-wise architectures as well as older models like C3D for direct comparison with prior studies.
## III Proposed Architecture
This section outlines the basics of the hardware-level architecture of the toolflow's generated designs. The suggested toolflow adheres to the same streaming architecture principles as fpgaConvNet [18], which is based on the Synchronous Data-Flow (SDF) computation model.
### _Neural Network Model Parser_
To facilitate the process of incorporating various neural network (NN) models into the toolflow, a dedicated NN model parser has been developed. This parser is designed to read and map NN models in the ONNX format, a standardised format supported by many deep learning frameworks such as PyTorch and TensorFlow, into the format required by the toolflow. The 3D CNN model can be described as a Directed Acyclic Graph (DAG), which is denoted as \(M=\{l_{1},...,l_{L}\}\), where \(l_{i}\) is the \(i^{th}\) layer within the set of model layers \(L\) and is expressed as an execution node of \(M\). This is then translated into a Synchronous Data-Flow Graph (SDFG), which is also directed and acyclic, denoted as \(G\) with \(N\) computation (or hardware) nodes, where \(G=\{n_{1},...,n_{N}\}\). The core concept of synchronous dataflow modelling is that each node fires whenever data is available at its inputs, resulting in a paradigm of data-driven execution. This format is compatible with the rest of the toolflow's tools, such as the latency optimiser and the toolflow's resource and performance models.
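A minimal sketch of this parsing step, using the `onnx` Python package, is shown below; the dictionary-based DAG representation is purely illustrative and is not the toolflow's actual intermediate format.

```python
import onnx

def parse_model(path):
    """Load an ONNX model and expose its layers as nodes of a DAG (sketch)."""
    model = onnx.load(path)
    layers = []
    for node in model.graph.node:            # execution nodes l_1 ... l_L
        layers.append({
            "name":    node.name,
            "op_type": node.op_type,          # e.g. Conv, MaxPool, Relu, Gemm
            "inputs":  list(node.input),      # edges of the directed acyclic graph
            "outputs": list(node.output),
            "attrs":   {a.name: onnx.helper.get_attribute_value(a)
                        for a in node.attribute},
        })
    return layers
```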
### _Building Blocks Description_
Each layer of a NN model accepts input data to be processed and returns output data once its operation has been completed. These inputs and outputs are described as feature-maps in and out respectively. The maximum feature-map dimensions supported for a given hardware node \(n\) are described as below.
\(\mathbf{S_{n}^{in}}=\{H_{n}^{in},\ W_{n}^{in},D_{n}^{in},\ C_{n}^{in}\ \}\)
\(\mathbf{S_{n}^{out}}=\{H_{n}^{out},W_{n}^{out},D_{n}^{out},C_{n}^{out}\}\)
where \(H_{n}^{in/out}\), \(W_{n}^{in/out}\), \(D_{n}^{in/out}\), and \(C_{n}^{in/out}\) are the spatial dimensions (Height, Width), followed by the temporal dimension (Depth), followed by the number of Channels for the input and output respectively. The size of the feature-map in terms of the number of elements is referred to as \(|\mathbf{S}|\). The \(\mathbf{S}_{n}^{in}\) and \(\mathbf{S}_{n}^{out}\) parameters exist for all of the layers as part of their parameter space definition. Alongside functional parameters, the hardware accepts different fixed-point precisions at compile-time. The rest of the parameters of the layers, and their intermediate representation definitions, are detailed in Table I, and a description is provided below. The runtime parameters are differentiated from compile-time parameters using a hat symbol (\(\hat{\ }\)) above the parameter. Computation node parameters are subscripted with \(n\), and execution node parameters with \(l\).
* **Convolution 3D** Due to the necessity to support a range of 3D CNN models, the toolflow's convolution building block is designed to accommodate the following types of convolution operation: (a) Full convolution \(K^{D}\times K^{H}\times K^{W}\) (b) Spatial convolution \(1\times K^{H}\times K^{W}\) (c) Temporal convolution \(K^{D}\times 1\times 1\) (d) Depth-wise convolution (e) Point-wise convolution. The computation node is characterised as the following tuple: \[\Gamma=\{\hat{\mathbf{S}}^{in},\hat{\mathbf{S}}^{out},\hat{\mathbf{K}},\hat{\mathbf{J}},\ \hat{\mathbf{P}},\hat{G}_{T},\hat{c}^{in},\hat{c}^{out},\hat{f}\}\]
* **3D Pooling** For 3D pooling layers, the toolflow supports both maximum and average pooling which can be chosen at runtime by the enumerated parameter \(T\). The computation node is characterised as the following tuple: \[\Gamma=\{\hat{\mathbf{S}}^{in},\hat{\mathbf{S}}^{out},\hat{\mathbf{K}},\hat{\mathbf{J}},\hat{ \mathbf{P}},\hat{T},\hat{c}\}\]
* **Global Pooling, Activation & Element-Wise 3D** Although Global Pooling is a special case of the regular Pooling layer, the hardware for it is optimised for this case. The supported activation functions of the activation layer are the following: (a) ReLU activation, (b) Sigmoid activation, (c) Swish activation (\(y=x*sigmoid(x)\)), where the type of activation is given by the parameter \(T\). The runtime choice for broadcasting is defined as \(B\), which is either true or false. The type of element-wise operation is given by the parameter \(T\). The computation node is characterised as the following tuple: \[\Gamma=\{\hat{\mathbf{S}}^{in},\hat{\mathbf{S}}^{out},\hat{T},\hat{B},\hat{c}\}\]
* **Fully Connected** Fully Connected layers share hardware with Convolution layers, but with no feature-map
| Parameter | Description |
| --- | --- |
| **Convolution** | |
| \(E_{n}\) | Number of filters (output channel dimension) |
| \(\mathbf{K}_{n}\) | 3D kernel size (\(K_{n}^{D}\), \(K_{n}^{H}\), \(K_{n}^{W}\)) |
| \(\mathbf{J}_{n}\) | 3D stride (\(J_{n}^{D}\), \(J_{n}^{H}\), \(J_{n}^{W}\)) |
| \(\mathbf{P}_{n}\) | 3D padding (start and end padding along \(D\), \(H\), \(W\)) |
| \(Gr_{n}\) | Grouping along the channel dimension |
| \(c_{n}^{in}\) | Parallel streams in |
| \(c_{n}^{out}\) | Parallel streams out |
| \(f_{n}\) | Vector dot product folding |
| **Fully Connected** | |
| \(F_{n}\) | Number of filters (output channel dimension) |
| \(c_{n}^{in}\) | Number of parallel streams in |
| \(c_{n}^{out}\) | Number of parallel streams out |
| **Pooling** | |
| \(T_{n}\) | Type of pooling |
| \(\mathbf{K}_{n}\) | 3D kernel size (\(K_{n}^{D}\), \(K_{n}^{H}\), \(K_{n}^{W}\)) |
| \(\mathbf{J}_{n}\) | 3D stride (\(J_{n}^{D}\), \(J_{n}^{H}\), \(J_{n}^{W}\)) |
| \(\mathbf{P}_{n}\) | 3D padding (start and end padding along \(D\), \(H\), \(W\)) |
| \(c_{n}\) | Number of parallel streams in & out |
| **Activation** | |
| \(T_{n}\) | Type of activation |
| \(c_{n}\) | Number of parallel streams in & out |
| **Global Average Pooling** | |
| \(c_{n}\) | Number of parallel streams in & out |
| **Element-Wise** | |
| \(T_{n}\) | Type of element-wise operation |
| \(B_{n}\) | Mode of operation (default or broadcast) |
| \(c_{n}\) | Number of parallel streams in & out |

TABLE I: Compile-time parameters for node \(n\) in the hardware graph \(G\), for each layer type.
buffering. The computation node is characterised as the following tuple:
\[\Gamma=\{\hat{\mathbf{S}}^{in},\hat{\mathbf{S}}^{out},\hat{c}^{in},\hat{c}^{out}\}\]
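For illustration, the runtime parameter tuple \(\Gamma\) of a Convolution 3D node could be represented as a simple record such as the following Python dataclass; the field names are illustrative and do not correspond to the toolflow's actual data structures.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ConvRuntimeConfig:
    """Runtime parameters (the tuple Gamma) of a Convolution 3D node (sketch)."""
    shape_in:  Tuple[int, int, int, int]          # (H, W, D, C) of the input tile
    shape_out: Tuple[int, int, int, int]          # (H, W, D, F) of the output tile
    kernel:    Tuple[int, int, int]               # (K_D, K_H, K_W)
    stride:    Tuple[int, int, int]               # (J_D, J_H, J_W)
    padding:   Tuple[int, int, int, int, int, int]  # start/end padding in D, H, W
    groups:     int                               # grouping along the channel dimension
    coarse_in:  int                               # parallel streams in
    coarse_out: int                               # parallel streams out
    fine:       int                               # vector dot-product folding
```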
### _Hardware Design and Implementation_
The proposed architecture follows the paradigm of a system consisting of a processor extended by a set of custom instructions. The core building blocks described before are equivalent to custom instructions, and their control is performed through a CPU. Each building block is connected to a crossbar that is responsible for handling the routing of data between the building blocks and off-chip memory, as well as performing inter building block routing. The memory access to and from memory is supported through dedicated DMA blocks.
The architecture of the custom instructions follows the Streaming Architecture paradigm, by implementing direct convolution operations and exploring the opportunity of driving the instantiated building blocks as dictated by the 3D model without accessing the off-chip memory, but at the same time can map multiple operations on the same building blocks in a time-shared manner avoiding the need for reconfiguring the FPGA fabric.
The toolflow is responsible for identifying the building blocks required to support the computations of a given 3D CNN model, as well as for tuning them in order to optimise the performance of the system given the available resources (FPGA resources and off-chip memory bandwidth). As such, the resulting system is a heterogeneous multi-core system, with blocks tailored to the 3D CNN model and the targeted FPGA device. This deviates from the approach taken in other streaming toolflows such as FINN [19] and fpgaConvNet [18], where the hardware is tailored to specific DNN models with layers being deeply pipelined, utilising bitstream reconfiguration to overcome resource constraints. Such a design approach leads to high-performance designs for throughput-oriented applications; however, bitstream reconfiguration inhibits the ability to target latency-driven applications with a latency target smaller than the reconfiguration time of the device, which is usually in the order of hundreds of milliseconds.
Figure 2 illustrates a simplified diagram of an example accelerator generated for a given model. Feature-maps are sent to and from off-chip memory via a pair of DMAs, and sent to the hardware nodes through the configurable crossbars. The design has a sandwich-like architecture, with the AXI-Stream crossbars routing data to and from hardware nodes. The output crossbar connects to the input crossbar, allowing for interconnectivity between hardware nodes.
The fpgaConvNet [18] toolflow is utilised for the implementation of the computation blocks. We regard this as the _baseline_ design, where no runtime configuration is supported, and only padded execution can be used to execute variable feature-map sizes. Figure 3 illustrates how the baseline design has been modified to support runtime configurable layers. The highlighted blocks constitute the overhead required for supporting runtime reconfigurability. The additional hardware includes a configurable counter to change the depth of the line buffers and accumulation buffers, and a crossbar to map configurable kernel sizes to a configurable number of multipliers. These extra resources are insignificant, and there is no change in targetable clock frequency.
## IV Modelling
Precise, high-quality performance and resource modelling is essential to support a rapid and valid design space exploration. This section presents a comprehensive performance model for the parameterized building blocks described in Section III. Performance models are described per computation node, and the resource model is for the entire system design.
### _Performance Modelling_
Performance modelling is required to evaluate the latency objective during design space exploration. The latency of the execution of a layer can be estimated by a roof-line model consisting of the required memory bandwidth, and the latency from computation. The latency model for each supported
Fig. 3: Diagram of hardware for Convolution, and how it can be used with runtime parameters. The blue blocks represent compile-time configurable hardware modules. The red blocks represent runtime configurable hardware modules. Cross-hatching gives an example of how hardware elements can be bypassed at runtime.
Fig. 2: Block diagram of an accelerator instance produced by the HARFLOW3D toolflow. The black lines describe AXI-Stream signals, where the arrows indicate the directionality of the connection, blue are high-throughput AXI interfaces for DMA access, red are AXI-Lite connections for runtime configuration of the hardware nodes, and green indicate the DDR IO interfaces for communicating with off-chip memory.
hardware type as a function of its runtime parameters \(\Gamma\) is given as follows:
\[\mathcal{L}_{Conv}(\Gamma)=\frac{|\hat{\mathbf{S}}^{out}|\cdot\hat{F}\cdot|\hat{\mathbf{K }}|}{\hat{c}^{out}\cdot\hat{c}^{in}\cdot\hat{f}}\]
\[\mathcal{L}_{FC}(\Gamma)=\frac{\hat{C}\cdot\hat{F}}{\hat{c}^{in}\cdot\hat{c}^{ out}}\]
\[\mathcal{L}_{Pool}(\Gamma)=\mathcal{L}_{Act}(\Gamma)=\mathcal{L}_{EltWise}( \Gamma)=\frac{|\hat{\mathbf{S}}^{in}|}{\hat{c}}\]
The preceding models assume unlimited memory bandwidth; however, memory accesses have limited throughput. In order to model the effect of memory bandwidth, the consumption and production rates of the layer are required, which are given as,
\[r_{n}^{in}(\Gamma)=\frac{|\hat{\mathbf{S}}^{in}|}{\mathcal{L}_{n}(\Gamma)\cdot\hat{ c}^{in}},\ r_{n}^{out}(\Gamma)=\frac{|\hat{\mathbf{S}}^{out}|}{\mathcal{L}_{n}( \Gamma)\cdot\hat{c}^{out}}\]
where \(r_{n}^{in}(\Gamma)\) and \(r_{n}^{out}(\Gamma)\) are the words per cycle per stream in and out of the computation node \(n\) when executing runtime parameters \(\Gamma\).
For convolution and fully-connected layers in particular, extra memory bandwidth is required to stream in the weight parameters. This rate is described as,
\[r_{Conv,FC}^{param}(\Gamma)=\frac{\hat{C}^{in}\cdot\hat{F}\cdot|\hat{\mathbf{K}}|} {\mathcal{L}_{Conv,FC}(\Gamma)\cdot\hat{c}^{in}\cdot\hat{c}^{out}\cdot\hat{f}}\]
Alongside the double-buffering of weights, if the channel dimension of a convolution or fully-connected layer is folded, the partial sums must be accumulated. This requires streaming the previous partial sum from off-chip memory, whose rate matches the rate out.
\[r_{Conv,FC}^{psum}(\Gamma)=r_{Conv,FC}^{out}(\Gamma)\]
The constrained bandwidth in and out can be described as,
\[\mathcal{B}_{n}^{in}(\Gamma)=\min\{\mathcal{B}_{DMA}^{in},\ r_{n}^{in}(\Gamma) \cdot\hat{c}^{in}\}\]
\[\mathcal{B}_{n}^{out}(\Gamma)=\min\{\mathcal{B}_{DMA}^{out},\ r_{n}^{out}( \Gamma)\cdot\hat{c}^{out}\}\]
where \(\mathcal{B}_{n}^{in}(\Gamma)\) and \(\mathcal{B}_{n}^{out}(\Gamma)\) are the constrained words per cycle for the execution of parameters \(\Gamma\) on the computation node \(n\). This describes a roofline model, where the on-chip bandwidth is capped by the memory bandwidth. The above model summarises the bandwidth in and out for all layers apart from convolution and fully-connected. To describe these, the bandwidth for parameters and partial sums must also be considered,
\[\mathcal{B}_{Conv,FC}^{in}(\Gamma)=\min\{\mathcal{B}_{DMA}^{in}, \ r^{in}(\Gamma)\cdot c^{in}+\] \[r_{Conv,FC}^{psum}(\Gamma)\cdot c^{out}+\] \[r_{Conv,FC}^{param}(\Gamma)\cdot c^{in}\cdot c^{out}\cdot f\}\]
For _Fully Connected_ layers, \(f=1\). Given the bandwidths, the total latency for executing a layer is given as,
\[\tilde{\mathcal{L}}_{n}(\Gamma)=\max\left\{\frac{|\hat{\mathbf{S}}^{in}|}{\mathcal{ B}_{n}^{in}(\Gamma)},\ \frac{|\hat{\mathbf{S}}^{out}|}{\mathcal{B}_{n}^{out}(\Gamma)}\right\} \tag{1}\]
It is worth noting that the performance models are functions of runtime parameters, owing to the highly customisable hardware. Streaming Architectures tend to be computationally bounded, as supported by the results presented in Section VII. As the size of the feature-maps is significantly greater than that of the weights, the weights bandwidth is often negligible in comparison to the overall bandwidth required. There is no latency associated with updating the runtime parameters because they are also double-buffered and require negligible information transfer (\(<\)100B).
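As an illustration of the roofline model above, a minimal Python sketch of the convolution latency estimate and of Equation (1) is given below; weight and partial-sum traffic is omitted for brevity, and the function names are illustrative rather than part of the toolflow.

```python
from math import prod

def conv_compute_latency(shape_out, filters, kernel, c_in, c_out, fine):
    """Compute-bound latency (cycles) of a Convolution node, cf. L_Conv(Gamma)."""
    return prod(shape_out) * filters * prod(kernel) / (c_in * c_out * fine)

def layer_latency(shape_in, shape_out, compute_latency, c_in, c_out,
                  bw_dma_in, bw_dma_out):
    """Roofline latency of Eq. (1). Rates r_in/r_out are the words per cycle per
    stream consumed/produced when running at the compute-bound latency."""
    r_in  = prod(shape_in)  / (compute_latency * c_in)
    r_out = prod(shape_out) / (compute_latency * c_out)
    b_in  = min(bw_dma_in,  r_in  * c_in)    # constrained bandwidth in
    b_out = min(bw_dma_out, r_out * c_out)   # constrained bandwidth out
    return max(prod(shape_in) / b_in, prod(shape_out) / b_out)
```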
### _Resource Modelling_
Resource modelling is used to explore the performance-resource design space whilst keeping designs within the target FPGA's constraints. Modern FPGA devices share four common resource types: DSP, BRAM, LUT and FF. Only the _Conv_ and _FC_ layers use DSP resources. An analytical model of DSP usage for building blocks \(n\) of these layer types is given by,
\[\mathcal{R}_{Conv}^{DSP}=c_{n}^{in}\cdot c_{n}^{out}\cdot f_{n},\ \ \mathcal{R}_{FC}^{ DSP}=c_{n}^{in}\cdot c_{n}^{out}\]
As a 16-bit fixed-point precision is used throughout the design, each DSP is used for either \(16\cdot 16\) multiplication or multiplication-accumulation. For BRAM modelling, the number of BRAM blocks can be described as,
\[\mathcal{R}^{BRAM}(depth,words)=\Big{\lceil}\frac{depth}{512}\Big{\rceil} \cdot\Big{\lceil}\frac{16\cdot words}{36}\Big{\rceil}\]
The bus width of the required memory is \(16\cdot words\) as 16-bit fixed-point is used. Only _Conv_, _FC_ and _Pool_ layers consume BRAM components. Both _Conv_ and _Pool_ use BRAM for the Sliding Window module, which is described as,
\[\mathcal{R}_{SlW}^{BRAM}=\mathcal{R}^{BRAM}\Big{(}W_{n}\cdot D_{n}\cdot\frac{C_{n}^{in}}{c_{n}^{in}},(K_{n}^{H}-1)\cdot c_{n}^{in}\Big{)}+\mathcal{R}^{BRAM}\Big{(}D_{n}\cdot\frac{C_{n}^{in}}{c_{n}^{in}},K_{n}^{H}\cdot(K_{n}^{W}-1)\cdot c_{n}^{in}\Big{)}+\mathcal{R}^{BRAM}\Big{(}\frac{C_{n}^{in}}{c_{n}^{in}},K_{n}^{H}\cdot K_{n}^{W}\cdot(K_{n}^{D}-1)\cdot c_{n}^{in}\Big{)}\]
_Conv_ and _FC_ require some extra memory for storing weights on-chip, which is modelled as,
\[\mathcal{R}_{Weight}^{BRAM}=\mathcal{R}^{BRAM}\Big{(}\frac{C_{n}^{in}\cdot F_{n }\cdot|\mathbf{K}_{n}|}{c_{n}^{in}\cdot c_{n}^{out}\cdot f_{n}},c_{n}^{in}\cdot c _{n}^{out}\cdot f_{n}\Big{)}\]
For _Fully Connected_ layers, \(\mathbf{K}_{n}=\{1,1,1\}\) and \(f_{n}=1\). The hardware design uses a large data word design technique, which significantly improves BRAM utilisation.
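A minimal Python sketch of the BRAM model is given below, covering the generic \(\mathcal{R}^{BRAM}\) helper and the weight memory of a convolution node; the function names are illustrative.

```python
from math import ceil, prod

def bram(depth, words):
    """Number of BRAM blocks for a memory of `depth` entries,
    each `words` 16-bit words wide (cf. R^BRAM)."""
    return ceil(depth / 512) * ceil(16 * words / 36)

def conv_weight_bram(channels_in, filters, kernel, c_in, c_out, fine):
    """On-chip weight storage of a Convolution node (cf. R_Weight^BRAM)."""
    depth = channels_in * filters * prod(kernel) / (c_in * c_out * fine)
    return bram(depth, c_in * c_out * fine)
```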
For the modelling of LUT and FF resources, a regression model is used due to the non-deterministic nature of FPGA synthesis. The regression models are obtained from a data set of 5000 synthesised modules, from which the relationship between a module's parameters and its resources is inferred.
Using the derived resource models, we can estimate the resource consumption of a complete hardware graph (G),
\[\mathbf{\mathcal{R}}_{total}=\left(\sum_{n\in G}\mathbf{\mathcal{R}}_{n}\right)+\mathbf{ \mathcal{R}}_{DMA}+\mathbf{\mathcal{R}}_{xbar}\]
where \(\mathbf{\mathcal{R}}\) describes the complete resources (DSP, BRAM, LUT, FF), for the given component. The quality of the resource model is evaluated in Section VI.
## V Latency-Driven Design Space Exploration
The minimization of the 3D-CNN model's execution latency on the runtime-configurable accelerator is regarded as an optimization problem. As such, a Design Space Exploration is performed that searches for both an efficient accelerator for the specific application and a schedule for execution of the 3D CNN model layers on that architecture.
### _Scheduling_
Once an accelerator design has been created, a schedule is needed for executing the 3D-CNN model's layers for this given design. The main choices with regard to scheduling are:
* Mapping of the computation nodes for executing the 3D-CNN model's layers (i.e. execution nodes).
* Tiling of the feature-maps of execution nodes on a given computation node.
* Runtime configurations for all the invocations of the computation nodes based on the respective execution nodes parameters.
The mapping between a computation node and the respective execution nodes which it will execute is described as an execution mapping function \(\mathcal{E}:G\mapsto\mathcal{P}(M)\), where \(\mathcal{P}(M)\) is the power-set of \(M\), which is all the distinct subsets of \(M\). This mapping creates disjoint subsets of \(M\), which can be described as,
\[\mathcal{E}(n)\cap\mathcal{E}(m)=\emptyset\ \forall\ n,m\in G,n\neq m\]
where \(\mathcal{E}\) is the execution mapping function. The mapping function must give unique mappings for each computation node, such that none of the model's layers are executed more than once. The inverse mapping, \(\mathcal{E}^{-1}\) finds the corresponding computation node for a given execution node. The mapping is decided based on the transform described in Section V-C4.
Once a mapping has been decided, the schedule for \(G\), denoted as \(\Phi_{G}\), can be created for executing the 3D CNN model graph \(M\) on the hardware graph \(G\). This schedule is outlined in Algorithm 1. For each execution node \(l\) of the 3D CNN model graph, the tiling factors across each of its dimensions are obtained, which are then used to find the tile sizes for computation. The algorithm greedily allocates as much of the feature-map as possible onto the computation node and then chooses the coarse factors based on the tile shape. Additional runtime parameters such as kernel sizes and padding are also chosen based on the execution node's parameters.
```
1:\(\Phi=\) empty list \(\triangleright\) initialise an empty schedule
2:for\(l\) in \(M\)do
3:\(n=\mathcal{E}^{-1}(l)\)\(\triangleright\) get the computation node
4:for \(i\) in range(\(\lceil\frac{H^{in}_{l}}{H^{in}_{n}}\rceil,\lceil\frac{W^{in}_{l}}{W^{in}_{n}}\rceil,\lceil\frac{D^{in}_{l}}{D^{in}_{n}}\rceil,\lceil\frac{C^{in}_{l}}{C^{in}_{n}}\rceil\)) do \(\hat{H}=\min\{H^{in}_{n},H^{in}_{l}-i^{H}\cdot H^{in}_{n}\}\) \(\hat{W}=\min\{W^{in}_{n},W^{in}_{l}-i^{W}\cdot W^{in}_{n}\}\)
5:\(\hat{D}=\min\{D^{in}_{n},D^{in}_{l}-i^{D}\cdot D^{in}_{n}\}\) \(\hat{C}=\min\{C^{in}_{n},C^{in}_{l}-i^{C}\cdot C^{in}_{n}\}\)
6:if\(type(n)\)is\(Conv\)or\(FC\)then
7:for \(i^{F}\) in range(\(\lceil\frac{F_{l}}{F_{n}}\rceil\)) do
8:\(\hat{F}=\min\{F_{n},F_{l}-i^{F}\cdot F_{n}\}\)
9:\(c^{in}=\max\{\textit{factors}\ \hat{C}\}\)
10:\(c^{out}=\max\{\textit{factors}\ \hat{F}\}\)
11:\(\Gamma=\{\hat{H},\hat{W},\hat{D},\hat{C},\hat{F},\hat{c}^{in},\hat{c}^{out}\}\)
12:\(\Phi\)append (\(n,\ \Gamma\))
13:else
14:\(c=\max\{\textit{factors}\ \hat{C}\}\)
15:\(\Gamma=\ \{\hat{H},\hat{W},\hat{D},\hat{C},\hat{c}\}\)
16:\(\Phi\)append (\(n,\Gamma\))
```
**Algorithm 1** Scheduling Algorithm
Having created the schedule \(\Phi_{G}\), it can be used for executing the workload. The ordering of dimensions in the proposed accelerator is \(NHWDC\), where the channel dimension is the fastest changing, and so the schedule is also executed in this order. The total latency for execution is described as,
\[\mathcal{L}_{total}(G)=\sum_{n,\Gamma\in\Phi_{G}}\mathcal{\tilde{L}}_{n}(\Gamma) \tag{2}\]
where \(n\) is the computation node and \(\Gamma\) is the corresponding set of parameters for each configuration in the schedule. The latency \(\mathcal{\tilde{L}}_{n}(\Gamma)\) of each node \(n\) for a configuration \(\Gamma\) is described in Equation (1).
**Algorithm 2** Simulated Annealing Optimisation Algorithm
### _Optimization Strategy_
Simulated annealing (SA), a meta-heuristic for finding minima in non-convex functions, is adopted as the optimization approach for minimising the latency of a 3D CNN model to FPGA mapping. The implementation of SA for this particular optimisation problem is given in Algorithm 2.
The main objective of the algorithm is to seek the global minimum of a given cost function; in this case, the cost function is latency, which is derived from Equation (2). Starting from an initial state produced by assigning random values to the parameters of the computation nodes, transformations to the hardware graph \(G\) are applied iteratively, generating new states that may be accepted or rejected based on a policy described in Algorithm 2. Each new state is evaluated against the system's predetermined constraints, and is only accepted if it satisfies all
of them. The constraints that must be satisfied on the proposed system are the following:
* The available memory bandwidth should not be exceeded.
* The total used resources \(\boldsymbol{\mathcal{R}}_{total}\) should not exceed the available resources of the device.
* The streams in and out (\(c_{n}^{in},c_{n}^{out}\)) of each computation node should be a factor of the channels in and out respectively.
* The derived scheduled parameters of each layer must be less than the maximum supported by the computation node.
Figure 4 depicts the evolution of latency during Simulated Annealing. The graph shows how the latency of the C3D model on various FPGA devices evolves over time. As indicated by the graph, the starting point of each run has extremely high latency, as all of the parameters are set to random values. The latency then continues to improve until it reaches a plateau, at which point the optimization is terminated.
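A minimal Python sketch of the simulated-annealing loop described above is given below; the transformation, constraint-check and latency functions are placeholders for the toolflow's actual implementations, and the cooling schedule and hyperparameters are illustrative assumptions.

```python
import copy
import math
import random

def simulated_annealing(graph, latency, apply_random_transform, satisfies_constraints,
                        t_start=1e4, t_min=1e-3, cooling=0.99, iters_per_temp=100):
    """Sketch of the latency-driven SA search (not the released implementation)."""
    best = current = copy.deepcopy(graph)
    t = t_start
    while t > t_min:
        for _ in range(iters_per_temp):
            candidate = apply_random_transform(copy.deepcopy(current))
            if not satisfies_constraints(candidate):
                continue                      # reject states that violate constraints
            delta = latency(candidate) - latency(current)
            # accept improvements always; accept worse states with Boltzmann probability
            if delta < 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if latency(current) < latency(best):
                    best = current
        t *= cooling                          # geometric cooling schedule
    return best
```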
### _Transformations_
In order to traverse the design space, a set of transforms on the hardware graph \(G\) is described below.
#### V-C1 Feature-Map Dimensions Reshaping
The feature-map dimensions dictate the amount of on-chip memory required both for the weight parameters as well as caching required by the Sliding Window module, as given in Section IV-B. At compile time, a fixed shape for the feature-map dimensions must be given to the computation node, and at run time the layer can be executed on the computation node by tiling the feature-map.
When the optimizer searches for the optimal feature-map shape configuration for the computation node for both the input and output, the following conditions must hold,
\[D_{n} \leq max\{D_{l}:l\in M\}\] \[H_{n} =max\{H_{l}:l\in M\}\] \[W_{n} \leq max\{W_{l}:l\in M\}\] \[C_{n} \in\{\textit{factors }C_{l}:l\in M\}\]
As the choice of row dimension has no impact on resources, the maximum of all rows is chosen. For depth and columns, any dimension that is both less than the max of all layers, and greater than a minimum feasible dimension is acceptable. The channel dimension is chosen to be a factor of any of the existing channel dimensions.
#### V-C2 Coarse-grain Folding
The coarse-grain folding transformation modifies the number of parallel executions of coarse operations in each layer to achieve parallelism over the channel dimensions of the input feature-map. The primary operations of each layer can be performed simultaneously by deploying (at max) as many instances of its processing blocks as the number of channels. On Fully Connected and 3D Convolutional layers the coarse-level parallelism can be utilised at both the input and output channels. This parallelism is achieved by updating and searching for appropriate values of the compile-time parameters \(c_{n}^{in}\), \(c_{n}\), and \(c_{n}^{out}\) during optimization, which are also taken into account by the performance and resource models. To be considered valid, a design must comply with the constraints driving this transformation.
\[c_{n}^{in},c_{n}\in\textit{factors }C_{n}^{in},\ c_{n}^{out}\in\textit{factors }C_{n}^{out}\]
#### V-C3 Fine-grain Folding
The second folding-wise transformation factor determines the parallelism of the vector dot product operation for 3D convolutional layers. This sort of parallelism specifies the number of multipliers to be set in parallel for multiplications and the number of levels on the adder trees for additions. By increasing the compile-time fine folding factor, achievable latency is reduced at the cost of extra DSP resources, as described in Section IV. Evidently, there is a trade-off between performance and resource utilization. This type of parallelism is accomplished by modifying the \(f_{n}\) parameter during the optimization process. The constraint on the compile-time fine-grain folding \(f_{n}\) is given as,
\[f_{n}\in\textit{factors }|\boldsymbol{K}_{n}|\]
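Both folding constraints reduce to choosing divisors. The snippet below lists the legal coarse factors \(c_{n}^{in}\), \(c_{n}^{out}\) and fine factors \(f_{n}\) for an example node; the channel counts and the \(3\times 3\times 3\) kernel are made-up values used only for illustration.

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

C_in, C_out = 64, 128                 # channel dimensions of an example computation node
kernel_volume = 3 * 3 * 3             # |K_n| for a 3x3x3 convolution kernel

coarse_in = divisors(C_in)            # legal values for c_n^in (and c_n)
coarse_out = divisors(C_out)          # legal values for c_n^out
fine = divisors(kernel_volume)        # legal values for f_n

print(coarse_in)                      # [1, 2, 4, 8, 16, 32, 64]
print(coarse_out[-3:])                # [32, 64, 128]
print(fine)                           # [1, 3, 9, 27]
```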
#### Iv-C4 Combination and Separation of Computation Nodes
As stated in Section III-C, the toolflow allows several model execution nodes to share the same computation node. The initial mapping creates unique computation nodes \(n\) for each execution node \(l\). In NNs with multiple layers, this is impractical since the FPGA resources can be quickly exhausted, and the performance of each computation node would be compromised in order to fit the design. A combination of execution nodes by type for a single computation node (the available types are depicted in Figure 2) is proposed as a solution to this issue. All execution nodes of the same type are combined and mapped onto a single computation node at the beginning of the optimization. The compile-time parameters of this node are then modified such that it can handle the workload of its associated execution nodes. This transformation is employed throughout the optimization procedure, and can affect the computation and execution nodes in two ways:
* Separate Computation Nodes: The algorithm chooses \(L_{e}\) execution nodes, where \(L_{e}\) is a hyperparameter, and detaches them from their corresponding computation node. The new group of execution nodes is then mapped onto new computation nodes whose characteristics and parameters are adapted accordingly.
Fig. 4: Evolution of latency during Simulated Annealing for various FPGA devices.
* Combine Computation Nodes: The algorithm searches for computation nodes of the same type and selects \(N_{c}\) of them, where \(N_{c}\) is a hyperparameter, to combine into a single computation node. The computation node's compile-time parameters are updated to support the new set of workloads.
Each time a combination or separation is applied, a set of constraints is checked to ensure the validity of the result. The required constraints are a combination of the _Feature-Map Dimensions Reshaping, Coarse-grain Folding, and Fine-grain Folding_ constraints.
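The sketch below illustrates how the two moves might manipulate a list of computation nodes. The data structures and the way nodes are regrouped are simplified assumptions made for illustration, not the toolflow's actual implementation; a real implementation would additionally re-derive the compile-time parameters and re-check the reshaping and folding constraints after every move.

```python
import random
from dataclasses import dataclass, field

@dataclass(eq=False)
class ExecutionNode:
    name: str
    op_type: str                               # e.g. "Conv3D", "ReLU", "Gemm"

@dataclass(eq=False)
class ComputationNode:
    op_type: str
    exec_nodes: list = field(default_factory=list)

def separate(comp_nodes, L_e, rng=random):
    """Detach up to L_e execution nodes and map each onto a fresh computation node."""
    pool = [(c, e) for c in comp_nodes for e in c.exec_nodes]
    for c, e in rng.sample(pool, min(L_e, len(pool))):
        if len(c.exec_nodes) > 1:              # never empty an existing node
            c.exec_nodes.remove(e)
            comp_nodes.append(ComputationNode(e.op_type, [e]))
    return comp_nodes

def combine(comp_nodes, N_c, rng=random):
    """Merge up to N_c computation nodes of the same type into a single node."""
    by_type = {}
    for c in comp_nodes:
        by_type.setdefault(c.op_type, []).append(c)
    mergeable = [group for group in by_type.values() if len(group) >= 2]
    if not mergeable:
        return comp_nodes
    chosen = rng.choice(mergeable)
    group = rng.sample(chosen, k=min(N_c, len(chosen)))
    merged = ComputationNode(group[0].op_type,
                             [e for c in group for e in c.exec_nodes])
    return [c for c in comp_nodes if c not in group] + [merged]

# toy usage: four Conv3D execution nodes, one combine move with N_c = 3
nodes = [ComputationNode("Conv3D", [ExecutionNode(f"conv{i}", "Conv3D")]) for i in range(4)]
nodes = combine(nodes, N_c=3)
print([len(c.exec_nodes) for c in nodes])      # e.g. [1, 3]
```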
## VI Hardware Model Validation
Modelling of performance and resources is used extensively for rapid traversal of the large design space. In this section, the accuracy of the performance and resource models is validated across different hardware designs for C3D [3], demonstrating negligible error between modelled and measured results.
Table II shows a direct comparison between predicted resources and resources after synthesis for a C3D design. The DSP and BRAM models are highly accurate due to the deterministic nature of their synthesis, as resource type annotations are used in the hardware design. For the LUT and FF prediction accuracy, the modelling over-predicts LUT usage and under-predicts FF usage. Logic optimisation contributes to fewer LUTs in the final implemented design, and the additional FF resources likely arise from inter-module buffering that is neglected in the modelling. Additional BRAM resources are required by DMAs for the buffering of bursts across the feature-map. This is accounted for during optimisation.
The convolution layers dominate the total resource consumption, with DSPs typically being the limiting factor. The convolution layers are investigated further, where statistical resource modelling information is captured over 16 designs with varying configurations among different layers. Supporting the results of Table II, the Mean Absolute Percentage Error (MAPE) and the Standard Deviation (\(\sigma\)) over the 16 different convolution configurations are shown in Table III.
The accuracy of the performance model is evaluated in Figure 6, where the absolute percentage error is calculated as \(error=\frac{|Predicted-Measured|}{Measured}*100\). The small percentage differences between predicted and measured results on the ZCU106 board imply a good level of accuracy and confidence in the toolflow's modelling results. The divergence between the expected and actual latency of the layers stems from the DMA introducing a delay between bursts because of memory access cycles. The MAPE over all convolution layers of the C3D model for this particular design is 6.64%.
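For reference, the error metric used in Figure 6 takes only a few lines to reproduce; the latency values below are placeholders rather than measurements from the paper.

```python
def abs_pct_error(predicted, measured):
    """Absolute percentage error, as defined above."""
    return abs(predicted - measured) / measured * 100.0

def mape(predicted, measured):
    """Mean absolute percentage error over a set of layers."""
    return sum(abs_pct_error(p, m) for p, m in zip(predicted, measured)) / len(measured)

predicted_ms = [4.1, 8.0, 7.6]     # placeholder per-layer latency predictions (ms)
measured_ms = [4.4, 8.5, 7.3]      # placeholder measured latencies (ms)
print([round(abs_pct_error(p, m), 2) for p, m in zip(predicted_ms, measured_ms)])
print(round(mape(predicted_ms, measured_ms), 2))
```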
## VII Evaluation
This section focuses on the evaluation of the proposed methodology and its ability to discover optimal designs. The automatically generated designs are benchmarked against existing 3D CNN accelerators.
The models of 3D CNN included in the evaluation are listed in Table IV. Each of these models has demonstrated state-of-the-art performance in a number of HAR benchmarks and
Fig. 5: The dataflow of a simple design consisting of Convolution, ReLU, and FC layers. As the red lines and the crossbar indicate, the flow between Convolution and ReLU can be handled within the FPGA without sending the data back to off-chip memory. This is the result of the Fuse Activation optimization.
\begin{table}
\begin{tabular}{l l|l|l|l} \hline & DSP & BRAM & LUT & FF \\ \hline \hline MAPE (\%) & 0.0 & 0.35 & 7.21 & 8.81 \\ \(\sigma\) & 0.0 & 0.38 & 8.82 & 2.89 \\ \hline \end{tabular}
\end{table} TABLE III: Statistical resource modelling information over multiple runs for different convolution layers and configurations.
Fig. 6: Comparison of predicted latency to measured latency for all the convolution layers of C3D on the ZCU106 board, depicted as absolute percentage error.
offers a variety of workload and network parameters. Some of these models also serve as benchmarks for existing FPGA accelerator-focused works. The ONNX files used for all the experimental results have been exported from [20] for _C3D, Slowonly_, and _X3D-M_ models, and from [21] for _R(2+1)D-18_ and _R(2+1)D-34_.
### _Experimental Results_
#### Iv-A1 Ablation Study
In this section, an ablation study was undertaken to assess the effect of several optimization strategies on the final performance of the proposed method. Having a baseline strategy in place, as well as introducing and investigating modifications and additions to it, has yielded significant insights and conclusions regarding the direction that should be followed for improving the optimization process. For the ablation experiments, the R(2+1)D-18 model was used, although the findings apply to all supported models and devices.
**Baseline Design Description:**
The baseline optimisation strategy is defined as follows: the SA hyperparameters are configured as \(\tau_{start}=10\), \(\tau_{min}=1\times 10^{-6}\), and the cooling rate \(\lambda=0.99\), and a warm start is executed prior to running the optimiser. These parameters remain constant throughout the ablation study. For the baseline experiments, the _Feature-Map Dimensions Reshaping, Coarse-grain Folding, and Fine-grain Folding_ transformations are enabled, while the use of runtime parameters and the fusion of activation layers into the preceding layer are disabled.
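A skeleton of the annealing loop implied by these hyperparameters is given below. The `latency` and `random_transform` callables stand in for the toolflow's performance model and transformations; they, together with the toy usage, are assumptions made purely for illustration.

```python
import math
import random

def simulated_annealing(design, latency, random_transform,
                        t_start=10.0, t_min=1e-6, cooling=0.99):
    """Minimise latency(design), accepting worse moves with probability exp(-delta/T)."""
    current, cur_lat = design, latency(design)
    T = t_start
    while T > t_min:
        candidate = random_transform(current)
        if candidate is not None:              # None models a constraint-violating move
            cand_lat = latency(candidate)
            delta = cand_lat - cur_lat
            if delta < 0 or random.random() < math.exp(-delta / T):
                current, cur_lat = candidate, cand_lat
        T *= cooling
    return current, cur_lat

# toy usage: the "design" is a single number and the latency is a convex bowl
best, lat = simulated_annealing(
    design=50.0,
    latency=lambda d: (d - 7.0) ** 2 + 3.0,
    random_transform=lambda d: d + random.uniform(-1.0, 1.0),
)
print(round(best, 2), round(lat, 2))           # converges near design 7, latency 3
```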
**Optimization Strategies:**
* Building Blocks Combination: Incorporating the _Combination and Separation of Computation Nodes_ transformation into the optimization approach improved the optimiser's performance by \(1.14\times\). This optimization is described in detail in Section V-C.
* Fusion of activation functions into previous layer: Upon revisiting the optimization findings and assessing each layer type of the model, it was found that the activation layers are typically memory bounded, which hinders the overall performance of the design. To overcome this constraint, the fusion optimization was introduced. Through the crossbars seen in Figure 4(b), such layers are fused directly to the preceding ones (mostly convolution layers). Since convolution layers are mainly compute bounded, the fused activation layers also become compute bounded, suppressing the memory bounded limitation. The addition of this specific optimization has provided a \(1.52\times\) boost in performance.
* Runtime reconfiguration of layer parameters: The introduction of runtime reconfigurability of computation nodes resulted in the most significant performance boost. As described in Section III-C, in a node that is not runtime parameterisable, the underlying hardware modules would need to add padding to match the compile-time dimensions in order to support a layer with different runtime feature-map dimensions. This greatly degrades the performance of the computation node, as it must perform redundant operations. With the introduction of runtime parameterisable modules, as shown in Figure 3, the extra cost of padding and all the redundant operations are omitted, resulting in an \(18.21\times\) performance increase.
#### Iv-A2 Resources against Latency Comparison
Figure 7 depicts the Pareto front between resources and latency. The DSP utilisation was chosen to represent resources, as the generated designs are typically limited by the number of available DSPs. The figure shows the optimiser's ability to traverse the resource-latency design space, achieving fine-grain control over this trade-off. Indeed, the optimiser is able to double performance along the Pareto front at the cost of doubling the number of DSPs, exploring many design points in between.
### _Comparison to the state-of-the-art_
Having evaluated the performance of the hardware and the effectiveness of the design space exploration method, the proposed work is positioned against existing accelerator works. Table V outlines the current space for FPGA-based 3D CNN acceleration, and how HARFLOW3D compares. Please note that all existing accelerators are the product of approaches
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline
**Hardware** & \multicolumn{2}{c}{**DSP**} & \multicolumn{2}{c}{**BRAM**} & \multicolumn{2}{c}{**LUT**} & \multicolumn{2}{c}{**FF**} \\
**Node** & _pred._ & _act._ & _error_ & _pred._ & _act._ & _error_ & _pred._ & _act._ & _error_ & _pred._ & _act._ & _error_ \\ \hline \hline Conv & 2304 & 2304 & (+0\%) & 1052 & 1052 & (+0\%) & 151K & 138K & (+9.4\%) & 155K & 166K & (-6.6\%) \\ MaxPool & 0 & 0 & (+0\%) & 0 & 0 & (+0\%) & 22K & 17K & (+29.4\%) & 16K & 18K & (-11.1\%) \\ Gemm & 128 & 128 & (+0\%) & 456 & 456 & (+0\%) & 11K & 10K & (+1.0\%) & 15K & 18K & (-16.6\%) \\ ReLU & 0 & (+0\%) & 0 & & (+0\%) & 1.0K & 1.4K & (-28.5) & 2.2K & 2.2K & (+0\%) \\ \hline DMA & & & & & 51 & & 2.9K & & & 4.7K & \\ X-BAR & & 0 & & & 0 & & 1.7K & & & 1.4K & \\ \hline
**Total** & 2432 & 2432 & (+0\%) & 1559 & 1559 & (+0\%) & 189K & 171K & (+7.8\%) & 194K & 210K & (-9.4\%) \\ _Avail._ & & _(2520)_ & & _(1824)_ & & _(274K)_ & & & _(548K)_ & \\ \hline \hline \end{tabular}
\end{table} TABLE II: Comparison of predicted and synthesised resources for C3D designs on a ZCU102 board.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & CSD & Slowonly & R(2+1)D-18 & R(2+1)D-34 & X3D-M \\ \hline FLOPs (G) [1] & 38.61 & 54.81 & 54.81 & 1, 291 & 6.97 \\ Parameter (M) & 78.41 & 32.51 & 33.41 & 63.72 & 3, 82 \\ Num. of Layers & 27 & 174 & 82 & 1, 54 & 396 \\ Num. of Conv Layers & 8 & 1 & 535 & 1 & 37 & 1 & 69 \\ Spatial dimensions & \(112\times 12\) & \(256\times 256\) & \(112\times 112\) & \(112\times 112\) & \(256\times 256\) \\ Num. of Frames & 16 & 8 & 16 & 16 & 16 & 16 \\ UCF101 & & 83.2 & 94.54 & 88.66 & 92.27 & 96.52 \\ Accuracy (\%) & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline \hline \end{tabular} \({}^{1}\) FLOPs are reported as MAC operations.
\end{table} TABLE IV: Characteristics of the evaluated 3D CNN models.
that target only a specific workload, and thus are hand-tuned, whereas the designs produced by HARFLOW3D are generated by a single toolflow. By parametrising all aspects of the hardware and automating the design space exploration, outstanding performance is achieved across a multitude of networks, targeting more than any existing accelerator.
Figure 8 provides a more extensive and direct comparison to existing works on the C3D model. Since HARFLOW3D can target all of the platforms that previous studies have targeted for C3D, results for each of them have been collected for a direct comparison with all of the existing works. The results are evaluated in terms of DSP efficiency, i.e. GOPs normalised over the device's DSPs. As shown in Figure 8, HARFLOW3D achieves \(1.89\times\) higher DSP efficiency on ZC706 compared to H. Fan [5]. On ZCU102, \(5.03\times\) greater results are achieved than M. Sun [11]. On VC709, HARFLOW3D achieves \(1.27\times\) better DSP efficiency than Z. Liu [8], while obtaining nearly equal performance, only \(1.008\times\) off, compared to J. Shen [9]. Compared to T. Teng [13] on VC707, the DSP efficiency is \(1.48\times\) lower; however, the comparison cannot be considered direct since that design uses 8-bit fixed-point arithmetic precision. In comparison to J. Shen [9] on VUS440, the DSP efficiency is inferior by \(2.16\times\).
Table VI outlines a comparison between an RTX 3090, a server-grade cutting-edge GPU, and a ZCU106, a mid-range FPGA board. Despite the difference in scale between the two devices, the results for energy/clip demonstrate the efficiency of the HARFLOW3D toolflow.
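The energy-per-clip entries in Table VI follow directly from power and latency (energy = power × time), which makes them easy to sanity-check:

```python
def energy_per_clip(power_w, latency_ms):
    """Energy per clip in joules from average power (W) and latency per clip (ms)."""
    return power_w * latency_ms / 1000.0

print(round(energy_per_clip(234.1, 6.93), 2))   # RTX 3090: ~1.62 J, as in Table VI
print(round(energy_per_clip(9.44, 182.81), 2))  # ZCU106: ~1.73 J (Table VI lists 1.72 J,
                                                # presumably from unrounded inputs)
```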
## VIII Conclusion
This paper presents HARFLOW3D, the first FPGA toolflow for 3D CNNs that supports a wide range of models and devices. A series of transformations during optimization, in conjunction with a novel hardware implementation supporting runtime parametrization of the hardware nodes, enabled comparable and even greater performance in comparison to prior hand-tuned designs. Future directions include support for additional 3D CNN models with different backbones, such as Inception-like architectures (I3D). Furthermore, expansion into domains other than HAR, such as 3D semantic segmentation, medical imaging (CT and MRI scans), 3D object detection from point clouds and Transformer-based HAR models, will be investigated.
## Acknowledgment
For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Accepted Manuscript version arising.
\begin{table}
\begin{tabular}{c c c} \hline & GPU & FPGA (HARFLOW3D) \\ \hline Platform & RTX 3090 & ZCU106 \\ Clock Frequency & 1.7 GHz & 200 MHz \\ Precision & 32-bit float & 16-bit fixed \\ Latency/clip (ms) & 6.93 & 182.81 \\ Power (W) & 234.1 & 9.44 \\ Energy/clip (J) & 1.62 & 1.72 \\ \hline \end{tabular}
\end{table} TABLE VI: Comparison of HARFLOW3D against a GPU for the C3D model. We achieve comparable energy efficiency.
\begin{table}
\end{table} TABLE V: Comparison of HARFLOW3D with existing works, for 3D CNN HAR models. Our tool can target all models, including X3D.
Fig. 8: Comparison of the DSP efficiency between HARFLOW3D and previous works. The wide range of the supported FPGA platforms allows for a direct comparison. We demonstrate competitive, and in some cases superior performance.
Fig. 7: Pareto front of DSP utilization against latency for the R(2+1)D-34 model on a ZCU102 platform during Simulated Annealing optimization.
|
2310.18013 | Location and symmetry of superconductivity in neutron stars | Earlier, it was a standard assumption that the entire core of neutron stars
is superconducting. However, the matter contents in the inner core has been
unknown even qualitatively, because the density of matter in that region is
expected to be higher than the nuclear saturation density 0.16
$\mathrm{fm}^{-3}$. As a consequence, no reliable model exists that would
describe the neutron star matter in the inner core of neutron stars. Thus, a
possibility of presence of normal, nonsuperconducting, plasma in the inner core
cannot be excluded as of today. This point is supported by the numerical
calculations performed in [1]. The calculations are based on the equation of
state and the proton Cooper pairing gap energy derived from the chiral
effective field theory. The numerical results show that the superconducting gap
goes to zero beyond the depth about 1 km below the crust-core boundary. Given
that the stellar radius is of the order of 12 km, therefore the superconducting
proton matter is expected to exist only in a thin layer at the tip of the outer
core. Recently it has been realized that the symmetry of superconductor is
anisotropic in the lasagna region of the pasta phases located at the bottom of
the crust. However the question of whether this symmetry is continuous or
discrete was unsolved. The numerical calculations performed in [1] have shown
that the tunneling rate between the adjacent slabs in the entire range of the
corresponding densities is negligibly small. Thus, a discrete model is
necessary for the description of the lasagna region. Uncertainties and future
directions of the research are discussed. | Dmitry Kobyakov | 2023-10-27T09:40:10Z | http://arxiv.org/abs/2310.18013v1 | # Location and symmetry of superconductivity in neutron stars
###### Abstract
Earlier, it was a standard assumption that the entire core of neutron stars is superconducting. However, the matter contents in the inner core has been unknown even qualitatively, because the density of matter in that region is expected to be higher than the nuclear saturation density \(0.16\) fm\({}^{-3}\). As a consequence, no reliable model exists that would describe the neutron star matter in the inner core of neutron stars. Thus, a possibility of presence of normal, nonsuperconducting, plasma in the inner core cannot be excluded as of today. This point is supported by the numerical calculations performed in [1]. The calculations are based on the equation of state and the proton Cooper pairing gap energy derived from the chiral effective field theory. The numerical results show that the superconducting gap goes to zero beyond the depth about 1 km below the crust-core boundary. Given that the stellar radius is of the order of 12 km, therefore the superconducting proton matter is expected to exist only in a thin layer at the tip of the outer core. Recently it has been realized that the symmetry of superconductor is anisotropic in the lasagna region of the pasta phases located at the bottom of the crust. However the question of whether this symmetry is continuous or discrete was unsolved. The numerical calculations performed in [1] have shown that the tunneling rate between the adjacent slabs in the entire range of the corresponding densities is negligibly small. Thus, a discrete model is necessary for the description of the lasagna region. Uncertainties and future directions of the research are discussed.
The location of superconducting protons and the symmetry properties of the order parameter are crucial for the spectrum of hydromagnetic waves in neutron stars. The hydromagnetic waves transfer energy from the inner part of the star to its outer layers in observable processes such as magnetar giant flares and the subsequent quasiperiodic oscillations, as well as glitches of the spin frequency. Thus, understanding the spectrum of the hydromagnetic waves is a crucial part of astrophysical models designed for generating novel knowledge about the structure of the neutron star matter.
The existence of superconductivity in neutron stars was first considered in 1969 in [2], where it was shown that the strong proton-proton interaction must lead to S-wave Cooper pairing and to superconductivity in the core (where the nuclear matter is expected to be a uniform quantum liquid). Since then, the standard physical picture has been that the neutron star core is completely filled by the proton superconductor with an isotropic order parameter.
However, recent calculations [1] based on the energy-dependent proton Cooper pairing gap energy derived from the chiral effective field theory of baryons [3], have shown that the superconductor fills only a thin layer at the tip of the core with thickness of the order of 1 kilometer. To the best of my knowledge this result has not been discussed before and therefore is novel.
The conclusion that the superconductor fills only a thin layer of the core depends neither on the equation of state (EoS) in the crust, nor on the polytropic exponents in the inner core (where the matter density is higher than the nuclear saturation density). This result is shown in figure 11 of [1] and has been obtained from the solution of the equations of force balance between the gravitational force and the pressure gradient, supplemented by an EoS consisting of three parts: the first is the EoS of the solid crust, the second is the EoS of the outer core and the third is the extrapolation of the EoS into the inner core.
The EoS of the crust has been taken from the literature with the data obtained from the Barcelona-Catania-Paris-Madrid EoS (see the references in [1]). However, for the self-consistency it is desirable to calculate the pressure from the same EoS as used in the outer core. In the outer core, I used the EoS derived from the chiral effective field theory. The energy per baryon is given by the following expression:
\[\varepsilon(n,x)=\varepsilon_{0}\left[\frac{3}{5}\left[x^{\frac{5}{3}}+(1-x)^{\frac{5}{3}}\right]\left(\frac{2n}{n_{0}}\right)^{\frac{2}{3}}-\left[\alpha_{1}\left(x-x^{2}\right)+\alpha_{2}\right]\frac{n}{n_{0}}+\left[\eta_{1}\left(x-x^{2}\right)+\eta_{2}\right]\left(\frac{n}{n_{0}}\right)^{\gamma}\right]. \tag{1}\]
where \(\varepsilon_{0}=36.84\) MeV, \(\alpha_{1}=2\alpha-4\alpha_{L}\), \(\alpha_{2}=\alpha_{L}\), \(\eta_{1}=2\eta-4\eta_{L}\), \(\eta_{2}=\eta_{L}\). The function \(\varepsilon(n,x)\) is shown in Fig. 1 in light-gray. For comparison, Fig. 1 also shows, in middle-gray, the energy per baryon from the work by Baym, Bethe and Pethick (1971) (see the references in [1]); for convenience, the dark-gray shows the zero surface. The mass density includes the rest mass of protons and neutrons, their interaction energy and the relativistic mass of the electrons:
\[\rho=m_{p}xn+m_{n}(1-x)n+\frac{n\varepsilon}{c^{2}}+\frac{(9\pi)^{2/3}}{4} \frac{\hbar}{c}\left(xn\right)^{\frac{4}{3}}, \tag{2}\]
where \(x=n_{p}/n\). From the empirical properties of nuclear matter we have the (minus) binding energy per baryon \(\omega_{0}=16\) MeV and the pressure of atomic nucleus \(P_{\rm nuc}(n=n_{0},x=1/2)=0\). From these properties I obtain
\[\alpha=\frac{4}{5}+\frac{2\gamma}{\gamma-1}\left(\frac{1}{5}+\frac{\omega_{0} }{\varepsilon_{0}}\right),\quad\eta=\frac{2}{\gamma-1}\left(\frac{1}{5}+\frac {\omega_{0}}{\varepsilon_{0}}\right). \tag{3}\]
The incompressibility, symmetry energy and its slope are given by
\[K =9\epsilon_{0}\left[-\frac{2}{15}+\gamma\left(\frac{1}{5}+\frac{ \omega_{0}}{\epsilon_{0}}\right)\right], \tag{4}\] \[S_{0} =\epsilon_{0}\left(\frac{3}{5}2^{2/3}+\frac{\omega_{0}}{\epsilon_ {0}}-\alpha_{L}+\eta_{L}\right),\] (5) \[L =3\epsilon_{0}\left(\frac{2}{5}-\alpha_{L}+\gamma\eta_{L}\right). \tag{6}\]
In the numerical calculations in [1], I used two sets of the parameters: \((\gamma,\alpha_{L},\eta_{L})=(4/3,1.385,0.875)\) and \((\gamma,\alpha_{L},\eta_{L})=(1.45,1.59,1.11)\), which lead, correspondingly, to
\[(K,S_{0},L) =(236\,\mathrm{MeV},32.3\,\mathrm{MeV},20.1\,\mathrm{MeV}), \tag{7}\] \[(K,S_{0},L) =(261\,\mathrm{MeV},33.4\,\mathrm{MeV},46.4\,\mathrm{MeV}). \tag{8}\]
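These parameter relations are straightforward to verify numerically. The short script below, written as an illustrative check rather than the code used in [1], implements Eqs. (1) and (3)-(6) and reproduces the binding energy at saturation together with the two \((K,S_{0},L)\) sets quoted above (up to rounding).

```python
eps0, omega0, n0 = 36.84, 16.0, 0.16        # MeV, MeV, fm^-3

def couplings(gamma):
    """alpha and eta from Eq. (3)."""
    c = 0.2 + omega0 / eps0
    return 0.8 + 2 * gamma / (gamma - 1) * c, 2 / (gamma - 1) * c

def eps(n, x, gamma, alpha_L, eta_L):
    """Energy per baryon, Eq. (1), in MeV."""
    alpha, eta = couplings(gamma)
    a1, a2 = 2 * alpha - 4 * alpha_L, alpha_L
    e1, e2 = 2 * eta - 4 * eta_L, eta_L
    kin = 0.6 * (x ** (5 / 3) + (1 - x) ** (5 / 3)) * (2 * n / n0) ** (2 / 3)
    return eps0 * (kin - (a1 * (x - x * x) + a2) * n / n0
                   + (e1 * (x - x * x) + e2) * (n / n0) ** gamma)

def K_S0_L(gamma, alpha_L, eta_L):
    K = 9 * eps0 * (-2 / 15 + gamma * (0.2 + omega0 / eps0))            # Eq. (4)
    S0 = eps0 * (0.6 * 2 ** (2 / 3) + omega0 / eps0 - alpha_L + eta_L)  # Eq. (5)
    L = 3 * eps0 * (0.4 - alpha_L + gamma * eta_L)                      # Eq. (6)
    return K, S0, L

for gamma, aL, eL in [(4 / 3, 1.385, 0.875), (1.45, 1.59, 1.11)]:
    K, S0, L = K_S0_L(gamma, aL, eL)
    print(round(K), round(S0, 1), round(L, 1),
          "eps(n0, 1/2) =", round(eps(n0, 0.5, gamma, aL, eL), 1), "MeV")
# -> 236 32.3 20.1 eps(n0, 1/2) = -16.0 MeV
# -> 261 33.4 46.4 eps(n0, 1/2) = -16.0 MeV
```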
In the inner core, I used a generalized polytropic EoS, where the pressure was defined in three consecutive regions of the matter density as follows:
\[P[\rho(r)]\propto\rho^{\Gamma}, \tag{9}\]
where \(r\) is the radial coordinate and the values of \(\Gamma\) are given in the caption to figure 2 in [1].
On the other hand, at densities higher than the nuclear saturation density the composition of the neutron star matter is unknown even qualitatively, and a reliable description of the structure of the inner core does not exist. Therefore, at present it is impossible to exclude the existence of normal plasma (without superconductivity) in the inner core of neutron stars.
In [1], I have considered the structure of the superconductor at the crust-core boundary. In 2018, it was first noticed [4; 5] that the ground state of neutron star matter at the crust-core boundary corresponds to a structure in which the proton liquid is distributed over thin slabs, thus making the symmetry of the order parameter anisotropic. A question therefore arises whether the anisotropic symmetry is continuous or discrete. In the case of a continuous symmetry, the superconducting density may be described by a tensor. On the contrary, if the superconductor is distributed over slabs which are not connected by Josephson tunneling, the system should be described by a discrete model. In order to specify the symmetry type, in [1] I have calculated the probabilities of quantum mechanical tunneling between adjacent layers (in figures 6-9) and found that, for the most part, the tunneling that defines the Josephson coupling is negligibly small. It then follows that the ground state of the superconductor should be described by the discrete model.
The results obtained in [1] specify the basic assumptions of future models for the calculation of the magnetic torque exerted on the structure by the stellar magnetic induction. Such calculations are necessary for investigation of the role of thermal fluctuations in the neutron star matter in order to find out whether the pasta region is ordered or disordered at typical realistic conditions.
In the future work, it is necessary to investigate the following issues. (i) Specify the radial position and independently confirm the existence of the lasagna region of the pasta structure within other EoS. (ii) Investigate whether the ground state in the realistic neutron star matter is ordered or disordered. (iii) Specify the thickness and separation of the slabs. (iv) Investigate the uncertainty related to the mutual orientation of the slabs and the magnetic field. (v) Study the hydromagnetic waves at the boundary between the superconducting and the normal plasmas.
_Acknowledgements._ This work was supported by the Center of Excellence "Center of Photonics" funded by The Ministry of Science and Higher Education of the Russian Federation, contract No. 075-15-2022-316.
_Translated by the author._
|
2305.15959 | A Universal Quantum Technology Education Program | Quantum technology is an emerging cutting-edge field which offers a new
paradigm for computation and research in the field of physics, mathematics and
other scientific disciplines. This technology is of strategic importance to
governments globally and heavy investments and budgets are being sanctioned to
gain competitive advantage in terms of military, space and education. Due to
this, it is important to understand the educational and research needs required
to implement this technology at a large scale. Here, we propose a novel
universal quantum technology master's curriculum which comprises a balance
between quantum hardware and software skills to enhance the employability of
professionals thereby reducing the skill shortage faced by the academic
institutions and organizations today. The proposed curriculum holds the
potential to revolutionize the quantum education ecosystem by reducing the
pressure of hiring PhDs faced by startups and promoting the growth of a
balanced scientific mindset in quantum research. | Sanjay Vishwakarma, Shalini D, Srinjoy Ganguly, Sai Nandan Morapakula | 2023-05-25T11:57:14Z | http://arxiv.org/abs/2305.15959v2 | # A Universal Quantum Technology Education Program
###### Abstract
Quantum technology is an emerging cutting-edge field which offers a new paradigm for computation and research in the field of physics, mathematics and other scientific disciplines. This technology is of strategic importance to governments globally and heavy investments and budgets are being sanctioned to gain competitive advantage in terms of military, space and education. Due to this, it is important to understand the educational and research needs required to implement this technology at a large scale. Here, we propose a novel universal quantum technology master's curriculum which comprises a balance between quantum hardware and software skills to enhance the employability of professionals thereby reducing the skill shortage faced by the academic institutions and organizations today. The proposed curriculum holds the potential to revolutionize the quantum education ecosystem by reducing the pressure of hiring PhDs faced by startups and promoting the growth of a balanced scientific mindset in quantum research.
Masters program, quantum education, quantum science
## I Introduction
The foundations of quantum computing trace back to the early 20th century. With Max Planck's hypothesis in 1900, the era of quantum theory began. The contributions of Albert Einstein's photoelectric effect in 1905, De Broglie's 1924 proposal of the wave nature of matter, Heisenberg's uncertainty principle, Erwin Schrodinger's wave formulation of quantum mechanics, and Max Born's interpretation of the wave function laid the foundation for understanding the quantum world. The main breakthrough happened with the development of quantum algorithms. The Deutsch algorithm was theoretically proven more efficient than any classical algorithm for its problem, Peter Shor's 1994 factoring algorithm [1] will revolutionize cryptography when implemented on a large-scale quantum computer, the Grover algorithm's [2] ability to search an unsorted database in \(\sqrt{N}\) tries is faster than its classical counterpart, and VQE's contribution to quantum chemistry [3] is an astonishing development in the quantum field.
Apart from the points mentioned above, three quantum mechanical properties offer further reasons to move beyond classical computing: superposition, interference and entanglement. Superposition creates a \(2^{n}\)-dimensional state space, so every added qubit increases the dimension exponentially; interference enables a form of parallel computation; and for entangled qubits, the properties of one qubit are known once those of the other are known, irrespective of how far apart the qubits are placed. Along with these, speed, efficiency, the ability to solve complex problems, data security, scientific and medical research, and advancement in AI are also reasons to shift to quantum computing. There are many opportunities for a complete beginner to get a glimpse of the field: The Coding School's Introduction to Quantum Computing, a dedicated 8-month course sponsored by IBM that gives a solid grounding in the subject, as well as QWorld, QOSF, and the IBM Spring and Fall Challenges. The IBM Certified Qiskit Developer exam and the IBM Qiskit Advocate program are the world's only official exams organised by IBM to test knowledge of basic to intermediate quantum computing skills.
Even though the opportunities mentioned above are a good start, depth of subject knowledge develops only through formal education. A dedicated syllabus enriched with varied research areas must be curated to create a strong foundation for a research mindset. Along with it, students need to be equipped to read research papers strategically to get the most out of them. Once they are able to identify areas of development, the quantum workforce will extend to most fields. There might be a day in the future when we will use quantum-enabled gadgets in our daily routine [4]. As schools play a major role in one's education, we strongly hope that many schools start incorporating quantum subjects such as quantum physics and basic quantum chemistry into their high-school curriculum [5]. Thus, quantum computing is a promising field with rich resources and a developing research mindset. Once established, quantum technology will surpass present technologies and the world will reach its peak of scientific advancement [6].
In this paper, we propose a novel universal quantum education curriculum for master's level degree programs. We start defining the course framework in section II, followed by a detailed description of the subjects and the semester-wise categorification of the subjects in section III. In section IV, we throw light on the industrial aspect and global outlook of
our curriculum and its need.
## II Course Framework
The two-year master's program is designed to equip all students with the essential knowledge required to grow and sustain in the field of quantum science and technology. In this section, we introduce the course framework of this program. The pedagogy is built on a number of fundamental ideas, such as the following:
* A variety of diversified learning opportunities that support the development of deep conceptual understanding and improve students' capacity for successful application of technical and professional skills.
* The course is designed in such a way that all the students are taught most of the modules using problem-based learning [7].
* As quantum computing is a diverse field that has wide range of applications, the course offers specializations to students based on their interests and future goals.
The Universal Quantum Technology Education Program's (UQTPEP) curriculum is created to be at the cutting edge of current quantum technology research trends. The curriculum is designed to guarantee students are not only well-versed in established quantum concepts but are also conversant with the most recent advances in quantum science and technology since it recognizes the rapidly dynamic nature of the field. A detailed list of master's programs in quantum science and technology and their overview is discussed in the paper [8].
Modern research issues are incorporated into the course modules as one of the main approaches to promoting this relationship. For instance, students are introduced to quantum error correction and fault tolerance in the quantum information and computing module; these topics are crucial in the field's ongoing research. In line with the increased interest in these fields in both academia and industry, the quantum software development and simulation module also contains instructions on quantum algorithms and quantum machine learning.
The UQTPEP also emphasizes the importance of applying quantum technologies in real-world settings. Students are encouraged to apply the theoretical information [9] they have acquired to actual quantum problems through practical laboratory sessions and a sizable research project in the second year. This offers students a priceless chance to interact with current research questions and acquire hands-on experience in the industry.
Additionally advantageous to the curriculum are the close ties to the quantum research community. Many of the program's teachers are now engaged in quantum technology research, ensuring that the teaching is guided by the most recent advancements in this field. Regular guest lectures from industry experts offer further perspectives on the current condition of the industry and the practical uses of quantum technology.
Additionally, the program aggressively promotes student participation in the larger quantum research community. Students can gain first-hand knowledge of the quantum research and development process through opportunities for internships or placements in research institutions or quantum technology enterprises. The relationship between students' academic work and the larger research community is further strengthened by encouraging them to attend and present at conferences on quantum technologies. In conclusion, there is a strong connection between the UQTP curriculum and contemporary quantum technology research trends. The program makes sure that its graduates are prepared to contribute to this interesting and quickly developing topic by including the most recent research advancements in the curriculum and offering chances for practical application and involvement with the quantum research community.
The Universal Quantum Technology Education Programme (UQTPEP) acknowledges that quantum technology transcends conventional divisions between physics, computer science, engineering, and mathematics and is intrinsically interdisciplinary [5]. This is why the program puts a lot of focus on interdisciplinary learning and cooperation, giving students lots of chances to interact with others and ideas from many sectors. The program's core curriculum, which covers a variety of subjects from quantum physics [10] to quantum computing and software development, reflects the interdisciplinary nature of the UQTP. Students obtain a wide understanding of quantum technology that crosses several traditional subjects by studying these various fields. Additionally, the programme encourages students to choose elective courses that are unrelated to their primary field of study. These elective courses give students the chance to explore subjects that may not be directly relevant to their chosen career path, but are nonetheless crucial for gaining a comprehensive grasp of quantum technology. To acquire exposure to the software side of quantum technology, a student specializing in the hardware pathway can decide to take a module in quantum machine learning or quantum cryptography.
The UQTPEP encourages interdisciplinary cooperation through a number of initiatives outside of the traditional curriculum [11]. For instance, group projects are a crucial component of the curriculum. These projects frequently involve interdisciplinary teams of students solving challenging issues, promoting cooperation, and allowing students to benefit from one another's experience. These projects foster a rich interdisciplinary learning environment by allowing students from the software track to collaborate with peers from the hardware and superconducting circuits pathways. Additionally, the second-year research project of the programme provides a fantastic opportunity for multidisciplinary study. Each student's project is in keeping with their chosen career path, however due to the nature of quantum technology research, cross-disciplinary collaboration is frequently necessary. For instance, a student working on a project with a hardware focus might team up with academics who are experts in quantum materials or algorithms. Last but not least, the UQTP strongly promotes student interaction with the larger academic and business world, both inside and outside the realm of quantum technology. This could be participating in multidisciplinary seminars and workshops, doing an
internship with a variety of businesses, or working on projects or conducting research with other researchers or business experts. In conclusion, the Universal Quantum Technology Education Program goes beyond traditional disciplinary boundaries, fostering an interdisciplinary learning environment that mirrors the diverse and interconnected nature of quantum technology research and development. Through a combination of a broad curriculum, collaborative projects, and engagement with the wider academic and industry community, the program prepares students to thrive in the interdisciplinary world of quantum technology.
## III Compulsory Modules and Electives
In order to create a world-class, industry- and academia-standard curriculum, it is necessary to define a few subjects which are relevant to the program and provide a unique way to bind the topics together. We define several subjects to be taught in our proposed master's program, covering both hardware and software aspects of quantum technology. The curriculum is designed with reference to the research papers [12, 13, 14, 15, 16, 17]. The paper [18] provided a customizable outreach program that is accessible to students in high school and the early stages of their undergraduate studies, requires no sophisticated mathematics, and can be utilized to introduce concepts to students in a short amount of time. [19] reports the results of a survey of instructors teaching introductory courses at universities across the US, mostly at the undergraduate or hybrid undergraduate/graduate level, together with follow-up focus interviews with six specific teachers. Teaching with the help of visualization techniques is also a great advantage, as it helps students understand and grasp the concepts of quantum computing better [20]. Below, we detail some of the important subjects proposed in our curriculum and explain their significance.
### _Introduction to Quantum Algorithms_
The subject "Introduction to quantum algorithms" provides a foundational grasp of the algorithms employed in the field of quantum computing. Quantum algorithms are specifically intended to take advantage of quantum systems' unique features in order to solve computing problems more efficiently than classical algorithms. Students will study the main concepts and principles behind quantum algorithms in this class. They will investigate several algorithms that use quantum phenomena including superposition, entanglement, and interference to execute calculations. These algorithms are very different from classical algorithms, which are based on classical binary logic and work with classical bits. The basic building blocks of quantum algorithms, such as quantum gates, qubits (quantum bits), and quantum circuits, are often included in the curriculum. Students will learn about fundamental algorithms such as Shor's algorithm for factoring huge numbers and Grover's algorithm for unstructured search. They will comprehend the mathematics and ideas underpinning these algorithms, as well as their potential applications in cryptography, optimization, simulation, and machine learning.
Furthermore, students will investigate quantum algorithm design concepts, learning how to identify issues that can benefit from quantum computation and build appropriate solutions. They will learn about quantum oracle creation, quantum Fourier transforms, amplitude amplification and other fundamental quantum algorithm approaches. Throughout the course, students may be introduced to quantum computing programming languages and frameworks such as Qiskit, Microsoft Quantum Development Kit (Q#), and others. Hands-on exercises and projects that allow students to develop and simulate quantum algorithms may be incorporated to reinforce theoretical notions. Students should have a good foundation in quantum algorithms at the end of the course, allowing them to understand the unique capabilities and promise of quantum computing. They will be prepared to analyze problems, identify appropriate quantum algorithms, and contribute to quantum technological advancements.
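To give a concrete flavour of the hands-on material, the sketch below (assuming a standard Qiskit installation) builds the two-qubit instance of Grover's search mentioned above: a phase-flip oracle marking \(|11\rangle\) followed by one round of amplitude amplification, which for four items finds the marked state with certainty.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h([0, 1])                 # uniform superposition over the four basis states

qc.cz(0, 1)                  # oracle: phase flip on the marked state |11>

qc.h([0, 1])                 # diffusion operator (inversion about the mean)
qc.x([0, 1])
qc.cz(0, 1)
qc.x([0, 1])
qc.h([0, 1])

probs = Statevector.from_instruction(qc).probabilities_dict()
print(probs)                 # {'11': 1.0} -- one Grover iteration suffices for N = 4
```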
### _Semiconductor Devices_
The subject "Semiconductor Devices" is concerned with the underlying principles, operation, and applications of semiconductor devices in the realm of electronics. It gives pupils a thorough understanding of the building components that constitute the foundation of modern electrical systems. Students will learn about the physics and engineering of semiconductor devices, which are primarily based on the properties of semiconducting materials like silicon. They will study about the behaviour of electrons and holes in semiconductors, pn junction creation, and carrier transport concepts in these devices.
The curriculum often includes diodes, bipolar junction transistors (BJTs), metal-oxide-semiconductor field-effect transistors (MOSFETs), and other sophisticated devices such as photodiodes and optoelectronic devices. Students will investigate the principles of operation, device properties, and applications of these devices in various electronic systems. Key subjects in the discipline may include diode and transistor biassing and small-signal analysis, the study of load lines and operating zones, amplifier architectures, frequency response, and basic device fabrication procedures. Students will also learn about the constraints and trade-offs that go into the design and operation of these devices.
Furthermore, advanced semiconductor devices such as integrated circuits (ICs) and sensors may be covered. Students will learn about IC fabrication techniques, device scaling, and integrating several components on a single chip. They might also learn about design considerations for sensors that use semiconductor materials to detect physical quantities. Students may participate in practical exercises, laboratory work, and projects throughout the course to reinforce theoretical concepts and acquire hands-on experience with semiconductor devices. They may also investigate new trends and advances in semiconductor technology, such as nanoscale devices, power electronics, and semiconductor materials other than silicon.
Students should have a comprehensive understanding of semiconductor devices, their underlying concepts, and their significance in electronic systems at the end of the course. They will be able to analyse, design, and troubleshoot electronic circuits that make use of these devices. This foundation will be helpful for future studies or professions in electrical engineering, the semiconductor industry, integrated circuit design, and electronic system development.
### _Laboratory and Research Skills_
The information needed to work on cutting-edge quantum technology research and development is covered in "Laboratory and Research Skills for Quantum Technology". The course offers practical lab instruction where students operate, manipulate, and conduct experiments with quantum systems. This could entail performing experiments in quantum physics, establishing quantum communication networks, or working with quantum computing technology.
Equally important are research abilities. These include the capacity to plan experiments, examine data, create and test theories, and stay abreast of the quantum technology field's quick evolution. Students are better equipped to progress in quantum technology with these skills, whether they choose to work in academic research or the private sector.
This topic has a dramatic impact on a global education programme on quantum technologies. It enables students to translate abstract quantum principles into concrete technological advances by bridging the gap between theoretical understanding and actual implementation. The development of quantum technology and the training of the following generation of quantum scientists and engineers depend on this fusion of theory and practice.
### _Introduction to Quantum Photonics_
The crucial topic "Introduction to Quantum Photonics" explores the relationship between quantum mechanics and light, or more particularly, photons. The focus of the study is on the fundamental ideas underlying the production, control, and measurement of photons within the framework of quantum mechanics. As they pertain to photonics, this includes comprehending phenomena like quantum entanglement, quantum superposition, and quantum interference.
The practical use of these ideas in quantum sensing, quantum computing, and quantum communication will also be demonstrated to the students. For instance, scientists might study quantum teleportation, quantum cryptography, photonic quantum bits (qubits), and the creation of quantum networks.
As photonics is a fast-evolving topic with many potential applications in quantum technology, it plays a vital part in a programme for universal quantum technology education. It serves as a foundation for comprehending the processing and transmission of quantum information, making it a prerequisite for anyone wishing to specialise in quantum technology. Furthermore, a solid understanding of quantum photonics can lead to a variety of research and job prospects because optical systems are frequently employed to create quantum technologies.
## IV International Industry Focused Curriculum
In the current landscape, we can observe a noticeable talent shortage in the quantum technology market primarily due to an absence of a universal quantum education curriculum which combines industry-relevant skills with an innovation-driven mindset. Due to this, most industries are focusing on hiring PhDs who are already trained to innovate and skilled in the
execution of multiple complex research projects relevant to the quantum industry. But the hiring of such candidates comes at a cost of financial strain on the startups that are facing budget delays and tightening.
In order to resolve these challenges, a world-class and industry-standard curriculum is required at the level of a master's degree, because such a program is closer to a PhD and brings more specialisation of topics, opening pathways to different career streams. This approach proves to be more economical and time-saving since it does not necessitate long study time and research. Furthermore, these master's specialisations provide in-depth knowledge about a particular technology that is in line with the job descriptions given by organisations today. Additionally, the industry-oriented curriculum ensures a strategic move towards achieving a smoother transition into the quantum workforce, irrespective of geographical boundaries, thus optimizing talent utilisation in this niche field.
The cutting-edge master's degree programs must maintain a balance between the theoretical and practical aspects of the subjects by continuously following international and industry-relevant standards. The curriculum should be comprehensive enough to cover the skills and criteria required by companies globally with ample internships and research projects. This can prepare the students for multiple roles in quantum industries such as research software engineers, nanofabrication engineers, research scientists in quantum chemistry or algorithms, measurement and design engineers and so on. This promotes a positive factor of employability and the shortening of skill-gap present among emerging professionals today.
While developing a cutting-edge curriculum, it is important to advocate for equal opportunities, inclusion, and diversity for the marginalized sections of society that can ensure a clear career path for the students and professionals in the quantum space. Placing a strong emphasis on professional development will enable organizations to foster a culture of growth and will aid in the cultivation of professional allegiance to that respective industry.
A quantum-ready workforce has the potential to serve not only the commercial industries but also the government agendas for national security and intelligence. It will be important to upskill scientists and professionals who could contribute to the development of the nation's defence and enable technological breakthroughs. Therefore, there is an urgent call for industry-relevant quantum technology syllabi so as to achieve global competitiveness in this sector and enhance the education sector of this field.
## V Conclusion
The revolution of quantum technology is currently in its very early stages and it is difficult to predict the far future of this technology. However, by observing the current pace of research, it is possible that we will witness truly amazing technological breakthroughs in the upcoming years. In order to have a successful quantum ecosystem, the most significant role is that of having cutting-edge quantum education and research that enables professionals to be at the forefront of their careers and businesses. In this paper, we proposed novel ideas for the development of a universal master's degree program which will allow students and professionals to gain industry-relevant niche skills and provides them with an overall overview of the current research and business scenarios happening in the quantum space along with career growth. We hope that our proposed master's curriculum is adopted by academic institutions globally to make their system robust and competitive. Furthermore, we are ambitious that our universal methodology of syllabus is able to solve the pressing needs of the current quantum education system, and empower and inspire students and professionals to pursue their careers in this futuristic field.
## Acknowledgment
S.G. acknowledges the support received from Woxsen University to develop the curriculum for quantum education. S.N.M. is grateful to Karunya Institute of Technology and Science for providing the opportunity to participate in the syllabus design process.
|
2310.11989 | Image Clustering with External Guidance | The core of clustering is incorporating prior knowledge to construct
supervision signals. From classic k-means based on data compactness to recent
contrastive clustering guided by self-supervision, the evolution of clustering
methods intrinsically corresponds to the progression of supervision signals. At
present, substantial efforts have been devoted to mining internal supervision
signals from data. Nevertheless, the abundant external knowledge such as
semantic descriptions, which naturally conduces to clustering, is regrettably
overlooked. In this work, we propose leveraging external knowledge as a new
supervision signal to guide clustering, even though it seems irrelevant to the
given data. To implement and validate our idea, we design an externally guided
clustering method (Text-Aided Clustering, TAC), which leverages the textual
semantics of WordNet to facilitate image clustering. Specifically, TAC first
selects and retrieves WordNet nouns that best distinguish images to enhance the
feature discriminability. Then, to improve image clustering performance, TAC
collaborates text and image modalities by mutually distilling cross-modal
neighborhood information. Experiments demonstrate that TAC achieves
state-of-the-art performance on five widely used and three more challenging
image clustering benchmarks, including the full ImageNet-1K dataset. | Yunfan Li, Peng Hu, Dezhong Peng, Jiancheng Lv, Jianping Fan, Xi Peng | 2023-10-18T14:20:55Z | http://arxiv.org/abs/2310.11989v3 | # Image Clustering with External Guidance
###### Abstract
The core of clustering is incorporating prior knowledge to construct supervision signals. From classic k-means based on data compactness to recent contrastive clustering guided by self-supervision, the evolution of clustering methods intrinsically corresponds to the progression of supervision signals. At present, substantial efforts have been devoted to mining internal supervision signals from data. Nevertheless, the abundant external knowledge such as semantic descriptions, which naturally conduces to clustering, is regrettably overlooked. In this work, we propose leveraging external knowledge as a new supervision signal to guide clustering, even though it seems irrelevant to the given data. To implement and validate our idea, we design an externally guided clustering method (Text-Aided Clustering, TAC), which leverages the textual semantics of WordNet to facilitate image clustering. Specifically, TAC first selects and retrieves WordNet nouns that best distinguish images to enhance the feature discriminability. Then, to improve image clustering performance, TAC collaborates text and image modalities by mutually distilling cross-modal neighborhood information. Experiments demonstrate that TAC achieves state-of-the-art performance on five widely used and three more challenging image clustering benchmarks, including the full ImageNet-1K dataset.
Clustering, Deep Clustering, Image Clustering
## 1 Introduction
Image clustering aims at partitioning images into different groups in an unsupervised fashion, which is a long-standing task in machine learning. The core of clustering resides in incorporating prior knowledge to construct supervision signals. According to different choices of supervision signals, one could roughly divide the evolution of clustering methods into three eras, _i.e._, classic clustering, deep clustering, and self-supervised clustering as depicted in Fig 1. At the early stage, classic clustering methods build upon various assumptions on the data distribution, such as compactness [1, 2], hierarchy [3], connectivity [4, 5, 6], sparsity [7, 8], and low rank [9, 10, 11]. Though having achieved promising performance, classic clustering methods would produce suboptimal results confronting complex and high-dimensional data. As an improvement, deep clustering methods equip clustering models with neural networks to extract discriminative features [12, 13, 14, 15, 16, 17, 18, 19, 20]. In alignment with priors such as cluster discriminability [21] and balance [22], various supervision signals are formulated to optimize the clustering network. In the last few years, motivated by the success of self-supervised learning [23, 24, 25], clustering methods turn to creating supervision signals through data augmentation [26, 27, 28, 29, 30, 31, 32, 33] or momentum strategies [34, 35].
Though varying in the method design, most existing clustering methods design supervision signals in an internal manner. Despite the remarkable success achieved, the internally guided clustering paradigm faces an inherent limitation. Specifically, the hand-crafted internal supervision signals, even enhanced with data augmentation, are inherently upper-bounded by the limited information in the given data. For example, from the images of "penguins" and "polar bears", one can hardly distinguish them based on living spaces given the similar image backgrounds. Luckily, beyond the internal signals, we notice there also exists well
Fig. 1: The evolution of clustering methods could be roughly divided into three eras, including **i) classic clustering**, which designs clustering strategies based on data distribution assumptions; **ii) deep clustering**, which extracts clustering-favorable features with deep neural networks by formulating clustering objectives as loss functions, and **iii) self-supervised clustering**, which constructs supervision signals through data augmentations or momentum strategies. As shown in the figure, each era remarkably improves the clustering accuracy. In this work, instead of mining the internal supervision, we propose exploring external knowledge to facilitate image clustering. We categorize such a novel paradigm as **iv) externally guided clustering**. By leveraging the semantics in the text modality, our TAC pushes the clustering accuracy to state-of-the-art with a remarkable improvement.
established external knowledge that potentially conduces to clustering, while having been regrettably and largely ignored. In the above example, we could easily distinguish the images according to the living spaces, given the external knowledge that "penguins" live in the Antarctic while "polar bears" live in the Arctic. In short, from different sources or modalities, the external knowledge can serve as promising supervision signals to guide clustering, even though it seems irrelevant to the given data. Compared with exhaustively mining internal supervision signals from data, it would yield twice the effect with half the effort by incorporating rich and readily available external knowledge to guide clustering.
In this work, we propose a simple yet effective externally guided clustering method TAC (Text-Aided Clustering), which clusters images by incorporating external knowledge from the text modality. In the absence of class name priors, there are two challenges in leveraging the textual semantics for image clustering, namely, _i_) how to construct the text space, and _ii_) how to collaborate images and texts for clustering. For the first challenge, ideally, we expect the text counterparts of between-class images to be highly distinguishable so that clustering can be easily achieved. To this end, inspired by the zero-shot classification paradigm in CLIP [36], we reversely classify all nouns from WordNet [37] to image semantic centers. Based on the classification confidence, we select the most discriminative nouns for each image center to form the text space and retrieve a text counterpart for each image. Intriguingly, Fig. 2 demonstrates that in certain cases, the retrieved nouns could describe the image semantics even better than the manually annotated class names. For the second challenge, we first establish an extremely simple baseline by concatenating the images and text counterparts, which already significantly enhances the k-means clustering performance without any additional training. For better collaboration, we propose to mutually distill the neighborhood information between the text and image modalities. By specifically training clustering heads, the proposed TAC achieves state-of-the-art performance on five widely used and three more challenging image clustering datasets. Without loss of generality, we evaluate TAC on the pre-trained CLIP model in our experiments, but TAC could adapt to any vision-language pre-trained (VLP) model by design.
The major contributions of this work could be summarized as follows:
* Unlike previous clustering works that exhaustively explore and exploit supervision signals internally, we propose leveraging external knowledge to facilitate clustering. We summarize such a novel paradigm as externally guided clustering, which provides an innovative perspective on the construction of supervision signals.
* To implement and validate our idea, we propose an externally guided clustering method TAC, which leverages the textual semantics to facilitate image clustering. Experiments demonstrate the superiority of TAC over eight datasets, including ImageNet-1K. Impressively, in most cases, TAC even outperforms zero-shot CLIP in the absence of class name priors.
* The significance of TAC is two-fold. On the one hand, it proves the effectiveness and superiority of the proposed externally guided clustering paradigm. On the other hand, it suggests the presence of more simple but effective strategies for mining the zero-shot learning ability inherent in VLP models.
## 2 Related Work
In this section, we first review the deep clustering methods. Then we briefly introduce the zero-shot classification paradigm of VLP models which also utilizes the text modality to perform visual tasks.
### _Deep Image Clustering_
In addition to effective clustering strategies, discriminative features also play an important role in achieving promising clustering results. Benefiting from the powerful feature extraction ability of neural networks, deep clustering methods show their superiority in handling complex and high-dimensional data [12, 14, 21]. The pioneers in deep clustering focus on learning clustering-favorable features through optimizing the network with clustering objectives [13, 16, 15, 22, 20, 38]. In recent years, motivated by the success of contrastive learning, a series of contrastive clustering methods achieve substantial performance leaps on image clustering benchmarks [26, 29, 34]. Instead of clustering images in an end-to-end manner, several works initially learn image embedding through uni-modal pre-training and subsequently mine clusters based on neighborhood consistency [30, 31] or pseudo-labeling [32]. By disentangling representation learning and clustering, these multi-stage methods enjoy higher flexibility for their easy
Fig. 2: The observations of this paper. As a showcase, two image examples from the ImageNet-Dogs dataset are illustrated. For each example, we show the manually annotated class names and the nouns obtained by the proposed TAC, as well as the zero-shot classification probabilities. From the example, one could arrive at two observations, namely, _i_) visually similar samples could be better distinguished in the text modality, and _ii_) manually annotated class names are not always the best semantic description. As shown, zero-shot CLIP falsely classifies both images to the “Blenheim Spaniel” class, whereas the nouns obtained by our TAC successfully separate them. Such an observation suggests a great opportunity to leverage the external knowledge (hidden in the text modality in this showcase) to facilitate image clustering.
adaption to superior pre-trained models. A recent study [35] demonstrates that the clustering performance could be further improved when equipping clustering models with more advanced representation learning methods [25]. Very recently, SIC [39] attempts to generate image pseudo labels from the textual space.
Though having achieved remarkable progressions, almost all existing deep image clustering methods mine supervision signals internally. However, the internal supervision signals are inherently bounded by the given images. Instead of pursuing internal supervision signals following previous works, we propose a new paradigm that leverages external knowledge to facilitate image clustering. Diverging from the most related work SIC that solely computes text semantic centers for pseudo labeling, our TAC retrieves discriminative text counterparts for mutual distillation. In addition, while SIC requires elaborate parameter tuning on different datasets, our TAC achieves promising results on all eight datasets with the same set of hyper-parameters, except the distillation temperature for large cluster numbers. We hope the simple design and engaging performance of TAC could attract more attention to the externally guided clustering.
### _Zero-shot Classification_
Recently, more and more efforts have been devoted to multimodal, especially vision-language pre-training (VLP). By learning the abundant image-text pairs on the Internet, VLP methods [40, 41, 42], with CLIP [36] as a representative, have achieved impressive performance in multimodal representation learning. More importantly, unlike uni-modal pre-trained models that require additional fine-tuning, VLP models could adapt to various tasks such as classification [36], segmentation [43], and image captioning [42] in a zero-shot manner. Here, we briefly introduce the zero-shot image classification paradigm in CLIP as an example. Given names of \(K\) classes such as "plane", "train" or "car", CLIP first assembles them with prompts like "a photo of [CLASS]", where the [CLASS] token is replaced by the specific class name. Then, CLIP computes the text embeddings \(\{w_{i}\}_{i=1}^{K}\) of the prompted sentences with its pre-trained text encoder. Finally, CLIP treats the embeddings \(\{w_{i}\}_{i=1}^{K}\) as the weight of the classifier, and predicts the probability of image \(\mathbf{v}\) belonging to the \(i\)-th class as
\[p(y=i|\mathbf{v})=\frac{\exp(\mathrm{sim}(v,w_{i})/\tau)}{\sum_{j=1}^{K}\exp(\mathrm{sim}(v,w_{j})/\tau)}, \tag{1}\]
where \(v\) denotes the image features extracted by the pre-trained image encoder, and \(\tau\) is the learned _softmax_ temperature.
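For concreteness, the following minimal sketch (not from any released implementation; the function name, variable names, and the temperature value are illustrative) reproduces Eq. (1) with plain numpy, assuming the image embedding and the prompted class embeddings have already been extracted by the CLIP encoders.

```python
import numpy as np

def zero_shot_probs(v, W, tau=0.01):
    """Eq. (1): softmax over similarities between one image embedding v (d,)
    and K class-prompt embeddings W (K, d); tau is the softmax temperature."""
    v = v / np.linalg.norm(v)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    logits = W @ v / tau            # sim(v, w_i) / tau, cosine similarity per class
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    return p / p.sum()

# illustrative usage with random stand-ins for CLIP embeddings
rng = np.random.default_rng(0)
v = rng.normal(size=512)            # image embedding from the image encoder
W = rng.normal(size=(10, 512))      # embeddings of "a photo of [CLASS]" prompts
probs = zero_shot_probs(v, W)
print(probs.argmax(), round(float(probs.max()), 3))
```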
As CLIP is trained to retrieve the most correlated text for each image, its consistent form between pre-training and inference leads to promising results in zero-shot image classification. However, such a paradigm requires prior knowledge of class names, which is unavailable in the image clustering task. To leverage CLIP for image clustering, the most direct approach is performing k-means [1] on the image embeddings. Nevertheless, the performance of k-means is limited and the textual semantics are underutilized. In this work, we explore a more advanced paradigm for image clustering, by taking full advantage of both the pre-trained image and text encoders. Briefly, we construct a text counterpart for each image by retrieving nouns that best distinguish images of different semantics, which significantly improves the feature discriminability. To collaborate the text and image modalities, we propose to mutually distill cross-modal neighborhood information, which further improves the clustering performance. Intriguingly, our experiments demonstrate that the proposed method outperforms zero-shot CLIP in most cases, even in the absence of class name priors. We believe such intriguing results could bring some insights into the paradigm design of leveraging VLP models for downstream classification and clustering.
## 3 Method
In this section, we present TAC, a simple yet effective externally supervised clustering method. The overview of TAC is illustrated in Fig. 3. In brief, we first propose a text counterpart construction strategy to exploit the text modality in Sec. 3.1. By selecting and retrieving discriminative nouns for each image, TAC yields an extremely simple baseline that significantly improves the k-means performance at negligible cost. To better collaborate the text and image modalities, in Sec. 3.2, we propose a cross-modal mutual distillation strategy to train cluster heads in both modalities, further improving the clustering performance.
### _Text Counterpart Construction_
The textual semantics are naturally favored in discriminative tasks such as classification and clustering. Ideally, clustering could be easily achieved if images have highly distinguishable counterparts in the text modality. To this end, in the absence of class name priors, we propose to select a subset of nouns from WordNet [37] to compose the text space, which is expected to exhibit the following two merits, namely, _i_) precisely covering the image semantics; and _ii_) highly distinguishable between images of different semantics.
The image semantics of different granularities could be captured by k-means with various choices of \(k\). A small value of \(k\) corresponds to coarse-grained semantics, which might not be precise enough to cover the semantics of images at cluster boundaries. Oppositely, a large value of \(k\) produces fine-grained semantics, which might fail to distinguish images from different classes. To find image semantics of appropriate granularity, we estimate the value of \(k\) based on the data number \(N\) and target cluster number \(K\), namely,
\[k=\max\{N/300,K*3\}. \tag{2}\]
The motivation behind such an estimation is two-fold. On the one hand, for large datasets, we hypothesize a cluster of \(\tilde{N}=300\) images is compact enough to be described by the same set of nouns. On the other hand, for small datasets, we hypothesize the semantics of images from one class could be precisely covered by three sets of nouns. In our experiments, such estimation works on datasets of different sample sizes and target cluster numbers. With the estimated value of \(k\)
we apply k-means on image embeddings to compute the image semantic centers by
\[s_{l}=\sum_{i=1}^{N}\mathbbm{1}_{v_{i}\in l}\;v_{i},l\in[1,k], \tag{3}\]
where \(\mathbbm{1}_{v_{i}\in l}\) is the indicator which equals one iff image \(v_{i}\) belongs to the \(l\)-th cluster.
Next, we aim to find discriminative nouns to describe each semantic center. Here, motivated by the zero-shot classification paradigm of CLIP, we reversely classify all nouns from WordNet into \(k\) image semantic centers. Specifically, the probability of the \(i\)-th noun belonging to the \(l\)-th image semantic center is
\[p(y=l|\mathbf{t_{i}})=\frac{\exp(\mathrm{sim}(t_{i},s_{l}))}{\sum_{j=1}^{k}\exp(\mathrm{sim}(t_{i},s_{j}))}, \tag{4}\]
where \(\mathbf{t_{i}}\) denotes the \(i\)-th noun prompted in the CLIP style, and \(t_{i}\) is the feature extracted by the text encoder. To identify highly representative and distinguishable nouns, we select the top \(\gamma\) most confident nouns for each image semantic center. Formally, the \(i\)-th noun would be selected for the \(l\)-th center if
\[p(y=l|\mathbf{t_{i}})\geq\bar{p}(y=l), \tag{5}\] \[\bar{p}(y=l)=\mathrm{sort}\{p(y=l|\mathbf{t_{i}})\mid\operatorname{argmax}_{j}p(y=j|\mathbf{t_{i}})=l\}[\gamma], \tag{6}\]
where \(\bar{p}(y=l)\) corresponds to the \(\gamma\)-th largest confidence among the nouns assigned to the \(l\)-th center. In practice, we fix \(\gamma=5\) on all datasets.
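The noun selection step of Eqs. (2)-(6) amounts to clustering the image embeddings, softmax-classifying all noun embeddings to the resulting centers, and keeping the top-\(\gamma\) nouns per center. A hedged sketch is given below; it assumes precomputed, L2-normalized CLIP embeddings, uses scikit-learn's k-means, and the function and variable names are ours rather than part of any released TAC code.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_discriminative_nouns(img_emb, noun_emb, K, gamma=5):
    """img_emb: (N, d) image embeddings; noun_emb: (M, d) WordNet noun embeddings,
    both assumed L2-normalized. Returns indices of the selected nouns."""
    N = img_emb.shape[0]
    k = int(max(N / 300, K * 3))                                    # Eq. (2): semantic granularity
    centers = KMeans(n_clusters=k, n_init=10).fit(img_emb).cluster_centers_   # Eq. (3)

    sims = noun_emb @ centers.T                                     # (M, k) noun-to-center similarity
    sims = sims - sims.max(axis=1, keepdims=True)
    probs = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)  # Eq. (4)

    assign = probs.argmax(axis=1)                                   # hard center of each noun
    selected = []
    for l in range(k):                                              # Eqs. (5)-(6): top-gamma per center
        members = np.where(assign == l)[0]
        top = members[np.argsort(probs[members, l])[::-1][:gamma]]
        selected.extend(top.tolist())
    return np.unique(np.array(selected))
```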
The selected nouns compose the text space catering to the input images. Then, we retrieve nouns for each image to compute its counterpart in the text modality. To be specific, let \(\{\bar{\mathbf{t}}_{j}\}_{j=1}^{M}\) be the set of \(M\) selected nouns with \(\{\bar{t}_{j}\}_{j=1}^{M}\) being their text embeddings; we compute the text counterpart \(\tilde{t}_{i}\) for image \(v_{i}\) as
\[\tilde{t}_{i}=\sum_{j=1}^{M}p(\bar{t}_{j}|v_{i})\,\bar{t}_{j}, \tag{7}\] \[p(\bar{t}_{j}|v_{i})=\frac{\exp(\mathrm{sim}(v_{i},\bar{t}_{j})/\bar{\tau})}{\sum_{k=1}^{M}\exp(\mathrm{sim}(v_{i},\bar{t}_{k})/\bar{\tau})}, \tag{8}\]
where \(\bar{\tau}\) controls the softness of retrieval, which is fixed to \(0.005\) in all our experiments. The design of soft retrieval is to prevent the text counterparts of different images from collapsing to the same point. After the text counterpart construction, we arrive at an extremely simple baseline by applying k-means on the concatenated features \([\tilde{t}_{i},v_{i}]_{i=1}^{N}\). Notably, such an implementation requires no additional training or modifications on CLIP, but it could significantly improve the clustering performance compared with directly applying k-means on the image embeddings.
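The soft retrieval of Eqs. (7)-(8) and the resulting training-free baseline can be sketched as follows; this is again an illustration under the assumption of precomputed, normalized embeddings, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def retrieve_text_counterparts(img_emb, sel_noun_emb, tau_bar=0.005):
    """Eqs. (7)-(8): soft retrieval of a text counterpart for every image.
    img_emb: (N, d) and sel_noun_emb: (M, d), both assumed L2-normalized."""
    logits = img_emb @ sel_noun_emb.T / tau_bar        # (N, M) similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)                  # p(noun_j | image_i)
    return w @ sel_noun_emb                            # weighted sum of noun embeddings

def tac_no_train(img_emb, sel_noun_emb, K):
    """Training-free baseline: k-means on the concatenated [text counterpart, image] features."""
    txt = retrieve_text_counterparts(img_emb, sel_noun_emb)
    feats = np.concatenate([txt, img_emb], axis=1)
    return KMeans(n_clusters=K, n_init=10).fit_predict(feats)
```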
### _Cross-modal Mutual Distillation_
Though concatenating text counterparts and image embeddings improves the k-means performance, it is suboptimal for collaborating the two modalities. To better utilize multimodal features, we propose the cross-modal mutual distillation strategy. Specifically, let \(\mathcal{N}(v_{i})\) be a random nearest neighbor of \(v_{i}\), we introduce a cluster head \(f:v\to p\in\mathcal{R}^{K}\) to predict the soft cluster assignments for images \(v_{i}\) and \(\mathcal{N}(v_{i})\), where \(K\) is the target cluster number. Formally, we denote the soft cluster assignments for \(n\) images and their neighbors as
\[P=\left[\begin{array}{c}p_{1}\\ \cdots\\ p_{n}\end{array}\right]\text{ and }P^{\mathcal{N}}=\left[\begin{array}{c}p_{1}^{ \mathcal{N}}\\ \cdots\\ p_{n}^{\mathcal{N}}\end{array}\right]. \tag{9}\]
Fig. 3: Overview of the proposed TAC. **(Left)** TAC first classifies all nouns from WordNet to image semantic centers, and selects the most discriminative nouns of each image center to construct the text space. After that, TAC retrieves nouns for each image to compute its counterpart in the text space. By concatenating the image and retrieved text, we arrive at an extremely simple baseline with significantly improved k-means performance. **(Right)** To better collaborate the text and image modalities, TAC trains cluster heads by mutually distilling the neighborhood information. In brief, TAC encourages images to have consistent cluster assignments with the nearest neighbors of their counterparts in the text embedding space, and vice versa. Such a cross-modal mutual distillation strategy further boosts the clustering performance of TAC.
Likewise, we introduce another cluster head \(g:\tilde{t}_{i}\to q_{i}\in\mathcal{R}^{K}\) to predict the soft cluster assignments for text counterpart \(\tilde{t}_{i}\) and its random nearest neighbor \(\mathcal{N}(\tilde{t}_{i})\), resulting in the cluster assignment matrices
\[Q=\left[\begin{array}{c}q_{1}\\ \cdots\\ q_{n}\end{array}\right]\text{ and }Q^{\mathcal{N}}=\left[\begin{array}{c}q_{1}^{ \mathcal{N}}\\ \cdots\\ q_{n}^{\mathcal{N}}\end{array}\right]. \tag{10}\]
Let \(\hat{p}_{i},\hat{p}_{i}^{\mathcal{N}},\hat{q}_{i},\hat{q}_{i}^{\mathcal{N}}\) be the \(i\)-th column of assignment matrices \(P,P^{\mathcal{N}},Q,Q^{\mathcal{N}}\), the cross-modal mutual distillation loss is defined as follows, namely,
\[L_{\mathrm{Dis}}=\sum_{i=1}^{K}L_{i}^{v\to t}+L_{i}^{t\to v}, \tag{11}\]
\[L_{i}^{v\to t}=-\log\frac{\exp(\mathrm{sim}(\hat{q}_{i},\hat{p}_{i}^{ \mathcal{N}})/\hat{\tau})}{\sum_{k=1}^{K}\exp(\mathrm{sim}(\hat{q}_{i},\hat{p }_{k}^{\mathcal{N}})/\hat{\tau})}, \tag{12}\]
\[L_{i}^{t\to v}=-\log\frac{\exp(\mathrm{sim}(\hat{p}_{i},\hat{q}_{i}^{ \mathcal{N}})/\hat{\tau})}{\sum_{k=1}^{K}\exp(\mathrm{sim}(\hat{p}_{i},\hat{q }_{k}^{\mathcal{N}})/\hat{\tau})}, \tag{13}\]
where \(\hat{\tau}\) is the _softmax_ temperature parameter. The distillation loss \(L_{\mathrm{Dis}}\) has two effects. On the one hand, it minimizes the between-cluster similarity, leading to more discriminative clusters. On the other hand, it encourages consistent clustering assignments between each image and the neighbors of its text counterpart, and vice versa. In other words, it mutually distills the neighborhood information between the text and image modalities, bootstrapping the clustering performance in both. In practice, we set the number of nearest neighbors \(\hat{N}=50\) on all datasets.
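As an illustration, Eqs. (11)-(13) can be read as a column-wise contrastive loss over the two assignment matrices, where cluster columns of one modality are matched to the corresponding columns of the other modality's neighbor assignments. The PyTorch sketch below is our own reading of the equations, with illustrative names, and is not taken from the released code.

```python
import torch
import torch.nn.functional as F

def mutual_distillation_loss(P, P_nb, Q, Q_nb, tau_hat=0.5):
    """Eqs. (11)-(13). P, Q: (n, K) soft cluster assignments of images / text
    counterparts; P_nb, Q_nb: assignments of their nearest neighbors. Cluster
    columns are contrasted across modalities via a cross-entropy over clusters."""
    def one_direction(A, B_nb):
        a = F.normalize(A.t(), dim=1)            # (K, n): cluster columns of A
        b = F.normalize(B_nb.t(), dim=1)         # (K, n): cluster columns of B_nb
        logits = a @ b.t() / tau_hat             # (K, K) column-to-column similarities
        labels = torch.arange(A.shape[1], device=A.device)
        return F.cross_entropy(logits, labels, reduction='sum')
    # text -> image-neighbors (Eq. 12) and image -> text-neighbors (Eq. 13), summed over clusters
    return one_direction(Q, P_nb) + one_direction(P, Q_nb)
```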
Next, we introduce two regularization terms to stabilize the training. First, to encourage the model to produce more confident cluster assignments, we introduce the following confidence loss, namely,
\[L_{\mathrm{Con}}=-\log\sum_{i=1}^{n}p_{i}^{\top}q_{i}, \tag{14}\]
which would be minimized when both \(p_{i}\) and \(q_{i}\) become one-hot. Second, to prevent all samples from collapsing into only a few clusters, we introduce the following balance loss, _i.e._,
\[L_{\mathrm{Bal}}=-\sum_{i=1}^{K}\left(\bar{p}_{i}\log\bar{p}_{i}+ \bar{q}_{i}\log\bar{q}_{i}\right), \tag{15}\]
\[\bar{p}=\frac{1}{n}\sum_{i=1}^{n}p_{i}\in\mathcal{R}^{K},\bar{q}=\frac{1}{n} \sum_{i=1}^{n}q_{i}\in\mathcal{R}^{K}, \tag{16}\]
where \(\bar{p}\) and \(\bar{q}\) correspond to the cluster assignment distribution in the image and text modality, respectively.
Finally, we arrive at the overall objective function of TAC, which lies in the form of
\[L_{\mathrm{TAC}}=L_{\mathrm{Dis}}+L_{\mathrm{Con}}-\alpha\cdot L_{\mathrm{Bal}}, \tag{17}\]
where \(\alpha\) is the weight parameter which we fix to 5 in all the experiments.
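Putting the pieces together, a hedged sketch of the two regularizers and the overall objective of Eq. (17) is shown below; it reuses the `mutual_distillation_loss` function from the previous sketch, and the small epsilon inside the entropy logarithms is our addition for numerical stability rather than something stated in the text.

```python
import torch

def confidence_loss(P, Q):
    """Eq. (14): -log of the summed agreement, minimized when assignments become one-hot."""
    return -torch.log((P * Q).sum())

def balance_loss(P, Q, eps=1e-8):
    """Eqs. (15)-(16): entropy of the mean assignment in each modality."""
    p_bar, q_bar = P.mean(dim=0), Q.mean(dim=0)
    return -((p_bar * torch.log(p_bar + eps)).sum()
             + (q_bar * torch.log(q_bar + eps)).sum())

def tac_loss(P, P_nb, Q, Q_nb, alpha=5.0, tau_hat=0.5):
    """Eq. (17): L_Dis + L_Con - alpha * L_Bal; the negative sign on L_Bal means
    minimizing the total objective rewards balanced cluster assignments."""
    return (mutual_distillation_loss(P, P_nb, Q, Q_nb, tau_hat)
            + confidence_loss(P, Q)
            - alpha * balance_loss(P, Q))
```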
The pipeline of the proposed TAC is summarized in Algorithm 1, including the simple baseline without training and the full version with cross-modal mutual distillation.
## 4 Experiments
In this section, we evaluate the proposed TAC on five widely used and three more challenging image clustering datasets. A series of quantitative and qualitative comparisons, ablation studies, and hyper-parameter analyses are carried out to investigate the effectiveness and robustness of the method.
### _Experimental Setup_
We first introduce the datasets and metrics used for evaluation, and then provide the implementation details of TAC.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Dataset & Training Split & Test Split & \# Training & \# Test & \# Classes \\ \hline STL-10 & Train & Test & 5,000 & 8,000 & 10 \\ CIFAR-10 & Train & Test & 50,000 & 10,000 & 10 \\ CIFAR-20 & Train & Test & 50,000 & 10,000 & 20 \\ ImageNet-10 & Train & Val & 13,000 & 500 & 10 \\ ImageNet-Dogs & Train & Val & 19,500 & 750 & 15 \\ \hline DTD & Train-Val & Test & 3,760 & 1,880 & 47 \\ UCF-101 & Train & Val & 9,537 & 3,783 & 101 \\ ImageNet-1K & Train & Val & 1,281,167 & 50,000 & 1,000 \\ \hline \hline \end{tabular}
\end{table} TABLE I: A summary of datasets used for evaluation.
#### 4.1.1 Datasets
To evaluate the performance of our TAC, we first apply it to five widely-used image clustering datasets including STL-10 [44], CIFAR-10 [45], CIFAR-20 [45], ImageNet-10 [46], and ImageNet-Dogs [46]. With the rapid development of pre-training and clustering methods, we find clustering on relatively simple datasets such as STL-10 and CIFAR-10 is no longer challenging. Thus, we further evaluate the proposed TAC on three more complex datasets with larger cluster numbers, including DTD [47], UCF-101 [48], and ImageNet-1K [49]. Following recent deep clustering works [30, 31], we train and evaluate TAC on the train and test splits, respectively. The brief information of all datasets used in our evaluation is summarized in Table I.
#### 4.1.2 Evaluation metrics
We adopt three widely-used clustering metrics including Normalized Mutual Information (NMI) [50], Accuracy (ACC), and Adjusted Rand Index (ARI) [51] to evaluate the clustering performance. Higher values of these metrics indicate better clustering results.
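For reference, these three metrics can be computed as in the following sketch; NMI and ARI come directly from scikit-learn, while ACC uses the standard Hungarian matching between predicted clusters and ground-truth labels. The exact evaluation code used in the experiments is not specified, so this is only a common implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one mapping between clusters and labels (Hungarian algorithm)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    D = int(max(y_pred.max(), y_true.max())) + 1
    cost = np.zeros((D, D), dtype=np.int64)
    for p, t in zip(y_pred, y_true):
        cost[p, t] += 1                          # contingency table
    row, col = linear_sum_assignment(-cost)      # maximize matched counts
    return cost[row, col].sum() / len(y_true)

def evaluate(y_true, y_pred):
    return {"NMI": normalized_mutual_info_score(y_true, y_pred),
            "ACC": clustering_accuracy(y_true, y_pred),
            "ARI": adjusted_rand_score(y_true, y_pred)}
```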
#### 4.1.3 Implementation details
Following previous works [39], we adopt the pre-trained CLIP model with ViT-B/32 [52] and Transformer [53] as image and text backbones, respectively. Consistent with the CLIP preprocessing pipeline, we scale, random crop, and normalize images before feeding them into the ViT backbone. For nouns from WordNet [37], we assemble them with prompts like "A photo of [CLASS]" before feeding them into the Transformer backbone. The two cluster heads \(f\) and \(g\) are two-layer MLPs of dimension \(512\)-\(512\)-\(K\), where \(K\) is the target cluster number. We train \(f\) and \(g\) by the Adam optimizer with an initial learning rate of \(1e-3\) for \(20\) epochs, with a batch size of \(512\). We fix \(\bar{\tau}=5e-3\), \(\hat{\tau}=0.5\), and \(\alpha=5.0\) in all the experiments. The only exception is that on UCF-101 and ImageNet-1K, we change \(\hat{\tau}\) to \(5.0\), batch size to \(8192\), and training epochs to \(100\), catering to the large cluster numbers. We evaluate TAC under five random runs and report the average performance. All experiments are conducted on a single Nvidia RTX 3090 GPU on the Ubuntu 20.04 platform with CUDA 11.4. In our experiments, it takes only one minute to train TAC on the CIFAR-10 dataset.
### _Main Results_
Here we compare TAC with state-of-the-art baselines on five classic and three more challenging image clustering datasets, followed by feature visualizations to show the superiority of the proposed TAC.
#### 4.2.1 Performance on classic clustering datasets
We first evaluate the proposed TAC on five widely-used image clustering datasets, compared with 15 deep clustering baselines including JULE [13], DEC [16], DAC [17], DCCM [18], IIC [38], PICA [20], CC [26], IDFD [28], MiCE [27], GCC [34], NNM [31], TCC [29], SPICE [32], SCAN [30], and SIC [39]. In addition, we include two CLIP [36] baselines, namely, k-means on image embeddings and zero-shot classification that requires prior knowledge of class names.
As shown in Table II, by simply retrieving a text counterpart for each image, the proposed TAC successfully mines "free" semantic information from the text encoder. Without any additional training, TAC (no train) substantially improves the k-means clustering performance, especially on more complex datasets. For example, it achieves 14.4% and 43.5% ARI improvements on CIFAR-20 and ImageNet-Dogs, respectively. When further enhanced with the proposed cross-modal mutual distillation strategy, TAC achieves state-of-the-art clustering performance, even surpassing zero-shot CLIP on all five datasets. Notably, the improvement brought by leveraging the text modality is significant on
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c} \hline Dataset & \multicolumn{3}{c}{STL-10} & \multicolumn{3}{c}{CIFAR-10} & \multicolumn{3}{c}{CIFAR-20} & \multicolumn{3}{c}{ImageNet-10} & \multicolumn{3}{c}{ImageNet-Dogs} & \multirow{2}{*}{AVG} \\ \hline Metrics & NMI & ACC & ARI & NMI & ACC & ARI & NMI & ACC & ARI & NMI & ACC & ARI & NMI & ACC & ARI & NMI \\ \hline JULE & 18.2 & 27.7 & 16.4 & 19.2 & 27.2 & 13.8 & 10.3 & 13.7 & 3.3 & 17.5 & 30.0 & 13.8 & 5.4 & 13.8 & 2.8 & 15.5 \\ DEC & 27.6 & 35.9 & 18.6 & 25.7 & 30.1 & 16.1 & 13.6 & 18.5 & 5.0 & 28.2 & 38.1 & 20.3 & 12.2 & 19.5 & 7.9 & 21.2 \\ DAC & 36.6 & 47.0 & 25.7 & 39.6 & 52.2 & 30.6 & 18.5 & 23.8 & 8.8 & 39.4 & 52.7 & 30.2 & 21.9 & 27.5 & 11.1 & 31.0 \\ DCCM & 37.6 & 48.2 & 26.2 & 49.6 & 62.3 & 40.8 & 28.5 & 32.7 & 17.3 & 60.8 & 71.0 & 55.5 & 32.1 & 38.3 & 18.2 & 41.3 \\ IIC & 49.6 & 59.6 & 39.7 & 51.3 & 61.7 & 41.1 & 22.5 & 25.7 & 11.7 & — & — & — & — & — & — \\ PICA & 61.1 & 71.3 & 53.1 & 59.1 & 69.6 & 51.2 & 31.0 & 33.7 & 17.1 & 80.2 & 87.0 & 76.1 & 35.2 & 35.3 & 20.1 & 52.1 \\ CC & 76.4 & 85.0 & 72.6 & 70.5 & 79.0 & 63.7 & 43.1 & 42.9 & 26.6 & 85.9 & 89.3 & 82.2 & 44.5 & 42.9 & 27.4 & 62.1 \\ IDFD & 64.3 & 75.6 & 57.5 & 71.1 & 81.5 & 66.3 & 42.6 & 42.5 & 26.4 & 89.8 & 95.4 & 90.1 & 54.6 & 59.1 & 41.3 & 63.9 \\ MGE & 63.5 & 75.2 & 57.5 & 73.7 & 83.5 & 69.8 & 43.6 & 44.0 & 28.0 & – & – & — & 42.3 & 43.9 & 28.6 & – \\ GCC & 68.4 & 78.8 & 63.1 & 76.4 & 85.6 & 72.8 & 47.2 & 47.2 & 30.5 & 84.2 & 90.1 & 82.2 & 49.0 & 52.6 & 36.2 & 64.3 \\ NNM & 66.3 & 76.8 & 59.6 & 73.7 & 83.7 & 69.4 & 48.0 & 45.9 & 30.2 & – & – & — & 60.4 & 58.6 & 44.9 & – \\ TCC & 73.2 & 81.4 & 68.9 & 79.0 & 90.6 & 73.3 & 47.9 & 49.1 & 31.2 & 84.8 & 89.7 & 82.5 & 55.4 & 59.5 & 41.7 & 67.2 \\ SPICE & 81.7 & 90.8 & 81.2 & 73.4 & 83.8 & 70.5 & 44.8 & 46.8 & 29.4 & 82.8 & 92.1 & 83.6 & 57.2 & 64.6 & 47.9 & 68.7 \\ SCAN & 69.8 & 80.9 & 64.6 & 79.7 & 88.3 & 77.2 & 48.6 & 50.7 & 33.3 & – & – & — & 61.2 & 59.3 & 45.7 & – \\ \hline CLIP (k-means) & 91.7 & 94.3 & 89.1 & 70.3 & 74.2 & 61.6 & 49.9 & 45.5 & 28.3 & 96.9 & 98.2 & 96.1 & 39.8 & 38.1 & 20.1 & 66.3 \\ TAC (no train) & 92.3 & 94.5 & 89.5 & 80.8 & 90.1 & 79.8 & 60.7 & 55.8 & 42.7 & 92.5 & 98.6 & 97.0 & 75.1 & 75.1 & 63.6 & 79.5 \\ \hline CLIP (zero-shot) & 93.9 & 97.1 & 93.7 & 80.7 & 90.0 & 79.3 & 55.3 & 58.3 & 39.8 & 95.8 & 97.6 & 94.9 & 73.5 & 72.8 & 58.2 & 78.7 \\ SIC & 95.3 & 98.1 & 95.9 & **84.7** & **92.6** & **84.4** & 59.3 & 58.3 & 43.9 & 97.0 & 98.2 & 96.1 & 69.0 & 69.7 & 55.8 & 79.9 \\ TAC & **95.5** & **98.2** & **96.1** & 84.1 & 92.3 & 83.9 & **61.1** & **60.7** & **44.8** & **98.5** & **99.2** & **98.3** & **80.6** & **83.0** & **72.2** & **83.2** \\ \hline \end{tabular}
\end{table} TABLE II: Clustering performance on five widely-used image clustering datasets. The bottom five rows correspond to methods leveraging pre-trained CLIP. The best and second best results are denoted in **bold** and underline, respectively.
ImageNet-Dogs, demonstrating the potential of texts in distinguishing visually similar samples. Such compelling results demonstrate that beyond the current zero-shot classification paradigm, alternative simple but more effective strategies exist for mining the VLP model's ability in image classification and clustering.
#### 4.2.2 Performance on more challenging datasets
We note that with the advance of pre-training and clustering methods, some classic datasets are no longer challenging enough for clustering. For example, our TAC achieves 98.2% and 99.2% clustering accuracy on STL-10 and ImageNet-10, respectively. Persistently pursuing performance gain on these relatively simple datasets might fall into the overfitting trap, leading to poor generalization ability of clustering methods, which is also unhealthy for the clustering community development. Hence, we further introduce three more challenging image clustering datasets for benchmarking. We include two representative baselines SCAN [30] and SIC [39] for comparison. For SCAN, we replace its first pre-training step with the pre-trained CLIP image encoder, and use its semantic clustering loss to train the cluster head. For SIC, as its source code is not released, we reimplement it following the details provided in the paper. Moreover, we also include the k-means and zero-shot performance of CLIP for benchmarking.
The clustering results of TAC and baseline methods are provided in Table III. Firstly, we observe TAC without additional training could consistently boost the k-means performance, which achieves a 10% improvement in clustering accuracy on ImageNet-1K. Secondly, with the cross-modal mutual distillation strategy, TAC outperforms zero-shot CLIP and achieves the best clustering results on DTD and UCF-101. For example, compared with SCAN which solely mines clusters from the image modality, our TAC exhibits a 7.6% accuracy gain on UCF-101 by exploiting the textual semantics. Compared with the previous CLIP-based image clustering method SIC, our TAC also outperforms it by 6.8% in accuracy, demonstrating the superiority of the proposed paradigm. On the largest dataset ImageNet-1K, our TAC outperforms SCAN and SIC even without additional training, and the performance gap in clustering accuracy increases to 13.5% and 11.2% when TAC is further boosted with the proposed cross-modal mutual distillation strategy. Notably, the zero-shot CLIP yields better performance than our TAC on ImageNet-1K, given its advantage in the substantial prior knowledge of 1K class names. We further evaluate our TAC by replacing the selected discriminative nouns with prior class names. The results show that TAC successfully outperforms zero-shot CLIP when leveraging the class name priors, which demonstrates the superior zero-shot classification ability of TAC. Notably, TAC achieves inferior performance with the prior class names than retrieved nouns on the DTD dataset. Such a result verifies the effectiveness of the proposed text counterpart construction strategy, as well as our observation that manually annotated class names are not always the best
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Dataset & DTD & \multicolumn{4}{c}{UCF-101} & \multicolumn{4}{c}{ImageNet-1K} & \multirow{2}{*}{AVG} \\ \cline{2-2} \cline{7-10} Metrics & NMI & ACC & ARI & NMI & ACC & ARI & NMI & ACC & ARI \\ \hline CLIP (k-means) & 57.3 & 42.6 & 27.4 & 79.5 & 58.2 & 47.6 & 72.3 & 38.9 & 27.1 & 50.1 \\ TAC (no train) & 60.1 & 45.9 & 29.0 & 81.6 & 61.3 & 52.4 & 77.8 & 48.9 & 36.4 & 54.8 \\ \hline CLIP (zero-shot) & 56.5 & 43.1 & 26.9 & 79.9 & 63.4 & 50.2 & **81.0** & **63.6** & **45.4** & 56.7 \\ SCAN & 59.4 & 46.4 & 31.7 & 79.7 & 61.1 & 53.1 & 74.7 & 44.7 & 32.4 & 53.7 \\ SIC & 59.6 & 45.9 & 30.5 & 81.0 & 61.9 & 53.6 & 77.2 & 47.0 & 34.3 & 54.6 \\ TAC & **62.1** & **50.1** & **34.4** & **82.3** & **68.7** & **60.1** & 79.9 & 58.2 & 43.5 & **59.9** \\ TAC & 60.8 & 48.0 & 32.6 & 83.0 & 71.3 & 61.9 & 81.2 & 63.9 & 47.0 & 61.1 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Clustering performance on three more challenging image clustering datasets. The best and second best results are denoted in **bold** and underline, respectively. \(\dagger\) means that using prior class names instead of nouns selected from WordNet.
Fig. 4: Visualization of features extracted by different methods on the ImageNet-Dogs training set, with the corresponding k-means clustering ARI annotated on the top. a) image embedding directly obtained from the CLIP image encoder; b) text counterparts constructed by noun selection and retrieval; c) the simple TAC baseline by concatenating images and text counterparts; d) representation learned by TAC through cross-modal mutual distillation.
semantic description.
#### 4.2.3 Visualization
To provide an intuitive understanding of the clustering results, we visualize the features obtained at four different steps of TAC in Fig. 4. The clustering performance by applying k-means on the features is annotated at the top. Fig. 4(a) shows the image features extracted by the pre-trained CLIP image encoder. As can be seen, images of different breeds of dogs are mixed, leading to a poor clustering ARI of 23.4%. Luckily, by selecting and retrieving discriminative nouns, we can construct discriminative text counterparts for images. Thanks to the textual discriminability, visually similar samples could be better distinguished in the text modality as shown in Fig. 4(b). As shown in Fig. 4(c), by simply concatenating images and retrieved text counterparts, TAC significantly improves the feature discriminability without any additional training, yielding a promising clustering ARI of 61.3%. Finally, when equipped with the proposed cross-modal mutual distillation strategy, TAC could better collaborate the image and text modalities, leading to the best clustering ARI of 71.7%. We visualize the embedding learned by TAC before the _softmax_ layer in Fig. 4(d), which shows the best within-clustering compactness and between-cluster scatteredness.
### _Ablation Studies_
To prove the robustness, effectiveness, and generalization ability of the proposed TAC, we conduct ablation studies on text counterpart construction strategies, loss terms, and image backbones.
#### 4.3.1 Variants of text counterpart construction
Recall that to select representative nouns for text counterpart construction, we first classify all nouns from WordNet to image semantic centers found by applying k-means on image embeddings. Here, we investigate the robustness and necessity of the noun selection step. Specifically, we adopt three other classic clustering methods to compute semantic centers, including agglomerative clustering (AC), spectral clustering (SC), and DBSCAN. For AC and SC, we set the target cluster number to the same as k-means (estimated by Eq. 2). For DBSCAN, we tune the density parameter until it produces the same number of clusters. As shown in Table IV, the training-free TAC achieves better performance with SC, while the performance is similar among k-means, AC, and SC when further boosted with cross-modal mutual distillation. The performance degradation on DBSCAN is probably due to the poor quality of image embeddings. In practice, we find DBSCAN tends to treat a portion of samples as outliers, and thus it cannot precisely cover the image semantics, leading to suboptimal performance. To investigate the necessity of filtering discriminative nouns, we further append a baseline by retrieving text counterparts from all nouns. According to the results, TAC encounters a performance drop on both datasets, but the influence is milder on UCF-101, which could be attributed to the richer image semantics in that dataset. In summary, the results demonstrate the effectiveness of discriminative noun selection, as well as the robustness of TAC against different clustering methods used for text counterpart construction.
#### 4.3.2 Effectiveness of each loss term
To understand the efficacy of the three loss terms \(L_{\text{Dis}}\), \(L_{\text{Con}}\), and \(L_{\text{Bal}}\) in Eq. 11, 14, and 15, we evaluate the performance of TAC with different loss combinations. According to the results in Table V, we arrive at the following four conclusions. First, each loss itself is not sufficient to produce good clustering results. Second, the balance loss \(L_{\text{Bal}}\) could prevent cluster collapsing. Without \(L_{\text{Bal}}\), TAC assigns most images to only a few clusters, leading to poor clustering performance on both datasets. Third, the confidence loss \(L_{\text{Con}}\) is necessary for datasets with large cluster numbers. Without \(L_{\text{Con}}\), TAC can still achieve promising results on ImageNet-Dogs but encounters a performance drop on UCF-101. The reason is that the cluster assignments would be less confident when the cluster number is large. In this case, the regularization efficacy of \(L_{\text{Bal}}\) would be alleviated, which explains the performance degradation on UCF-101. Fourth, \(L_{\text{Dis}}\) could effectively distill the neighborhood information between the text and image modalities, leading to the best clustering performance.
#### 4.3.3 Variants of CLIP image backbones
In our experiments, for a fair comparison with previous works [39], we adopt ViT-B/32 as the image backbone. Here, to investigate whether the proposed TAC generalizes to other backbones, we further evaluate its performance on ResNet-50, ResNet-101, and ViT-B/16. We also report the k-means and zero-shot performance of CLIP for comparison. As shown in Table VI, the performance of TAC is positively correlated with the size of image backbones.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Semantic} & \multicolumn{2}{c}{ImageNet-Dogs} & \multicolumn{3}{c}{UCF-101} \\ \cline{3-8} & Space & NMI & ACC & ARI & NMI & ACC & ARI \\ \hline \multirow{4}{*}{TAC (no train)} & k-means & 75.1 & 75.1 & 63.6 & 81.6 & 61.3 & 52.4 \\ & AC & 73.4 & 72.0 & 61.2 & 81.9 & 63.7 & **54.8** \\ & SC & **77.4** & **75.2** & **65.9** & **82.2** & **65.5** & 54.7 \\ & DBSCAN & 68.8 & 64.3 & 51.0 & 81.1 & 61.8 & 52.3 \\ & None & 70.3 & 68.7 & 53.6 & 81.3 & 63.2 & 52.8 \\ \hline \multirow{4}{*}{TAC} & k-means & **80.6** & 83.0 & **72.2** & 82.3 & 68.7 & 60.1 \\ & AC & 78.4 & 81.7 & 69.5 & **82.4** & **69.3** & 60.2 \\ \cline{1-1} & SC & 79.2 & **83.1** & 70.8 & 82.3 & 69.1 & **60.4** \\ \cline{1-1} & DBSCAN & 75.5 & 80.4 & 65.5 & 80.6 & 66.2 & 56.4 \\ \cline{1-1} & None & 75.7 & 78.7 & 66.0 & 81.2 & 67.0 & 58.3 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Clustering performance of TAC using different clustering methods for text counterpart construction. (AC: agglomerative clustering, SC: spectral clustering, None: using all nouns from WordNet)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{\(L_{\text{Dis}}\)} & \multirow{2}{*}{\(L_{\text{Con}}\)} & \multirow{2}{*}{\(L_{\text{Bal}}\)} & \multicolumn{2}{c}{ImageNet-Dogs} & \multicolumn{3}{c}{UCF-101} \\ \cline{4-9} & & & NMI & ACC & ARI & NMI & ACC & ARI \\ \hline ✓ & & & 71.4 & 69.5 & 38.1 & 69.3 & 7.6 & 13.6 \\ & ✓ & & 57.2 & 14.3 & 24.3 & 52.1 & 3.4 & 8.6 \\ & & ✓ & 15.1 & 19.3 & 4.1 & 43.5 & 16.2 & 5.7 \\ ✓ & ✓ & & 72.5 & 57.0 & 45.3 & 55.6 & 3.6 & 9.9 \\ ✓ & ✓ & ✓ & **80.6** & **83.5** & **72.3** & 70.5 & 45.1 & 34.5 \\ & ✓ & ✓ & 78.2 & 81.8 & 69.6 & 81.6 & 67.3 & 59.1 \\ ✓ & ✓ & ✓ & **80.6** & 83.0 & 72.2 & **82.3** & **68.7** & **60.1** \\ \hline \hline \end{tabular}
\end{table} TABLE V: The performance of TAC with different combinations of the loss terms.
On all four image backbones, TAC achieves comparable performance with zero-shot CLIP without any additional training, modifications on the CLIP, or prior knowledge of class names. When further enhanced with cross-modal mutual distillation, TAC outperforms zero-shot CLIP in all cases. The promising results prove the generalization ability of our TAC to various image backbones.
### _Parameter Analyses_
Though we fix the set of hyper-parameters in all experiments except for a larger \(\hat{\tau}\) to handle large cluster numbers, there are six tunable hyper-parameters in TAC, namely, the expected compact cluster size \(\tilde{N}\), the number of discriminative nouns for each image semantic center \(\gamma\), the retrieval softness temperature \(\bar{\tau}\), the number of nearest neighbors \(\hat{N}\), the mutual distillation temperature \(\hat{\tau}\), and the weight of the balance loss \(\alpha\). We evaluate the performance of TAC under various choices of these hyper-parameters on ImageNet-Dogs and UCF-101. The results are shown in Fig. 5.
#### 4.4.1 Expected compact cluster size \(\tilde{N}\)
To capture semantics with appropriate granularity, we empirically hypothesize a cluster of \(\tilde{N}=300\) images is compact enough to be described by the same set of nouns. Here, we test various choices of \(\tilde{N}\) to see how the granularity of image semantics influences the final clustering performance. As shown in Fig. 5(a), TAC achieves inferior performance under large \(\tilde{N}\) on both datasets. The reason is that under an excessively large cluster size, the semantics would be overly coarse-grained, thereby failing to precisely delineate the images at cluster boundaries. Conversely, when the cluster size is overly small, the excessively fine-grained semantics would break cluster structures, leading to performance degradation on ImageNet-Dogs. Notably, the proper range of \(\tilde{N}\) is smaller on UCF-101 than ImageNet-Dogs, owing to the former's richer image semantics. The results demonstrate the necessity and effectiveness of the proposed cluster number estimation criterion in Eq. 2.
#### 4.4.2 Number of discriminative nouns \(\gamma\)
After computing image semantic centers, we classify all nouns into the centers and select the top \(\gamma\) nouns of each center as discriminative nouns. We try various choices of \(\gamma\), and the results are shown in Fig. 5(b). As can be seen, a solitary noun is insufficient to cover the semantics of each image
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{3}{c}{ImageNet-Dogs} & \multicolumn{3}{c}{UCF101} & \multicolumn{3}{c}{AVG} \\ \cline{3-8} & & NMI & ACC & ARI & NMI & ACC & ARI & \\ \hline \multirow{4}{*}{CLIP} & ResNet-50 & 35.3 & 35.2 & 18.9 & 75.9 & 53.5 & 42.6 & 43.6 \\ & ResNet-101 & 50.5 & 47.9 & 32.9 & 78.5 & 56.3 & 46.3 & 52.1 \\ (k-means) & ViT-8/32 & 39.8 & 38.1 & 20.1 & 79.5 & 58.2 & 47.6 & 47.2 \\ & ViT-8/16 & 48.5 & 42.3 & 29.4 & 81.4 & 60.2 & 51.9 & 52.3 \\ \hline \multirow{4}{*}{\begin{tabular}{c} TAC \\ (no train) \\ \end{tabular} } & ResNet-50 & 75.5 & 74.7 & 62.6 & 76.1 & 55.7 & 44.4 & 64.8 \\ & ResNet-101 & 76.8 & 74.0 & 62.8 & 77.9 & 59.1 & 48.8 & 66.5 \\ & ViT-8/23 & 75.1 & 75.1 & 63.6 & 81.6 & 61.3 & 52.4 & 68.2 \\ & ViT-8/16 & 81.3 & 80.8 & 71.2 & 82.5 & 63.3 & 54.8 & 72.3 \\ \hline \multirow{4}{*}{CLIP (zero-shot)} & ResNet-50 & 76.0 & 75.9 & 60.6 & 75.2 & 59.9 & 45.2 & 65.5 \\ & ResNet-101 & 72.9 & 73.6 & 58.2 & 77.4 & 62.0 & 48.6 & 65.4 \\ & ViT-8/23 & 73.5 & 72.8 & 58.2 & 79.9 & 63.4 & 50.2 & 66.3 \\ & ViT-8/16 & 80.4 & 80.4 & 68.4 & 81.9 & 68.0 & 57.0 & 72.7 \\ \hline \multirow{4}{*}{
\begin{tabular}{c} TAC \\ \end{tabular} } & ResNet-50 & 76.1 & 79.8 & 66.6 & 76.3 & 60.6 & 49.3 & 68.1 \\ & ViT-8/23 & 80.6 & 83.0 & 72.2 & 82.3 & 68.7 & 60.1 & 74.5 \\ \cline{1-1} & ViT-8/16 & **82.1** & **85.7** & **74.5** & **83.3** & **70.4** & **61.8** & **76.3** \\ \hline \hline \end{tabular}
\end{table} TABLE VI: The clustering performance of TAC and CLIP variations with different image backbones.
Fig. 5: Analyses on six tunable hyper-parameters in the proposed TAC. The first three hyper-parameters influence both TAC with and without training. The last three hyper-parameters only influence the cross-modal mutual distillation process of TAC.
center. Conversely, an excessive number of nouns would falsely enrich the semantics, leading to inferior performance. The results show that the performance of TAC is stable with discriminative noun number \(\gamma\) in a typical range of 3-10.
#### 4.4.3 Retrieval softness temperature \(\bar{\tau}\)
Recall that we introduce a softness temperature \(\bar{\tau}\) when retrieving nouns to construct text counterparts. A smaller choice of \(\bar{\tau}\) leads to more discriminative text counterparts. However, when \(\bar{\tau}\) is too small, text counterparts of different images would collapse into the same noun, which damages the neighborhood information. As shown in Fig. 5(c), the clustering performance of TAC improves as the temperature \(\bar{\tau}\) increases to 0.01. However, continually increasing \(\bar{\tau}\) leads to a performance decrease, as samples of different semantics would be mixed in the text modality.
#### 4.4.4 Number of nearest neighbors \(\hat{N}\)
To collaborate the text and image modalities, we propose to mutually distill their neighborhood information. Here, we evaluate the performance of TAC with different numbers of nearest neighbors \(\hat{N}\) in Fig. 5(d). The results demonstrate that TAC is robust to diverse numbers of \(\hat{N}\). In our experiments, we simply fix \(\hat{N}=50\) on all datasets of varying sizes. However, we find the performance of TAC could be further boosted if \(\hat{N}\) is delicately tuned.
#### 4.4.5 Mutual distillation temperature \(\hat{\tau}\)
The mutual distillation temperature \(\hat{\tau}\) controls the strength of pushing different clusters apart. With a small \(\hat{\tau}\), the model emphasizes separating neighboring clusters, at the expense of paying less attention to distinguishing other clusters. Thus, for small cluster numbers where clusters are easier to separate, a smaller \(\hat{\tau}\) is recommended to amplify the distinction between similar clusters. However, for large cluster numbers, a larger \(\hat{\tau}\) is preferable to pay more attention to the overall discrimination between all clusters. According to the results shown in Fig. 5(e), TAC is robust against the value of \(\hat{\tau}\), but a proper choice can still improve the clustering performance.
#### 4.4.6 Weight of the balance loss \(\alpha\)
The balance loss \(L_{\mathrm{Bal}}\) prevents the model from assigning most samples to a few clusters. As shown in Fig. 5(f), \(\alpha\) should be at least larger than \(1\) to prevent model collapse. In our experiments, we fix \(\alpha=5\) and find it gives promising results on all eight datasets. In practice, the value of \(\alpha\) could be tuned according to prior knowledge of the evenness of the class distribution.
## 5 Conclusion
In this paper, instead of focusing on exhaustive internal supervision signal construction, we innovatively propose leveraging the rich external knowledge, which has been largely overlooked before, to facilitate clustering. As a specific implementation, our TAC achieves state-of-the-art image clustering performance by leveraging textual semantics, demonstrating the effectiveness and promising prospect of the proposed externally guided clustering paradigm. In the future, the following directions could be worth exploring. On the one hand, in addition to the modalities this work focuses on, external knowledge widely exists in different sources, domains, models, and so on. For example, one could utilize pre-trained object detection or semantic segmentation models to locate semantic objects for boosting image clustering. On the other hand, instead of focusing on image clustering, it is worth exploring external knowledge for clustering other forms of data, such as text and point clouds. Overall, we hope this work could serve as a catalyst, motivating more future studies on externally guided clustering, which we believe to be a promising direction for both methodology improvement and real-world application.
|
2307.12697 | Proton induced reactions on 114Sn and 120Sn targets at energies up to 18
MeV | We measure cross sections of proton-induced reactions on tin up to energies
of 18 MeV using the stacked-foil activation technique, and report first
experimental values for 114Sn(p,{\alpha})111In, {^{120}}Sn(p,{\alpha})117gIn,
and 114Sn(p,x)113Sn reactions. Measured cross sections have been compared to
existing experimental values and numerical calculations based on Talys1.95. Our
data are in good agreement with all previous experiments, and also with
Talys1.95 results except for 120Sn(p,{\alpha})117m,gIn reactions where the
numerical calculations are shifted to higher energies. We also measure isomeric
cross section ratios for 117m,gIn and 120m,gSb pairs as functions of the
incident proton energy. | G. H. Hovhannisyan, T. M. Bakhshiyan, G. V. Martirosyan, R. K. Dallakyan, A. R. Balabekyan | 2023-07-24T11:24:05Z | http://arxiv.org/abs/2307.12697v1 | **Proton Induced Reactions on \({}^{114}\)Sn and \({}^{120}\)Sn Targets**
###### Abstract
We measure cross sections of proton-induced reactions on tin up to energies of 18 MeV using the stacked-foil activation technique, and report first experimental values for \({}^{114}\)Sn(p,\(\alpha\))\({}^{111}\)In, \({}^{120}\)Sn(p,\(\alpha\))\({}^{117g}\)In, and \({}^{114}\)Sn(p,x)\({}^{113}\)Sn reactions. Measured cross sections have been compared to existing experimental values and numerical calculations based on Talys1.95. Our data are in good agreement with all previous experiments, and also with Talys1.95 results except for \({}^{120}\)Sn(p,\(\alpha\))\({}^{117m,g}\)In reactions where the numerical calculations are shifted to higher energies. We also measure isomeric cross section ratios for \({}^{117m,g}\)In and \({}^{120m,g}\)Sb pairs as functions of the incident proton energy.
Keywords: proton induced reactions, stack foils activation method, excitation function, cross-section, \({}^{114}\)Sn and \({}^{120}\)Sn targets
## 1 Introduction
Excitation functions of proton-induced nuclear reactions are of interest in a variety of fields, ranging from astrophysical applications to medical radioisotope production [1]. The knowledge of the excitation functions allows one to approximately determine the production yields of radionuclides and to estimate the impurity fractions. There are microscopic and phenomenological models for calculating the cross-sections of nuclear reactions for energy ranges of interest, but full microscopic understanding of nuclear interactions remains an open problem.
Obtaining experimental data on nuclear reaction cross-sections is crucial for verifying the predictions of existing models and refining their parameters.
The emission of one or more nucleons in the interactions of light projectiles with nuclei is generally well described by the existing models. The emission of complex particles, such as alpha particles, is usually more difficult to predict. When analyzing (p,\(\alpha\)) reaction cross sections, three components are usually separated: direct, pre-equilibrium, and compound, and each of them can be analyzed by the appropriate theory. The analysis of the compound-nucleus component may be made with statistical theories that require only the nuclear level densities and the transmission coefficients of the incoming and outgoing particles. Direct and pre-equilibrium processes allow the study of more fundamental questions, since their characteristics depend on the reaction mechanism and the structure of the nucleus. Triton capture and knock-out of \(\alpha\)-particles are considered to be the main mechanisms of the direct processes. The \(\alpha\)-particle knock-out mechanism could provide a powerful probe of the \(\alpha\)-particle cluster structure of nuclei [2]. There is evidence indicating the difference in the contribution of direct processes depending on the nucleon composition of the target-nucleus; in particular, it is greater for neutron-rich nuclei [3].
In this article we investigate proton-induced nuclear reactions on \({}^{114}\)Sn and \({}^{120}\)Sn targets for energies up to 18 MeV. We measure cross sections of \({}^{114}\)Sn(p,\(\alpha\))\({}^{111}\)In and \({}^{120}\)Sn(p,\(\alpha\))\({}^{117\rm{m},\rm{g}}\)In, \({}^{114}\)Sn(p,2n)\({}^{113}\)Sb, \({}^{120}\)Sn(p,n)\({}^{120\rm{m},\rm{g}}\)Sb, and \({}^{114}\)Sn(p,x)\({}^{113}\)Sn reactions. Tin has a magic proton number (Z=50) and 10 stable isotopes with different deformation by neutrons. The knowledge of activation cross section of tin isotopes can be essential for understanding the processes governing the reactions and clarifying model parameters. However, the number of experimental works on enriched tin targets is not large. There are measurements on natural composition tin targets in the energy range covering the excitation functions [4-6]. Most measurements of proton nuclear reaction on enriched tin samples in the peak region are for energies up to 9.1 MeV and do not cover the entire excitation function range [7-17,19-23] with only a few with measurements at higher energies: for \({}^{124}\)Sn(p,n)\({}^{124}\)Sb reaction there are measurements for energies up to 17 MeV [24,25], for \({}^{112}\)Sn(p,x) - up to 23.6 MeV [26], for \({}^{119}\)Sn(p,n), \({}^{119}\)Sn(p,2n) - up to 15.2 MeV [27], and for \({}^{120}\)Sn(p,\(\alpha\)) - up to 22 MeV [18].
## 2 Experimental details
We irradiated a stack of enriched \({}^{114}\)Sn and \({}^{120}\)Sn foils using 18 MeV proton beam provided by compact medical cyclotron IBA Cyclone18/18. The stack was composed of 6 blocks of \({}^{\rm nat}\)Cu-\({}^{114}\)Sn-\({}^{\rm nat}\)Cu-\({}^{120}\)Sn layers where tin foils were 20 to 40 \(\upmu\)m thick and copper foils were 20 \(\upmu\)m thick. The irradiation was 5 min long with a collimated 1 \(\upmu\)A proton beam of the same diameter as the target (1.2 cm).
After the irradiation the foils in the stack were detached, and the \(\gamma\)-spectra of each target were measured with a high-purity germanium (HPGe) detector GEM15P4-70. The energy resolution of the HPGe detector was 1.66 keV FWHM at 1332.5 keV peak of \({}^{60}\)Co, and 0.618 keV FWHM at 122 keV peak of \({}^{57}\)Co. The efficiency of the detector was estimated using standard \(\gamma\)-sources \({}^{133}\)Ba, \({}^{137}\)Cs, \({}^{60}\)Co, \({}^{22}\)Na with known activities (supplied by Spectrum Techniques, USA), covering the whole energy range of the studied \(\gamma\)-rays. The measurements were performed periodically during several days, with the first measurement done 40 minutes after the end of irradiation (to allow short-lived isotope cross-section measurements). Residual nuclei were identified by their half-lives and the energies of characteristic gamma lines. A typical \(\gamma\)-spectrum of the irradiated \({}^{120}\)Sn target is plotted in Fig.1.
Figure 1: \(\gamma\)–ray spectrum of the irradiated \({}^{120}\)Sn.
The isotopic compositions of the targets are given in Table 1. Reaction thresholds, Q-values, half-lives, and branching intensities of the identified radioactive residuals formed in the \({}^{114}\)Sn, \({}^{120}\)Sn targets are given in Table 2. Q-values and the threshold energies of the reactions were calculated using the Q-value calculator [28], and the decay data were taken from [29-31].
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Target} & \multicolumn{8}{c|}{Isotopic composition (\%)} \\ \cline{2-10} & 112 & 114 & 115 & 116 & 117 & 118 & 119 & 120 & 122 & 124 \\ \hline \({}^{114}\)Sn & 0.4 & 63.2 & 0.9 & 10.6 & 3.4 & 8.4 & 2.6 & 8.8 & 0.9 & 0.9 \\ \hline \({}^{120}\)Sn & \(<\)0.01 & \(<\)0.01 & \(<\)0.01 & 0.04 & 0.06 & 0.10 & 0.12 & 99.6 & 0.05 & 0.02 \\ \hline \end{tabular}
\end{table}
Table 1: Isotopic composition of the targets.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Reaction**} & Q value & \multirow{2}{*}{E\({}_{\rm th}\) (MeV)} & \multirow{2}{*}{T\({}_{1/2}\)} & \multirow{2}{*}{E\({}_{\gamma}\) (keV)} & \multirow{2}{*}{I\({}_{\gamma}\) (\%)} \\ \cline{2-2} & (MeV) & & & & \\ \hline \({}^{114}\)Sn(p,\(\alpha\))\({}^{111g}\)In & 2.69 & 0 & 2.805 d & 171.28 & 90.7 \\ \hline \({}^{114}\)Sn(p,\(\alpha\))\({}^{111m}\)In & 2.69 & 0 & 7.7 min & 537.22 & 87 \\ \hline \({}^{114}\)Sn(p,pn)\({}^{113}\)Sn & -10.31 & 10.39 & 115.1 d & 391.71 & 64.97 \\ \({}^{114}\)Sn(p,d)\({}^{113}\)Sn & -8.08 & 8.15 & & & \\ \hline \({}^{114}\)Sn(p,2n)\({}^{113}\)Sb & -14.99 & 15.12 & 6.67 min & 332.4 & 14.8 \\ \hline \({}^{120}\)Sn(p,n)\({}^{120m}\)Sb & -3.64 & 3.49 & 5.76 d & 197.3 & 87 \\ \hline \({}^{120}\)Sn(p,n)\({}^{120g}\)Sb & -3.64 & 3.49 & 15.89 min & 703.8 & 0.149 \\ \hline \({}^{120}\)Sn(p,\(\alpha\))\({}^{117m}\)In & 2.71 & 0 & 116.2 min & 315.302 & 19.1 \\ \hline \({}^{120}\)Sn(p,\(\alpha\))\({}^{117g}\)In & 2.71 & 0 & 43.2 min & 553 & 100 \\ \hline \end{tabular}
\end{table}
Table 2: List of the identified residues in the proton-induced reactions of \({}^{114}\)Sn, \({}^{120}\)Sn targets and their spectroscopic characteristics.
## 3 Data analysis
In Table 3 we present our measurements of reaction cross-sections \(\sigma\), which were determined as
\[\sigma=\frac{A_{obs}\,\lambda\,\frac{t_{3T}}{t_{3I}}}{\Phi\,N_{nucl}\,\varepsilon\,I_{\gamma}\left(1-e^{-\lambda t_{1}}\right)e^{-\lambda t_{2}}\left(1-e^{-\lambda t_{3T}}\right)}. \tag{1}\]
Here, \(A_{obs}\) is the observed number of \(\gamma\)-rays under the photo-peak, \(\lambda\) is the decay constant, \(\varepsilon\) is the detector efficiency, \(N_{nucl}\) is the number of target nuclei per area, \(\Phi\) is the proton flux, \(I_{\gamma}\) is the intensity of the product gamma line, \(t_{1}\) is the irradiation time, \(t_{2}\) is the time between the end of the bombardment and the beginning of the measurement, \(t_{3T}\) is the measurement real time, and \(t_{3I}\) is the measurement live time [32-34].
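For illustration, Eq. (1) can be evaluated with a few lines of code. The sketch below uses purely illustrative input values (counts, flux, areal density, efficiency); none of them are measured quantities of this work.

```python
import numpy as np

def activation_cross_section(A_obs, half_life, t1, t2, t3_real, t3_live,
                             flux, n_nucl, eff, i_gamma):
    """Activation cross section (cm^2) following Eq. (1)."""
    lam = np.log(2) / half_life                      # decay constant (1/s)
    dead_time = t3_real / t3_live                    # real-to-live time correction
    growth = 1.0 - np.exp(-lam * t1)                 # build-up during irradiation
    cooling = np.exp(-lam * t2)                      # decay before counting
    counting = 1.0 - np.exp(-lam * t3_real)          # decay during counting
    return (A_obs * lam * dead_time) / (flux * n_nucl * eff * i_gamma
                                        * growth * cooling * counting)

# Purely illustrative numbers: 5 min irradiation, 40 min cooling, 10 min counting,
# the 171.28 keV line of 111In (I_gamma = 0.907); flux in protons/s and target
# thickness in nuclei/cm^2, so sigma comes out in cm^2.
sigma = activation_cross_section(A_obs=1.2e4, half_life=2.805 * 86400,
                                 t1=300.0, t2=2400.0, t3_real=600.0, t3_live=590.0,
                                 flux=4.0e12, n_nucl=2.0e20, eff=0.01, i_gamma=0.907)
print(f"sigma = {sigma * 1e27:.2f} mb")              # 1 mb = 1e-27 cm^2
```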
The actual energy of the external beam may differ from the nominal beam energy [35]. To determine the actual energy and the beam intensity, we placed a thin (20 \(\upmu\)m) copper monitor foil in front of the stack. The ratio of the \({}^{63}\)Cu(p,n)\({}^{63}\)Zn and \({}^{63}\)Cu(p,2n)\({}^{62}\)Zn monitor reaction cross sections was calculated using Eq. (2):
\[\frac{\sigma_{{}^{62}\rm Zn}}{\sigma_{{}^{63}\rm Zn}}=\frac{A_{{}^{62}\rm Zn}\left(1-e^{-\lambda_{{}^{63}\rm Zn}t_{1}}\right)}{A_{{}^{63}\rm Zn}\left(1-e^{-\lambda_{{}^{62}\rm Zn}t_{1}}\right)} \tag{2}\]
where \(A_{{}^{62}Zn}\), \(\lambda_{{}^{62}Zn}\) and \(A_{{}^{63}Zn}\), \(\lambda_{{}^{63}Zn}\) are the activities and decay constants of \({}^{62}\)Zn and \({}^{63}\)Zn, respectively [36]. The primary beam energy was estimated to be \((18\pm 0.2)\) MeV by comparing the cross section ratio obtained from Eq. (2) to the recommended cross-sections from IAEA [37]. We also calculated the beam intensity from the monitor reactions using Eq. (1). The proton flux in each monitor target was determined using the recommended cross sections of the \({}^{\rm nat}\)Cu(p,x)\({}^{62,63,65}\)Zn reactions from [37].
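A minimal sketch of the monitor-ratio evaluation of Eq. (2) is given below; the activities are invented placeholders, and the default half-lives are nominal literature values for \({}^{62}\)Zn and \({}^{63}\)Zn, so the script only illustrates how the ratio is compared against the IAEA recommended monitor cross sections.

```python
import numpy as np

def zn_cross_section_ratio(A62, A63, t1, T12_62=9.186 * 3600.0, T12_63=38.47 * 60.0):
    """sigma(62Zn)/sigma(63Zn) from end-of-bombardment activities, Eq. (2).
    T12_62 and T12_63 are nominal half-lives of 62Zn and 63Zn in seconds."""
    lam62 = np.log(2) / T12_62
    lam63 = np.log(2) / T12_63
    return (A62 * (1.0 - np.exp(-lam63 * t1))) / (A63 * (1.0 - np.exp(-lam62 * t1)))

# Illustrative activities (arbitrary units) after a 5 min irradiation; the measured
# ratio would be matched against the recommended cross sections [37] to infer the
# actual beam energy.
print(f"ratio = {zn_cross_section_ratio(A62=150.0, A63=4.0e4, t1=300.0):.4f}")
```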
The average proton energy in each layer of the stack (defined as the mean of incoming \(E_{\rm in}\) and outgoing \(E_{\rm out}\) energies from the layer) and their uncertainties (\(\Delta E=(E_{\rm in}\!-\!E_{\rm out})/2\)) were
calculated using SRIM-2013 [38]. The total energy uncertainties for each layer also include uncertainties from the primary beam energy (0.2 MeV) and target thicknesses (5%).
When calculating reaction cross section uncertainties, we add uncertainties from the following sources in quadrature: statistical errors on counts (0.5-16%), uncertainties in the flux (up to 7%), detector efficiency (4%), and target thicknesses (5%). The resulting cross section uncertainties are in the range of 9-19%.
## 4 Results and discussion
In Figs. 2-6 we show the comparison of our measured cross-sections (given in Table 3) with published experimental data [4-5, 8-14, 17, 20, 23, 25] and theoretical values obtained using the TALYS1.95 code [39].
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Energy (MeV)} & \multicolumn{4}{c|}{Cross-section (mb)} \\ \cline{2-5} & \({}^{114}\)Sn(p,\(\alpha\))\({}^{111(\text{m+g})}\)In & \({}^{114}\)Sn(p,\(\alpha\))\({}^{111\text{m}}\)In & \({}^{114}\)Sn(p,x)\({}^{113(\text{m+g})}\)Sn (cum) & \({}^{114}\)Sn(p,2n)\({}^{113}\)Sb \\ \hline \(17.34\pm 0.89\) & \(7.98\pm 0.72\) & \(0.58\pm 0.11\) & \(281\pm 25\) & \(21.58\pm 4.10\) \\ \hline \(15.79\pm 0.81\) & \(6.59\pm 0.59\) & – & \(138\pm 13\) & – \\ \hline \(14.13\pm 0.72\) & \(3.96\pm 0.38\) & – & \(26.49\pm 2.91\) & – \\ \hline \(12.13\pm 0.64\) & \(2.52\pm 0.23\) & – & \(2.65\pm 0.50\) & – \\ \hline \(10.52\pm 0.54\) & \(1.30\pm 0.19\) & – & – & – \\ \hline \(8.46\pm 0.50\) & \(0.54\pm 0.09\) & – & – & – \\ \hline \multirow{2}{*}{Energy (MeV)} & \multicolumn{4}{c|}{Cross-section (mb)} \\ \cline{2-5} & \({}^{120}\)Sn(p,n)\({}^{120\text{m}}\)Sb & \({}^{120}\)Sn(p,n)\({}^{120\text{g}}\)Sb & \({}^{120}\)Sn(p,\(\alpha\))\({}^{117\text{m}}\)In & \({}^{120}\)Sn(p,\(\alpha\))\({}^{117\text{g}}\)In \\ \hline \(16.57\pm 0.85\) & \(23.34\pm 2.59\) & \(108\pm 15\) & \(1.20\pm 0.22\) & \(5.22\pm 0.46\) \\ \hline \(14.97\pm 0.77\) & \(39.66\pm 3.58\) & \(131\pm 18\) & \(1.10\pm 0.11\) & \(3.52\pm 0.33\) \\ \hline \(13.27\pm 0.69\) & \(49.08\pm 4.42\) & \(215\pm 20\) & \(0.76\pm 0.07\) & \(1.82\pm 0.16\) \\ \hline \(11.48\pm 0.61\) & \(71.56\pm 10.73\) & \(551\pm 55\) & \(0.50\pm 0.05\) & \(1.08\pm 0.11\) \\ \hline \(9.51\pm 0.51\) & \(46.78\pm 7.03\) & – & \(0.17\pm 0.03\) & \(0.46\pm 0.09\) \\ \hline \(7.31\pm 0.41\) & \(14.56\pm 2.18\) & – & – & – \\ \hline \end{tabular}
\end{table}
Table 3: Cross-sections of the radionuclides formed in the proton-induced reactions of \({}^{114}\)Sn, \({}^{120}\)Sn targets at different energies.
_4.1. Reactions on the \({}^{114}\)Sn target_
\({}^{111}\)In is directly produced in the \({}^{114}\)Sn(p,\(\alpha\)) reaction, which has a Coulomb barrier of 7.04 MeV and no threshold. The isomeric state of \({}^{111}\)In (\(I^{\pi}=1/2^{-}\), \(T_{1/2}\) = 7.7 min) decays by isomeric transition (IT = 100%) to the ground state (\(I^{\pi}=9/2^{+}\), \(T_{1/2}\) = 2.805 d), which further decays by electron capture (\(\varepsilon\): 100%) to stable \({}^{111}\)Cd. Because of its short half-life, we could only estimate the cross-section of the isomeric state \({}^{111\rm m}\)In for the targets whose \(\gamma\)-spectra were measured first.
\({}^{114}\)Sn(p,\(\alpha\))\({}^{111(\rm m+g)}\)In reaction cross-sections were measured on a 63.2% enriched \({}^{114}\)Sn target using the interference-free \(\gamma\)-lines \(E_{\gamma}\) = 171.28 keV (\(I_{\gamma}\) = 90.7%) and \(E_{\gamma}\) = 245.39 keV (\(I_{\gamma}\) = 94.1%). The only other reaction that contributes to \({}^{111}\)In production from our target is \({}^{112}\)Sn(p,2p)\({}^{111}\)In (\(E_{\rm th}\) = 7.62 MeV). Since the \({}^{112}\)Sn content in the target was only 0.4%, and the \({}^{112}\)Sn(p,2p)\({}^{111(\rm m+g)}\)In reaction cross section is small (\(\leq\)0.5 mb in the discussed energy region according to TALYS1.95), the contribution of this reaction can be neglected. Reactions on other tin isotopes are not considered due to their high thresholds. Fig. 2 shows the measured cross-section of the discussed reaction along with Talys1.95 simulation results. The Talys1.95 data are in good agreement with the measured data. We did not find any published experimental results for comparison.
Figure 2: Excitation function of the \({}^{114}\)Sn(p,\(\alpha\))\({}^{111}\)In reaction.
\({}^{113}\)Sb is produced in \({}^{114}\)Sn(p,2n) reactions with 15.12 MeV threshold energy and decays (\(T_{1/2}=6.67\) min) to \({}^{113}\)Sn by electron capture (\(\varepsilon\) 100%). \({}^{114}\)Sn(p,2n)\({}^{113}\)Sb reaction cross-section was measured using the interference-free \(\gamma\)-line \(E_{\gamma}\)= 332.41 keV (\(I_{\gamma}\)= 14.8%). The contribution of the side reaction \({}^{112}\)Sn(p,\(\gamma\))\({}^{113}\)Sb does not exceed the uncertainty of measured cross section for the reaction \({}^{114}\)Sn(p,2n)\({}^{113}\)Sb (\({}^{112}\)Sn content in the target is 0.4%, and TALYS cross-section of \({}^{112}\)Sn(p,\(\gamma\))\({}^{113}\)Sb reaction at 17 MeV is 2.12 mb). Fig.3 shows \({}^{114}\)Sn(p,2n)\({}^{113}\)Sb reaction cross sections. For this reaction, there is one additional measurement available [26]; neither of the experimental points is in agreement with the calculations, and additional measurements are required to construct the full shape of the excitation function.
Figure 3: Excitation function of the \({}^{114}\)Sn(p,2n)\({}^{113}\)Sb reaction.
The ground state \({}^{113g}\)Sn (\(I^{\pi}=1/2^{+}\), \(T_{1/2}\) = 115.09 d) is formed through direct production in the (p,pn) reaction, through decay (IT 91%) of the metastable \({}^{113m}\)Sn state (\(I^{\pi}=7/2^{+}\), \(T_{1/2}\) = 21.4 min), and through decay of the parent \({}^{113}\)Sb. The \({}^{113}\)Sn formation chain is presented in Fig. 4.
Full cumulative cross-sections were measured after the total decay of the two short-lived parents through the 391.7 keV \(\gamma\)-ray line (\(I_{\gamma}\) = 64.97%) of \({}^{113g}\)Sn. Contributions of other Sn isotopes contained in the target to the \({}^{113}\)Sn production can be neglected: the \({}^{115}\)Sn(p,p2n)\({}^{113}\)Sn reaction (E\({}_{\rm th}\) = 9.45 MeV) can be neglected as the \({}^{115}\)Sn content in the target is only 0.9%, and the \({}^{115}\)Sn(p,x)\({}^{113}\)Sn reaction cross section is small (\(\leq\)3.5\(\cdot\)10\({}^{-3}\) mb for the discussed energy region according to TALYS1.95). The \({}^{112}\)Sn(p,\(\gamma\))\({}^{113}\)Sb reaction contribution does not exceed the uncertainty of our measurement for the \({}^{114}\)Sn(p,x)\({}^{113(\rm m+g)}\)Sn (cum) reaction cross section (TALYS1.95 reaction cross sections for the energies 12.13-15.79 MeV are \(\leq\)7.01 mb). Fig. 5 shows the comparison of the \({}^{114}\)Sn(p,x)\({}^{113(\text{m+g})}\)Sn (cum) reaction cross-section measurements with the values from Talys1.95 simulations, showing good agreement at low energies.
Figure 4: \({}^{113}\)Sn formation chain.
_4.2. Reactions on the \({}^{120}\)Sn target_
Here we measure \({}^{120}\)Sb and \({}^{117}\)In isotope production cross-sections. The purity of the \({}^{120}\)Sn target used in the experiments is 99.6%, making the contributions of other tin isotopes to product yields negligibly small: the \({}^{119}\)Sn and \({}^{118}\)Sn contents in the target are 0.12% and 0.10%, and the \({}^{119}\)Sn(p,\(\gamma\))\({}^{120}\)Sb and \({}^{118}\)Sn(p,2p)\({}^{117}\)In reaction cross sections in the discussed energy region are \(\leq\)0.99 mb and \(\leq 10^{-7}\) mb, respectively.
\({}^{120}\)Sb is produced in the \({}^{120}\)Sn(p,n) reaction in both the ground (\(I^{\pi}=1^{+}\), \(T_{1/2}\) = 15.89 min) and isomeric (\(I^{\pi}=8^{-}\), \(T_{1/2}\) = 5.76 d) states. Both states of \({}^{120}\)Sb decay to the stable \({}^{120}\)Sn. We measure the cross-sections of both states (summarized in Table 3) using the interference-free \(\gamma\)-lines \(E_{\gamma}=197.3\) keV (I\({}_{\gamma}\) = 87%) and \(E_{\gamma}=1023.1\) keV (I\({}_{\gamma}\) = 99.4%) for the \({}^{120}\)Sn(p,n)\({}^{120\rm m}\)Sb reaction, and \(E_{\gamma}=703.8\) keV (I\({}_{\gamma}\) = 0.149%) and \(E_{\gamma}=988.6\) keV (I\({}_{\gamma}\) = 0.063%) for the \({}^{120}\)Sn(p,n)\({}^{120\rm g}\)Sb reaction.
Fig. 6 shows the excitation functions of these reactions along with the data from Refs. [4-5, 8, 11-13] and Talys1.95 simulation results. Note that Refs. [4-5] used a \({}^{nat}\)Sn target so for a
Figure 5: Excitation function of the \({}^{114}\)Sn(p,x)\({}^{113}\)Sn reaction.
comparison to the rest of the measurements we adjusted their results to a pure \({}^{120}\)Sn target; these measurements may also include a small contribution from \({}^{119}\)Sn(p,\(\gamma\))\({}^{120\rm{m}}\)Sb cross section.
All data for the \({}^{120}\)Sn(p,n)\({}^{120\rm{(m+g)}}\)Sb and \({}^{120}\)Sn(p,n)\({}^{120\rm{g}}\)Sb reactions agree well with each other and with Talys1.95 calculations. Some discrepancies are present for the \({}^{120}\)Sn(p,n)\({}^{120\rm{m}}\)Sb reaction at energies higher than the peak energy.
According to the TALYS prediction, the compound nucleus mechanism plays a crucial role at low incident energies. Besides this, pre-equilibrium processes make a sizable contribution to the reaction cross sections for incident energies between 10 and (at least) 200 MeV [31]. For the \({}^{120}\)Sn(p,n)\({}^{120\rm{m},\rm{g}}\)Sb reaction we performed TALYS1.95 calculations with two models, a pure compound model and a compound \(+\) pre-equilibrium model. Pre-equilibrium processes play a role in the formation of both \({}^{120\rm{m}}\)Sb and \({}^{120\rm{g}}\)Sb, but are much more pronounced for the metastable state, as seen in Fig. 6 at high energies.
The \({}^{120}\)Sn(p,\(\alpha\)) reaction populates both the ground and isomeric states of \({}^{117}\)In. The isomeric state (\(I^{\pi}=1/2^{-}\), \(T_{1/2}=116.2\) min) decays to the ground state (\(I^{\pi}=9/2^{+}\), \(T_{1/2}=43.2\) min) with IT = 47.1%. The independent cross-section of the isomeric state (\({}^{117\rm{m}}\)In) was determined using the line \(E_{\gamma}=315.302\) keV (\(I_{\gamma}=19.1\%\)).
We calculate \({}^{117\rm{g}}\)In cross-section as a sum of direct production from \({}^{120}\)Sn(p,\(\alpha\)) reaction and through decay of \({}^{117\rm{m}}\)In during and after the irradiation.
The independent cross section of the daughter nucleus \({}^{117\rm{g}}\)In was calculated using the following relation [42]:
\[\sigma_{d}=\frac{\lambda_{d}}{\left(1-e^{-\lambda_{d}t_{1}}\right)e^{-\lambda_{d}t_{2}}\left(1-e^{-\lambda_{d}t_{3}}\right)}\times\left[\frac{A_{obs}}{\Phi\,N_{nucl}\,\varepsilon\,I_{\gamma}}-\sigma_{p}f\frac{\lambda_{p}\lambda_{d}}{\lambda_{d}-\lambda_{p}}\left(\frac{\left(1-e^{-\lambda_{p}t_{1}}\right)e^{-\lambda_{p}t_{2}}\left(1-e^{-\lambda_{p}t_{3}}\right)}{\lambda_{p}^{2}}-\frac{\left(1-e^{-\lambda_{d}t_{1}}\right)e^{-\lambda_{d}t_{2}}\left(1-e^{-\lambda_{d}t_{3}}\right)}{\lambda_{d}^{2}}\right)\right] \tag{3}\]
where \(\lambda_{p}\) and \(\lambda_{d}\) are the parent and daughter decay constants, \(f\) specifies the fraction of parent nuclei decaying to a daughter nucleus, and the rest of the symbols are the same as in eq.(1).
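Equation (3) can be coded directly, as in the following sketch; all numerical inputs are placeholders chosen only to exercise the formula, not values from this experiment.

```python
import numpy as np

def daughter_cross_section(A_obs, lam_p, lam_d, t1, t2, t3,
                           flux, n_nucl, eff, i_gamma, sigma_p, f):
    """Independent daughter cross section following Eq. (3); the feeding from the
    parent during irradiation, cooling and counting is subtracted."""
    def g(lam):  # (1 - e^{-lam t1}) e^{-lam t2} (1 - e^{-lam t3})
        return (1.0 - np.exp(-lam * t1)) * np.exp(-lam * t2) * (1.0 - np.exp(-lam * t3))

    feeding = sigma_p * f * lam_p * lam_d / (lam_d - lam_p) * (
        g(lam_p) / lam_p**2 - g(lam_d) / lam_d**2)
    return lam_d / g(lam_d) * (A_obs / (flux * n_nucl * eff * i_gamma) - feeding)

# Illustrative 117mIn -> 117gIn example with f = 0.471 (IT branching); counts,
# flux, areal density, efficiency and parent cross section are placeholders.
lam_m = np.log(2) / (116.2 * 60.0)   # 117mIn decay constant (1/s)
lam_g = np.log(2) / (43.2 * 60.0)    # 117gIn decay constant (1/s)
sigma_g = daughter_cross_section(A_obs=1.0e5, lam_p=lam_m, lam_d=lam_g,
                                 t1=300.0, t2=3600.0, t3=600.0,
                                 flux=4.0e12, n_nucl=2.0e20, eff=0.01, i_gamma=1.0,
                                 sigma_p=1.2e-27, f=0.471)
print(f"sigma_d = {sigma_g * 1e27:.2f} mb")
```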
Fig. 7 shows the excitation functions of the \({}^{120}\)Sn(p,\(\alpha\))\({}^{117\rm{m}}\)In, \({}^{120}\)Sn(p,\(\alpha\))\({}^{117\rm{g}}\)In, and \({}^{120}\)Sn(p,\(\alpha\))\({}^{117\rm{m}+\rm{g}}\)In reactions. We found only one publication with experimental results for the \({}^{120}\)Sn(p,\(\alpha\))\({}^{117\rm{m}}\)In reaction [18], and it agrees with our data. All experimental data are shifted towards lower energies compared to the TALYS1.95 calculation.
In TALYS1.95, the cross section of the (p,\(\alpha\)) reaction is given by the sum of compound and pre-equilibrium components. The latter is described by the exciton model. Pick-up and knock-out mechanisms, which play an important role for (p,\(\alpha\)) reactions [2], are not covered by the exciton model. A specially developed phenomenological model is included in TALYS1.95 for covering these processes [39].
In this paper, we have considered how the excitation function of the (p,\(\alpha\)) reaction changes depending on the nuclear level density model. Nuclear level densities are the most important quantities for compound nucleus decay width calculations in the statistical model. TALYS1.95 includes six different nuclear level density models, and we made calculations with all of them. The six models are the Constant Temperature + Fermi gas model (CTM) (the default in TALYS), the Back-shifted Fermi gas Model (BFM), the Generalised Superfluid Model (GSM), Skyrme-Hartree-Fock-Bogolyubov level densities from numerical tables (SHFB), Gogny-Hartree-Fock-Bogolyubov level densities from numerical tables (GHFB), and temperature-dependent Gogny-Hartree-Fock-Bogolyubov level densities from numerical tables (T-GHFB) [39]. The results of the calculations for the various models differ significantly from each other, with the T-GHFB results for the \({}^{120}\)Sn(p,\(\alpha\)) reaction being closest to the experimental ones (Fig. 7). The same model is consistent with the experiments for the \({}^{120}\)Sn(p,n)\({}^{120g}\)Sb reaction but overestimates the cross section of the metastable \({}^{120m}\)Sb (Fig. 6). For reactions on the \({}^{114}\)Sn target we performed calculations only with the CTM model (Figs. 2, 3, 5) since the agreement with the experiment was good.
## 5 Isomeric cross section ratio
Measurements of isomeric cross section ratios (IR) are important for studies of nuclear structure, especially the level density and the discrete level structure of residual nuclei [9, 43]. It has been shown that IRs depend on several factors, such as the type and energy of the projectile, the type of the emitted particle, the spin of the target nucleus, and the spins of the ground and isomeric states [40, 44]. Measurement of isomeric cross-section ratios is also important for properly choosing the input model parameters [45, 46].
We measure cross-sections of two isomeric pairs \({}^{120\rm m,g}\)Sb and \({}^{117\rm m,g}\)In formed in \({}^{120}\)Sn(p,n) and \({}^{120}\)Sn(p,\(\alpha\)) reactions. IR is defined as the cross-section ratio of the higher spin state to the lower one. Measured and calculated (via TALYS1.95) IRs of the \({}^{120\rm m,g}\)Sb pair are presented in Fig. 8. TALYS1.95 data are consistent with low energy data of Ref. [8] and also with our data, with the exception of the highest energy measurement.
The IR dependence on the incident proton energy for the \({}^{117\rm m,g}\)In pair is presented in Fig. 9. Our data are in good agreement with the TALYS1.95 calculations, while the data of [18] are inconsistent with both.
## 6 Conclusions
We measured \({}^{114}\)Sn(p,\(\alpha\))\({}^{111(\text{m}+\text{g})}\)In, \({}^{114}\)Sn(p,pn)\({}^{113(\text{m}+\text{g})}\)Sn, \({}^{114}\)Sn(p,2n)\({}^{113}\)Sb, \({}^{120}\)Sn(p,n)\({}^{120\text{m},\text{g}}\)Sb, \({}^{120}\)Sn(p,\(\alpha\))\({}^{117\text{m},\text{g}}\)In reaction cross sections and compared them to published experimental data and TALYS1.95 numerical calculations to check the predictive ability of TALYS1.95 for tin isotopes.
TALYS1.95 provides a good description for most of the discussed reactions. The largest discrepancy between the experimental and calculated data was noted for the \({}^{120}\)Sn(p,\(\alpha\))\({}^{117}\)In reaction. For the \({}^{120}\)Sn target, calculations with all level density models available in TALYS1.95 were performed, and it was shown that the results strongly depend on the choice of the model for the \({}^{120}\)Sn(p,\(\alpha\))\({}^{117}\)In and \({}^{120}\)Sn(p,n)\({}^{120\text{m}}\)Sb reactions.
For the \({}^{120}\)Sn(p,\(\alpha\))\({}^{117}\)In reaction, the temperature-dependent Gogny-Hartree-Fock-Bogolyubov level density gives the best agreement with the experimental values, but some discrepancy between the calculations and the experiment still remains.
For the \({}^{120}\)Sn(p,n) reaction, the choice of the nuclear level density model is essential for the metastable state (\({}^{120\text{m}}\)Sb) production and much less important for the ground state (\({}^{120\text{g}}\)Sb). However, due to the scatter of the available experimental data, it is impossible to unambiguously conclude that either of the two discussed models has the best predictive ability.
Isomeric cross section ratios of \({}^{120m,g}\)Sb and \({}^{117m,g}\)In pairs formed in \({}^{120}\)Sn(p,n) and \({}^{120}\)Sn(p,\(\alpha\)) reactions are mostly well captured by Talys1.95 calculations in the discussed energy region.
**Acknowledgements**
The authors are thankful to the staff of Cyclotron18/18 of the Radioisotope Production Center, Yerevan, for the operation of the accelerator, and assistance in transporting the irradiated targets.
|
2301.11010 | On the Optimal Beamwidth of UAV-Assisted Networks Operating at
Millimeter Waves | The millimeter-wave (mm-wave) bands enable very large antenna arrays that can
generate narrow beams for beamforming and spatial multiplexing. However,
directionality introduces beam misalignment and leads to reduced energy
efficiency. Thus, employing the narrowest possible beam in a cell may not
necessarily imply maximum coverage. The objective of this work is to determine
the optimal sector beamwidth for a cellular architecture served by an unmanned
aerial vehicle (UAV) acting as a base station (BS). The users in a cell are
assumed to be distributed according to a Poisson Point Process (PPP) with a
given user density. We consider hybrid beamforming at the UAV, such that
multiple concurrent beams serve all the sectors simultaneously. An optimization
problem is formulated to maximize the sum rate over a given area while limiting
the total power available to each sector. We observe that, for a given transmit
power, the optimal sector beamwidth increases as the user density in a cell
decreases, and varies based on the height of the UAV. Thus, we provide
guidelines towards the optimal beamforming configurations for users in rural
areas. | Manishika Rawat, Marco Giordani, Brejesh Lall, Abdelaali Chaoub, Michele Zorzi | 2023-01-26T09:52:24Z | http://arxiv.org/abs/2301.11010v1 | # On the Optimal Beamwidth of UAV-Assisted Networks Operating at Millimeter Waves
###### Abstract
The millimeter-wave (mm-wave) bands enable very large antenna arrays that can generate narrow beams for beamforming and spatial multiplexing. However, directionality introduces beam misalignment and leads to reduced energy efficiency. Thus, employing the narrowest possible beam in a cell may not necessarily imply maximum coverage. The objective of this work is to determine the optimal sector beamwidth for a cellular architecture served by an unmanned aerial vehicle (UAV) acting as a base station (BS). The users in a cell are assumed to be distributed according to a Poisson Point Process (PPP) with a given user density. We consider hybrid beamforming at the UAV, such that multiple concurrent beams serve all the sectors simultaneously. An optimization problem is formulated to maximize the sum rate over a given area while limiting the total power available to each sector. We observe that, for a given transmit power, the optimal sector beamwidth increases as the user density in a cell decreases, and varies based on the height of the UAV. Thus, we provide guidelines towards the optimal beamforming configurations for users in rural areas.
UAV-BS, millimeter-wave, optimal sector beamwidth, rural connectivity.
## I Introduction
According to the World Bank, 43 % of the world population lives in rural areas [1]. However, these regions remain mostly unserved while the world prepares to roll out the fifth generation (5G) of mobile networks. According to a report published by the International Telecommunication Union in 2021, the share of Internet users in urban areas is twice the number in rural areas [2]. The primary cause behind this is the lack of communication infrastructure in remote areas. The next generation of wireless networks (6G) emerges as a solution to this challenge [3]: for example, 6G is focusing on non-terrestrial networks (NTN) using Unmanned Aerial Vehicles (UAVs), High Altitude Platforms (HAPs), and satellites to promote ubiquitous and high-capacity global connectivity [4]. Notably, these modules can serve as aerial base stations (BS) or to assist the terrestrial BS in providing on-demand, cost-effective coverage in unserved and poorly served areas [5].
UAVs, in particular, have been proposed to bridge the digital divide and provide on-demand networks for applications such as disaster management, medical camps, network exploration, and surveillance [6, 7]. Deploying a UAV as a BS (UAV-BS) is quick and affordable compared to a terrestrial network infrastructure, and when operating in the millimeter-wave (mm-wave) bands can promote cost-effective ubiquitous coverage, high throughput, and low latency even in rural areas [8].
However, how to deploy UAV-BSs is a challenging design issue, and has been discussed in detail in [9, 10]. At the same time, allocating the (limited) resources to the ground users is critical. Several resource allocation problems have been formalized in the literature to meet different quality of service (QoS) requirements such as coverage, fairness, and energy efficiency [11, 12, 13, 14, 15, 16, 17]. An energy-efficient UAV communication system was proposed in [11] by optimizing the UAV trajectory and jointly considering the communication throughput and the energy consumption. In [12], the authors introduced an approximate beam pattern and provided a solution for the UAV deployment and beam gain allocation to maximize the capacity over a given area. In [13], multi-UAV communication and non-orthogonal multiple access (NOMA) have been combined for the purpose of constructing high-capacity Internet of Things (IoT) uplink transmission systems. The channel assignment, the uplink transmit power of IoT nodes, and the flying heights of UAVs have been jointly optimized to maximize the system capacity. In [14], NOMA transmission with UAV-BS is implemented to provide coverage over a dense user region. A beam scanning approach has been proposed to identify the optimal area within the user region, and hence maximize the achievable sum rate. In [15], the locations of transceivers in the downlink and uplink were modeled using a Poisson Point Process (PPP) or Poisson Cluster Process to derive closed-form expressions of the coverage probability. In [16], an optimal resource allocation problem has been investigated for downlink coverage. It considered concurrent transmission to all sectors, and an asymptotically-optimal solution has been proposed to solve a mixed integer non-linear programming problem. The authors in [17] have proposed an intelligent UAV-BS placement and power allocation algorithm to maximize the sum rate in a region.
The optimal beamwidth for UAV-assisted multi-user systems has been studied in [18] considering the main lobe of the directional antenna serving a cell. The ground terminals were partitioned into disjoint clusters, which were sequentially served by the UAV, and the joint UAV height and beamwidth
have been investigated. The altitude, beamwidth, and location of the UAV and the bandwidth allocated to each user were jointly optimized in [19] for uplink UAV communication to minimize the sum uplink power. An algorithm was proposed to obtain a suboptimal solution by assigning different bandwidths to ground terminals. In [20], the UAV location and antenna beamwidth were jointly optimized for quasi-stationary and mobile UAVs. The impact of the beamwidth on UAV location and trajectory has been investigated for minimizing serving time and increasing throughput. All of these works, however, assume that the vertical main lobe of the UAV antenna covers all users.
In this paper we determine the optimal sector beamwidth for a UAV-assisted cellular network implementing hybrid beamforming, where users are distributed according to a PPP with a given user density. An optimization problem is developed for efficient resource allocation to maximize the sum rate in a sector while ensuring fairness. We observe that the optimal sector beamwidth depends on the user density, the height of the UAV-BS, and the propagation scenario. For a sub-urban scenario, with cell radius of \(100\) m and a UAV-BS deployed at a height of \(100\) m, the optimal beamwidth decreases from \(10^{\circ}\) to \(5^{\circ}\) as the user density increases from \(0.0005\) to \(0.002\) UEs/m\({}^{2}\). On the other hand, for a fixed user density of \(0.0005\) UEs/m\({}^{2}\), the optimal beamwidth initially decreases from \(12^{\circ}\) to \(10^{\circ}\) and then increases to \(15^{\circ}\) as the UAV height increases from \(10\) m to \(200\) m. We further observe that the number of sectors required to optimally serve a given number of users in an urban region is much larger than that in a rural region for the same QoS requirements.
The paper has been organized as follows. In Sec. II we present our system model; in Sec. III we describe the optimization problem to maximize the sum rate in a cell within given constraints, and present the algorithm we used to compute the sum rate as a function of the sector beamwidth; in Sec. IV we show our simulation results; conclusions are given in Sec. V.
**Notations:** For a random variable \(X\), \(X\sim\text{Poisson}(x)\) denotes that \(X\) is Poisson distributed with rate parameter \(x\), \(X\sim\text{U}(x_{1},x_{2})\) indicates that \(X\) is uniformly distributed in the range \((x_{1},x_{2}]\), and \(X\sim\text{Tr}(x_{1},x_{2})\) denotes that \(X\) has triangular distribution in the range \((x_{1},x_{2}]\)[21].
## II System Model
The system model involves a UAV-BS deployed at a height \(h\) and serving a cellular area of radius \(R\), as shown in Fig. 1. The cell is divided into \(S\) sectors, each of beamwidth \(\theta\). The UAV is mounted with a uniform planar array (UPA) antenna of \(N\) elements such that \(\theta\approx 2/N\) radians [22]. The users, represented by crosses in Fig. 1, are distributed in the cell according to a PPP with density \(\lambda\). The location of the \(k^{th}\) user in a cell is given by \((d_{k},\phi_{k})\), where \(d_{k}\) is the horizontal distance between the \(k^{th}\) user and the UAV-BS and \(\phi_{k}\) is the phase of the \(k^{th}\) user measured counterclockwise. Here, \(\phi\sim\text{U}(0,2\pi)\) and \(d\sim\text{Tr}(0,R)\). The average number of users in a cell is \(M=\pi R^{2}\lambda\). We assume fixed transmit power \(P_{t}\) at the UAV, and orthogonal frequency division multiplexing (OFDM) to serve multiple users on a single beam. Therefore, the total bandwidth \(B\) is split into \(N_{c}\) subcarriers to serve multiple users in a sector. We operate at mm-wave frequency in an effort to maximize the communication capacity, and assume hybrid beamforming at the UAV such that all sectors are served by concurrent beams [23].
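As a concrete illustration of this geometry, the sketch below samples one PPP realisation of the users; the radius, density, UAV height, and random seed are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_users(radius, density, uav_height):
    """One PPP realisation of ground users: horizontal distance d ~ Tr(0, R),
    azimuth phi ~ U(0, 2*pi), and slant range l to the UAV-BS."""
    n_users = rng.poisson(np.pi * radius**2 * density)     # mean M = pi R^2 lambda
    d = rng.triangular(0.0, radius, radius, size=n_users)  # mode at R: uniform over the disc
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_users)
    l = np.hypot(uav_height, d)
    return d, phi, l

d, phi, l = drop_users(radius=100.0, density=5e-4, uav_height=100.0)
print(f"{d.size} users dropped, mean slant range {l.mean():.1f} m")
```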
The data rate \(r_{n,k}\) for the \(n^{th}\) subcarrier allocated to the \(k^{th}\) user is given by
\[r_{n,k}=(B/N_{c})\log_{2}(1+P_{n,k}\gamma_{n,k}), \tag{1}\]
where \(\gamma_{n,k}\) is given by [24], i.e.,
\[\gamma_{n,k}(l_{k})=\frac{|\mathrm{h}_{LoS}^{n,k}|^{2}PL_{LoS}^{n,k}(l_{k})P_{r}(l_{k})+|\mathrm{h}_{NLoS}^{n,k}|^{2}PL_{NLoS}^{n,k}(l_{k})\left(1-P_{r}(l_{k})\right)}{N_{0}B/N_{c}\,G\,G_{r}}. \tag{2}\]
In Eq. (2), \(l_{k}=\sqrt{h^{2}+d_{k}^{2}}\) is the distance between the UAV and the \(k^{th}\) user, and \(\mathrm{h}_{LoS}^{n,k}\) and \(\mathrm{h}_{NLoS}^{n,k}\) are the path gains for the line-of-sight (LoS) and non-line-of-sight (NLoS) paths between the UAV and the \(k^{th}\) user. Specifically, \(k\in\{1,2,\ldots,K_{s}\}\), where \(K_{s}\) represents the number of users in the \(s^{th}\) sector, \(s\in\{1,2,\ldots,S\}\), and \(n\in\{1,2,\ldots,N_{c}\}\).
Fig. 1: Geometry of the scenario where a UAV-BS serves a cellular area of radius \(R\).
\(N_{0}\) is the noise power spectral density, and \(G\) and \(G_{r}\) represent the beamforming gains of the transmitting and receiving antennas, respectively. \(P_{r}(l_{k})\) is the LoS probability between the UAV and the \(k^{th}\) user, and is given by
\[P_{r}(l_{k})=\frac{1}{1+e^{\alpha_{1}\psi^{3}+\alpha_{2}\psi^{2}+ \alpha_{3}\psi+\alpha_{4}}}, \tag{3}\]
where \(\psi=\sin^{-1}{(h/l_{k})}\) is the elevation angle of the UAV with respect to the \(k^{th}\) user, and \(\alpha_{1}\), \(\alpha_{2}\), \(\alpha_{3}\), and \(\alpha_{4}\) are parameters of the LoS probability distribution defined for suburban, urban, dense-urban, and high-rise-urban environments in [25, Table 2]. \(PL_{LoS}\) and \(PL_{NLoS}\) represent the path losses for LoS and NLoS links; for mm-wave communication [26], they are expressed in dB as
\[PL_{LoS}(l_{k})=61.4+20\log_{10}(l_{k})+\mathcal{N}(0,33.64); \tag{4}\] \[PL_{NLoS}(l_{k})=72.0+29.2\log_{10}(l_{k})+\mathcal{N}(0,75.69). \tag{5}\]
where, \(\mathcal{N}(\mu,\sigma^{2})\) represents a normal random variable with mean \(\mu\) and variance \(\sigma^{2}\). In the following section we will develop an optimization problem to maximize the sum rate in a cell while limiting the total power available in a sector.
## III Sum Rate Maximization Problem
The sum rate maximization problem for a sector can be expressed as:
\[\max_{\pi_{n,k},P_{n,k}} \sum_{k=1}^{K_{s}}\sum_{n=1}^{N_{c}}r_{n,k}\pi_{n,k}\] (6a) s.t. \[C_{1}:\sum_{k=1}^{K_{s}}\pi_{n,k}\leq 1\quad\forall n, \tag{6b}\] \[C_{2}:\sum_{n=1}^{N_{c}}r_{n,k}\pi_{n,k}\geq R_{0}\quad\forall k,\] (6c) \[C_{3}:\sum_{k=1}^{K_{s}}\sum_{n=1}^{N_{c}}P_{n,k}\leq\frac{P_{t} \theta}{360},\] (6d) \[C_{4}:\!\pi_{n,k}\in\{0,1\}\quad\forall n,k,\] (6e) \[C_{5}:\!P_{n,k}\geq 0\quad\forall n,k. \tag{6f}\]
In (6a), we introduce a binary variable \(\pi_{n,k}\) to ensure that at least one subcarrier is allocated to one user in a sector in the beam serving time. Here, \(\theta\) is the beamwidth of a sector, and \(R_{0}\) is the minimum required data rate. \(P_{n,k}\) is the power transmitted over the \(n^{th}\) subcarrier allocated to the \(k^{th}\) user. Constraints \(C_{1}\) and \(C_{4}\) ensure that the same subcarrier is not allocated to different users, and thus \(\pi_{n,k}\) can take only binary values such that the sum across the users is less than or equal to \(1\). \(C_{2}\) ensures a minimum data rate to each user. As the cell is divided into sectors, the total power available to a sector would be proportional to the sector beamwidth. Therefore, \(C_{3}\) limits the total power available to a sector. This limits the maximum data rate available to users as the sector beamwidth decreases or the number of sectors in a cell increases. \(C_{5}\) ensures that the power allocated to each user is non-negative.
The optimization problem in (6a) is a mixed integer non-linear programming problem (MINLP). Due to the non-convexity, the global optimal solution cannot be achieved. The time taken to run this module increases proportionally with the number of users and sectors. However, we can simplify the problem using the mixed integer (MI) property of \(\pi_{n,k}\). The simplified optimization problem can be expressed as
\[\max_{\pi_{n,k},P_{n,k}} \sum_{k=1}^{K_{s}}\sum_{n=1}^{N_{c}}r_{n,k}\] (7a) s.t. \[C_{1},C_{4},C_{5}, \tag{7b}\] \[C_{6}:\sum_{n=1}^{N_{c}}r_{n,k}\geq R_{0}\quad\forall k,\] \[C_{7}:\sum_{k=1}^{K_{s}}\sum_{n=1}^{N_{c}}P_{n,k}\leq\pi_{n,k} \frac{P_{t}\theta}{360},\] (7c) \[C_{8}:\!r_{n,k}\leq\pi_{n,k}R_{max}\quad\forall n,k. \tag{7d}\]
We have omitted \(\pi_{n,k}\) from (7a) and (7b) to reduce the redundancy in problem formulation. The condition that \(r_{n,k}=0\) when \(\pi_{n,k}=0\) is enforced by \(C_{8}\). \(C_{5}\) and \(C_{7}\) ensure that \(P_{n,k}\) drops to zero when \(\pi_{n,k}=0\). Thus, the algorithm needs to run only for non-zero entries in \(\pi_{n,k}\), which reduces the problem complexity. \(R_{max}\) in \(C_{8}\) is the maximum data rate
that can be achieved. It can be used to limit the number of iterations of the solver to save the processing time.
The resource allocation problem in (7) can be explained by taking the example of Fig. 1. Here, \(S=8\) and we set \(N_{c}=4\) so that both \(\pi_{n,k}\) and \(P_{n,k}\) for the second sector measured counterclockwise in Fig. 1 are of size \(4\times 2\). Thus, the four subcarriers in a sector can be allocated to two users. Because of \(C_{1}\), one of the possible solutions would be
\[\pi_{n,k}=\begin{pmatrix}1&0\\ 0&1\\ 0&1\\ 0&1\\ \end{pmatrix}\text{. Accordingly, }P_{n,k}=\begin{pmatrix}P_{1,2}&0\\ 0&P_{2,3}\\ 0&P_{3,3}\\ 0&P_{4,3}\\ \end{pmatrix}\text{ such}\]
that \(\sum_{n=1}^{4}\sum_{k=2}^{3}P_{n,k}\leq P_{t}/8\).
Algorithm 1 specifies the steps to compute the sum rate per cell and average rate per user as a function of \(\theta\) for a given user density. First, we generate users distributed as a PPP in a cell of radius \(R\) for a given value of \(\theta\) and \(\lambda\). The users in the same sector are then categorized for analysis. The optimization function maximizes the sum rate per sector, which is added to produce the sum rate per cell. The average rate per user is computed by dividing the sum rate per cell by the number of users. Here, \(\mathbb{N}\) represents the number of Monte Carlo (MC) simulations and \(div(x)\) represents the divisors of \(x\).
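The overall structure of Algorithm 1 can be sketched as follows. The per-sector MINLP of Eq. (7) is replaced here by a crude equal-power heuristic and a toy channel gain, so the sketch reproduces only the Monte Carlo and sectorization logic, not the optimized rates reported below; all numerical values are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(2)
B, N_C, P_T = 1e9, 30, 10.0                 # bandwidth (Hz), subcarriers, UAV power (W)

def sector_rate(gamma, theta_deg):
    """Stand-in for the per-sector MINLP of Eq. (7): the sector power P_t*theta/360
    is split equally over the subcarriers, each assigned to a random user, so this
    only mimics the structure (not the optimum) of the allocation."""
    p_sc = P_T * theta_deg / 360.0 / N_C
    users = gamma[rng.integers(0, gamma.size, size=N_C)]
    return np.sum(B / N_C * np.log2(1.0 + p_sc * users))

def cell_sum_rate(theta_deg, density, radius, height, n_mc=200):
    """Skeleton of Algorithm 1: Monte Carlo average of the per-cell sum rate.
    theta_deg should divide 360, mirroring div(360) in the algorithm."""
    rates = []
    for _ in range(n_mc):
        n_users = rng.poisson(np.pi * radius**2 * density)
        if n_users == 0:
            rates.append(0.0)
            continue
        d = rng.triangular(0.0, radius, radius, size=n_users)
        phi = rng.uniform(0.0, 2.0 * np.pi, size=n_users)
        gamma = 1e12 / np.hypot(height, d)**2          # toy channel gain, not Eq. (2)
        sector_of = (phi // np.deg2rad(theta_deg)).astype(int)
        total = sum(sector_rate(gamma[sector_of == s], theta_deg)
                    for s in np.unique(sector_of))
        rates.append(total)
    return float(np.mean(rates))

print(f"toy sum rate ~ {cell_sum_rate(10, 5e-4, 100.0, 100.0) / 1e9:.1f} Gbps")
```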
## IV Results and Discussions
In this section, we evaluate the impact of user density and UAV height on the optimal sector beamwidth of a cell. We work with \(B=1\) GHz, \(P_{t}=10\) W, \(N_{c}=30\), \(R_{0}=1\) Gbps, \(R_{max}=50\) Gbps, \(f_{c}=28\) GHz, and \(N_{0}=-174\) dBm/Hz. The channel is assumed to be Rician with a distribution parameter of \(8\) dB [27]. We use the solving constraint integer program (_scip_) in the General Algebraic Modeling System (GAMS), a high-level modeling system for mathematical optimization to solve the MINLP problem in (6a). We consider (i) a rural/sub-urban scenario with a lower user density \(\lambda=\{0.0005,0.0008,0.001,0.002\}\) UEs/m\({}^{2}\) in a cell of radius \(R=100\) m (Sec. IV-A), and (ii) an urban scenario with a higher user density \(\lambda=\{0.05,0.08,0.1\}\) UEs/m\({}^{2}\) in a cell of radius \(R=10\) m (Sec. IV-B). The impact of the UAV height is explored in Sec. IV-C. We assume \(G=N\) and \(G_{r}=1\) for the analysis [22]. The results have been obtained for \(500\) MC simulations.
### _Rural/Sub-Urban Scenario_
Fig. 2(a) plots the sum rate obtained by solving the optimization problem in (7) as a function of \(\theta\). The result is obtained for \(R=100\) m, \(h=100\) m, and different values of \(\lambda\). First, we observe that as the user density increases, the number of users in the cell increases, thus increasing the sum rate per cell, which validates the accuracy of our results. Initially, as \(\theta\) increases the sum rate also increases given that the power allocated to each sector increases proportionally. However, after a certain value of \(\theta\), the sum rate decreases exponentially. This can be attributed to the lower spatial reuse as \(\theta\) increases, i.e., as the number of sectors decreases. Therefore, we conclude that the smallest beamwidth does not necessarily ensure the maximum sum rate. The sector beamwidth at which the sum rate is maximized is represented by \(\theta_{opt}\). According to Fig. 2(a), \(\theta_{opt}=10^{\circ}\) for \(\lambda=0.0005\). As \(\lambda\) increases from \(0.0008\) to \(0.002\), \(\theta_{opt}\) decreases from \(9^{\circ}\) to \(5^{\circ}\), respectively. This implies that in a rural environment, it
Fig. 2: Sum rate per cell, average rate per user, and Jain’s fairness index in a rural/sub-urban scenario with \(R=100\) m and \(h=100\) m, as a function of \(\theta\), for different user densities.
is desirable to operate through wider beams as the density of users decreases.
Fig. 2(b) plots the average rate per user as a function of \(\theta\) for different values of \(\lambda\). The use of the mm-wave technology ensures that the rate is always above 1 Gbps, and in line with the requirements of most 5G/6G applications. As expected, as the number of users in the cell increases, the average rate per user decreases. Moreover, as \(\theta\) increases the average rate also increases, and follows the same trend of the sum rate as explained in the previous paragraph. However, \(\theta_{opt}\) decreases as the user density increases. This is because, as the number of users in each sector grows, the amount of power allocated to each user decreases. Consequently, in order to maximize the sum rate, the objective function drives \(\theta\) towards smaller values. Therefore, we conclude that for a densely populated rural region, a higher number of sectors would be optimal.
In order to demonstrate the impact of the fairness among the users in a cell, Fig. 2(c) plots Jain's Fairness index as a function of \(\theta\) for different user densities. It is computed as \(J=\frac{(\sum_{i}R_{i})^{2}}{M\sum_{i}R_{i}^{2}}\), where \(R_{i}\) represents the rate per user [28]. The value of Jain's fairness index for \(\lambda=\{0.0005,0.0008,0.001,0.002\}\) at \(\theta_{opt}\) is \(\{0.96,0.9504,0.9461,0.9345\}\), respectively. We observe that \(J\) increases with \(\theta\), and good fairness is achieved at the optimal sector beamwidth. This can be attributed to constraint \(C_{6}\) in (7), where a minimum data rate is guaranteed to each user. Thus, we conclude that the proposed resource allocation ensures fairness among all the users in a cell.
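For reference, Jain's index can be computed with a short helper; the example rate vectors below are arbitrary.

```python
import numpy as np

def jain_index(rates):
    """Jain's fairness index J = (sum r_i)^2 / (M * sum r_i^2)."""
    r = np.asarray(rates, dtype=float)
    return float(r.sum()**2 / (r.size * np.square(r).sum()))

# A perfectly even allocation gives J = 1; a single served user gives J = 1/M.
print(jain_index([1.2, 1.1, 1.3, 1.2]), jain_index([4.8, 0.0, 0.0, 0.0]))
```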
### _Urban Scenario_
Fig. 3(a) plots the sum rate as a function of \(\theta\) for various user densities in a cell of radius \(R=10\) m. The UAV is deployed at \(h=20\) m. As observed in Sec. IV-A, the sum rate initially increases and then decreases exponentially after \(\theta\) reaches its optimal value. However, \(\theta_{opt}\) obtained here is generally smaller than in the rural/sub-urban scenario. At \(\lambda=0.05\), \(\theta_{opt}=8^{\circ}\) which gives \(M=15.7\) and \(S=45\) to serve the users optimally. As the user density increases from \(\lambda=0.08\) to \(0.1\), \(\theta_{opt}\) decreases from \(6^{\circ}\) to \(5^{\circ}\). This implies that the number of sectors required to serve all the users increase from \(S=60\) to \(S=72\), respectively. A similar trend for the value of \(\theta_{opt}\) is observed in the plot of the average rate per user in Fig. 3(b). Notice that these results were obtained up to a beamwidth of \(120^{\circ}\), with the cell having only three sectors to provide a more comprehensive picture.
### _Impact of the UAV Height_
Fig. 4 plots the average rate per user as a function of \(\theta\) and \(h\) in a rural/sub-urban cell with radius \(R=100\) m and \(\lambda=0.0005\) UEs/m\({}^{2}\). The LoS probability (\(P_{r}\)) as a function of \(h\) defined in Eq. (3) is shown in the inset. We observe that \(P_{r}\) is small for \(h\leq 50\) m. As a result, the average rate per user for \(h=50\) m is higher than for \(h=10\) m. As \(h\) continues to increase, the LoS probability increases, but so does the path loss. The impact of the path loss becomes the dominant factor when \(h>100\) m, even though the LoS probability also increases. The effect is also visible in the value of \(\theta_{opt}\), which decreases initially from \(12^{\circ}\) to \(10^{\circ}\) as \(h\) increases from \(50\) m to \(100\) m, and then increases to \(\theta_{opt}=15^{\circ}\) at \(h=200\) m. This
Fig. 3: Sum rate per cell and average rate per user in an urban scenario with \(R=10\) m and \(h=20\) m as a function of \(\theta\), for different user densities.
is because, for \(50\leq h\leq 100\) m, the path loss is relatively small, thus the optimization problem drives \(\theta\) to a smaller value to increase the antenna gain and the sum rate. However, to compensate for the very high path loss at \(h>100\) m, the optimization problem attempts to increase the power allocated to the users by increasing \(\theta_{opt}\). Therefore, we conclude that the optimal sector beamwidth is a function of \(h\).
In order to compare the optimal number of sectors required to serve a given number of users in the rural/sub-urban and urban scenarios, we consider two configurations with the same LoS probability, i.e., \(R=100\) m and \(h=50\) m, and \(R=10\) m and \(h=20\) m, respectively. For the first configuration with \(\lambda=0.0005\) and \(M=15.7\), the optimal number of sectors at \(\theta_{opt}\) is \(S=30\). This is much lower than \(S=45\) obtained in the second configuration with the same number of users in Sec. IV-B. Consequently, the number of elements required in a UPA antenna decreases from \(N=15\) in an urban environment to around \(N=12\) in a sub-urban scenario. This implies a considerable reduction in the cost and complexity of the antenna system for serving a rural/sub-urban area.
## V Conclusion
In this work, we investigated the optimal sector beamwidth for a cell served by a UAV acting as a BS. Assuming that users are distributed according to a PPP, we observed that there is an optimal beamwidth to maximize the sum rate over a region. For a given transmit power, this optimal beamwidth is a function of the user density and the height of the UAV. Based on simulations we observed that, for the same LoS probability, \(S=45\) sectors are required in an urban region vs. \(S=30\) sectors in a rural region to optimally serve an average number of \(15.7\) users in a cell with the same QoS requirements. Also, we showed that a remote area with lower density of users can be optimally served by a UAV-BS flying at lower altitude. This implies lower complexity in terms of antenna size and transmit power, and less spatial reuse and interference among the sectors, respectively, compared to the urban environment. Our conclusions encourage the use of UAV-BSs to connect remote and poorly served areas.
|
2304.05863 | Detecting axion dark matter with Rydberg atoms via induced electric
dipole transitions | Long-standing efforts to detect axions are driven by two compelling
prospects, naturally accounting for the absence of charge-conjugation and
parity symmetry breaking in quantum chromodynamics, and for the elusive dark
matter at ultralight mass scale. Many experiments use advanced cavity resonator
setups to probe the magnetic-field-mediated conversion of axions to photons.
Here, we show how to search for axion matter without relying on such a cavity
setup, which opens a new path for the detection of ultralight axions, where
cavity based setups are infeasible. When applied to Rydberg atoms, which
feature particularly large transition dipole elements, this effect promises an
outstanding sensitivity for detecting ultralight dark matter. Our estimates
show that it can provide laboratory constraints in parameter space that so far
had only been probed astrophysically, and cover new unprobed regions of
parameter space. The Rydberg atomic gases offer a flexible and inexpensive
experimental platform that can operate at room temperature. We project the
sensitivity by quantizing the axion-modified Maxwell equations to accurately
describe atoms and molecules as quantum sensors wherever axion dark matter is
present. | Georg Engelhardt, Amit Bhoonah, W. Vincent Liu | 2023-04-12T13:55:51Z | http://arxiv.org/abs/2304.05863v2 | # Detecting axion dark matter with Rydberg atoms via induced electric dipole transitions
###### Abstract
Long-standing efforts to detect axions are driven by two compelling prospects, naturally accounting for the absence of charge-conjugation and parity symmetry breaking in quantum chromodynamics, and for the elusive dark matter at ultralight mass scale. Many experiments use the axion-photon coupling to probe the magnetic-field-mediated conversion of axions to photons. Here, we show that axion matter in a magnetic field induces electric dipole transitions in atoms and molecules. When applied to Rydberg atoms, which feature particularly large transition dipole elements, this effect promises an outstanding sensitivity for detecting ultralight dark matter. Our estimates show that it outperforms current experiments and other theoretical approaches based on axion-photon conversion by several orders of magnitude. The Rydberg atomic gases offer a flexible and inexpensive experimental platform that can operate at room temperature. We project the sensitivity by quantizing the axion-modified Maxwell equations to accurately describe atoms and molecules as quantum sensors wherever axion dark matter is present.
**Short title:**
**Rydberg atoms as super-sensitive axion detectors**
**Teaser:**
**A rigorous quantum treatment of the axion-Maxwell equations opens a new path for the detection of axion dark matter.**
## I Introduction
There is overwhelming astrophysical and cosmological evidence that approximately 85% of matter in the universe is in the form of non-luminous _dark matter_. Unfortunately, little is known about its nature beyond its gravitational influence on galactic and cosmological scales. While the search for historically popular models like Weakly Interacting Massive Particles (WIMPs) continues, it is important to expand experimental efforts to other well-motivated candidates. One such class of models, ultralight bosonic dark matter, is particularly favoured because of discrepancies that arise when simulations of structure formation with WIMP-like dark matter are compared to observations on galactic scales [1, 2, 3]. These tensions are somewhat alleviated when the dark matter is modelled not as a WIMP-like particle but as an ultralight boson with de-Broglie wavelength comparable to small-scale galactic structures, which corresponds to a mass of order \(10^{-21}\) eV/\(c^{2}\).
The axion is a famous bosonic dark-matter candidate, originally predicted as a consequence of the Peccei-Quinn symmetry proposed as new physics beyond the standard model of elementary particle physics [13, 14, 15, 16]. It provides a "missing" elementary particle capable of naturally explaining why the charge-conjugation and parity (CP) symmetry is preserved in quantum chromodynamics (QCD) but is known to be violated in the electroweak interaction. This crucial conundrum has been coined the "strong CP" problem in elementary particle physics. While these _QCD axions_ may be the dark matter, it is possible that the latter are pseudo-scalars which, however, do not solve the strong charge-parity problem. To distinguish them from the _QCD axion_, such pseudo-scalars are called axion-like particles, and can usually be searched for with the same tools and techniques used to probe QCD axions. For the remainder of this article, we will use "axion" to refer to either a dark-matter candidate or a new QCD particle whenever the distinction is unimportant. It is interesting to note that, aside from their key role in high energy physics, axions have also been predicted to exist in condensed matter systems [17, 18, 19, 20, 21, 22], where they manifest in the form of magneto-electric transport effects. Thus, a discovery of the elusive axion particle will not only resolve the strong CP problem in QCD, but could simultaneously track down the missing dark matter in the universe. We will focus on the latter aspect in this work.
The axion-modified Maxwell equations predict that a magnetic field converts axions of energy
\(\hbar\omega_{a}\simeq m_{a}c^{2}+\frac{1}{2}m_{a}v^{2}\) (mass \(m_{a}\), velocity \(v\)) into photons of frequency \(\omega_{a}\) as sketched in Fig. 1(a) [16]. A typical experimental setup exploiting this resonant axion-photon conversion for the detection of the galactic axion field consists of two inductively-coupled microwave resonators as depicted in Fig. 1(b) [23]. A microwave photon, sourced by the axion field in the first resonator, is detected in the second resonator using a SQUID device, Rydberg atoms, or comparable single-photon detectors. The RBF-UF [12], ADMX [11] and HAYSTAC [24; 25; 26] experiments attempt to detect the axion field in the narrow 1-200 \(\mu\)eV mass range to which they are limited by construction. It should not be surprising then that most constraints on axions in the mass regime \(m_{a}c^{2}<2\mu\)eV are astrophysical or cosmological, making alternative laboratory probes valuable. Current approaches, e.g. ABRACADABRA [27], ADMX SLIC [28] and SHAFT [29], which use lumped-element circuits to compensate for the frequency mismatch, lose sensitivity as the mass of the axion is lowered. Notably, a resonant cavity for the detection of ultralight axions would be comparable in size to a small galaxy.
In this article, we predict that a magnetic field mediates axion-induced electric dipole transitions in atoms, molecules, and similar quantum emitters, which allows for a highly sensitive search of ultralight axion dark matter. The proposed detection via dipole transitions, sketched in Fig. 1(c), is distinct from cavity detectors in Fig. 1(b), which try to detect the axion field indirectly via photon generation. A rigorous quantization of the axion Maxwell equations, which is carried out in this article, shows that the axion field formally acts as an electric field. Thus, highly sensitive electric field detectors using quantum emitters can be easily repurposed for the search of axion dark matter.
The axion-induced dipole transitions are particularly well suited to Rydberg states which feature long lifetimes, large electric dipole moments and polarizability [30]. Being quantum sensors, Rydberg atoms can take advantage of quantized energy levels, quantum coherence, or many-body entanglement to enhance detection sensitivity compared to classical systems [31]. Their aforementioned properties make them excellent candidates for electric field metrology in the radio-frequency regime, allowing sensitivities up to \(\mu\)Vm\({}^{-1}/\sqrt{\mathrm{Hz}}\)[32; 33; 34; 35; 36].
After introducing the axion-sourced electric field, we proceed to estimate the sensitivity of Rydberg atoms in a superheterodyne (superhet) detection configuration. For measurement campaigns ranging from seconds to months, we project leading constraints in the ultralight mass regime that outperform existing exclusion limits [see Fig. 1(d)]. Details of the derivations can be found in the 'Methods' section and the Supplementary Materials
Figure 1: a) Magnetic field mediated axion-photon conversion. b) Typical resonator based setup for the detection of axion dark matter. c) Illustration of the axion-induced electric dipole transition in an atom mediated by a magnetic field. The dipole transitions can be detected spectroscopically. d) Overview of exclusion boundaries for \(m_{a}\) and \(g_{a\gamma\gamma}\). Experimentally excluded regions are marked by a solid line, while proposed experiments are marked by a dashed line. The projected exclusion bounds using axion-induced dipole transitions in Rydberg atoms achievable within one second, one hour, and one month measurement time are outlined. Also shown are constraints from: the CERN Axion Solar Telescope (CAST) [4], the polarization of the cosmic microwave background using results from the Planck Space Telescope Experiment [5], and indirect astrophysical constraints inferred from the supernova SN1987A [6], pulsar timing array [7], and quasar H1821+643 [8]. The dashed lines are projected sensitivities from proposed experiments such as the twisted anyon cavity [9] and DANCE [10]. Other constraints shown on the plot for axions of a higher mass are from ADMX [11] and RBF-UF [12].
(SM).
## II Results
### Axion-sourced electric field
Here, we give a heuristic derivation of the axion-sourced electric field that induces dipole transitions in atoms and similar quantum emitters. A mathematically rigorous derivation is given in the 'Methods' section and the SM. The quantization of the axion-photon Lagrangian \(\mathcal{L}=-g_{a\gamma\gamma}\sqrt{\frac{\epsilon_{0}}{\mu_{0}}}a\mathbf{E} \cdot\mathbf{B}\) yields, to linear order in the coupling constant \(g_{a\gamma\gamma}\), the axion/light-matter interaction Hamiltonian
\[\hat{H}_{\text{ALM}}=g_{a\gamma\gamma}\sqrt{\frac{\epsilon_{0}}{\mu_{0}}}\int d ^{3}\mathbf{r}\,\hat{a}(\mathbf{r})\hat{\mathbf{E}}(\mathbf{r})\cdot\hat{\mathbf{B}}( \mathbf{r}), \tag{1}\]
where \(\hat{\mathbf{E}}\) and \(\hat{\mathbf{B}}\) denote the electric and magnetic field operators, while \(\hat{a}\) is the (pseudo-scalar) axion field, which interacts via a coupling constant \(g_{a\gamma\gamma}\) with \(\hat{\mathbf{E}}\) and \(\hat{\mathbf{B}}\). \(\epsilon_{0}\) and \(\mu_{0}\) denote the vacuum permittivity and permeability. The axion field is measured in units of eV and the interaction constant \(g_{a\gamma\gamma}\) in units of \((\text{eV})^{-1}\).
Crucially, \(\hat{\mathbf{E}}\) is a physical observable and denotes the total electric field. In the presence of matter, one commonly introduces the displacement field
\[\hat{\mathbf{D}}(\mathbf{r})=\epsilon_{0}\hat{\mathbf{E}}(\mathbf{r})+\hat{\mathbf{P}}( \mathbf{r}), \tag{2}\]
that is sourced by the density of free charges \(\rho_{\text{free}}\) in the electric Gauss equation, i.e., \(\mathbf{\nabla}\cdot\hat{\mathbf{D}}=\rho_{\text{free}}\). The electric field sourced by the bounded charges is described by the polarization density \(\hat{\mathbf{P}}\). The displacement field can be understood as the electromagnetic field in the absence of matter and describes thus an external electric field or the quantized electric field in a cavity. The polarization density describes the electric field generated by the matter.
Let us consider an ensemble of \(N\) quantum emitters and denote their eigenstates by \(\ket{i,\mu}\), where \(i\) labels the quantum emitter, and \(\mu\) represents the collective electronic, vibrational and rotational quantum numbers. In the dipole approximation, the polarization density operator can be expressed in terms of the eigenstates as
\[\hat{\mathbf{P}}(\mathbf{r})=\sum_{i=1}^{N}\sum_{\mu,\nu}\mathbf{d}_{\mu,\nu}^{(i)} \ket{i,\mu}\bra{i,\nu}\delta(\mathbf{r}-\mathbf{r}_{i}), \tag{3}\]
where \(\delta(\mathbf{r})\) is the three-dimensional delta function, \(\mathbf{r}_{i}\) is the position of the \(i\)-th quantum emitter, and \(\mathbf{d}_{\mu,\nu}^{(i)}\) is the transition dipole moment between the two eigenstates \(\mu\) and \(\nu\). Inserting Eq. (2) into the Hamiltonian in Eq. (1) and using Eq. (3), we find a direct interaction of the axion field with the quantum emitters. The corresponding Hamiltonian reads
\[\hat{H}_{\text{a-d}}=-\sum_{i=1}^{N}\sum_{\mu,\nu}\hat{\mathbf{E}}_{a}(\mathbf{r}_{i})\cdot\mathbf{d}_{\mu,\nu}^{(i)}\ket{i,\mu}\bra{i,\nu} \tag{4}\]
and describes dipole transitions driven by the axion-sourced electric field
\[\hat{\mathbf{E}}_{a}=g_{a\gamma\gamma}c\hat{a}\hat{\mathbf{B}}. \tag{5}\]
This relation implies that one can repurpose sensitive atomic and molecular detectors of electric fields to search for axion dark matter. The (classical) magnetic field can be considered as an experimental switch controlling the effective interaction strength between the axion field and the quantum emitters. This allows one to distinguish between a possible axion signal and background electric fields. The interaction is proportional to the speed of light, which additionally enhances the effective coupling.
### Rydberg atoms as axion detectors
Ultralight axions in the galactic halo exhibit wave-like behavior and must be treated as a classical time-varying background field [37], \(a(t)=a_{0}\cos{(\omega_{a}t+\phi_{a})}\), where \(a_{0}\) is the amplitude of the axion field, \(\phi_{a}\) its phase, and \(\hbar\omega_{a}\simeq m_{a}c^{2}+\frac{1}{2}m_{a}v^{2}\) its energy. Since the field has a small frequency dispersion described by the dark matter velocity distribution (average value \(10^{-3}c\)), a coherence time can be estimated to be \(\tau_{C}=\frac{2\hbar}{m_{a}c^{2}}10^{6}\), which defines the time scale over which the phase \(\phi_{a}\) can be considered to be constant [38; 39; 40]. For axions in the ultralight mass regime, this time scale will always be much larger than the measurement time in question. The amplitude \(a_{0}\) can be estimated via the galactic dark-matter energy density \(\rho=(m_{a}^{2}c^{4}a_{0}^{2})/(2\hbar^{3}c^{3})=0.3\cdot 10^{15}\,\frac{\text{eV}}{\text{m}^{3}}\). Solving Eq. (5) for \(g_{a\gamma\gamma}\), we find that the sensitivity for the interaction parameter is given by
\[g_{a\gamma\gamma,*}=\frac{|\mathbf{E}_{*}|}{c\,|\mathbf{B}|}\sqrt{\frac{m_{a}^{2}c^{4}} {2\rho c^{3}\hbar^{3}}}, \tag{6}\]
which assumes that axions constitute 100% of the dark matter in the universe.
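For concreteness, Eq. (6) can be evaluated directly, as in the following Python sketch. The magnetic field, dark-matter density, and minimal detectable electric field are the values quoted in the text, while the axion mass is a placeholder chosen in the ultralight regime.

```python
import numpy as np

# Constants
c = 2.998e8        # speed of light [m/s]
hbar = 6.582e-16   # reduced Planck constant [eV s]

# Quantities quoted in the text; the axion mass is a placeholder in the ultralight regime
rho = 0.3e15       # galactic dark-matter energy density [eV / m^3]
B = 5.6            # applied magnetic field [T]
E_star = 30e-9     # minimal detectable electric field for t_m = 1 s [V/m]
m_a_c2 = 1e-10     # assumed axion rest energy m_a c^2 [eV]

# Eq. (6): projected coupling sensitivity g_{a gamma gamma,*} in 1/eV
g_star = (E_star / (c * B)) * np.sqrt(m_a_c2**2 / (2.0 * rho * c**3 * hbar**3))
print(f"g_star ~ {g_star:.2e} eV^-1 = {g_star * 1e9:.2e} GeV^-1")
```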
Equation 4 shows that atoms with large transition dipole moments, in particular Rydberg states, couple strongly to the axion field. Using an advanced electromagnetically-induced transparency (EIT) based superhet detection protocol, an electric field of \(|\mathbf{E}_{*}|=78\cdot 10^{-9}\,\text{Vm}^{-1}\) could be detected within a 5000 s measurement time [32]. Taking this setup as the basis for our sensitivity projection, we estimate that Rydberg atoms are able to detect minimal electric fields of \(|\mathbf{E}_{*}|=30\cdot 10^{-9}\,\text{Vm}^{-1}\), \(|\mathbf{E}_{*}|=500\cdot 10^{-12}\,\text{Vm}^{-1}\), and \(|\mathbf{E}_{*}|=18\cdot 10^{-12}\,\text{Vm}^{-1}\) within a measurement time of \(t_{\text{m}}=1\,\text{s}\), \(t_{\text{m}}=1\,\text{h}\) and \(t_{\text{m}}=1\,\text{month}\), respectively, under ideal conditions.
Taking these values for reference, Fig. 1(d) shows the projected minimal detectable \(g_{a\gamma\gamma,*}\) evaluated via Eq. (6)
for a magnetic field of \(\left|\mathbf{B}\right|=5.6\,\mathrm{T}\). It can be readily seen that Rydberg atoms would place significantly stronger constraints on axions in the ultralight mass regime. For example, even with a measurement time of one second, our proposed setup would outperform the CAST helioscope bounds [41] by several orders of magnitude. Also shown on the same plot are constraints inferred from the cosmic microwave background measured by the Planck Space Telescope Experiment [5], and indirect astrophysical constraints inferred from the supernova SN1987A [6], pulsar timing array [7], and quasar H1821+643 [8]. The dashed lines are projected sensitivities from proposed experiments such as the twisted anyon cavity [9] and DANCE [10]; Rydberg atoms significantly outperform these proposals while being operationally simpler to realize. For the experimental protocol under consideration, Rydberg atoms are about one order of magnitude short of reaching the parameter regime to which the QCD axion is constrained.
### Level structure of Rydberg atoms
Rydberg atoms are highly excited atoms featuring large dipole moments and polarizability. Each eigenstate can be characterized by the quantum numbers \(n,l,j,m\). The energies \(\epsilon_{\mu}\) depend mainly on the principal quantum number \(n\in\{1,2,3,\dots\}\) and the angular-momentum quantum number \(l=0,\dots,n-1\) (also denoted by \(s,p,d,f\dots\)). The numbers \(j\) and \(m\) characterize the total angular momentum quantum number and its \(z\) projection, respectively. The spectrum of a Rubidium atom is depicted in Fig. 2(a), where each column represents a different \(l\). The energy splitting between the highly excited Rydberg states \(n\gg 1\) rapidly decreases as \(n\) becomes large. In the presence of a static macroscopic magnetic field \(\mathbf{B}\parallel\mathbf{e}_{z}\), the eigenstates \(\left|\mu\right>\) are subject to a Zeeman shift, whose linear contribution reads as
\[\Delta\epsilon_{\mu}=\frac{\mu_{B}\left|\mathbf{B}\right|}{\hbar}\left<\mu\left| \left(\hat{L}_{z}+g_{z}\hat{S}_{z}\right)\right|\mu\right>\equiv K_{\mu}\left| \mathbf{B}\right|, \tag{7}\]
where \(\mu_{B}\) is the Bohr magneton, \(\hat{L}_{z}\) is the \(z\) projection operator of the angular momentum, \(\hat{S}_{z}\) is the projection of the spin, and \(g_{z}\) is the corresponding gyromagnetic factor. The Zeeman effect can be used to tune specific energy states such that the monochromatic axion field fulfills a resonance condition.
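As a rough numerical cross-check of Eq. (7), the linear Zeeman coefficient \(K_{\mu}\) can be estimated in pure LS coupling via the Landé factor, as in the following Python sketch for the Rydberg states considered below. Neglecting hyperfine structure and diamagnetic corrections is an approximation made here for illustration only.

```python
def lande_gJ(l, s, j, g_s=2.0023):
    """Lande g-factor in LS coupling, taking g_L = 1."""
    return 1.0 + (g_s - 1.0) * (j * (j + 1) + s * (s + 1) - l * (l + 1)) / (2 * j * (j + 1))

def zeeman_K(l, j, m, s=0.5):
    """Linear Zeeman coefficient K_mu of Eq. (7) in eV/T (LS-coupling estimate)."""
    mu_B = 5.788e-5  # Bohr magneton [eV/T]
    return mu_B * lande_gJ(l, s, j) * m

# Rydberg states used later in the text: |3> = (l=2, j=5/2, m=1/2), |4> = (l=3, j=7/2, m=1/2)
K3 = zeeman_K(l=2, j=2.5, m=0.5)
K4 = zeeman_K(l=3, j=3.5, m=0.5)
print(f"K3 = {K3:.3e} eV/T, K4 = {K4:.3e} eV/T, K3 - K4 = {K3 - K4:.3e} eV/T")
```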
Because the electron is excited so far away from the nucleus, the transition dipole moments of two neighboring Rydberg states with principal quantum numbers \(n^{\prime}=n\pm 1\) are very large and scale as \(\left|\mathbf{d}_{\mu,\nu}\right|^{2}\propto n^{2}\), given that no optical selection rules are violated, cf. Fig. 2(d). Thus, due to their energy spacing and the scaling of their dipole elements, Rydberg atoms are very sensitive to low-frequency and quasistatic electric fields.
### Sensitivity estimation
Recent experiments using EIT have demonstrated an outstanding sensitivity of Rydberg-atom superhet detectors for sensing electric fields [32]. The superhet configuration consists of two low energy states \(\left|1\right>\), \(\left|2\right>\), and two Rydberg states \(\left|3\right>\), \(\left|4\right>\), whose energetic locations are marked in Fig. 2(a) and (b). The states \(\left|1\right>\) and \(\left|2\right>\) are coupled by the probe laser of frequency \(\omega_{p}\) and Rabi frequency \(\Omega_{p}\), while the states \(\left|2\right>\) and \(\left|3\right>\) are coupled by the coupling laser of frequency \(\omega_{C}\) and Rabi frequency \(\Omega_{C}\). The Rydberg states are coupled by the axion field of frequency \(\omega_{a}\) and Rabi frequency \(\Omega_{a}=g_{a\gamma\gamma}c\,a_{0}\left|\mathbf{d}_{3,4}\cdot\mathbf{B}\right|/\hbar\). The corresponding energies \(\epsilon_{3}\) and \(\epsilon_{4}\) are assumed to be close to resonance \(\epsilon_{4}-\epsilon_{3}\approx\hbar\omega_{a}\). The resonance condition can be adjusted using the Zeeman effect [see Fig. 2(c)]. To enhance the signal, the superhet detector uses a local oscillator with frequency \(\hbar\omega_{LO}=\epsilon_{4}-\epsilon_{3}\) and Rabi frequency \(\Omega_{LO}\gg\Omega_{a}\) which heterodynes the axion field.
The coupled Rydberg states form the states \(\left|-\right>\) and \(\left|+\right>\), which exhibit a slow monochromatic energy splitting \(\hbar\Omega(t)=\epsilon_{+}(t)-\epsilon_{-}(t)=\hbar\Omega_{LO}+\hbar\Omega_{a} \cos[(\omega_{a}-\omega_{LO})t]\). As \(\omega_{a}\approx\omega_{LO}\), we consider \(\Omega(t)\) to be quasistatic. Both states are coupled to state \(\left|2\right>\) with Rabi frequency \(\Omega_{C}/\sqrt{2}\). The resulting four-level system is depicted in Fig. 2(e).
The superhet detector senses the axion field via a change of the absorption of the probe laser traversing a cloud of Rydberg atoms, as sketched in Fig. 2(f). The linear absorption rate \(\alpha(\omega_{p})\) for the resonance condition \(\hbar\omega_{p}=\epsilon_{2}-\epsilon_{1}\) is shown in Fig. 2(g) as a function of \(\Delta\omega_{C}=\omega_{C}-(\epsilon_{3}+\epsilon_{2})/\hbar\). The absorption rate exhibits a dip around \(\Delta\omega_{C}=0\) for \(\Omega=0\) (not shown). This is the celebrated EIT effect, which is induced by a destructive interference between the transitions from \(\left|1\right>\) to \(\left|2\right>\) and \(\left|3\right>\) (\(\left|4\right>\)) to \(\left|2\right>\) in the system's stationary state, preventing the absorption of the probe laser [42; 43].
For a finite \(\Omega\), the atoms are not perfectly transparent closely around \(\left|\Delta\omega_{C}\right|\approx 0\). The energy splitting between the two states \(\left|-\right>\) and \(\left|+\right>\) results in two new transparency frequencies at \(\Delta\omega_{C}=\pm\Omega/2\). Their interference shifts the ideal transparency, which would appear for \(\Omega=0\) at \(\Delta\omega_{C}=0\), and gives rise to a high sensitivity of the absorption rate, as highlighted in the left inset of Fig. 2(g), which compares the absorption rate for \(\Omega=\Omega_{LO}\) and \(\Omega=\Omega_{LO}+\Omega_{a}\). A thorough analysis reveals that the signal-to-noise ratio (SNR) for a total measurement time \(t_{\text{m}}\) is given by
\[SNR=\sqrt{N}\Omega_{a}\tau, \tag{8}\]
where \(N\) is the number of Rydberg atoms, \(\tau=\sqrt{T_{c}\cdot t_{\text{m}}}\) is the effective measurement time, and \(T_{c}\) is the effective coherence time
\[T_{c}=\frac{4\Omega_{p}^{2}\Omega_{C}^{4}\left(1+\Gamma^{\prime}\right)^{2}}{ \left[\left(\gamma_{2}+\frac{\omega_{p}\sigma}{c}\Gamma\right)\left(\Omega_{ LO}^{2}+\gamma^{2}\right)+\Omega_{C}^{2}\gamma\right]^{3}}\frac{\gamma^{2}\Omega_{LO}^{2}}{ \left(\Omega_{LO}^{2}+\gamma^{2}\right)} \tag{9}\]
with \(\gamma=(\gamma_{3}+\gamma_{4})/2\), where \(\gamma_{2}\), \(\gamma_{3}\), and \(\gamma_{4}\) are the inverse lifetimes of the states \(\left|2\right>\), \(\left|3\right>\), and \(\left|4\right>\), respectively. The Doppler effect for atoms of mass \(m_{R}\) at temperature \(T\) is described by the function \(\Gamma=\Gamma\left(\zeta/(\sqrt{2}\sigma)\right)\geq 0\), where \(\zeta=\gamma_{2}+\gamma\Omega_{c}^{2}/\left(\Omega_{LO}^{2}+\gamma^{2}\right)\) and \(\sigma^{2}=k_{B}T/m_{R}\), which is defined in the 'Methods' section. Its derivative \(\Gamma^{\prime}=\left.\partial_{x}\Gamma(x)\right|_{x\rightarrow\zeta/(\sqrt{2}\sigma)}\) is bounded by \(-0.5<\Gamma^{\prime}<0\) and has a minor impact on \(T_{c}\). The Doppler effect thus effectively enhances the dephasing rate, \(\gamma_{2}\rightarrow\gamma_{2}+\frac{\omega_{p}\sigma}{c}\Gamma\), which is discussed in detail in the SM. Equation (8) accounts for the projection noise in the atomic system, which sets the only fundamental limit for the SNR, while photon-shot and measurement noises have been neglected here [36].
As shown in the inset of Fig. 2(g), the SNR exhibits a turnover as a function of the local oscillator strength. For small \(\Omega_{LO}\), it increases linearly with \(\Omega_{LO}\) until it reaches a maximum around \(\Omega_{LO}\approx 5\gamma\). For large \(\Omega_{LO}\), the SNR falls off as \(\Omega_{LO}^{-3}\). The maximum value of the coherence time is \(T_{c}\approx(\Omega_{p}/\Omega_{C})^{2}/\gamma\), which determines the optimal SNR.
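To make this turnover explicit, Eqs. (8) and (9) can be evaluated numerically, as in the following Python sketch. The decay rates and the Doppler width are placeholder values, and the \(\sigma\) entering the argument of \(\Gamma\) is interpreted as the Doppler width \(\omega_{p}\sigma/c\); both are assumptions made here rather than parameters taken from the experiment.

```python
import numpy as np
from scipy.special import erfcx  # exp(z^2) * erfc(z), numerically stable for large z

def Gamma(z):
    """Doppler function of Eq. (35): sqrt(2/pi) e^{-z^2}/erfc(z) - sqrt(2) z (real z)."""
    return np.sqrt(2.0 / np.pi) / erfcx(z) - np.sqrt(2.0) * z

def coherence_time(Omega_p, Omega_C, Omega_LO, gamma2, gamma, sigma_D):
    """Effective coherence time T_c of Eq. (9); all rates in the same angular-frequency units."""
    zeta = gamma2 + gamma * Omega_C**2 / (Omega_LO**2 + gamma**2)
    z = zeta / (np.sqrt(2.0) * sigma_D)
    G = Gamma(z)
    dz = 1e-4
    Gp = (Gamma(z + dz) - Gamma(z - dz)) / (2 * dz)  # numerical estimate of Gamma'
    denom = ((gamma2 + sigma_D * G) * (Omega_LO**2 + gamma**2) + Omega_C**2 * gamma) ** 3
    return (4 * Omega_p**2 * Omega_C**4 * (1 + Gp) ** 2 / denom
            * gamma**2 * Omega_LO**2 / (Omega_LO**2 + gamma**2))

def snr(N, Omega_a, t_m, **kw):
    """Eq. (8): SNR = sqrt(N) * Omega_a * sqrt(T_c * t_m)."""
    return np.sqrt(N) * Omega_a * np.sqrt(coherence_time(**kw) * t_m)

# Placeholder parameters (rad/s), chosen only to exhibit the turnover in Omega_LO
pars = dict(Omega_p=2 * np.pi * 1.7e6, Omega_C=2 * np.pi * 23e6,
            gamma2=2 * np.pi * 6e6, gamma=2 * np.pi * 1e3, sigma_D=2 * np.pi * 200e6)
for Omega_LO in 2 * np.pi * np.array([1e3, 5e3, 2e4, 1e5]):
    print(f"Omega_LO/2pi = {Omega_LO / (2 * np.pi):.1e} Hz, "
          f"SNR = {snr(N=1e6, Omega_a=1.0, t_m=1.0, Omega_LO=Omega_LO, **pars):.3e}")
```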
Using Eqs. (5) and (8), we find an explicit expression for the projected sensitivity of the axion field
\[g_{a\gamma\gamma,\bullet}=\left(\frac{\left|K_{3}-K_{4}\right|}{\left|\epsilon _{3}-\epsilon_{4}\right|-m_{a}c^{2}}\right)\frac{\hbar}{\tau\sqrt{N}\left| \mathbf{d}_{3,4}\right|}\sqrt{\frac{m_{a}^{2}}{2\rho c\hbar^{3}}}, \tag{10}\]
where \(K_{3},K_{4}\) are defined in Eq. (7). The first fraction represents the inverse magnetic field \(B_{\text{res}}\) required for establishing a resonance condition, which suggests using states featuring similar Zeeman parameters \(K_{3}\approx K_{4}\) and a large detuning, as long as the external magnetic field \(B_{\text{res}}<10\,\text{T}\) is experimentally feasible. In the proposed Rydberg-atom detector, the magnetic field thus fulfills two tasks: (i) it establishes the resonance condition between the Rydberg states, which enhances the sensitivity to the electric field; (ii) it generates the axion-sourced electric field according to Eq. (5). The second fraction represents the minimal electric field which can be detected with the superhet configuration.
To estimate the projected sensitivity achievable with Rydberg atoms, we consider the states \(\left|1\right>=(n=5,l=0,j=1/2,m=1/2)\), \(\left|2\right>=(n=100,l=1,j=3/2,m=1/2)\), \(\left|3\right>=(n=100,l=2,j=5/2,m=1/2)\) and \(\left|4\right>=(n=99,l=3,j=7/2,m=1/2)\) of Rubidium atoms. Their energies and dephasing rates can be calculated using the ARC package [44]. The dipole matrix element of the Rydberg states is \(\left|\mathbf{d}_{3,4}\right|=6425ea_{0}\) (Bohr radius \(a_{0}\)). In the ultralight axion regime the optimal magnetic field \(\left|\mathbf{B}_{\text{res}}\right|\approx 5.6\,\text{T}\) determined by the first fraction in Eq. (10) is almost independent of the axion mass, as \(m_{a}c^{2}\ll\left|\epsilon_{4}-\epsilon_{3}\right|\). The projected minimal electric fields
Figure 2: a) Energy levels of a Rubidium atom for \(n\geq 5\) resolved for the angular momentum orbitals \(s,p,d,f\). (b) depicts a magnification of highly-excited \(f\) orbitals that are coupled to the Rydberg level \(\left|3\right>\) with quantum numbers \(n=100\) and \(l=2\) (d orbital). (c) is the same as (b), but for a finite external magnetic field \(B=B_{\text{res}}\), which establishes a resonance condition between levels \(\left|3\right>\) with \((n,l,j,m)=(100,2,3/2,1/2)\) and \(\left|4\right>\) with \((n,l,j,m)=(99,3,5/2,1/2)\). Gray lines depict the allowed dipole transitions for a linearly-polarized driving field. (d) Transition dipole matrix elements between the \(d\) and the \(f\) orbitals. (e) Effective four-level system considered in the description of the Rydberg-atom superheterodyne detector. Levels \(\left|-\right>\) and \(\left|+\right>\) appear upon coupling the states \(\left|3\right>\) and \(\left|4\right>\). (f) Sketch of the spectroscopic setup, in which the probe and coupling fields propagate in opposite directions to mitigate the Doppler effect as explained in the SM [38]. (g) Absorption rate \(\alpha(\omega_{p})\) of the probe beam as a function of \(\Delta\omega_{C}=\omega_{C}-(\epsilon_{3}+\epsilon_{2})/\hbar\) at resonance \(\hbar\omega_{p}=\epsilon_{2}-\epsilon_{1}\). The left inset depicts the absorption rate for small \(\left|\Delta\omega_{C}\right|\lesssim\gamma\). The right inset depicts the signal-to-noise ratio (SNR) as a function of the local oscillator \(\Omega_{LO}\) at \(\Delta\omega_{C}=0\). Other parameters are \(\Omega_{p}=1.7\,\text{MHz}\), \(\Omega_{C}=23\,\text{MHz}\), \(B_{\text{res}}=5.6\,\text{T}\), and temperature \(T=300\,\text{K}\).
and the related exclusion limits for \(g_{a\gamma\gamma}\) for the measurement times \(t_{\rm m}=1\,\)s, \(t_{\rm m}=1\,\)h and \(t_{\rm m}=1\,\)month have been discussed above.
## III Discussion
The rigorous quantization of the axion-Maxwell equations implies that a magnetic field mediates axion-induced electric dipole transitions in atoms, molecules, trapped ions, or similar quantum emitters. This presents an exciting opportunity for re-purposing existing electric field detectors based on atomic quantum sensors, which promise performance enhancement by means of quantum engineering [45], for axion detection. In this article, we proposed one such method using highly excited Rydberg states, whose large transition dipole elements make them excellent probes of axion-induced dipole transitions. We further conjecture that the dipole transitions can also enhance the sensitivity of other axion detectors, such as helioscopes [4] and phonon polaritons [46]. We note that the direct detection scheme proposed here is different from previous protocols using Rydberg atoms, which either employ the atoms for an indirect detection of the axion-sourced photon [47], or the coupling of the electron spin to the axion wind [48; 49].
It is useful to consider what future improvements to our setup might lead to sensitivity to the QCD axion parameter space, shown in grey in Figure 1(d). Our setup may have the required sensitivity with a year-long measurement campaign, similar to the one performed by the BACON collaboration [50] searching for the time variation of fundamental constants. In this case one would also have to carefully account for the stochastic fluctuations of the axion dark matter field. Another avenue might be to seek improvements in the measurement process itself to facilitate sensing of weaker electric fields, e.g., by using stronger probe fields. The sensitivity estimate of the superhet detector for weak probe fields shows that the SNR is proportional to \(\Omega_{p}/\Omega_{C}\), i.e., it improves for stronger probe fields. However, this requires the development of non-perturbative theoretical methods, e.g., based on the photon-resolved Floquet theory [51], to accurately predict the spectroscopic signatures of the axion dark matter. One interesting possibility proposed in [52] is to use trapped ion crystals, which can achieve electric field sensitivities of 100 nVm\({}^{-1}\). Moreover, trapping the Rydberg atoms in an optical lattice can further help to mitigate the Doppler effect, and would allow for complex quantum operations to mitigate measurement noise [53; 31].
## IV Methods
### Quantization of the axion Maxwell equations
The axion field, being a pseudo-scalar, interacts with the electric and magnetic fields via the Lagrangian term \(\mathcal{L}=-g_{a\gamma\gamma}\sqrt{\frac{\epsilon_{0}}{\mu_{0}}}a\mathbf{E} \cdot\mathbf{B}\), where \(\mathbf{E}\), \(\mathbf{B}\), and \(a\) denote the classical electric, magnetic, and axion fields, respectively. After deriving the Euler-Lagrange equations, we obtain the axion-Maxwell equations [38; 16]
\[\mathbf{\nabla}\cdot\mathbf{E} = \frac{\rho}{\epsilon_{0}}-cg_{a\gamma\gamma}\mathbf{B}\cdot\mathbf{\nabla}a, \tag{11}\] \[\mathbf{\nabla}\times\mathbf{B}-\frac{\dot{\mathbf{E}}}{c^{2}} = \mu_{0}\mathbf{J}\] (12) \[+ \frac{g_{a\gamma\gamma}}{c}\left(\mathbf{B}\dot{a}-\mathbf{E}\times\mathbf{\nabla}a\right),\] \[\mathbf{\nabla}\cdot\mathbf{B} = 0,\] (13) \[\mathbf{\nabla}\times\mathbf{E}+\dot{\mathbf{B}} = 0,\] (14) \[\ddot{a}-c^{2}\mathbf{\nabla}^{2}a+\frac{m_{a}^{2}c^{4}}{\hbar^{2}}a = \hbar c^{3}\sqrt{\frac{\epsilon_{0}}{\mu_{0}}}g_{a\gamma\gamma}\mathbf{E}\cdot\mathbf{B}, \tag{15}\]
where \(\mathbf{J}\) is the electric current density. The constants have been described in the 'Results' section. Clearly, these equations reduce to the common Maxwell equations for \(g_{a\gamma\gamma}=0\).
As shown in detail in the SM, the canonically quantized axion-light-matter Hamiltonian reads as
\[\hat{H} = \frac{1}{2}\int d^{3}\mathbf{r}\,\left[\epsilon_{0}\hat{\mathbf{E}}_{c}^{\perp 2}+\frac{1}{\mu_{0}}\hat{\mathbf{B}}^{2}\right] \tag{16}\] \[- \int d^{3}\mathbf{r}\,\left[\sqrt{\frac{\epsilon_{0}}{\mu_{0}}}g_{a\gamma\gamma}\hat{a}\hat{\mathbf{E}}_{c}\cdot\hat{\mathbf{B}}-\frac{1}{2\mu_{0}}\left(g_{a\gamma\gamma}\hat{a}\right)^{2}\hat{\mathbf{B}}\cdot\hat{\mathbf{B}}\right]\] \[+ \frac{1}{2}\int d^{3}\mathbf{r}\,\left[\frac{\hat{\pi}^{2}}{\hbar c^{3}}+\frac{1}{c\hbar}\mathbf{\nabla}\hat{a}\cdot\mathbf{\nabla}\hat{a}+\frac{m_{a}^{2}c^{4}}{c^{3}\hbar^{3}}\hat{a}^{2}\right]\] \[+ \sum_{\eta}\left[\frac{\hat{\mathbf{p}}_{\eta}^{2}}{2m_{\eta}}+\frac{q_{\eta}^{2}}{2m_{\eta}c^{2}}\hat{\mathbf{A}}^{2}(\hat{\mathbf{r}}_{\eta})\right]\] \[+ \frac{1}{2}\int d^{3}\mathbf{r}\,\left[\hat{\rho}\hat{\phi}+\hat{\mathbf{J}}\cdot\hat{\mathbf{A}}\right],\]
where the canonical electric field operator has been decomposed into transverse and longitudinal fields \(\hat{\mathbf{E}}_{c}=\hat{\mathbf{E}}_{c}^{\perp}+\hat{\mathbf{E}}_{c}^{\parallel}\). The transverse contribution is defined by \(\mathbf{\nabla}\cdot\hat{\mathbf{E}}_{c}^{\perp}=0\), while the longitudinal contribution is given by \(\hat{\mathbf{E}}_{c}^{\parallel}=\hat{\mathbf{E}}_{c}-\hat{\mathbf{E}}_{c}^{\perp}=-\mathbf{\nabla}\hat{\phi}\), where \(\hat{\phi}\) is the electrostatic scalar potential. The magnetic field and vector potential operators are denoted by \(\hat{\mathbf{B}}\) and \(\hat{\mathbf{A}}\), respectively. The axion field is denoted by \(\hat{a}\), and \(\hat{\pi}\) denotes its conjugate momentum. Particles with charge \(q_{\eta}\) and mass \(m_{\eta}\) at positions \(\hat{\mathbf{r}}_{\eta}\) and with momenta \(\hat{\mathbf{p}}_{\eta}\) are labeled by \(\eta\in\{1,\dots,N_{p}\}\). The coupling between light and matter reflects the minimal-coupling principle. The relation between the canonical and the physical (observable) electric field operator \(\hat{\mathbf{E}}\) is established via the relation
\[\hat{\mathbf{E}}=\hat{\mathbf{E}}_{c}-cg_{a\gamma\gamma}\hat{a}\hat{\mathbf{B}}. \tag{17}\]
The canonical operators for all other operators agree with the physical operators. The field operators are quantized as
\[\hat{\rho}(\mathbf{r}) =\sum_{\eta=1}^{N_{p}}q_{\eta}\delta\left(\mathbf{r}-\hat{\mathbf{r}}_{\eta }\right), \tag{18}\] \[\hat{\mathbf{J}}(\mathbf{r}) =\sum_{\eta=1}^{N_{p}}q_{\eta}\dot{\hat{\mathbf{r}}}_{\eta}\delta \left(\mathbf{r}-\hat{\mathbf{r}}_{\eta}\right)+\text{h.c.},\] (19) \[\hat{\mathbf{A}}(\mathbf{r}) =\sum_{\mathbf{k},\lambda}\mathbf{e}_{\mathbf{k},\lambda}\sqrt{\frac{\hbar}{2 \omega_{\mathbf{k}}\epsilon_{0}V}}\left(\hat{d}^{\dagger}_{\mathbf{k},\lambda}e^{i\bm {k}\cdot\mathbf{r}}+\hat{d}_{\mathbf{k},\lambda}e^{-i\mathbf{k}\cdot\mathbf{r}}\right),\] (20) \[\hat{\mathbf{E}}^{\perp}_{c}(\mathbf{r}) =i\sum_{\mathbf{k},\lambda}\mathbf{e}_{\mathbf{k},\lambda}\sqrt{\frac{\hbar \omega_{\mathbf{k}}}{2\epsilon_{0}V}}\left(\hat{d}^{\dagger}_{\mathbf{k},\lambda}e^{i \mathbf{k}\cdot\mathbf{r}}-\hat{d}_{\mathbf{k},\lambda}e^{-i\mathbf{k}\cdot\mathbf{r}}\right),\] (21) \[\hat{\mathbf{E}}^{\parallel}_{c}(\mathbf{r}) =-\mathbf{\nabla}\hat{\phi}(\mathbf{r}),\] \[\hat{\phi}(\mathbf{r}) =\frac{1}{4\pi\epsilon_{0}}\sum_{\eta=1}^{N_{p}}\frac{q_{\eta}}{ \left|\mathbf{r}-\hat{\mathbf{r}}_{\eta}\right|},\] (22) \[\hat{\mathbf{B}}(\mathbf{r}) =i\sum_{\mathbf{k},\lambda}\sqrt{\frac{\hbar}{2\omega_{\mathbf{k}} \epsilon_{0}V}}\left(\mathbf{k}\times\mathbf{e}_{\mathbf{k},\lambda}\right)\] \[\quad\times\left(\hat{d}^{\dagger}_{\mathbf{k},\lambda}e^{i\mathbf{k} \cdot\mathbf{r}}-\hat{d}_{\mathbf{k},\lambda}e^{-i\mathbf{k}\cdot\mathbf{r}}\right). \tag{23}\]
The photonic operators \(\hat{d}_{\mathbf{k},\lambda}\) quantizing the electromagnetic field are labeled by wavevector \(\mathbf{k}\) and polarization \(\lambda\in\{\updownarrow,\leftrightarrow\}\). The frequencies of the photonic modes are given by \(\omega_{\mathbf{k}}\). The quantization volume is denoted by \(V\). The unit vectors \(\mathbf{e}_{\mathbf{k},\lambda}\) describe the direction of polarization and are perpendicular to the wavevector, i.e., \(\mathbf{e}_{\mathbf{k},\lambda}\cdot\mathbf{k}=0\). This fact readily ensures that the electric and magnetic Gauss equations in Eqs. (11) and (13) are fulfilled. Using the Heisenberg equations of motion and the relation between canonical and physical electric fields in Eq. (17), we can derive the axion-Maxwell equations Eqs. (11)-(15) in the classical limit by generalizing the common derivation of the Maxwell equations, as shown in the SM.
### Multi-polar Hamiltonian
The minimal-coupling Hamiltonian in Eq. (16) is inconvenient to deal with because of the vector potential \(\hat{\mathbf{A}}\) that appears quadratically. This causes more complicated expressions when calculating, e.g., the spectroscopic response of quantum systems. Moreover, the vector potential \(\hat{\mathbf{A}}\) is not a physical observable. For this reason, we will bring the Hamiltonian into the so-called multi-polar form that contains only the electric and magnetic fields [54]. This is done by means of the Power-Zienau transformation [55], which we briefly review here. More details can be found in the SM [38]. Here, we restrict our investigation to systems with bound charges, such as atoms, molecules, and similar quantum emitters. In the following, we will specialize to atoms for clarity. Accordingly, the summation over \(\eta\) in Eq. (16) will be modified into a double summation over \(i\in\{1,\ldots,N\}\), labeling atoms, and \(j\in\{1,\ldots,N_{c}\}\), labeling the charges associated with a particular atom. The center of mass of atom \(i\) will be denoted by \(\mathbf{R}_{i}\), while the positions of the charges belonging to this atom are denoted by \(\mathbf{r}_{i,j}\).
The Power-Zienau transformation is defined by the unitary operator
\[\hat{U}=\exp\left[-i\frac{1}{\hbar}\int d\mathbf{r}^{3}\hat{\mathbf{P}}(\mathbf{r})\cdot \hat{\mathbf{A}}(\mathbf{r})\right], \tag{24}\]
where
\[\hat{\mathbf{P}}(\mathbf{r}) =\sum_{i,j}\hat{\mathbf{n}}_{i,j}(\mathbf{r}),\] \[\hat{\mathbf{n}}_{i,j}(\mathbf{r}) =q_{i,j}\left(\hat{\mathbf{r}}_{i,j}-\mathbf{R}_{i}\right)\] \[\times\int_{0}^{1}du\;\delta\left[\mathbf{r}-\mathbf{R}_{i}-u\left(\hat{\mathbf{r}}_{i,j}-\mathbf{R}_{i}\right)\right] \tag{25}\]
denote the macroscopic polarization and the polarization generated by the charges \(\eta=(i,j)\), respectively. For simplicity, we assume that there are no free charges in the system, such that \(\hat{\phi}=0\).
The Power-Zienau transformation leaves all operators in Eq. (16) invariant, except for the canonical electric field, which becomes
\[\hat{\mathbf{D}}(\mathbf{r})\equiv\hat{U}\hat{\mathbf{E}}_{c}(\mathbf{r})\hat{U}^{\dagger}= \hat{\mathbf{E}}_{c}(\mathbf{r})+\frac{1}{\epsilon_{0}}\hat{\mathbf{P}}, \tag{26}\]
and is called the displacement field \(\hat{\mathbf{D}}(\mathbf{r})\). Importantly, the displacement field is quantized in terms of the photonic operators \(\hat{d}_{\mathbf{k},\lambda}\):
\[\hat{\mathbf{D}}^{\perp}(\mathbf{r})=i\sum_{\mathbf{k},\lambda}\sqrt{\frac{ \hbar\omega_{\mathbf{k}}\epsilon_{0}}{2V}}\mathbf{e}_{\mathbf{k},\lambda}\left[\hat{d}_{\mathbf{ k},\lambda}e^{i\mathbf{k}\cdot\mathbf{r}}-\hat{d}^{\dagger}_{\mathbf{k},\lambda}e^{-i\mathbf{k} \cdot\mathbf{r}}\right], \tag{27}\]
and not the canonical electric field \(\hat{\mathbf{E}}_{c}(\mathbf{r})\). However, one should keep in mind that the electric field \(\hat{\mathbf{E}}\) is the actual physical observable. Away from the matter where \(\hat{\mathbf{P}}(\mathbf{r})=0\), the displacement field is equivalent to the canonical electric field \(\hat{\mathbf{D}}(\mathbf{r})=\epsilon_{0}\hat{\mathbf{E}}_{c}(\mathbf{r})\).
The transformed axion-light-matter coupling Hamiltonian (\(\propto a\hat{\mathbf{E}}_{c}\cdot\hat{\mathbf{B}}\)) now reads as
\[\hat{H}_{\text{Int}} =cg_{a\gamma\gamma}\int d\mathbf{r}^{3}\;\left[\hat{a}\left(\hat{\mathbf{ D}}-\hat{\mathbf{P}}\right)\cdot\hat{\mathbf{B}}\right]\] \[+\int d\mathbf{r}^{3}\;\left[\frac{1}{2\mu_{0}}g^{2}_{a\gamma\gamma} \hat{a}^{2}\hat{\mathbf{B}}\cdot\hat{\mathbf{B}}\right], \tag{28}\]
where we have used that \(c^{2}=1/(\epsilon_{0}\mu_{0})\). The first term is equivalent to \(H_{\text{ALM}}\) in Eq. (1) after substituting \(\hat{\mathbf{E}}\) according to Eq. (2). As the second term is proportional to \(g^{2}_{a\gamma\gamma}\), it can be safely neglected. The other terms in the Hamiltonian in Eq. (16) transform according to the common Power-Zienau transformation and are given in the SM [38].
### Rydberg atoms as superheterodyne detectors
At present, there is a variety of electric field detectors that deploy an EIT coupling scheme in Rydberg atoms. The common EIT three-level configuration is easy to realize experimentally and already allows for excellent sensitivity to weak electric fields [33; 34; 35; 36]. Yet, in a three-level configuration, the signal field only couples off-resonant Rydberg states to each other perturbatively, which limits the optimal sensitivity. To further improve the sensitivity, the superhet detection scheme has been developed, which takes advantage of a resonance condition between two Rydberg states that features a large transition dipole matrix element [32].
A superhet detector deploying Rydberg states can be described by an effective four-level system whose states are denoted by \(\left|1\right\rangle\), \(\left|2\right\rangle\), \(\left|3\right\rangle\), \(\left|4\right\rangle\). Their positions in the energy spectrum are shown in Fig. 2. The corresponding Hamiltonian reads as
\[H(t) = \epsilon_{1}\left|1\right\rangle\left\langle 1\right|+\epsilon_{2}\left|2\right\rangle\left\langle 2\right|+\epsilon_{3}\left|3\right\rangle\left\langle 3\right|+\epsilon_{4}\left|4\right\rangle\left\langle 4\right| \tag{29}\] \[+ \left[\hbar\Omega_{p}e^{i\omega_{p}t}\left|2\right\rangle\left\langle 1\right|+h.c.\right]\] \[+ \left[\hbar\Omega_{C}e^{i\omega_{C}t}\left|3\right\rangle\left\langle 2\right|+h.c.\right]\] \[+ \left[\hbar\Omega(t)e^{i\omega_{LO}t}\left|4\right\rangle\left\langle 3\right|+h.c.\right],\]
where \(\epsilon_{x}\) with \(x=1,2,3,4\) denote the level energies. A description of the parameters \(\omega_{p}\), \(\omega_{C}\), \(\omega_{a}\), \(\omega_{LO}\), \(\Omega_{p}\), \(\Omega_{C}\), \(\Omega_{a}\), \(\Omega_{LO}\) is given in the 'Results' section. We consider the time-dependent Rabi frequency \(\Omega(t)=\Omega_{LO}+\Omega_{a}e^{i(\omega_{a}-\omega_{LO})t}\) to be quasistatic, and use the difference of the measurement signal for \(\Omega=\Omega_{LO}-\Omega_{a}\) and \(\Omega=\Omega_{LO}+\Omega_{a}\) to estimate the SNR. We describe the dynamics of the four-level system in the presence of dissipation by the Bloch equation
\[\frac{d}{dt}\rho = -\frac{i}{\hbar}\left[H(t),\rho\right] \tag{30}\] \[+ \gamma_{2}D_{\left|1\right\rangle\left\langle 2\right|}\rho+ \gamma_{3}D_{\left|1\right\rangle\left\langle 3\right|}\rho+\gamma_{4}D_{\left|1 \right\rangle\left\langle 4\right|}\rho,\]
where the dissipator is defined by
\[D_{\hat{O}}\rho=2\hat{O}\rho\hat{O}^{\dagger}-\hat{O}^{\dagger}\hat{O}\rho- \rho\hat{O}^{\dagger}\hat{O}, \tag{31}\]
and \(\gamma_{2}\), \(\gamma_{3}\) and \(\gamma_{4}\) denote the inverse lifetimes of the states \(\left|2\right\rangle\), \(\left|3\right\rangle\), \(\left|4\right\rangle\), respectively.
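For completeness, the Bloch equation (30) with the dissipator (31) can be integrated numerically. The Python sketch below works in a rotating frame with time-independent couplings and uses placeholder Rabi frequencies and decay rates in arbitrary frequency units; it is a minimal illustration of the level scheme rather than the simulation used for Fig. 2.

```python
import numpy as np

def ket(i, d=4):
    v = np.zeros((d, 1), dtype=complex)
    v[i] = 1.0
    return v

def lindblad_rhs(rho, H, c_ops):
    """Right-hand side of Eq. (30) with the dissipator of Eq. (31) (hbar = 1)."""
    drho = -1j * (H @ rho - rho @ H)
    for gamma, O in c_ops:
        Od = O.conj().T
        drho += gamma * (2 * O @ rho @ Od - Od @ O @ rho - rho @ Od @ O)
    return drho

# Rotating-frame Hamiltonian (an assumption for this sketch): detunings set to zero,
# probe/coupling Rabi terms as in Eq. (29), and a coupling Omega/2 between |3> and |4>
# so that the dressed-state splitting equals Omega.
Omega_p, Omega_C, Omega = 0.17, 2.3, 0.1   # placeholder Rabi frequencies (arb. units)
H = np.zeros((4, 4), dtype=complex)
H += Omega_p * (ket(1) @ ket(0).T + ket(0) @ ket(1).T)
H += Omega_C * (ket(2) @ ket(1).T + ket(1) @ ket(2).T)
H += 0.5 * Omega * (ket(3) @ ket(2).T + ket(2) @ ket(3).T)

# Decay of |2>, |3>, |4> back to |1>, cf. Eq. (30); rates are placeholders
c_ops = [(0.6, ket(0) @ ket(1).T), (0.01, ket(0) @ ket(2).T), (0.01, ket(0) @ ket(3).T)]

rho = ket(0) @ ket(0).T             # start in |1>
dt, steps = 0.05, 20_000            # fixed-step RK4 integration
for _ in range(steps):
    k1 = lindblad_rhs(rho, H, c_ops)
    k2 = lindblad_rhs(rho + dt / 2 * k1, H, c_ops)
    k3 = lindblad_rhs(rho + dt / 2 * k2, H, c_ops)
    k4 = lindblad_rhs(rho + dt * k3, H, c_ops)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# The probe absorption is governed by the coherence rho_21 (cf. Eq. (33)).
print("Im rho_21 ~", np.imag(rho[1, 0]))
```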
Transforming the subsystem of the states \(\left|3\right\rangle\), \(\left|4\right\rangle\) into a frame rotating with \(\omega_{LO}\), the effective energies of the resulting mixed Rydberg states become \(\epsilon_{-}=\epsilon_{3}-\hbar\Omega/2\) and \(\epsilon_{+}=\epsilon_{3}+\hbar\Omega/2\), which are sketched in Fig. 2(e). Both mixed Rydberg levels couple to state \(\left|2\right\rangle\) with Rabi frequency \(\Omega_{C}/\sqrt{2}\) and have a dephasing rate of \(\gamma=\frac{1}{2}\left(\gamma_{3}+\gamma_{4}\right)\).
As common practice in EIT experiments, we aim to detect the presence of the axion field by measuring the absorption of the probe field. When certain resonance conditions are fulfilled by the probe and coupling fields, the atoms become transparent to the probe laser within a very narrow window around \(\omega_{p}=(\epsilon_{2}-\epsilon_{1})/\hbar\). As the resonance conditions are modified in the presence of the axion field, the absorption significantly depends on \(\Omega_{a}\), heralding the presence of axion dark matter.
The absorption rate \(\alpha(\omega_{p})\) of the probe beam in Fig. 2(f) is proportional to the susceptibility \(\alpha(\omega_{p})\propto\text{Im}\,\chi(\omega_{p})\)[54]. As explicitly shown in the SM, the susceptibility can be expressed in terms of the operator \(\hat{\mathcal{X}}=\sum_{i=1}^{N}\hat{\mathcal{X}}_{i}\) with
\[\hat{\mathcal{X}}_{i}=\int_{0}^{t_{\text{m}}}dt\;\hat{\mathcal{P}} _{2,1}^{(i)}(t)ie^{i\omega_{p}t}+h.c.,\] \[\hat{\mathcal{P}}_{2,1}^{(i)}(t)\equiv\hat{U}^{\dagger}(t)\left( \left|2\right\rangle_{i}\left\langle 1\right|+\left|1\right\rangle_{i}\left\langle 2 \right|\right)\hat{U}(t), \tag{32}\]
where the subscript \(i\) labels the \(N\) atoms in the ensemble and \(t_{\text{m}}\) denotes the measurement time. As \(\hat{U}(t)\) is the time-evolution operator associated with the Bloch equation in Eq. (30), \(\hat{\mathcal{X}}\) acts nonlocally in time.
Explicitly, the relation between the expectation value of \(\hat{\mathcal{X}}\) and the susceptibility in linear order of \(\Omega_{p}\) is given by
\[\text{Im}\,\chi(\omega_{p})=\frac{1}{2\Omega_{p}t_{\text{m}}N}\frac{\rho_{N} \left|\mathbf{d}_{1,2}\right|^{2}}{\epsilon_{0}\hbar}\left\langle\hat{\mathcal{X} }\right\rangle, \tag{33}\]
where \(\rho_{N}\) denotes the atom density. The expectation value \(\left\langle\bullet\right\rangle=\text{tr}\left[\bullet\rho_{\text{st}}\right]\) is defined in terms of the stationary state of the system for \(\Omega_{p}=0\), which reads \(\rho_{\text{st}}=\left|1\right\rangle\left\langle 1\right|\). For this reason, \(\hat{\mathcal{X}}\) can be considered a susceptibility operator and be employed to estimate the SNR. A straightforward perturbative treatment in \(\Omega_{p}\), which generalizes the standard treatment of EIT [42] to the four-level system under investigation, combined with a velocity average to account for the Doppler effect, reveals that the susceptibility is given by
\[\text{Im}\,\,\chi(\omega_{p})=\frac{\rho_{N}\left|\mathbf{d}_{1,2}\right|^{2}}{ \epsilon_{0}\hbar\Omega_{p}}\text{Im}\,\,\,\frac{c_{1}}{c_{2}+i\sigma\Gamma(- ic_{2}/\sigma)}, \tag{34}\]
where we have defined
\[c_{1} = -\frac{\lambda_{p}\Omega_{p}}{2\pi},\] \[c_{2} = \frac{\lambda_{p}}{2\pi}\left(\tilde{\epsilon}_{12}-\frac{1}{8} \frac{\Omega_{C}^{2}}{\tilde{\epsilon}_{1-}}-\frac{1}{8}\frac{\Omega_{C}^{2}}{ \tilde{\epsilon}_{1+}}\right),\] \[\sigma = \sqrt{\frac{k_{B}T}{m_{R}}},\] \[\Gamma(z) = \sqrt{\frac{2}{\pi}}\frac{e^{-z^{2}}}{1-\text{erf}\left(z\right)}- \sqrt{2}z, \tag{35}\]
where \(\text{erf}\left(z\right)\) denotes the complex-valued error function, and we have introduced
\[\tilde{\epsilon}_{12} = (\epsilon_{1}-\epsilon_{2})/\hbar+\Omega_{p}+i\gamma_{2},\] \[\tilde{\epsilon}_{1-} = (\epsilon_{1}-\epsilon_{3})/\hbar-\frac{\Omega}{2}+\omega_{C}+ \omega_{p}+i\gamma,\] \[\tilde{\epsilon}_{1+} = (\epsilon_{1}-\epsilon_{4})/\hbar+\frac{\Omega}{2}+\omega_{C}+ \omega_{p}+i\gamma, \tag{36}\]
which arise from the transformation to the rotating frame.
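The Doppler-averaged susceptibility of Eqs. (34)-(36) can be evaluated numerically once \(\Gamma\) is expressed through the Faddeeva function, using \(1-\text{erf}(z)=e^{-z^{2}}\,\text{wofz}(iz)\). The following Python sketch does this on probe resonance; the level parameters are placeholders, and the detunings \(\tilde{\epsilon}_{12}\) and \(\tilde{\epsilon}_{1\mp}\) are interpreted as rotating-frame detunings, which is an assumption made here for illustration.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function, valid for complex arguments

def Gamma(z):
    """Eq. (35), using 1 - erf(z) = exp(-z^2) * wofz(1j z); works for complex z."""
    return np.sqrt(2.0 / np.pi) / wofz(1j * z) - np.sqrt(2.0) * z

# Placeholder parameters (angular frequencies in rad/s, wavelength in m)
lam_p   = 780e-9                   # probe wavelength
gamma2  = 2 * np.pi * 6.0e6        # inverse lifetime of |2>
gamma   = 2 * np.pi * 1.0e3        # (gamma3 + gamma4)/2
Omega_p = 2 * np.pi * 1.7e6
Omega_C = 2 * np.pi * 23e6
Omega   = 2 * np.pi * 5.0e3        # local-oscillator splitting
sigma   = np.sqrt(1.381e-23 * 300 / (87 * 1.661e-27))  # thermal velocity of 87Rb at 300 K [m/s]

def im_chi(dwC):
    """Im chi(omega_p) on probe resonance versus Delta omega_C, following Eqs. (34)-(36)."""
    eps12 = Omega_p + 1j * gamma2                 # probe detuning set to zero
    eps1m = dwC - Omega / 2 + 1j * gamma          # dressed-state detunings (rotating frame)
    eps1p = dwC + Omega / 2 + 1j * gamma
    c1 = -lam_p * Omega_p / (2 * np.pi)
    c2 = lam_p / (2 * np.pi) * (eps12 - Omega_C**2 / (8 * eps1m) - Omega_C**2 / (8 * eps1p))
    return np.imag(c1 / (c2 + 1j * sigma * Gamma(-1j * c2 / sigma)))

for dwC in 2 * np.pi * np.array([-1e6, -1e4, 0.0, 1e4, 1e6]):
    print(f"Delta omega_C/2pi = {dwC / (2 * np.pi):+.1e} Hz -> Im chi (arb. units) = {im_chi(dwC):.3e}")
```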
The absorption rate (i.e., the imaginary part of the susceptibility) is depicted in Fig. 2(g) as a function of \(\Delta\omega_{C}=\omega_{C}-(\epsilon_{3}+\epsilon_{2})/\hbar\) for \(\omega_{p}=(\epsilon_{2}-\epsilon_{1})/\hbar\), for a suitable local oscillator strength \(\Omega_{LO}\), and two different values, \(\Omega_{a}=0\) (dashed) and \(\Omega_{a}>0\) (solid). In the inset we depict the susceptibility for frequencies close to \(\left|\Delta\omega_{C}\right|\lesssim\gamma\), where we observe a clear difference between the cases \(\Omega_{a}=0\) and \(\Omega_{a}>0\). For this reason, we choose the coupling frequency to be \(\Delta\omega_{C}=0\), for which the signal for small \(\Omega_{a}\) is given by
\[\delta\left\langle\hat{\mathcal{X}}\right\rangle \equiv\Omega_{a}\left.\frac{d}{d\Omega_{a}}\left\langle\hat{ \mathcal{X}}\right\rangle\right|_{\Omega_{a}=0}\] \[=\frac{4\Omega_{C}^{2}\gamma\Omega_{LO}\Omega_{p}\cdot N\Omega_{ a}t_{\text{m}}\left(1+\Gamma^{\prime}\right)}{\left[\left(\gamma_{2}+\frac{ \omega_{p}\sigma}{c}\Gamma\right)\left(\Omega_{LO}^{2}+\gamma^{2}\right)+ \Omega_{C}^{2}\gamma\right]^{2}}, \tag{37}\]
where we have defined \(\Gamma=\Gamma\left(\zeta/(\sqrt{2}\sigma)\right)\) with \(\zeta=\gamma_{2}+\gamma\Omega_{c}^{2}/\left(\Omega_{LO}^{2}+\gamma^{2}\right)\), and \(\Gamma^{\prime}=\partial_{z}\Gamma(z)\mid_{z\rightarrow\zeta/(\sqrt{2}\sigma)}\). Likewise, we find that the variance of \(\hat{\mathcal{X}}\) for vanishing \(\Omega_{a}\) is given as
\[\text{Var}\left\langle\hat{\mathcal{X}}\right\rangle =t_{\text{m}}N\,\frac{\epsilon_{0}\hbar}{\rho_{N}\left|\mathbf{d}_{1,2}\right|^{2}}\text{Im}\,\chi(\omega_{p})\] \[=\frac{4t_{\text{m}}N}{\left(\gamma_{2}+\frac{\omega_{p}\sigma}{c }\Gamma\right)+\frac{\Omega_{C}^{2}\gamma}{\Omega_{LO}^{2}+\gamma^{2}}}, \tag{38}\]
where we have approximated \(\Omega\rightarrow\Omega_{LO}\). Putting everything together, we obtain the \(SNR\equiv\delta\left\langle\hat{\mathcal{X}}\right\rangle/\left(\text{Var}\left\langle\hat{\mathcal{X}}\right\rangle\right)^{1/2}\) in Eq. (8).
###### Acknowledgements.
This work is supported by the Guangdong Provincial Key Laboratory (Grant No.2019B121203002), the MURI-ARO Grant No. W911NF17-1-0323 through UC Santa Barbara, and the Shanghai Municipal Science and Technology Major Project through the Shanghai Research Center for Quantum Sciences (Grant No. 2019SHZDZX01).
|
2307.02484 | Elastic Decision Transformer | This paper introduces Elastic Decision Transformer (EDT), a significant
advancement over the existing Decision Transformer (DT) and its variants.
Although DT purports to generate an optimal trajectory, empirical evidence
suggests it struggles with trajectory stitching, a process involving the
generation of an optimal or near-optimal trajectory from the best parts of a
set of sub-optimal trajectories. The proposed EDT differentiates itself by
facilitating trajectory stitching during action inference at test time,
achieved by adjusting the history length maintained in DT. Further, the EDT
optimizes the trajectory by retaining a longer history when the previous
trajectory is optimal and a shorter one when it is sub-optimal, enabling it to
"stitch" with a more optimal trajectory. Extensive experimentation demonstrates
EDT's ability to bridge the performance gap between DT-based and Q
Learning-based approaches. In particular, the EDT outperforms Q Learning-based
methods in a multi-task regime on the D4RL locomotion benchmark and Atari
games. Videos are available at: https://kristery.github.io/edt/ | Yueh-Hua Wu, Xiaolong Wang, Masashi Hamaya | 2023-07-05T17:58:21Z | http://arxiv.org/abs/2307.02484v6 | # Elastic Decision Transformer
###### Abstract
This paper introduces Elastic Decision Transformer (EDT), a significant advancement over the existing Decision Transformer (DT) and its variants. Although DT purports to generate an optimal trajectory, empirical evidence suggests it struggles with trajectory stitching, a process involving the generation of an optimal or near-optimal trajectory from the best parts of a set of sub-optimal trajectories. The proposed EDT differentiates itself by facilitating trajectory stitching during action inference at test time, achieved by adjusting the history length maintained in DT. Further, the EDT optimizes the trajectory by retaining a longer history when the previous trajectory is optimal and a shorter one when it is sub-optimal, enabling it to "stitch" with a more optimal trajectory. Extensive experimentation demonstrates EDT's ability to bridge the performance gap between DT-based and Q Learning-based approaches. In particular, the EDT outperforms Q Learning-based methods in a multi-task regime on the D4RL locomotion benchmark and Atari games. Videos are available at: [https://kristery.github.io/edt/](https://kristery.github.io/edt/).
## 1 Introduction
Reinforcement Learning (RL) trains agents to interact with an environment and learn from rewards. It has demonstrated impressive results across diverse applications such as game playing [23; 44], robotics [41; 40; 54], and recommendation systems [12; 1]. A notable area of RL is Offline RL [31], which employs pre-collected data for agent training and proves more efficient when real-time interactions are costly or limited. Recently, the conditional policy approach has shown great potential in Offline RL, where the agent learns a policy based on the observed state and a goal. This approach enhances performance and circumvents stability issues related to long-term credit assignment. Moreover, the successful Transformer architecture [49], widely used in applications like natural language processing [52; 13; 8] and computer vision [14; 34], has been adapted for RL as the Decision Transformer (DT) [11].
DT utilizes a Transformer architecture to model and reproduce sequences from demonstrations, integrating a goal-conditioned policy to convert Offline RL into a supervised learning task. Despite its competitive performance in Offline RL tasks, the DT falls short in achieving trajectory stitching, a desirable property in Offline RL that refers to creating an optimal trajectory by combining parts of sub-optimal trajectories [19; 9; 57]. This limitation stems from the DT's inability to generate
Figure 1: **Normalized return with medium-replay datasets. The dotted gray lines indicate normalized return with medium datasets. By achieving trajectory stitching, our method benefits from worse trajectories and learns a better policy.**
superior sequences, thus curbing its potential to learn optimal policies from sub-optimal trajectories (Figure 1).
We introduce the Elastic Decision Transformer (EDT), which takes a variable length of the traversed trajectory as input. Stitching trajectories, or integrating the current path with a more advantageous future path, poses a challenge for sequence generation-based approaches in offline RL. Stitching a better trajectory appears to contradict one of the core objectives of sequence generation, namely that the model should reliably reproduce trajectories found within the training dataset. We suggest that in order to 'refresh' the prediction model, it should disregard 'negative' or 'unsuccessful' past experiences. This involves dismissing past failures and instead considering a shorter history for input. This allows the sequence generation model to select an action that yields a more favorable outcome. This strategy might initially seem contradictory to the general principle that decisions should be based on as much information as possible. However, as we argue below, our proposed approach is consistent with this principle. With a shorter history, the prediction model tends to produce outputs with higher variance, typically considered a weakness in prediction scenarios. Yet, this increased variance offers the sequence prediction model an opportunity to explore and identify improved trajectories. Conversely, when the current trajectory is already optimal, the model should consider the longest possible history for input to enhance stability and consistency. Consequently, a relationship emerges between the quality of the path taken and the length of history used for prediction. This correlation serves as the motivation behind our proposal to employ a variable length of historical data as input.
In practice, we train an approximate value maximizer using expectile regression to estimate the highest achievable value given a certain history length. We then search for the history length associated with the highest value and use it for action inference.
Evidence from our studies indicates that EDT's variable-length input sequence facilitates more effective decision-making and, in turn, superior sequence generation compared to DT and its variants. Furthermore, it is computationally efficient, adding minimal overhead during training. Notably, EDT surpasses state-of-the-art methods, as demonstrated in the D4RL benchmark [15] and Atari games [7; 17]. Our analysis also suggests that EDT can significantly enhance the performance of DT, establishing it as a promising avenue for future exploration.
**Our Contributions:**
* We introduce the Elastic Decision Transformer, a novel approach to Offline Reinforcement Learning that effectively addresses the challenge of trajectory stitching, a known limitation in Decision Transformer.
* By estimating the optimal history length based on changes in the maximal value function, the EDT enhances decision-making and sequence generation over traditional DT and other Offline RL algorithms.
* Our experimental evaluation highlights EDT's superior performance in a multi-task learning regime, positioning it as a promising approach for future Offline Reinforcement Learning research and applications.
## 2 Preliminaries
In this study, we consider a decision-making agent that operates within the framework of Markov Decision Processes (MDPs) [42]. At every time step \(t\), the agent receives an observation of the world \(o^{t}\), chooses an action \(a^{t}\), and receives a scalar reward \(r^{t}\). Our goal is to learn a single optimal policy distribution \(P_{\theta}(a^{t}\,|\,o^{\leq t},a^{<t},r^{<t})\) with parameters \(\theta\) that maximizes the agent's total future return \(R^{t}=\sum_{k>t}r^{k}\) on all the environments we consider.
### Offline Reinforcement Learning
Offline RL, also known as batch RL, is a type of RL where an agent learns to make decisions by analyzing a fixed dataset of previously collected experiences, rather than interacting with an environment in real-time. In other words, the agent learns from a batch of offline data rather than actively exploring and collecting new data online.
Offline RL has gained significant attention in recent years due to its potential to leverage large amounts of pre-existing data and to solve RL problems in scenarios where online exploration is
impractical or costly. Examples of such scenarios include medical treatment optimization [33], finance [46], and recommendation systems [55].
Despite its potential benefits, offline RL faces several challenges, such as distributional shift, which occurs when the offline data distribution differs significantly from the online data distribution, and the risk of overfitting to the fixed dataset. A number of recent research efforts have addressed these challenges, including methods for importance weighting [35; 37], regularization [53; 28], and model-based learning [25; 29], among others.
### Decision Transformer
The Decision Transformer architecture, introduced by [11], approaches the offline RL problem as a type of sequence modeling. Unlike many traditional RL methods that estimate value functions or compute policy gradients, DT predicts future actions based on a sequence of past states, actions, and rewards. The input to DT includes a sequence of past states, actions, and rewards, and the output is the next action to be taken. DT uses a Transformer architecture [49], which is composed of stacked self-attention layers with residual connections. The Transformer architecture has been shown to effectively process long input sequences and produce accurate outputs.
Despite the success of being applied to offline RL tasks, it has a limitation in its ability to perform "stitching." Stitching refers to the ability to combine parts of sub-optimal trajectories to produce an optimal trajectory. This approach can lead to a situation where the agent follows a sub-optimal trajectory that provides an immediate reward, even if a different trajectory leads to a higher cumulative reward over time. This limitation of DT is a significant challenge in many offline RL applications, and addressing it would greatly enhance the effectiveness of DT in solving real-world problems.
## 3 Elastic Decision Transformer
In this section, we present Elastic Decision Transformer (EDT), a model that automatically utilizes a shorter history to predict the next action when the traversed trajectory underperforms compared to those in the training dataset. The mechanism allows the model to switch to a better trajectory by forgetting 'unsuccessful' past experiences, thus opening up more possibilities for future trajectories. We further propose a method to estimate the maximum achievable return using the truncated history, allowing EDT to determine the optimal history length and corresponding actions.
We first provide essential background knowledge (Sec. 3.1) and discuss the motivation behind our approach (Sec. 3.2). Subsequently, we present our novel objective designed for training the EDT, which integrates expectile regression (Sec. 3.3). We also detail the action inference process employed during testing (Sec. 3.4).
### Reinforcement Learning as Sequence Modeling
In this paper, we adopt an approach to offline reinforcement learning that is based on a sequence modeling problem. Specifically, we model the probability of the next token in the sequence (denoted as \(\tau\)) based on all the tokens that come before it. The sequences we model can be represented as:
\[\tau=\langle...,o^{t},\hat{R}^{t},a^{t},...\rangle,\]
Figure 2: An overview of our Elastic Decision Transformer architecture. \(\tilde{R}\) is the prediction of the maximum return. We also show the environments we used in the experiments on the right. We adopt four tasks for D4RL [15] and \(20\) tasks for Atari games.
where \(t\) is a time step and \(\hat{R}\) is the return for the remaining sequence. The sequence we consider here is similar to the one used in [30], whereas we do not include the reward as part of the sequence and we predict an additional quantity \(\hat{R}\) that enables us to estimate an optimal input length, which we will cover in the following paragraphs. Figure 2 presents an overview of our model architecture. It should be noted that we also change the way the future observation is predicted compared with standard DT [11], where the next observation is usually predicted directly from \(a^{t}\) through the causal transformer decoder.
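To make the sequence layout explicit, the following sketch (a hypothetical helper, not the authors' code) converts a recorded trajectory into the interleaved \(\langle o^{t},\hat{R}^{t},a^{t}\rangle\) stream, with the returns-to-go computed from the rewards.

```python
import numpy as np

def returns_to_go(rewards):
    """R-hat^t = sum of rewards from step t to the end of the trajectory."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return np.cumsum(rewards[::-1])[::-1]

def build_sequence(observations, actions, rewards, T):
    """Interleave the last T steps as <..., o^t, R-hat^t, a^t, ...> (cf. Sec. 3.1)."""
    rtg = returns_to_go(rewards)
    seq = []
    for o, R, a in list(zip(observations, rtg, actions))[-T:]:
        seq.extend([o, R, a])
    return seq

# Toy trajectory with scalar observations and actions
obs  = [0.1, 0.2, 0.3, 0.4, 0.5]
acts = [1, 0, 1, 1, 0]
rews = [0.0, 1.0, 0.0, 0.5, 2.0]
print(build_sequence(obs, acts, rews, T=3))
```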
### Motivation
We propose a shift in the traditional approach to trajectory stitching. Instead of focusing on training phases, we aim to achieve this stitching during the action inference stage. This concept is illustrated in Figure 3 using a simplified example. In this scenario, we consider a dataset, \(D\), comprising only two trajectories: \(D=\{(s^{a}_{t-1},s_{t},s^{a}_{t+1}),(s^{b}_{t-1},s_{t},s^{b}_{t+1})\}\). A sequence model trained with this dataset is likely to predict the next states in a manner consistent with their original trajectories.
To overcome this, we propose a method that enables trajectory stitching, where the model starts from \(s^{b}_{t-1}\) and concludes at \(s^{a}_{t+1}\). This is achieved by adaptively adjusting the history length. We introduce a maximal value estimator, \(\tilde{R}\), which calculates the maximum value among all potential outcomes within the dataset. This allows us to determine the optimal history length that maximizes \(\tilde{R}\).
In the given example, if the model starts at state \(s^{b}_{t-1}\), it will choose to retain the history \((s_{t})\) upon reaching state \(s_{t}\), as \(\tilde{R}(s_{t})>\tilde{R}(s_{t-1},s_{t})\). Conversely, if the model initiates from state \(s^{a}_{t-1}\), it will preserve the history \((s^{a}_{t-1},s_{t})\) when decision-making at \(s_{t}\), as \(\tilde{R}(s^{a}_{t-1},s_{t})\geq\tilde{R}(s_{t})\). From the above example, we understand that the optimal history length depends on the quality of the current trajectory we've traversed, and it can be a specific length anywhere between a preset maximal length and a single unit.
To estimate the optimal history length in a general scenario, we propose solving the following optimization problem:
\[\arg\max_{T}\max_{\tau_{T}\in D}\hat{R}^{t}(\tau_{T}), \tag{1}\]
where \(\tau_{T}\) denotes the history length \(T\). More precisely, \(\tau_{T}\) takes the form:
\[\tau_{T}=\langle o^{t-T+1},\hat{R}^{t-T+1},a^{t-T+1},...,o^{t-1},\hat{R}^{t-1},a^{t-1},o^{t},\hat{R}^{t},a^{t}\rangle.\]
### Training objective for Maximum In-Support Return
In the EDT, we adhere to the same training procedure as used in the DT. The key distinction lies in the training objective - we aim to estimate the maximum achievable return for a given history length in EDT. To approximate the maximum operator in \(\max_{\tau_{T}\in D}\hat{R}^{t}(\tau_{T})\), we employ expectile regression [38; 3], a technique often used in applied statistics and econometrics. This method has previously been incorporated into offline reinforcement learning; for instance, IQL [26] used expectile regression to estimate the Q-learning objective implicitly. Here, we leverage it to enhance our estimation of the maximum expected return for a trajectory, even within limited data contexts. The \(\alpha\in(0,1)\) expectile of a random variable \(X\) is the solution to an asymmetric least squares problem, as follows:
\[\operatorname*{arg\,min}_{m_{\alpha}}\mathbb{E}_{x\in X}\left[L_{2}^{\alpha} (x-m_{\alpha})\right],\]
where \(L_{2}^{\alpha}(u)=|\alpha-\mathbb{1}(u<0)|u^{2}\).
Through expectile regression, we can approximate \(\max_{\tau_{T}\in D}\hat{R}^{t}(\tau_{T})\):
\[\tilde{R}^{t}_{T}=\max_{\tau_{T}\in D}\hat{R}^{t}(\tau_{T})\approx\operatorname*{arg\,min}_{\tilde{R}^{t}(\tau_{T})}\mathbb{E}_{\tau_{T}\in D}[L_{2}^{\alpha}(\hat{R}^{t}-\tilde{R}^{t}(\tau_{T}))]. \tag{2}\]
Figure 3: A Toy example to illustrate the motivation of EDT. The figure shows an offline RL dataset that contains only two trajectories \((s^{a}_{t-1},s_{t},s^{a}_{t+1}),(s^{b}_{t-1},s_{t},s^{b}_{t+1})\).
We estimate \(\tilde{R}^{t}\) by minimizing an empirical version of the loss in Equation 2 with a sufficiently large \(\alpha\) (we use \(\alpha=0.99\) in all experiments). The only difference in training EDT compared to other DT variants is the additional use of Equation 2, which keeps the training time comparable. We summarize our objective as:
\[\mathcal{L}_{\text{EDT}}=c_{r}\mathcal{L}_{\text{return}}+\mathcal{L}_{\text{ observation}}+\mathcal{L}_{\text{action}}+\mathcal{L}_{\text{max}}, \tag{3}\]
where \(\mathcal{L}_{\text{observation}}\) and \(\mathcal{L}_{\text{action}}\) are computed with a mean square error, \(\mathcal{L}_{\text{return}}\) is a cross-entropy loss, and \(\mathcal{L}_{\text{max}}\) is an empirical estimate of Equation 2. We set \(c_{r}=0.001\) to balance scale differences between mean square error and cross-entropy loss. In tasks with discrete action spaces such as Atari, the action loss is, like the return objective \(\mathcal{L}_{\text{return}}\), a cross-entropy loss, with weight \(10c_{r}\). Our training method extends the work of [30] by estimating the maximum expected return value for a trajectory using Equation 2. This estimation aids in comparing expected returns of different trajectories over various history lengths. Our proposed method is not only easy to optimize, but can also be conveniently integrated with other DT variants. As such, it marks a significant advance in developing efficient offline reinforcement learning approaches for complex decision-making tasks.
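A minimal PyTorch sketch of the resulting objective is given below; the tensor layout, the dictionary keys, and the discretization of returns into bins are assumptions made for illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def expectile_loss(pred_max_return, observed_return, alpha=0.99):
    """Empirical version of Eq. (2): as alpha -> 1, the minimizer approaches the
    in-support maximum of the observed returns-to-go."""
    u = observed_return - pred_max_return
    weight = torch.abs(alpha - (u < 0).float())   # |alpha - 1(u < 0)|
    return (weight * u.pow(2)).mean()

def edt_loss(pred, batch, c_r=0.001, alpha=0.99):
    """Sketch of Eq. (3); the dictionary keys are hypothetical names for model outputs/targets."""
    l_return = F.cross_entropy(pred["return_logits"], batch["return_bins"])
    l_obs    = F.mse_loss(pred["observation"], batch["observation"])
    l_action = F.mse_loss(pred["action"], batch["action"])
    l_max    = expectile_loss(pred["max_return"], batch["return_to_go"], alpha)
    return c_r * l_return + l_obs + l_action + l_max
```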
### Action Inference During Test time
During the action inference phase at test time, we first (1) estimate the maximum achievable return \(\tilde{R}_{i}\) for each history length \(i\). Subsequently, (2) we predict the action by using the truncated traversed trajectory as input. The trajectory is truncated to the history length that corresponds to the highest value of \(\tilde{R}_{i}\). These steps are elaborated in Figure 4.
To identify the history length \(i\) that corresponds to the highest \(\tilde{R}_{i}^{t}\), we employ a search strategy as detailed in Algorithm 1. As exhaustively searching through all possible lengths from \(1\) to \(T\) may
Figure 4: The figure illustrates the action inference procedure within the proposed Elastic Decision Transformer. Initially, we estimate the value maximizer, \(\tilde{R}_{i}\), for each length \(i\) within the search space, as delineated by the green rectangle. Subsequently, we identify the maximal value from all \(\tilde{R}_{i}\), which provides the optimal history length \(w\). Utilizing this optimal history length, we estimate the expert value at time step \(t\), denoted as \(\tilde{R}_{w,e}^{t}\), by Bayes’ Rule. Finally, the action prediction is accomplished via the causal transformer decoder, which is indicated by the blue rectangle. In practice, we retain the distribution of \(R_{i}^{t}\) during the estimation process for \(\tilde{R}_{i}\) and we present the inference here for clarity.
result in slow action inference, we introduce a step size \(\delta\) to accelerate the process. This step size not only enhances inference speed by a factor of \(\delta\), but also empirically improves the quality of the learned policy. An ablation study on the impact of the step size \(\delta\) is provided in Appendix A. For all experiments, we set \(\delta=2\) to eliminate the need for parameter tuning.
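The resulting test-time search can be summarized by the following sketch; `predict_max_return` is a hypothetical interface to the head that outputs \(\tilde{R}\), and the loop structure mirrors, but is not taken from, Algorithm 1.

```python
import torch

@torch.no_grad()
def best_history_length(model, trajectory, T_max, delta=2):
    """Pick the history length whose predicted maximum return is largest.
    `model.predict_max_return` is a hypothetical interface returning the scalar R-tilde."""
    best_len, best_val = 1, -float("inf")
    for T in range(1, T_max + 1, delta):       # step size delta speeds up the search
        context = trajectory[-T:]              # keep only the last T steps
        val = model.predict_max_return(context)
        if val > best_val:
            best_len, best_val = T, val
    return best_len
```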
To sample from the expert return distribution \(P(R^{t},...|\text{expert}^{t})\), we adopt an approach similar to [30] by applying Bayes' rule \(P(R^{t},...|\text{expert}^{t})\propto P(\text{expert}^{t}|R^{t},...)P(R^{t},...)\) and approximate the distribution of expert-level return with inverse temperature \(\kappa\)2[48, 47, 43]:
Footnote 2: We set \(\kappa\) to \(10\) in all our experiments.
\[P(R^{t}|\text{expert}^{t},...)\propto\exp(\kappa R^{t})P(R^{t}). \tag{4}\]
While it may initially appear feasible to directly use the predicted \(\tilde{R}\) as the expert return, it's important to note that this remains a conservative maximum operation. Empirically, we have found that Eq. 4 encourages the pursuit of higher returns, which consequently enhances the quality of the actions taken.
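A sketch of this reweighting step is given below; the names `return_logits` and `bin_values` are hypothetical and simply denote the model's predicted distribution over discretized returns and the return value associated with each bin.

```python
import torch

@torch.no_grad()
def sample_expert_return(return_logits, bin_values, kappa=10.0):
    """Reweight the predicted return distribution by exp(kappa * R), cf. Eq. (4), and sample."""
    log_p = torch.log_softmax(return_logits, dim=-1)          # log P(R^t)
    probs = torch.softmax(log_p + kappa * bin_values, dim=-1)  # proportional to exp(kappa R) P(R^t)
    idx = torch.multinomial(probs, num_samples=1)
    return bin_values[idx]
```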
## 4 Experiments
Our experiments are designed to address several key questions, each corresponding to a specific section of our study:
* Does EDT significantly outperform DT and its variants? (Sec. 4.2, 4.3)
* Is the EDT effective in a multi-task learning regime, such as Locomotion and Atari games? (Sec. 4.3)
* Does a dynamic history length approach surpass a fixed length one? (Sec. 4.4)
* How does the expectile level \(\alpha\) impact the model's performance? (Sec. 4.5)
* How does the quality of datasets affect the predicted history lengths? (Sec. 4.6)
We also provide an additional ablation study in Appendix A due to space constraints.
| Dataset | DT | QDT | TS+BC | S4RL | IQL | EDT (Ours) |
| --- | --- | --- | --- | --- | --- | --- |
| hopper-medium | \(60.7\pm 4.5\) | \(57.2\pm 5.6\) | \(\mathbf{64.3\pm 4.2}\) | \(\mathbf{78.9}\) | \(\mathbf{63.8\pm 9.1}\) | \(\mathbf{63.5\pm 5.8}\) |
| hopper-medium-replay | \(61.9\pm 13.7\) | \(45.8\pm 35.5\) | \(50.2\pm 17.2\) | \(35.4\) | \(\mathbf{92.1\pm 10.4}\) | \(\mathbf{89.0\pm 8.3}\) |
| walker-medium | \(71.9\pm 3.9\) | \(67.5\pm 2.0\) | \(78.8\pm 1.2\) | \(\mathbf{93.6}\) | \(79.8\pm 3.0\) | \(72.8\pm 6.2\) |
| walker-medium-replay | \(43.3\pm 14.3\) | \(30.3\pm 16.2\) | \(61.5\pm 5.6\) | \(30.3\) | \(\mathbf{73.6\pm 6.3}\) | \(\mathbf{74.8\pm 4.9}\) |
| halfcheetah-medium | \(42.5\pm 0.4\) | \(42.3\pm 2.5\) | \(43.2\pm 0.3\) | \(\mathbf{48.8}\) | \(\mathbf{47.3\pm 0.2}\) | \(42.5\pm 0.9\) |
| halfcheetah-medium-replay | \(34.9\pm 1.6\) | \(30.0\pm 11.1\) | \(39.8\pm 0.6\) | \(\mathbf{51.4}\) | \(44.1\pm 1.1\) | \(37.8\pm 1.5\) |
| average | \(52.5\) | \(45.5\) | \(56.3\) | \(56.4\) | \(\mathbf{66.7}\) | \(63.4\) |
| ant-medium | \(92.5\pm 5.1\) | - | - | - | \(\mathbf{99.9\pm 5.8}\) | \(\mathbf{97.9\pm 8.0}\) |
| ant-medium-replay | \(\mathbf{87.9\pm 4.9}\) | - | - | - | \(\mathbf{91.2\pm 7.2}\) | \(\mathbf{92.0\pm 4.1}\) |
| average | \(90.2\) | - | - | - | \(\mathbf{95.5}\) | \(\mathbf{94.9}\) |

Table 1: Baseline comparisons on D4RL [15] tasks. Mean of 5 random training initialization seeds, 100 evaluations each. Our result is highlighted. The results of QDT, TS+BC, and S4RL are adopted from their reported scores. Following [26], we emphasize in bold scores within 5 percent of the maximum per task (\(\geq 0.95\cdot\text{max}\)).
| Task | DT-1 | IQL-1 | EDT-1 (Ours) |
| --- | --- | --- | --- |
| hopper | \(51.2\) | \(59.8\) | \(\mathbf{76.9}\) |
| walker | \(29.8\) | \(52.6\) | \(\mathbf{74.1}\) |
| halfcheetah | \(30.5\) | \(\mathbf{40.4}\) | \(36.8\) |
| ant | \(79.8\) | \(82.3\) | \(\mathbf{88.6}\) |
| sum | \(191.3\) | \(235.1\) | \(\mathbf{276.4}\) |

Table 2: Evaluation results on multi-task regime. Mean of 5 random training initialization seeds, 100 evaluations each. The training dataset is a mixture of **medium-replay** datasets from the four locomotion tasks. Our main result is highlighted.
Figure 5: The average HNS comparison on \(20\) Atari games. The results are evaluated with three trials.
### Baseline Methods
In the following sections, we draw comparisons with two methods based on the Decision Transformer: the original Decision Transformer (DT) [11] and the Q-learning Decision Transformer (QDT) [57]. Additionally, we include a behavior cloning-based method (TS+BC) [19], as well as two offline Q-learning methods, namely S4RL [45] and IQL [26], in our comparisons.
It is important to note that QDT and TS+BC are specifically designed to achieve trajectory stitching. QDT accomplishes this by substituting collected return values with estimates derived from Conservative Q-Learning [28]. Conversely, TS+BC employs a model-based data augmentation strategy to bring about the stitching of trajectories.
### Single-Task Offline Reinforcement Learning
For locomotion tasks, we train offline RL models on D4RL's "medium" and "medium-replay" datasets. The "medium" dataset comes from a policy reaching about a third of expert performance. The "medium-replay" dataset, sourced from this policy's replay buffer, poses a greater challenge for sequence modeling approaches such as DT.
We summarize our locomotion results in Table 1. Since the proposed model estimates the return of the current sequence, reward information is not required at test time. Our observations indicate that the proposed EDT consistently outperforms the baseline DT and its variants on the majority of the datasets, with a notable performance advantage on the "medium-replay" datasets. These findings provide strong evidence that our approach is highly effective at stitching together high-return portions of sub-optimal trajectories, a task that DT and its variants cannot accomplish. Although EDT does not fully outperform IQL in single-task learning, it does bridge the gap between Q-learning-based methods and DT by performing trajectory stitching with the estimated maximum return.
### Multi-Task Offline Reinforcement Learning
This section aims to evaluate the multi-task learning ability of our model across diverse tasks, focusing on locomotion and Atari tasks. Locomotion tasks utilize vectorized observations, while Atari tasks depend on image observations. To emphasize the role of trajectory stitching, we restrict our datasets to medium-replay datasets for the four locomotion tasks and datasets derived from DQN Replay [2] for the Atari tasks. Our evaluations span \(20\) different Atari tasks, with further environment setup details available in the Appendix.
**Locomotion.** In the locomotion multi-task experiment, we maintain the same model architecture as in the single-task setting. By confining the dataset to **medium-replay** datasets from four tasks, we increase task complexity and require the offline RL approach to learn and execute these tasks concurrently, while effectively utilizing trajectories generated by random policies. As shown in Table 2, our proposed EDT successfully accomplishes all four tasks simultaneously without much performance compromise.
**Atari.** For Atari, we adopt a CNN image encoder used in DrQ-v2 [57] to process stacks of four \(84\)x\(84\) image observations. To ensure fair comparisons, all methods employ the same architecture for the image encoder. Following [30], we incorporate random cropping and rotation for image augmentation. Additional experiment details are delegated to the Appendix for brevity. Performance on each Atari game is measured by human normalized scores (HNS) [36], defined as \((\text{score}-\text{score}_{\text{random}})/(\text{score}_{\text{human}}- \text{score}_{\text{random}})\), to ensure a consistent scale across each game.
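For reference, the normalization just described amounts to the following helper, a direct transcription of the formula with argument names of our choosing.

```python
def human_normalized_score(score: float, random_score: float, human_score: float) -> float:
    # HNS = (score - score_random) / (score_human - score_random)
    return (score - random_score) / (human_score - random_score)
```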
Our experimental results align with those of [30], highlighting that Q-learning-based offline RL approaches encounter difficulties in learning a multi-task policy on Atari games. Despite IQL achieving the highest score in Table 1, it demonstrates relative inadequacy in simultaneous multi-task learning, as indicated in Table 2 and Figure 5. We leave the raw scores of the \(20\) Atari games in Appendix B.
### Dynamic History Length vs. Fixed History Length
In Sec. 3.2, we proposed the concept of EDT, which adjusts history length based on the quality of the current trajectory. We illustrated this idea with a toy example.
To validate the benefits of this dynamic history length approach, we tested the EDT using both fixed and variable history lengths. The results, summarized in Table 3, show that the variable history length outperforms the fixed ones, particularly on the "medium-replay" datasets.
These findings suggest that the EDT effectively chooses a history length that yields a higher estimated return. While a shorter history length aids in trajectory stitching, it's also crucial to retain a longer history length to ensure the continuity of optimal trajectories. Therefore, the dynamic adjustment of history length in the EDT is key to its superior performance.
### Ablation Study on Expectile Level \(\alpha\)
A key component of EDT is the approximation of the maximal value using expectile learning, as our method depends on accurately estimating these maximal values to choose the optimal history length. Consequently, examining the change in performance relative to the expectile level, \(\alpha\), provides insight into the necessity of correct history length selection for performance enhancement.
The results, displayed in Figure 6, show that when expectile regression is able to accurately approximate the maximizer, namely at higher expectile levels, we observe both a higher average performance and a lower standard deviation. This indicates that accurate selection of the history length not only stabilizes performance but also enhances scores. Conversely, as the expectile level approaches \(0.5\), the expectile regression objective shifts towards a mean square error, leading to an estimated value that is closer to a mean than to a maximum. This change makes it a less effective indicator of the optimal history length. As a result, we observe a deterioration in EDT's score as the expectile level drops too low, and an increase in standard deviation, indicating inconsistency in the selection of an effective history length.
Figure 7: The figures illustrate the history length distributions across datasets and training epochs. For each distribution, we collect ten trajectories and derive a histogram of history lengths. The distribution is smoothed with a kernel density estimate for better visualization.
### Analysis of Optimal History Length Distribution
In our analysis, we examine the history length distributions across various datasets, as depicted in Figure 7. Our findings reveal that the medium-replay dataset, which amalgamates trajectories from multiple policies, yields a distribution closely approximating a uniform distribution. Conversely, the medium dataset, acquired through a single stochastic policy, exhibits a history length distribution characterized by an increased density at lower history lengths. This observation can be attributed to the prevalence of analogous trajectories within the medium dataset, leading to more frequent occurrences of trajectory stitching than in the "medium-replay" dataset. However, it is important to acknowledge that the gains derived from this type of trajectory stitching remain limited, as the trajectories stem from identical distributions. Although performance improvement is observed, as presented in Table 1, it is significantly less pronounced in comparison to the medium-replay dataset.
Contrary to initial expectations, trajectory stitching does not occur as frequently within the medium-replay dataset as within the medium dataset. In fact, the distinct policies within the medium-replay dataset contribute to the reduced instances of trajectory stitching, as their respective state distributions differ from one another. The diversity within the dataset results in a limited number of mutual \(s_{t}\) instances, as illustrated in Figure 3. Nevertheless, the proposed EDT method derives substantial benefits from trajectory stitching in this context. The EDT effectively avoids being misled by sub-optimal trajectories within the dataset, demonstrating its capacity to make better decisions regarding history lengths and actions that optimize the current return.
## 5 Related Work
**Offline Reinforcement Learning.** Offline RL has been a promising topic for researchers, since sampling from environments during training is usually costly and dangerous in real-world applications, and offline reinforcement learning is able to learn a better policy without directly collecting state-action pairs. Several previous works have utilized constrained or regularized dynamic programming to mitigate deviations from the behavior policy [39; 51; 27; 16; 56].
Decision Transformer and its variants [11; 30; 57] have been a promising direction for solving offline RL from the perspective of sequence modeling. Trajectory Transformer (TT) [22] models distributions over trajectories using transformer architecture. The approach also incorporates beam search as a planning algorithm and demonstrates exceptional flexibility across various applications, such as long-horizon dynamics prediction, imitation learning, goal-conditioned reinforcement learning, and offline reinforcement learning.
Recently, there has been a growing interest in incorporating diffusion models into offline RL methods. This alternative approach to decision-making stems from the success of generative modeling, which offers the potential to address offline RL problems more effectively. For instance, [18] reinterprets Implicit Q-learning as an actor-critic method, using samples from a diffusion parameterized behavior policy to improve performance. Similarly, other diffusion-based methods [21; 32; 50; 5; 10; 4] utilize diffusion-based generative models to represent policies or model dynamics, achieving competitive or superior performance across various tasks.
**Trajectory Stitching.** A variety of methods have been proposed to tackle the trajectory stitching problem in offline RL. The Q-learning Decision Transformer (QDT) [57] stands out as it relabels the ground-truth return-to-go with estimated values, a technique expected to foster trajectory recombination. Taking a different approach, [19] utilizes a model-based data augmentation strategy, stitching together parts of historical demonstrations to create superior trajectories. Similarly, the Best Action Trajectory Stitching (BATS) [9] algorithm forms a tabular Markov Decision Process over logged data, adding new transitions using short planned trajectories. BATS not only aids in identifying advantageous trajectories but also provides theoretical bounds on the value function. These efforts highlight the breadth of strategies employed to improve offline RL through innovative trajectory stitching techniques.
## 6 Discussion
**Conclusion.** In this paper, we introduced the Elastic Decision Transformer, a significant enhancement to the Decision Transformer that addresses its limitations in offline reinforcement learning. EDT's innovation lies in its ability to determine the optimal history length, promoting trajectory stitching. We proposed a method for estimating this optimal history length by learning an approximate value optimizer through expectile regression.
Our experiments affirmed EDT's superior performance compared to DT and other leading offline RL algorithms, notably in multi-task scenarios. EDT's implementation is computationally efficient and straightforward to incorporate with other DT variants. It outshines existing methods on the D4RL benchmark and Atari games, underscoring its potential to propel offline RL forward.
In summary, EDT offers a promising solution for trajectory stitching, enabling the creation of better sequences from sub-optimal trajectories. This capability can considerably enhance DT variants, leading to improved performance across diverse applications. We are committed to **releasing our code.**
**Limitations.** A potential direction for future improvement involves enhancing the speed at which EDT estimates the optimal history. This could make the method suitable for real-time applications that have strict time constraints. While this adaptation is an exciting avenue for future research, it falls outside the primary scope of this paper.
|
2303.10577 | Enhancing Immersion and Presence in the Metaverse with Over-the-Air
Brain-Computer Interface | This article proposes a novel framework that utilizes an over-the-air
Brain-Computer Interface (BCI) to learn Metaverse users' expectations. By
interpreting users' brain activities, our framework can optimize physical
resources and enhance Quality-of-Experience (QoE) for users. To achieve this,
we leverage a Wireless Edge Server (WES) to process electroencephalography
(EEG) signals via uplink wireless channels, thus eliminating the computational
burden for Metaverse users' devices. As a result, the WES can learn human
behaviors, adapt system configurations, and allocate radio resources to tailor
personalized user settings. Despite the potential of BCI, the inherent noisy
wireless channels and uncertainty of the EEG signals make the related resource
allocation and learning problems especially challenging. We formulate the joint
learning and resource allocation problem as a mixed integer programming
problem. Our solution involves two algorithms: a hybrid learning algorithm and
a meta-learning algorithm. The hybrid learning algorithm can effectively find
the solution for the formulated problem. Specifically, the meta-learning
algorithm can further exploit the neurodiversity of the EEG signals across
multiple users, leading to higher classification accuracy. Extensive simulation
results with real-world BCI datasets show the effectiveness of our framework
with low latency and high EEG signal classification accuracy. | Nguyen Quang Hieu, Dinh Thai Hoang, Diep N. Nguyen, Van-Dinh Nguyen, Yong Xiao, Eryk Dutkiewicz | 2023-03-19T05:41:46Z | http://arxiv.org/abs/2303.10577v3 | # Enabling Immersion and Presence in the Metaverse with Over-the-Air Brain-Computer Interface
###### Abstract
Decoding brain signals can not only reveal Metaverse users' expectations but also enable early detection of error-related behaviors such as stress, drowsiness, and motion sickness. To this end, this article proposes a pioneering framework that uses a wireless/over-the-air Brain-Computer Interface (BCI) to assist the creation of virtual avatars as human representations in the Metaverse. Specifically, to eliminate the computational burden on Metaverse users' devices, we leverage Wireless Edge Servers (WES), which are common in 5G architectures with their URLLC and enhanced broadband features, to obtain and process the brain activities, i.e., electroencephalography (EEG) signals (via uplink wireless channels). As a result, the WES can learn human behaviors, adapt system configurations, and allocate radio resources to create individualized settings and enhance user experiences. Despite the potential of BCI, the inherent noisy/fading wireless channels and the uncertainty in Metaverse users' demands and behaviors make the related resource allocation and learning/classification problems particularly challenging. We formulate the joint learning and resource allocation problem as a Quality-of-Experience (QoE) maximization problem that takes into account the latency, brain classification accuracy, and resources of the system. To tackle this mixed integer programming problem, we then propose two novel algorithms: (i) a hybrid learning algorithm to maximize the user QoE and (ii) a meta-learning algorithm to exploit the neurodiversity of the brain signals among multiple Metaverse users. Extensive experimental results with different BCI datasets show that our proposed algorithms can not only provide low delay for virtual reality (VR) applications but also achieve high classification accuracy for the collected brain signals.
Metaverse, brain-computer interface, mobile networks, 5G, deep reinforcement learning, meta-learning.
## I Introduction
### _Motivation_
The recent emergence of the Metaverse represents a revolutionary advancement in virtual space, enabling real-time interactions between individuals and objects. Unlike conventional virtual reality, the Metaverse is an open and shared virtual environment that offers an immersive experience for its users [1]. The Metaverse is expected to create a "second life" for individuals, where they can engage in virtual activities such as shopping, attending parties, playing games, and purchasing virtual properties, akin to their real-world experiences [2]. The potential of the Metaverse for the digital industry is enormous, from innovative digital products to new virtual services such as virtual festivals, travel, and real estate. Moreover, the development of the Metaverse has been facilitated by recent technological breakthroughs such as VR, blockchain, AI, 5G, and quantum computing [1].
One of the distinctive features of the Metaverse is that individuals are the central figures, referred to as digital avatars, of all activities and events in the virtual world. In addition to communication and interaction with one another, individuals can also create digital objects in this virtual environment. For instance, they can draw digital art, author digital books, and even build virtual properties using other digital materials in the Metaverse. All these activities can be carried out through digital avatars. However, the real-time representation of human activities in the Metaverse presents a significant challenge for Metaverse development. The primary reason is that an extensive array of sensors and cameras is often required to accurately capture human movements in the real world and translate them into the Metaverse [3]. Additionally, effective communication and computing solutions are necessary to collect and process all the massive sensing data.
To address the above challenges to enable Metaverse applications, a few early works have been reported, e.g., [4, 5, 6, 7, 8, 9, 10]. In [4], the authors proposed a co-design framework for sampling, communication, and prediction to construct virtual objects in the Metaverse. The framework was successfully utilized to reconstruct the trajectories of a virtual robotic arm with low latency, under specific sampling rate and prediction accuracy constraints. The trajectories were captured using a camera as a motion capture system, which collected the movement data from human subjects. Additionally, special markers were placed on the physical object used for reconstructing the virtual robotic arm to facilitate the motion capture function of the camera. In [5], the authors proposed an interaction framework that supports multiple users at a time and a higher degree of freedom for virtual human avatars. To this end, the motion capture system utilized inertial-measurement-unit (IMU) sensors attached to the human participants. The IMU sensors measured the acceleration, velocity, and magnetometer signals of the human joints, which were then used to construct 3D virtual avatars with the iconic gestures of the human users.
To further enhance the user experience, [6] proposed a multimodal sensing and feedback platform that could not only detect finger motion of the users but also send haptic feedback such as vibration and heat to the users from the Metaverse. The haptic feedback was achieved by using triboelectric nanogenerator sensors attached on the user's fingers [7]. The finger motion data were then collected and processed at a cloud server to produce finger motion prediction with machine learning, i.e., Principal component analysis (PCA) and support vector machine (SVM) algorithms. The results showed that the errors of detecting motion and sending haptic feedback were acceptable for collision detection and motion control scenarios. Other sources of human data such as eye movement, VR headset's orientation, field-of-view, and heart rate, have shown
several benefits in enhancing user expectations [8, 9, 10]. For example, VR motion sickness, i.e., a common issue in VR applications in which the user's brain receives conflicting signals about self-movement between the physical and virtual worlds, can be eliminated by analyzing eye movement and heart rate of the user to adjust the display settings [8, 9].
As analyzed in the aforementioned works, the utilization of data collected from the Metaverse users can be considered a potential approach towards a robust and individualized Metaverse world. However, there are fundamental challenges that need to be addressed to fully realize such an individualized Metaverse world. One of the main challenges is how we can effectively use such a massive amount of data produced from human activities, given the practical constraints of computing, communications, and energy resources. For example, to make VR headsets/equipment less bulky, the battery pack should be smaller, but this might limit the overall energy for local processing and communications. This also means that fewer antennas and sensors can be mounted, thus limiting the communications and sensing capabilities. Furthermore, each conventional sensing technique requires a specific type of sensor, e.g., camera [4], finger sensor [6], IMU sensor [5], heart rate sensor [11], and joystick [12], which may hinder the scalability and standardization of the system. For example, in some Metaverse applications such as real-time virtual fighting, fitness, and dance gaming, heart rate tracking information can help to adjust the tempo and intensity of the music, while body movement tracking information can provide more accurate and precise control over the user's dance movements. However, it is challenging to exploit, combine, and synchronize such diverse information sources from different kinds of sensors, e.g., heart rate sensors and IMU sensors.
Unlike most of the existing works in the literature, we introduce a new source of human data from users' brain activities that is expected to be a paradigm shift in the way we interact in the Metaverse. The brain is the most complex organ in the human body, encoded with the richest information that reflects an individual's cognitive capabilities and perception. It has been reported that brain signals, such as electroencephalography (EEG), contain underlying information about heart rate, limb movement, intentions, emotions, and eye movement [11, 13, 14]. By utilizing brain signals from users, we can potentially develop a human-centric Metaverse that authentically and individually represents humans through their brain activities, while significantly reducing the number of external sensors attached to the human body. Moreover, carrying brain signals over communication channels such as wireless mediums can open up new areas of application, such as brain communications, imagined speech, and semantic communications, enabling users to communicate with one another through their thoughts, e.g., [15]. To this end, Brain-Computer Interface (BCI) technology [16] provides a neural pathway between the human brain and the outside world through sensors attached to the human scalp, enabling the recording of brain activities and connection to computing units such as remote servers or personal computers. As such, BCI is considered an enabler for the development of the human-centric Metaverse in which human activities and thoughts can be reliably mirrored and synergized.
### _Related Works_
Applications of BCI to the Metaverse are still in their infancy. In [15], the authors proposed a framework for imagined speech communication in home control applications using BCI. The user's EEG signals were extracted and processed using a locally deployed SVM classifier to detect the user's commands. In another study [12], the authors developed a gaming platform for teleportation in the Metaverse, where the user's commands were directly translated from their EEG signals. This approach enables users to control tasks in the virtual environment without the use of any gaming tools such as a joystick. The results also showed that participants did not experience motion sickness, as the negative effects of the locomotion movements were eliminated by using EEG signals instead of a joystick controller. It is worth noting that both studies in [12, 15] were pseudo-online and supported by a wired connection between a computer and BCI headsets. The local computer was responsible for both processing and providing VR experiences. The framework is hence not practical for a scalable Metaverse system due to the mobility and computational limitations of the local computing unit.
The mobility, scalability, and computation limitations of [12] and [15] could be addressed by utilizing wireless BCI headsets [17] connecting to a centralized server or an edge computing unit with much more abundant computational power. In this way, the remote server and the users can exchange information with each other through uplink and downlink wireless channels. For instance, the authors in [19] assumed that delay perception in a wireless network depends on human factors such as age, gender, and demographic, and developed a learning model to predict the perception delay of human users. The learning model can then allocate radio resources to the users and thus help to minimize the system's delay, constrained by the brain characteristics. A limitation of [19] is the lack of justification for the brain signals, as the authors used delay feedback data from a conventional setting of a mobile network testbed. Furthermore, the brain signals used in [19] were simulated data and oversimplified, making it infeasible to learn human behaviors from their actual brain activities, e.g., EEG signals. More importantly, the joint problem of predicting human activities and resource allocation has not been addressed in [19]. In reality, these two problems are strongly correlated. For instance, the experience of users can be degraded by either inappropriate resource allocation policies or incorrect predictions of brain activities. Inappropriate resource allocation policies can directly cause system delay and motion sickness for the Metaverse users. The joint problem is even more critical when massive data of users are exchanged in future wireless systems, i.e., 5G and 6G, with ultra-reliable and low-latency requirements.
In our previous work [20], we proposed a BCI-enabled Metaverse system over wireless networks to address the limitations of [19]. Unlike [19], our work in [20] successfully predicted human behaviors from their actual brain signals, i.e., EEG signals. However, another important issue of incorporating BCI into the Metaverse that has not been considered
in all the above works is the neurodiversity of the brain signals among different users. Unlike conventional human data, brain signals are highly individual. In other words, continuous signals such as EEG might differ across users in certain scenarios. This inevitable neurodiversity has recently been reported in [21, 22, 23], which designed robust and scalable classification frameworks for BCI systems.
Given the above, we summarize three major challenges of using BCI for the Metaverse that have not been well addressed in the literature as follows:
* First, given the noisy wireless environments with uplink communication bottlenecks, how can the system precisely predict human behaviors from their brain signals? This requires not only highly accurate classifiers for brain signals but also effective resource management schemes to alleviate the inherent problems of wireless communications (e.g., fading, interference).
* Second, given the sophisticated brain signals, how to optimally address the correlation between (i) predicting user behaviors and (ii) minimizing system delay? Such a joint optimization problem is challenging as it must take into account the noisy wireless environment, low-latency requirement of VR and 5G applications given the dynamics/uncertainty of the Metaverse users' demands and activities.
* Third, how to tackle the neurodiversity of the brain signals which impedes the scalability and robustness of the Metaverse system involving multiple users?
### _Contributions_
To address the aforementioned challenges, this article proposes a novel wireless/over-the-air Brain-Computer Interface (BCI) framework to assist the creation of virtual avatars for human representation/interaction in the Metaverse. First, we propose a novel architecture that allows the Metaverse users to interact with the virtual environment while sending the brain signals on uplink wireless channels to Wireless Edge Servers (WES). By utilizing integrated VR-BCI headsets, we can produce a rich source of human data besides conventional sensing techniques for heart rate measurement, eye tracking, or wearable sensors on limbs. From the collected brain signals, a WES with much more computing power than the headsets can synthesize and hence orchestrate Digital Avatars (DAs) to enhance Metaverse users' Quality-of-Experience (QoE). The DAs with up-to-date brain signals can further predict user behaviors such as their physical actions, e.g., moving hands and feet [24]. With the prediction ability, the DAs can act as intelligent interfaces to support and/or detect movements, decisions, and emotions of the Metaverse users.
In order to solve the first challenge discussed in Section I-B, we use a real-world BCI dataset [25] to validate the practicability of our system, given the noise of the wireless networks and the low-latency requirements of VR applications. To address the second challenge, i.e., joint prediction and resource allocation, we first formulate the QoE maximization problem subject to practical delay and prediction accuracy requirements. Even neglecting the dynamics/uncertainty of the wireless and users' environment, the resulting problem is a mixed integer program that is challenging to solve due to the strong correlation between the resource allocation and brain signal prediction problems. To tackle it, we leverage the recent advances of the actor-critic architecture [26] and the deep convolutional neural network [27] to jointly optimize the system's resources and predict actions of the users based on their brain signals. To address the third challenge of neurodiversity, we develop a novel meta-learning algorithm that makes the system robust against the increasing number of Metaverse users. Our main contributions are summarized as follows:
* We introduce a novel over-the-air BCI-enabled Metaverse system. By collecting and analyzing the brain signals of the users, the system can create intelligent DAs that serve as a neural interface for the Metaverse users. The intelligent DAs can detect user movement such as hands and feet, thus enabling more sophisticated gesture detection applications such as gaming, virtual object control, and virtual teleportation in the Metaverse.
* We propose an innovative hybrid learning algorithm to address the mixed decision-making (i.e., radio and computing resource allocation optimization) and classification problem (i.e., predicting user behaviors based on the brain signals). Unlike conventional optimization approaches which separately solve these sub-problems, our novel learning framework with deep neural networks directly optimizes the user QoE with practical constraints of system's delay and prediction accuracy of the brain signals. The proposed hybrid learning algorithm can jointly (i) predict the actions of the users given the brain signals as the inputs and (ii) allocate the radio and computing resources to the system so that the VR delay of the system is minimized. As a result, our approach is more applicable to practical BCI-enabled Metaverse systems with users' dynamic demand as well as strict VR delay requirements.
* We develop a highly effective meta-learning algorithm to address the neurodiversity in the brain signals. By using a novel meta-gradient update process, the proposed meta-learning can better recognize the neurodiversity in the brain signals, compared with the hybrid learning algorithm. Extensive experiments with real-world BCI datasets show that our proposed solution can achieve a prediction accuracy of up to \(82\%\). More importantly, the simulation results empirically show that our proposed meta-learning algorithm is more robust against the neurodiversity, effectively alleviating the deterioration in prediction accuracy as the number of Metaverse users increases.
* We conduct comprehensive evaluations on the real-world BCI dataset together with a practical VR rendering process at the WES. The results show that our proposed hybrid learning and meta-learning algorithms can not only provide low delay for VR applications but also can achieve high classification accuracy for the collected brain signals. More interestingly, our proposed framework can work well with the brain signals distorted by noise
when the Metaverse users and the WES communicate with each other over the noisy channels.
The rest of our paper is organized as follows. In Section II, we describe our system model in detail. In Section III, we propose two novel algorithms that can effectively address the mixed decision-making and classification problem of our system. Section IV presents comprehensive evaluations of our proposed framework with real-world BCI datasets. Conclusions are drawn in Section V.
## II System Model
Our proposed BCI-enabled Metaverse system model is illustrated in Fig. 1. The system consists of (i) a Wireless Edge Server (WES) and (ii) \(K\) users equipped with integrated VR-BCI headsets, e.g., Galea headsets [28]. Each integrated VR-BCI headset can extract brain activities from \(J\) channels (i.e., corresponding to \(J\) electrodes of the headset) from the user and provide VR services for the user. Each user is associated with a Digital Avatar (DA). A controller is deployed at the WES to jointly allocate the system's resources and predict the user behaviors. We later describe the deployment of the controller in our proposed algorithms in Section III. The operation of our proposed system includes six main steps as illustrated in Fig. 1. Details of the system operation are as follows.
### _System Operation_
At each time step \(t\), each user sends BCI signals1\(\mathbf{e}_{k}(t)\) to the WES via uplink channels. The BCI signal \(\mathbf{e}_{k}(t)\in\mathbb{R}^{J\times W}\) is a \(J\times W\) matrix, where \(J\) is the number of BCI channels and \(W\) is the number of collected BCI signal samples per channel. At the end of the sampling interval of time step \(t\), the WES collects a set of BCI signals, i.e.,
Footnote 1: We use "BCI signals" to denote the brain signals after being monitored and extracted from the human brain. The reason is that the BCI signals might be slightly different depending on the standard of the BCI system, e.g., the 10-10 and 10-20 international systems [29]. The details of the BCI system and BCI signals are discussed later.
\[\mathbf{e}(t)=\{\mathbf{e}_{1}(t),\mathbf{e}_{2}(t),\ldots,\mathbf{e}_{K}(t) \}\in\mathbb{R}^{K\times J\times W}. \tag{1}\]
The above process corresponds to step 1 in Fig. 1. Once the WES obtains BCI signals from the users, it pre-processes VR content for the \(K\) users and monitors the wireless channel state and the internal computing state at the same time (step 2). The pre-processing can be Field-of-View (FoV) rendering that is personalized for each user [8]. For example, a user may be interested in a particular spatial region in the Metaverse while he/she rarely explores other regions. As a result, pre-processing FoVs for the users can not only save computing resources for the users but also reduce the amount of information transmitted over the wireless links [20, 30]. For this, the WES monitors the total computing load of its multi-core CPUs, denoted by \(\mathbf{u}(t)\in\mathbb{R}^{N}\) and defined by:
\[\mathbf{u}(t)=\{u_{1}(t),u_{2}(t),\ldots,u_{N}(t)\}, \tag{2}\]
where \(N\) is the number of CPUs of the WES and \(u_{n}(t)\in(0,1)\) is the computing load of the \(n\)-th CPU. The WES then analyzes the current state of the system and calculates the best policy, i.e., computing resource allocation for VR pre-processing and radio/power resource allocation for the uplink channels in the next step.
Next, the WES updates the collected BCI signals for \(K\) DAs (step 3). We assume that each DA keeps up-to-date BCI signals of a particular user. As such, the creation of intelligent human-like DAs can be easily deployed, maintained, and improved. All information, including wireless channels, computing resources, and brain signals, will be used as inputs for training the controller (step 4). The controller is a learning model that simultaneously performs two tasks: (i) allocating radio and computing resources of the system and (ii) classifying BCI signals into different actions of the users. We describe our controller in detail later in Section III. The controller then outputs a policy to increase the QoE of users (step 5). Finally, the WES delivers personalized VR services to the users via downlink channels (step 6). The system repeats the above steps in the next time step \(t+1\). As observed from Fig. 1 and the above steps, the neural interface, i.e., integrated VR-BCI headset, can transmit brain signals over wireless channels and allow the WES to deploy VR services that are personalized for the users. For example, by monitoring the EEG signals, the WES can detect VR sickness [31] or emotional changes [9] of the users and then adjust the virtual environment's settings accordingly to eliminate such effects.
Fig. 1: Illustration of our proposed over-the-air BCI-enabled Metaverse system. A Wireless Edge Server (WES) runs Metaverse applications, supporting VR experiences for \(K\) users. These Metaverse users are equipped with the integrated VR-BCI headsets. \(K\) Digital Avatars (DAs) are maintained in Metaverse to support real-time recommendation and enhance the user QoE.
To evaluate the system's performance, we construct a QoE metric that is a function of (i) the round-trip VR delay of the users and (ii) the accuracy of classifying the BCI signals at the WES. The round-trip VR delay is the latency between the time the user requests VR content from the WES (step 1) and the time the user receives the requested VR content displayed in his/her integrated VR-BCI headset (step 6). The accuracy of classifying BCI signals is obtained by the controller at the WES to predict the actions of the users based on the collected BCI signals. We select the VR delay and classification accuracy as our main metrics because they have been commonly used to design frameworks that eliminate potential VR sickness or fatigue of the users [8, 10, 32]. Moreover, we consider the classification setting on the BCI signals because if we can successfully predict the actions of the users, it is possible to extend the setting to a general scenario in the Metaverse where intelligent human-like DAs can accurately behave like humans with controlled permissions, e.g., imagined speech communications [15], adaptive VR environment rendering [31], and anomalous states and error-related behaviors detection [33]. In the sequel, we formally construct the QoE metric by deriving the round-trip VR delay and BCI classifier's accuracy.
### _Round-trip VR delay_
We consider that the round-trip VR delay consists of (i) the processing latency at the WES, (ii) the downlink transmission latency, and (iii) the uplink transmission latency. For the uplink, we use an orthogonal frequency division multiple access (OFDMA) technique in which each user occupies a radio resource block, while the downlink is a broadcast channel [35]. Since most of the computation is shifted to the WES, we assume that the latency at the user headsets is negligible. Accordingly, the round-trip VR delay of user \(k\) at time step \(t\) is calculated by:
\[D_{k}(t)=\frac{l_{k}^{U}}{r_{k}^{U}(t)}+d_{k}(t)+\frac{l_{k}^{D}}{r_{k}^{D}(t )}, \tag{3}\]
where \(l_{k}^{U}\) and \(l_{k}^{D}\) are the lengths of the data packets to be transmitted over the uplink and downlink, respectively; \(r_{k}^{U}(t)\), \(r_{k}^{D}(t)\) are the uplink and downlink data rates between the user \(k\) and the WES, respectively; and \(d_{k}(t)\) is the processing latency, e.g., pre-rendering the FoVs, at the WES. The processing delay of the WES depends on the process running in the WES and the CPU capacity of the WES. In our setting, we consider that the WES is running an FoV rendering application. We assume that the WES is equipped with a multi-core computing unit having sufficient computing resources for all the users. Our setting is illustrated in Fig. 2. At time step \(t\), the WES measures its current available CPU state \(\mathbf{u}(t)\). Let \(\tau_{k}(t)\in(0,1)\) denote the portion of \(u_{n}(t)\) (i.e., computing load of the \(n\)-th CPU) that is used for pre-rendering the FoV for the \(k\)-th user. Once \(u_{n}(t)\) and \(\tau_{k}(t)\) are obtained, the pre-processing delay of the WES for rendering the FoV for the \(k\)-th user is calculated by:
\[d_{k}(t)=\frac{1}{\tau_{k}(t)u_{n}(t)\upsilon}, \tag{4}\]
where \(\upsilon\) (Hz) is the CPU capacity, i.e., the total number of computing cycles, of the WES. The uplink data rate for user \(k\) is defined as follows:
\[r_{k}^{U}(t)=\sum_{m\in\mathcal{M}}B^{U}\rho_{k,m}(t)\log_{2}\Big{(}1+\frac{p _{k}(t)h_{k}(t)}{I_{m}+B^{U}N_{0}}\Big{)}, \tag{5}\]
where \(\mathcal{M}\) is the set of radio resource blocks, \(p_{k}(t)\) is the transmit power of the user \(k\), and \(h_{k}(t)\) is the time-varying channel gain between the WES and user \(k\). \(\rho_{k,m}(t)\in\{0,1\}\) is the resource block allocation variable. \(\rho_{k,m}(t)=1\) if the resource block \(m\) is allocated to user \(k\). Otherwise \(\rho_{k,m}(t)=0\). \(I_{m}\) is the multi-user interference from users who are also using the resource block \(m\) from nearby WESs. \(B^{U}\) is the bandwidth of each resource block. \(N_{0}\) is the noise power spectral density.
The high interference between multiple users can cause packet errors at the WES. In our work, we consider the packet error rate experienced by the transmission of BCI signals of user \(k\) as [35]:
\[\epsilon_{k}(t)=\sum_{m\in\mathcal{M}}\rho_{k,m}\epsilon_{k,m}, \tag{6}\]
where \(\epsilon_{k,m}=1-\text{exp}\Big{(}-\frac{z(I_{m}+B^{U}N_{0})}{p_{k}(t)h_{k}(t)}\Big{)}\) is the packet error rate over resource block \(m\), with \(z\) being a waterfall threshold [36]. Due to the packet errors, the received BCI signals at the WES can contain noise, e.g., Gaussian noise. Hereafter, we denote the noisy BCI signals received at the WES as \(\hat{\mathbf{e}}(t)\) to differentiate the notation from the raw BCI signals defined in (1).
For the downlink channel, the WES can broadcast the VR content to the users. Therefore, the downlink data rate achieved by the WES is calculated by:
\[r_{k}^{D}(t)=B^{D}\log_{2}\Big{(}1+\frac{P_{B}h_{k}(t)}{I_{D}+B^{D}N_{0}} \Big{)}, \tag{7}\]
where \(P_{B}\) is the transmit power of the WES. \(I_{D}\) and \(B^{D}\) are interference and downlink bandwidth, respectively. Unlike the uplink transmission, the broadcast downlink transmission can significantly reduce packet errors. Moreover, the transmit power of the WES, i.e., \(P_{B}\), is usually sufficient to ensure high signal to interference plus noise ratio (SINR) at the users.
Fig. 2: Illustration of the FoV pre-rendering process at the WES: (a) a selected video frame before rendering and (b) after rendering. The original video is a panoramic video from Youtube2. We use the Vue-VR software [34] to pre-render the FoVs from the given panoramic video. Our local server for running the FoV pre-rendering process is a MacBook Air 2020 with 8GB memory and 2.3 GHz 8-core CPU.
Therefore, we assume that the packet error rate in the downlink transmission is negligible compared with the uplink.
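To make the delay model concrete, the following sketch computes the per-user round-trip VR delay from Eqs. (3)-(5) and (7); the variable names and the assumption that all allocated resource blocks see the same interference are ours.

```python
import math

def uplink_rate(B_U, p_k, h_k, I_m, N0, num_blocks=1):
    # Eq. (5): OFDMA uplink rate of user k over its allocated resource blocks,
    # assuming identical SINR on each block for simplicity.
    return num_blocks * B_U * math.log2(1 + p_k * h_k / (I_m + B_U * N0))

def downlink_rate(B_D, P_B, h_k, I_D, N0):
    # Eq. (7): broadcast downlink rate from the WES to user k.
    return B_D * math.log2(1 + P_B * h_k / (I_D + B_D * N0))

def round_trip_vr_delay(l_U, l_D, r_U, r_D, tau_k, u_n, upsilon):
    # Eqs. (3)-(4): uplink delay + FoV pre-rendering delay + downlink delay.
    d_k = 1.0 / (tau_k * u_n * upsilon)   # processing delay at the WES
    return l_U / r_U + d_k + l_D / r_D
```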
### _BCI Classifier_
Similar to other works in the literature, we assume that the WES has labels for the input BCI signals [27]. We consider a BCI classifier at the WES, denoted by \(\phi\), to be a binary indicator (0 or 1) of whether the predicted output, e.g., predicted hands/feet movement, matches the given labels, denoted by \(\mathbf{l}(t)\). In particular, \(\phi\left(\mathbf{\hat{e}}(t),\mathbf{l}(t)\right)=1\) if the prediction is correct. Otherwise \(\phi\left(\mathbf{\hat{e}}(t),\mathbf{l}(t)\right)=0\). The goal of the WES is to minimize the loss of false detections for the predictor \(\phi\) given the collected BCI signals and labels. In our work, we assume that the amount of label data transmitted via uplink channels is negligible compared with that of the BCI signals. The labels are only scalar values, e.g., 0, 1, and 2, while the corresponding BCI signals are usually sampled at frequencies of 150 Hz or 200 Hz [27]. Formally, we define the loss of the predictor \(\phi\) by a cross-entropy loss as follows:
\[L_{\phi}\Big{(}\mathbf{\hat{e}}(t),\mathbf{l}(t)\Big{)}=-\sum_{c=1}^{C}\phi \Big{(}\mathbf{\hat{e}}(t),\mathbf{l}(t)\Big{)}\log(\varrho_{c}), \tag{8}\]
where \(C\) is the number of possible actions and \(\varrho_{c}\) is the predicted probability of action \(c\), e.g., moving hands/feet. In this work, we consider EEG signals as the case study for BCI signals. However, the extension beyond EEG, e.g., electrocardiogram (ECG) or electromyogram (EMG), is straightforward. We collect the EEG signals from a motor imagery experiment [25]. The dataset in [25] contains EEG signals from 109 participants. Each participant produces data samples from 64 EEG channels with the BCI2000 system [16]. Details of the experiment can be found in [25]. In Fig. 3, we illustrate the EEG signals from three different participants responding to the same instruction in the experiment, i.e., moving their hands and feet. It can be observed from Fig. 3 that the EEG signals of the participants are different in both amplitude and phase. This observation expresses the neurodiversity among different users [24]. In the same considered environment, the BCI signals that reflect the user consciousness are different. Given the individual BCI signals of multiple users, it is very challenging to obtain accurate predictions on the raw BCI signals, let alone the noisy BCI signals received at the WES after the signals are transmitted over a noisy channel. In the following, we design an effective QoE model that can capture the impacts of the noisy BCI signals on the system. We then formulate the problem involving (i) a classification problem and (ii) a decision-making problem.
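Read as a standard multi-class objective over the \(C\) possible actions, the loss in (8) and the correctness indicator \(\phi\) can be sketched as follows; the network producing the class scores and the tensor shapes are illustrative assumptions.

```python
import torch.nn.functional as F

def bci_classification_loss(action_logits, labels):
    # action_logits: (batch, C) unnormalized scores for the C possible actions
    # labels:        (batch,)   integer action labels l(t)
    # Standard multi-class cross-entropy, our reading of Eq. (8).
    return F.cross_entropy(action_logits, labels)

def prediction_indicator(action_logits, labels):
    # Binary indicator phi: 1 where the predicted action matches the label.
    return (action_logits.argmax(dim=-1) == labels).float()
```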
### _QoE Model and QoE Maximization Problem Formulation_
We consider the QoE of user \(k\), denoted by \(Q_{k}\), as a combination of (i) the round-trip VR delay and (ii) the prediction accuracy for the actions. Therefore, the QoE metric can be expressed as follows:
\[Q_{k}(\boldsymbol{\rho},\mathbf{p},\boldsymbol{\tau},\phi)= \frac{1}{T}\sum_{t=1}^{T}\Big{(}\eta_{1}\psi\big{(}D_{k}(t),D_{max}\big{)}+\\ \eta_{2}\phi\big{(}\mathbf{\hat{e}}(t),\mathbf{l}(t)\big{)}\Big{)}, \tag{9}\]
where \(\eta_{1}\) and \(\eta_{2}\) are the positive weighting factors; and \(T\) is the time horizon. \(\psi(\cdot)\) is also a binary indicator which is defined as follows:
\[\psi\Big{(}D_{k}(t),D_{max}\Big{)}=\begin{cases}1&\text{if }D_{k}(t)\leq D_{max},\\ 0&\text{otherwise,}\end{cases} \tag{10}\]
where \(D_{max}\) is the maximum allowed round-trip VR delay for user \(k\). Recall that the BCI classifier \(\phi(\mathbf{\hat{e}}(t),\mathbf{l}(t))\) is a binary indicator that takes value 1 if the classification is correct. The use of binary indicators with positive weighting factors enables the multi-objective QoE model and eliminates the effects of the differences in measurement scales, i.e., time and accuracy. Similar types of multi-objective QoE models have been widely used in the literature [38, 39].
By defining the QoE model as a linear combination of the two binary indicators, we can capture the impacts of (i) a wrong classification in (8) and (ii) exceeding the VR delay requirement in (10). For example, an incorrect classification together with an exceeded VR delay caused by the WES leads the QoE value to be 0. If both indicators are equal to 1, the QoE value is equal to \(\eta_{1}+\eta_{2}\). By controlling the weighting factors \(\eta_{1}\) and \(\eta_{2}\), one can determine the priorities of such factors, i.e., delay or accuracy, in specific applications. For example, in applications that require highly accurate classifications of BCI signals, such as imagined speech communication [40], the value of \(\eta_{2}\) can be increased. Likewise, in delay-sensitive applications, the value of \(\eta_{1}\) can be increased. The impacts of these weighting factors on the QoE of the users will be further discussed in Section IV.
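A minimal sketch of the resulting per-user QoE computation in (9)-(10) is given below, assuming the per-time-step delays and classification-correctness flags have already been collected.

```python
def user_qoe(delays, correct_flags, D_max, eta1, eta2):
    # Eqs. (9)-(10): average over T time steps of
    # eta1 * 1{D_k(t) <= D_max} + eta2 * 1{classification correct at step t}.
    T = len(delays)
    total = sum(eta1 * (d <= D_max) + eta2 * c for d, c in zip(delays, correct_flags))
    return total / T
```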
In this paper, we aim to maximize the average QoE of users, given the following constraints: (i) power at the WES and user headsets, (ii) wireless channels, and (iii) computational
Fig. 3: Example of EEG signals recorded from three different BCI participants responding to the same experimental condition (left figure). The EEG signals are extracted from the same channels, i.e., C3, CP3, C4, CP4, Cz, and CPz, denoted as red circles on the surface of the scalp in an 10-10 international system (right figure). These channels are responsible for hands and feet movement [37]. The instructions to the participants are placed at the time 0 (marked by the vertical dashed line). The neurodiversity, i.e., subjective differences in the same environment, reflect the differences in amplitudes and phases of the BCI participants.
capability of the WES. Formally, our optimization problem is defined as follows:
\[\mathcal{P}_{1}: \max_{\mathbf{\rho},\mathbf{p},\mathbf{\tau},\phi} \frac{1}{K}\sum_{k\in\mathcal{K}}Q_{k}(\mathbf{\rho},\mathbf{p},\mathbf{ \tau},\phi)\] (11a) s.t. \[\rho_{k,m}(t)\geq 0, \tag{11b}\] \[\sum_{k\in\mathcal{K}}\rho_{k,m}(t)=1,\] (11c) \[0\leq p_{k}(t)\leq P_{max},\] (11d) \[\sum_{k\in\mathcal{K}}\tau_{k}(t)\leq 1,\tau_{k}\geq 0,\] (11e) \[\phi\big{(}\mathbf{e}(t),\mathbf{l}(t)\big{)}\in\{0,1\}, \tag{11f}\]
where \(P_{max}\) is the maximum transmission power of the integrated VR-BCI headsets. \(\mathbf{\rho}=\{\rho_{k,m}(t);\forall k\in\mathcal{K},\forall m\in\mathcal{M}\}\) is the resource block allocation vector, \(\mathbf{\tau}=\{\tau_{k}(t);\forall k\in\mathcal{K}\}\) is the computing resource allocation vector, and \(\mathbf{p}=\{p_{k}(t);\forall k\in\mathcal{K}\}\) is the power allocation vector. In the problem (11) above, (11b) and (11c) are the constraints for radio resource block allocation, (11d) is the constraint for the transmit power, (11e) are the constraints for the computing resource allocation at the WES, and (11f) is the BCI classifier constraint.
Note that the maximization of \(Q_{k}\) in (11) results in reducing the round-trip VR delay \(D_{k}(t)\) and the BCI prediction loss \(L_{\phi}\). Our considered problem involves not only a classification problem, i.e., the classification of BCI signals in (8), but also a decision-making problem, i.e., the channel, power, and computing resource allocation problem in (4) and (5). The formulated problem \(\mathcal{P}_{1}\) is a Mixed-Integer Linear Programming (MILP) problem in which the power allocation variables \(\mathbf{p}\) are continuous while the resource block allocation and classification decision variables, i.e., \(\mathbf{\rho}\) and \(\phi\), are integer variables. Thus, it is challenging to obtain the optimal solution for \(\mathcal{P}_{1}\). Unlike conventional optimization approaches such as [19], which can only separately solve the sub-problems, i.e., delay perception and resource allocation, and thus cannot enable real-time optimization of user QoE, we propose a novel hybrid learning algorithm to jointly predict the user behaviors based on the BCI signals and allocate radio and computing resources in the system. As a result, our approach is more robust against the dynamics of the user demand as well as the uncertainty of the wireless channels. Later, we propose a highly effective training algorithm based on the idea of meta-learning. With the proposed meta-learning algorithm, we can further enhance the prediction accuracy by dealing with the neurodiversity of the brain signals among multiple users. In the next section, we propose two new algorithms that we refer to as the "Hybrid learner" and the "Meta-learner".
## III Learning Algorithms For Maximizing QoE
As we described in Section II, the controller deployed at the WES is responsible for learning to optimize the system's resources and predict the users' behaviors. For this purpose, our proposed algorithms, i.e., hybrid learning and meta-learning, can be deployed at the WES as simply as pre-installing software. We first propose the Hybrid learner to effectively solve the problem \(\mathcal{P}_{1}\) in (11). Next, we propose the Meta-learner as an improvement of the Hybrid learner to solve the problem \(\mathcal{P}_{2}\) in (23), which will be described later in this section. The problem \(\mathcal{P}_{2}\) is the extended version of \(\mathcal{P}_{1}\) in which the neurodiversity among BCI users is taken into consideration.
### _Hybrid Learning Algorithm_
We first propose a Hybrid learner which is illustrated in Fig. 4. Our Hybrid learner consists of three deep neural networks that are (i) an actor network, (ii) a critic network, and (iii) a convolutional network. The inputs for training the deep neural networks are empirical data from the BCI signals, the wireless channel state and the computing load of the WES. The output of the proposed algorithm is the policy to jointly allocate power for the users' headsets, allocate radio resources for the uplink channels, and predict actions of the users based on the BCI signals. Let \(\mathbf{\theta}\), \(\mathbf{\Theta}\), and \(\mathbf{\varphi}\) denote the parameters, i.e., weights and biases, of the actor network, critic network, and convolutional network, respectively. Our proposed training process for the Hybrid learner is illustrated in Algorithm 1. The operation of the algorithm is as follows.
The parameters of the deep neural networks are first initialized randomly (line 1 in Algorithm 1). At each training iteration \(i\), the Hybrid learner first collects a set of trajectories \(\mathcal{D}_{i}\) in (15) by running the current policy \(\Omega(\mathbf{\theta}_{i},\mathbf{\Theta}_{i},\mathbf{\varphi}_{i})\) for \(O\) time steps. The trajectories \(\mathcal{D}_{i}\) contain three main parts: (i) the observation from the environment, (ii) the action taken by the WES based on the observation from the environment, and (iii) the QoE feedback from the \(K\) users (line 3). The observation from the environment is a tuple of three states: the channel state \(\mathbf{h}\), the computing load of the WES \(\mathbf{u}\), and the BCI signals from the users \(\hat{\mathbf{e}}\). The action of the WES is a tuple of four parts: the radio resource block allocation vector \(\mathbf{\rho}\), the power allocation vector \(\mathbf{p}\), the computing resource allocation vector \(\mathbf{\tau}\), and the output of the BCI classifier \(\phi\). Based on the collected trajectories, the objective functions for updating the deep neural networks are calculated as follows. The advantage estimator \(\hat{A}_{i}\) is defined in (16) [26], where \(\lambda\) is the actor-critic tradeoff parameter and \(\delta_{o}\) is the temporal-difference error, which is defined by:
\[\delta_{o}=\frac{1}{K}\sum_{k\in\mathcal{K}}Q_{k,o}+\gamma V(\mathbf{h}_{o+1}, \mathbf{u}_{o+1},\mathbf{\hat{e}}_{o+1})-V(\mathbf{h}_{o},\mathbf{u}_{o}, \mathbf{\hat{e}}_{o}), \tag{12}\]
where \(\gamma\in(0,1)\) is the discount factor and \(V(\cdot)\) is the value function of the given observation, i.e., output of the critic network. Once the advantage estimator is obtained, the decision-making objective can be calculated by \(J(\hat{A}_{i})\) as defined in (17). In the calculation of \(J(\hat{A}_{i})\), we adopt a policy-clipping technique from [26]. In particular, the policy clipping function \(f_{c}(\varepsilon,\hat{A}_{i})\) is defined by:
\[f_{c}(\varepsilon,\hat{A}_{i})=\begin{cases}(1+\varepsilon)\hat{A}_{i},&\text{if } \hat{A}_{i}\geq 0,\\ (1-\varepsilon)\hat{A}_{i},&\text{if }\hat{A}_{i}<0.\end{cases} \tag{13}\]
With the policy clipping function, the policy update at each gradient step is kept within a bounded range, which makes the training more stable. Next, the classification loss \(L_{\phi}(\mathbf{\hat{e}},\mathbf{l};\boldsymbol{\varphi})\) is calculated based on (8) with the convolutional network (line 6).
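As an illustration, the following NumPy sketch implements the temporal-difference error of (12), an advantage estimator for \(\hat{A}_{i}\), and the clipping function \(f_{c}\) of (13). Since (16) is not reproduced here, the sketch assumes a standard generalized-advantage-estimation accumulation with parameters \(\gamma\) and \(\lambda\); the function and variable names are illustrative rather than part of the actual implementation.

```python
import numpy as np

def td_errors(qoe, values, gamma=0.99):
    """Temporal-difference errors delta_o of (12).

    qoe    : average QoE (1/K) * sum_k Q_{k,o} for o = 0, ..., O-1
    values : critic values V(h_o, u_o, e_o) for o = 0, ..., O (one extra bootstrap value)
    """
    return qoe + gamma * values[1:] - values[:-1]

def advantage_estimator(deltas, gamma=0.99, lam=0.95):
    """GAE-style advantage A_hat: discounted, lambda-weighted sum of TD errors."""
    adv = np.zeros_like(deltas)
    running = 0.0
    for o in reversed(range(len(deltas))):
        running = deltas[o] + gamma * lam * running
        adv[o] = running
    return adv

def f_clip(advantage, eps=0.2):
    """Policy-clipping function f_c(eps, A_hat) of (13)."""
    advantage = np.asarray(advantage)
    return np.where(advantage >= 0, (1 + eps) * advantage, (1 - eps) * advantage)
```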
With all the obtained objective and loss functions, the deep neural networks' parameters are finally updated as follows. The actor network's parameters are updated in (18) (line 7), where \(\alpha_{a}\) is the learning step size of the actor network and \(\nabla\) is the gradient of the function, which can be calculated with stochastic gradient descent/ascent algorithms. In our paper, we use Adam as the optimizer for all the deep neural networks. The critic network's parameters are updated in (19) (line 8), where \(\alpha_{c}\) is the learning step size and \(L_{c}(\mathbf{\Theta})\) is the critic loss, which is defined by:
\[L_{c}(\mathbf{\Theta})=\Big{(}V(\mathbf{h}_{o},\mathbf{u}_{o},\mathbf{\hat{e}}_{o})-\frac{1}{K}\sum_{k\in\mathcal{K}}Q_{k,o}\Big{)}^{2}. \tag{14}\]
Finally, the convolutional network's parameters are updated in (20) (line 9) where \(\alpha_{n}\) is the learning step size.
As described above, our hybrid learning algorithm addresses problem \(\mathcal{P}_{1}\) in (11) by maximizing the decision-making objective \(J(\hat{A}_{i})\) in (17) while the classification loss \(L_{\phi}(\mathbf{\hat{e}},\mathbf{l};\boldsymbol{\varphi})\) is minimized in (20). With a training process that splits, computes, and backpropagates the losses through the three deep neural networks, the Hybrid learner can better optimize the compound objective \(Q_{k}(\boldsymbol{\rho},\mathbf{p},\boldsymbol{\tau},\phi)\), which involves distinct learning sub-objectives. In Section IV, we show that this design mechanism can significantly enhance the performance of the system, compared with other current state-of-the-art deep reinforcement learning algorithms.
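A minimal PyTorch sketch of one such update (lines 7-9 of Algorithm 1) is given below. Since (17)-(20) are not reproduced here, the sketch assumes that (18) is a gradient-ascent step on a PPO-style clipped surrogate objective, (19) a gradient-descent step on the critic loss \(L_{c}\) of (14), and (20) a gradient-descent step on a cross-entropy classification loss for (8); it is an illustrative outline, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def hybrid_update(ratio, advantage, value, qoe, logits, labels,
                  opt_actor, opt_critic, opt_cnn, eps=0.2):
    """One Hybrid-learner update over a batch of collected trajectory steps.

    ratio     : pi_theta(a|s) / pi_theta_old(a|s), differentiable w.r.t. the actor
    advantage : advantage estimates A_hat, detached
    value     : critic outputs V(h, u, e_hat), differentiable w.r.t. the critic
    qoe       : per-step average QoE used as the value target, cf. (14)
    logits    : convolutional-network outputs for the EEG segments
    labels    : ground-truth action labels l
    """
    # Gradient ascent on the clipped decision-making objective (actor update).
    surrogate = torch.minimum(ratio * advantage,
                              torch.clamp(ratio, 1 - eps, 1 + eps) * advantage)
    actor_loss = -surrogate.mean()
    opt_actor.zero_grad()
    actor_loss.backward()
    opt_actor.step()

    # Gradient descent on the critic loss L_c of (14) (critic update).
    critic_loss = ((value - qoe) ** 2).mean()
    opt_critic.zero_grad()
    critic_loss.backward()
    opt_critic.step()

    # Gradient descent on the classification loss of (8) (convolutional-network update).
    cls_loss = F.cross_entropy(logits, labels)
    opt_cnn.zero_grad()
    cls_loss.backward()
    opt_cnn.step()

    return actor_loss.item(), critic_loss.item(), cls_loss.item()
```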
### _Meta-learning Algorithm_
Based on our observation in Fig. 3 about the neurodiversity in the brain activities of different BCI users, we are interested in learning a meta-model that jointly optimizes the BCI prediction performance given BCI signals from different distributions, i.e., with different phases and amplitudes. As a result, our optimization problem (11) can now be rewritten as:
\[\mathcal{P}_{2}:\max_{\boldsymbol{\rho},\mathbf{p},\boldsymbol{\tau},\phi}\quad\frac{1}{K}\sum_{k\in\mathcal{K}}Q_{k}(\boldsymbol{\rho},\mathbf{p},\boldsymbol{\tau},\phi) \tag{23a}\]
\[\text{s.t.}\quad(11b)-(11e), \tag{23b}\]
\[\phi_{k}\big{(}\mathbf{\hat{e}}_{k}(t),\mathbf{l}_{k}(t)\big{)}\in\{0,1\}, \tag{23c}\]
where \(\phi_{k}\big{(}\mathbf{\hat{e}}_{k}(t),\mathbf{l}_{k}(t)\big{)}\) is now the BCI classifier of the \(k\)-th user, given the noisy BCI signals \(\mathbf{\hat{e}}_{k}(t)\) and labels \(\mathbf{l}_{k}(t)\) sampled from the \(k\)-th user. As observed from problems \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\), the difference between the two problems lies in constraints \((11f)\) and \((23c)\). In \(\mathcal{P}_{1}\), we consider that the BCI signals share the same distribution and ignore the fact that the BCI signals might differ significantly in amplitude and phase, i.e., neurodiversity [21, 22, 23]. As a result, the trained BCI classifier may achieve higher performance on the BCI signals from one BCI user than on those from another. With the formulated problem \(\mathcal{P}_{2}\), we aim to improve the performance of the BCI classifier, regardless of the neurodiversity, when the number of BCI users increases.
As observed from problem \(\mathcal{P}_{2}\) in (23), a naive solution is to implement \(K\) Hybrid learners for \(K\) BCI classifiers to deal with the diverse distributions. In this way, we have to train and maintain multiple learning models for the single purpose of improving the BCI prediction accuracy. Moreover, this implementation requires a switch operator that selects the optimal model given a random or unknown input of BCI signals, an assumption that might not be feasible in practice [41, Ch. 7]. Maintaining multiple learning models also incurs extra training and maintenance costs, and thus places additional load on the servers/systems. In our work, we aim to train a single Meta-learner that can achieve high prediction accuracy on BCI signals with unknown and diverse distributions.
The proposed meta-learning algorithm is summarized in Algorithm 2. In particular, the detailed training process is as follows. First, the parameters of the three deep neural networks are initialized at random (line 1 of Algorithm 2). Similar to Algorithm 1, the Meta-learner first computes the decision-making objective \(J(\hat{A}_{i})\) (lines 3, 4, and 5 of Algorithm 2). Next, the Meta-learner computes \(w\)-step meta-gradients, denoted by \(\boldsymbol{\tilde{\varphi}}_{k}\), expanded from the current parameters \(\boldsymbol{\varphi}_{i}\) as in (21) (line 7). In (21), \(g_{1},g_{2},\ldots,g_{w}\) are the gradients computed by Stochastic Gradient Descent (SGD) over \(w\) mini-batches of the inputs, i.e., \(\mathbf{\hat{e}}_{k}(t)\). In other words, the inputs \(\mathbf{\hat{e}}_{k}(t)\) are divided into \(w\) mini-batches to compute the \(w\) gradients \(g_{1},g_{2},\ldots,g_{w}\). After that, the actor network and critic network are updated based on equations (18) and (19), respectively (lines 9 and 10). Finally, the convolutional network is updated
with (22), where \(\alpha_{M}\) is the meta-learning step size (line 11). The calculation of meta-gradients and meta-updates in equations (21) and (22) over \(K\) users helps to learn a policy that is equivalent to a distilled policy of \(K\) separate Hybrid learners. We develop our meta-learning algorithm based on a scalable first-order meta-learning algorithm named Reptile [42]. Unlike the Reptile algorithm, which is only applicable to the classification problem, our Meta-learner can deal with the joint decision-making and classification problem, thanks to the hybrid design.
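The sketch below outlines this meta-update in NumPy. It follows the standard first-order Reptile scheme: for each user, the convolutional-network parameters are adapted with \(w\) inner SGD steps over that user's mini-batches (corresponding to (21)), and the meta-parameters are then moved toward the average of the adapted parameters with step size \(\alpha_{M}\) (our reading of (22)); the exact update used in Algorithm 2 may differ in detail.

```python
import numpy as np

def meta_update(phi, user_batches, grad_fn, inner_lr=2e-3, meta_lr=1.0, w=3):
    """Reptile-style meta-update of the convolutional-network parameters phi.

    phi          : flat parameter vector of the convolutional network
    user_batches : dict {user k: list of mini-batches of (EEG segments, labels)}
    grad_fn      : callable(phi, batch) -> gradient of the classification loss (8)
    """
    directions = []
    for k, batches in user_batches.items():
        phi_k = phi.copy()
        for batch in batches[:w]:            # w-step meta-gradient expansion, cf. (21)
            phi_k = phi_k - inner_lr * grad_fn(phi_k, batch)
        directions.append(phi_k - phi)       # displacement toward the user-adapted parameters
    # Meta step: move phi toward the average of the K user-adapted parameter vectors.
    return phi + meta_lr * np.mean(directions, axis=0)
```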
In the next section, we show empirical results illustrating that our proposed hybrid learning algorithm can outperform current state-of-the-art reinforcement learning algorithms in dealing with the decision-making problem. Furthermore, we show that our proposed meta-learning algorithm significantly improves the BCI prediction performance compared with our proposed hybrid learning algorithm. Thanks to the novel meta-learning process, our Meta-learner can understand the neurodiversity at a deeper level than the hybrid learning algorithm, resulting in higher and more robust classification performance under different settings. Note that the proposed meta-learning algorithm only requires additional gradient computations, and the Meta-learner can be deployed at the controller of the WES similarly to the Hybrid learner in Fig. 4.
## IV Performance Evaluation with BCI Datasets
### _Data Preprocessing_
We conduct extensive simulations to evaluate the system performance as follows. For the BCI classification problem, we use a public dataset from [25]. The dataset contains the experiment results of 109 participants. Each participant is instructed to perform an action per experimental run. The actions involve opening/closing the eyes, moving the fists, and moving the feet. In each experimental run, the EEG signals are obtained through 64 EEG channels with the BCI2000 system [16]. The sampling rate is 160 Hz. In our setting, we consider four different actions, i.e., \(C=4\): open eyes, close eyes, close fists, and move feet. In the default setting, we consider BCI signals from three users, as illustrated in Fig. 3. We adopt the data processing pipeline from [43]. In particular, the data processing is as follows. The collected BCI signals of each user have 255,680 data samples. Because EEG signals are temporal data, we split the data stream into different segments and iteratively input the segments into the deep neural networks. Each segment contains 16 EEG samples, which is equivalent to 0.1 second as the sampling frequency is 160 Hz. The overlapping rate between two adjacent segments is set at 50%. After segmentation, the data samples are normalized with the z-score normalization technique [43]. Finally, the data samples are split into a training set and a testing set with a ratio of 80:20. Note that similar to
Fig. 4: Training process for the proposed Hybrid learner at the controller of the WES. The circled numbers denote the corresponding steps as described in Fig. 1 and Section II.
other BCI research works in the literature, we consider a classification setting with a discrete number of actions [27]. Current BCI technologies are not yet ready to capture full human body movement with a high degree of freedom (DoF) because of the complexity of BCI signals. In this work, we only focus on enabling the potential of BCI for the Metaverse, without focusing on realistic avatar/human motion capture techniques; human motion capture is another topic that is outside the scope of our work.
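A NumPy sketch of the segmentation, normalization, and splitting steps described above is given below; assigning each segment the label of its last sample and using a global z-score are simplifying assumptions made for illustration.

```python
import numpy as np

def preprocess(eeg, labels, win=16, overlap=0.5, train_frac=0.8, seed=0):
    """Segment a (n_samples, 64) EEG stream into 0.1 s windows, z-score, and split 80:20."""
    step = int(win * (1 - overlap))                       # 50% overlap -> step of 8 samples
    starts = range(0, len(eeg) - win + 1, step)
    X = np.stack([eeg[s:s + win] for s in starts])        # (n_segments, 16, 64)
    y = np.array([labels[s + win - 1] for s in starts])   # one action label per segment
    X = (X - X.mean()) / (X.std() + 1e-8)                 # z-score normalization
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    split = int(train_frac * len(X))
    return X[idx[:split]], y[idx[:split]], X[idx[split:]], y[idx[split:]]
```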
The Hybrid learner's architecture is set as follows. The actor network and critic network are multilayer perceptron networks (MLPs) that consist of one input layer, two hidden layers, and one output layer. In our default setting, the number of users is set at \(K=3\). In this case, the number of input neurons at the input layer is 6, which corresponds to the total size of the channel state and CPU load vectors, i.e., \(\{h_{1}(t),h_{2}(t),h_{3}(t),u_{1}(t),u_{2}(t),u_{3}(t)\}\). Note that, for simplicity, we only measure the 3 most available CPUs of the WES to feed into the deep neural networks. The number of output neurons at the output layer is 9, which corresponds to the radio resource block allocation, power allocation, and computing resource allocation variables, i.e., \(\{\rho_{1}(t),\rho_{2}(t),\rho_{3}(t),p_{1}(t),p_{2}(t),p_{3}(t),\tau_{1}(t),\tau_{2}(t),\tau_{3}(t)\}\). For the CNN, we use each EEG channel as an input feature for the convolutional network of the Hybrid learner. Thus, we have 64 input features and 4 class labels to train the CNN. We use Adam to optimize the parameters of the deep neural networks. The Meta-learner reuses the same architecture as the Hybrid learner; the difference is that it computes the meta-gradients and meta-updates in equations (21) and (22), respectively. For this, an additional SGD algorithm is used to calculate \(w\) gradient steps with respect to \(w\) mini-batches of the input data, i.e., equation (21).
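The following PyTorch sketch instantiates networks of the sizes described above, together with the Adam optimizers and the learning rates of Table I; the hidden widths and the convolutional kernel size are not specified in the text and are chosen here only for illustration.

```python
import torch
import torch.nn as nn

obs_dim, act_dim, hidden = 6, 9, 64       # {h_1..h_3, u_1..u_3} -> 9 allocation outputs
n_channels, n_classes = 64, 4             # 64 EEG channels as input features, 4 class labels

actor = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                      nn.Linear(hidden, hidden), nn.ReLU(),
                      nn.Linear(hidden, act_dim))
critic = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                       nn.Linear(hidden, hidden), nn.ReLU(),
                       nn.Linear(hidden, 1))
cnn = nn.Sequential(nn.Conv1d(n_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                    nn.Linear(32, n_classes))            # input shape: (batch, 64, 16)

opt_actor = torch.optim.Adam(actor.parameters(), lr=5e-5)    # alpha_a
opt_critic = torch.optim.Adam(critic.parameters(), lr=5e-4)  # alpha_c
opt_cnn = torch.optim.Adam(cnn.parameters(), lr=2e-3)        # alpha_n
```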
For the decision-making problem, i.e., radio and computing resource allocation, we conduct an experiment as illustrated in Fig. 2 to measure the processing latency at the WES. For the uplink and downlink latency, we use Rayleigh fading to simulate the dynamics of the time-varying wireless channel. The number of radio resource blocks is set to \(M=6\). The number of Metaverse users is set at \(K=3\). Note that the number of Metaverse users is fixed at \(K=3\), but the number of BCI users can be greater than that; in such cases, each Metaverse user can be considered as a group of multiple BCI users, given the same amount of radio resources. The details of our parameter settings are shown in Table I.
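For illustration, the sketch below samples a Rayleigh-faded uplink channel and converts it into a transmission delay with a generic Shannon-rate model; it treats \(N_{0}\) as a per-Hz noise density and does not reproduce the exact delay and packet-error expressions of Section II.

```python
import numpy as np

def uplink_delay(bits, p_tx_watt, bandwidth=1e6, n0_dbm_hz=-174, i_dbm=-10, rng=None):
    """Sample |h|^2 for Rayleigh fading, compute the SINR, and return a Shannon-rate delay."""
    rng = rng or np.random.default_rng()
    h2 = rng.exponential(1.0)                              # |h|^2, unit-mean Rayleigh fading
    noise = 10 ** (n0_dbm_hz / 10) / 1e3 * bandwidth       # thermal noise power in W
    interference = 10 ** (i_dbm / 10) / 1e3                # interference power in W
    sinr = p_tx_watt * h2 / (noise + interference)
    rate = bandwidth * np.log2(1 + sinr)                   # achievable rate in bit/s
    return bits / rate                                     # transmission delay in seconds
```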
For comparison with our proposed algorithms, we introduce the following baselines.
* Proximal Policy Optimization (PPO) [26]: PPO is a state-of-the-art reinforcement learning algorithm for decision-making problems with continuous action values. Our Hybrid learner also adopts the actor-critic architecture and policy clipping techniques from PPO to achieve robust performance. In particular, the PPO baseline contains an actor network and critic network. We directly use this architecture to learn the QoE defined in (9). By maximizing the average QoE, the PPO baseline is expected to reduce the loss \(L_{\phi}\) and the round-trip VR delay \(D_{k}\).
* Vanilla Policy Gradient (VPG) [44]: VPG is a classic policy gradient algorithm for decision-making problems with continuous action values. The VPG baseline also uses the actor-critic architecture. However, the VPG algorithm does not have the embedded advantage function and the policy clipping technique.
* Support-Vector Machine (SVM) [45]: SVM is a classic supervised learning algorithm and is a robust benchmark for classification problems. In our simulations, we use SVM as a baseline to evaluate the classification accuracy of our proposed algorithms that are developed on deep neural networks. For a fair comparison, we consider the following setting to give SVM advantages compared with
TABLE I: Parameter settings.

| Symbol | Communication Parameters | Default Settings |
| --- | --- | --- |
| \(M\) | Number of radio resource blocks | 6 resource blocks |
| \(K\) | Number of users | 3 users |
| \(P_{B}\) | Power of the WES | 1 W [35] |
| \(P_{max}\) | Power of the user headset | [-20 : 20] dBm |
| \(B^{U}\) | Uplink bandwidth | 1 MHz [35] |
| \(B^{D}\) | Downlink bandwidth | 20 MHz [35] |
| \(N_{0}\) | Noise | -174 dBm |
| \(I_{m}\), \(I_{D}\) | Interference | -10 dBm |
| \(D_{max}\) | Maximum round-trip delay | 10 milliseconds |
| | **Computation Parameters** | **Default Settings** |
| \(\upsilon\) | Computation capacity of the WES | \([2.3\times 10^{-6}:2.3]\) GHz |
| | Video quality for the VR processing | \(1280\times 702\) pixels |
| | **BCI Parameters** | **Default Settings** |
| | BCI dataset | [25] |
| \(J\) | Number of BCI channels | 64 channels |
| \(C\) | Number of class labels | 4 classes |
| | Number of BCI users | [3 : 7] users |
| | Sampling rate | 160 Hz |
| | **Algorithms' Parameters** | **Default Settings** |
| \(O\) | Trajectories' length | 100 steps |
| \(T\) | Total training time steps | \(3\times 10^{5}\) steps |
| \(\gamma\) | Advantage estimator's parameter | 0.99 [26] |
| \(\lambda\) | Advantage estimator's parameter | 0.95 [26] |
| \(\varepsilon\) | Clip ratio | 0.2 [26] |
| \(\alpha_{a}\) | Actor network's learning rate | \(5\times 10^{-5}\) |
| \(\alpha_{c}\) | Critic network's learning rate | \(5\times 10^{-4}\) |
| \(\alpha_{n}\) | Convolutional network's learning rate | \(2\times 10^{-3}\) |
| \(\alpha_{M}\) | Meta-learning rate | 1.0 [42] |
| \(w\) | Number of meta-gradient steps | 3 steps |
our proposed algorithms. First, we replace the convolutional network in our Hybrid learner with the SVM and keep the actor network and critic network the same as those of the Hybrid learner. As a result, the SVM-based Hybrid learner can still deal with both the decision-making and classification problems. Second, we train the SVM with training data that are collectively fed into its input. In other words, all the training data is stored and reused at the WES. We observe that this training method can significantly boost the performance of the SVM. Otherwise, if we apply the same training method as our proposed algorithms, i.e., the training data at each time step is discarded after being fed into the deep neural networks, the performance of the SVM decreases significantly.
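A minimal scikit-learn sketch of this SVM baseline is shown below: the classifier is refit on all EEG segments accumulated so far, mirroring the store-and-reuse training described above; the RBF kernel and the flattening of each \(16\times 64\) segment are assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import SVC

accumulated_X, accumulated_y = [], []

def svm_baseline_step(new_segments, new_labels):
    """Store the new segments and refit the SVM on all data collected so far."""
    accumulated_X.append(new_segments.reshape(len(new_segments), -1))  # flatten 16 x 64 windows
    accumulated_y.append(new_labels)
    clf = SVC(kernel="rbf")
    clf.fit(np.concatenate(accumulated_X), np.concatenate(accumulated_y))
    return clf
```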
### _Experiment Results_
We first illustrate the training performance of the proposed algorithms and the baselines in Fig. 5. In Fig. 5(a), we can observe the increase in QoE values of all the algorithms during 3,000 training episodes. These results imply that all algorithms have the ability to learn a good policy given the dynamics of the environment. After training, the trained model can be used for testing on test data. In Fig. 5(b), we can observe that the proposed Meta-learner and Hybrid learner can obtain highly accurate predictions on BCI signals. The SVM baseline also achieves performance similar to the Hybrid learner. Specifically, the convergence speed of the SVM baseline is higher than those of the Meta-learner and Hybrid learner because the training data is stored and reused at the WES when we train the SVM baseline. However, despite its high demand for input data, the SVM baseline only converges to an accuracy similar to that of the Hybrid learner. More interestingly, the Meta-learner can achieve higher accuracy with an additional update of 3 meta-gradient steps. In other words, our proposed Meta-learner can achieve the best performance while maintaining limited storage of BCI signals and computing load in the system. The SVM baseline also shows superior performance over the PPO and VPG baselines thanks to its robust classification ability. Another observation on the performance of the PPO and VPG baselines is that they are not effective at dealing with the classification problem with a pure reinforcement learning design. With the number of class labels being \(C=4\), the prediction accuracy of the PPO and VPG baselines is just slightly higher than random prediction, i.e., \(25\%\). Although the maximization of the QoE with reinforcement learning algorithms like PPO and VPG may, in theory, result in reducing the classification loss, the exploration of such algorithms over the observation space leaves the deep neural networks without enough guidance to learn a good policy. This problem is known as the sparse reward problem in popular reinforcement learning settings [44]. In Fig. 5(c), we can observe that all algorithms converge to round-trip delay values that are less than the requirement \(D_{max}=0.01\) second (i.e., 10 milliseconds). Thanks to the reinforcement learning techniques, all algorithms can learn the dynamics of radio and computing resources of the system.
It is noted that the above training results are obtained with the setting \((\eta_{1},\eta_{2})=(1,1)\). In other words, the BCI classification and the radio/computing resource allocation are given equal importance. To evaluate the trade-off between the system's delay (which is directly affected by the resource allocation) and the classification accuracy, we consider different scenarios
Fig. 5: (a) Normalized QoE, (b) classification accuracy, and (c) average round-trip VR delay values.
Fig. 6: CDF values of user QoE in different scenarios to show the trade-off between system’s latency and classification accuracy.
by changing the relative importance between the BCI classification and radio/computing resource allocation in Fig. 6. The Cumulative Distribution Function (CDF) curves are obtained by averaging over 3,000 training episodes. In Fig. 6(a), we can observe that when the BCI classification is more important, i.e., \(\eta_{2}>\eta_{1}\), the QoE of the proposed algorithms is higher than those of the PPO and VPG baselines. For example, at the same CDF value of 0.6, the proposed Meta-learner and Hybrid learner achieve higher QoE values, i.e., 0.89 and 0.82, respectively. With the SVM baseline, although its CDF curve converges to a similar point as that of the Meta-learner, i.e., 0.98, the tail of its CDF starts at higher QoE values than the other algorithms. The reason is that SVM can achieve faster convergence on the same classification problem thanks to the advantages in training that we have discussed above. In Fig. 6(b), when the values of \(\eta_{1}\) and \(\eta_{2}\) are similar, the proposed Meta-learner and Hybrid learner clearly outperform the baselines and achieve performance similar to the SVM baseline. In addition, when \(\eta_{1}\) increases, i.e., \(\eta_{1}>\eta_{2}\) in Fig. 6(c), the CDF curves of the PPO and VPG algorithms shift toward the right and become closer to the CDF curves of the Meta-learner, Hybrid learner, and the SVM baseline. The reason is that when the importance of the radio/computing resource allocation increases, the QoE values are less affected by the classification accuracy and the system's delay contributes more to the QoE. From the three different settings above, we can conclude that our proposed algorithms are more robust than the baseline algorithms. Thanks to the ability to collectively store the training data, the SVM baseline can also achieve robust performance. In the rest of the simulations, we fix \((\eta_{1},\eta_{2})=(1,1)\) while varying other settings.
Next, we vary the maximum power value at the headsets of the users, i.e., \(P_{max}\), to evaluate the impact of the power allocation on the system performance. In Fig. 7(a), we can observe that when the maximum power of the headsets decreases, the prediction accuracy decreases. The reason is that with a low level of power allocated to the radio resource blocks, the SINR at the WES may significantly decrease, resulting in a high packet error rate \(\epsilon_{k}\) in (6). Specifically, our proposed Meta-learner achieves the best classification accuracy under all the considered settings. Similar to the observation from Fig. 5(b), the classification accuracy values obtained by the SVM baseline are similar to those of the Hybrid learner and are much higher than those of the PPO and VPG baselines. In Fig. 7(b), we can observe that the increase of power results in a decrease of the round-trip VR delay. The latency values obtained by our proposed algorithms are the lowest among all algorithms for two main reasons. First, they utilize the state-of-the-art actor-critic architecture and policy clipping techniques of PPO. Second, our new design of forwarding the losses through the actor network, critic network, and convolutional network, as described in Section III, enables the Hybrid learner and Meta-learner to handle the compound objective \(Q_{k}\), which involves distinct learning goals, i.e., BCI classification and radio/computing resource allocation, and thus facilitates the training process.
Furthermore, we evaluate the impact of the computing capacity of the WES on the system performance by increasing the CPU capacity of the WES from \(2.3\times 10^{-6}\) GHz to \(2.3\) GHz. In Fig. 8(a), it can be observed that the increase in the CPU capacity of the WES does not have an impact on the classification accuracy. These results imply that, even with limited CPU capacity, our proposed algorithms with deep neural networks still achieve good predictions on the BCI signals. Unlike our proposed algorithms with advanced architecture designs, the PPO and VPG baselines only obtain around \(30\%\) classification accuracy, which is slightly higher than the chance level of \(25\%\). In Fig. 8(b), we can observe that the increase in CPU capacity results in a decrease of the round-trip VR delay. The reason is that with lower CPU capacity, the WES takes a longer time to pre-process VR contents for the users, thus yielding higher latency. Note that the decrease in round-trip VR delay in Fig. 8(b) is less significant than the decrease in Fig. 7(b). The reason is that the transmit power has a larger impact on the system's delay than the computing capacity.
Finally, we evaluate the classification challenges caused by the diversity of BCI signals. In Fig. 9, we increase the number of BCI users from three to seven. In Fig. 9(a), the classification accuracy of the proposed Hybrid learner significantly decreases when the number of BCI users increases. The reason is that although the proposed Hybrid learner with its advanced architecture can obtain good prediction accuracy, it cannot deal with the diversity of the BCI signals as input data. As explained in Fig. 3, the BCI signals are highly individual, so the responses of different users to the same experimental instruction, e.g., fist movement, differ in both amplitude and phase. Therefore, a conventional convolutional
Fig. 8: (a) Classification accuracy and (b) round-trip VR delay of the algorithms with testing data when the CPU capacity of the WES varies.
Fig. 7: (a) BCI classification accuracy and (b) round-trip VR delay of the algorithms with testing data when the maximum power of the headsets varies.
neural network is not sufficient to learn from the neurodiversity of the BCI signals. The classification accuracy values obtained by the baseline SVM are higher than those of the Hybrid learner. The reason is that the SVM baseline is given advantages during training, in which all the training data is iteratively collected and reused at each training episode. In other words, the SVM baseline is given more training data at each training episode to achieve such highly accurate predictions. However, this comes at the cost of storage and longer training time. Unlike the Hybrid learner and SVM baseline, the proposed Meta-learner shows its capability of learning a distilled model that can better classify BCI signals from different users, reflected by the non-decreasing classification accuracy. Specifically, with up to seven BCI users, our Meta-learner only needs one supporting meta-gradient update from each DA to achieve accuracy similar to that obtained in the case with three BCI users. The results suggest that the proposed Meta-learner is sample-efficient and practical in systems with limited storage capacity. Note that in the multi-person BCI classification setting above, our Meta-learner does not require any additional feature extraction methods to achieve accuracy similar to the results reported in [21, 22, 23]. Moreover, the works in [21, 22, 23] only consider the classification problem for raw and noise-free BCI signals, while we also consider the possible packet errors of transmitting BCI signals over noisy channels. In Fig. 9(b), we can observe that the round-trip VR delay is not affected by the increase of BCI users. Our proposed algorithms also achieve lower latency compared with the baseline algorithms.
## V Conclusion
In this work, we have introduced a novel over-the-air BCI-enabled framework for human-centric Metaverse applications. The Digital Avatars can learn from human brain signals to predict the actions of the users under controlled permissions. In addition, the novel system design enables the WES to jointly optimize the radio and computing resources of the system as well as the classification performance under the dynamics of the wireless environment and Metaverse users' behaviors. This was realized by the proposed hybrid learning algorithm and meta-learning algorithm, which simultaneously address the mixed decision-making and classification problems. The hybrid learning algorithm has shown its effectiveness in handling the mixed decision-making and classification problem of our system, thanks to the novel architecture, which consists of three deep neural networks to split, compute, and backpropagate the losses. We have further proposed the meta-learning algorithm as an improved version of the hybrid learning algorithm to deal with the neurodiversity of the brain signals from different users. Extensive experiments with real-world datasets showed that our proposed algorithms can achieve prediction accuracy of up to 82%. More interestingly, our proposed algorithms also reduced the VR latency of the system, resulting in a potential reduction of VR sickness and enhanced user QoE in future Metaverse applications.
|
2310.08200 | Optimising motion-induced spin transfer | In this paper, the spin transfer between two ferromagnetic insulators is
studied. There is a narrow gap between the ferromagnetic insulators so that
they are weakly interacting with each other. One of the ferromagnetic
insulators is moving at a constant speed while the other is at rest; hence, the
system is out of equilibrium. In the presence of the shearing motion, the
interaction amplitude is periodically modulated at the Doppler frequency. A
unitary transformation allows us to regard the periodic modulation of the
interaction amplitude as an effective potential, which drives the spin
transfer. The amount of the spin current is controlled by the spectral overlap
and the carrier population difference between the two ferromagnetic media. If
the spectra of the two ferromagnets are moderately broadened, the overlap in
the spectral domain increases, enlarging the spin current. However, too much
broadening spoils the spectral overlap and, hence, the spin current. This
implies that there is an optimal condition for maximising the spin transfer. | Daigo Oue, Matsuo Mamoru | 2023-10-12T10:48:06Z | http://arxiv.org/abs/2310.08200v2 | # Optimising motion-induced spin transfer
###### Abstract
In this paper, the spin tunnelling transport between two ferromagnetic insulators is studied. There is a narrow gap between the ferromagnetic insulators so that they are weakly interacting with each other. One of the ferromagnetic insulators is moving at a constant speed while the other is at rest; hence, the system is out of equilibrium. In the presence of the shearing motion, the interaction amplitude is periodically modulated at the Doppler frequency. A unitary transformation allows us to regard the periodic modulation of the interaction amplitude as an effective potential, which drives the spin tunnelling transport. The amount of the spin current is controlled by the spectral overlap and the carrier population difference between the two ferromagnetic media. If the spectra of the two ferromagnets are moderately broadened, the overlap in the spectral domain increases, enlarging the spin current. However, too much broadening spoils the spectral overlap and, hence, the spin current. This implies that there is an optimal condition for maximising the spin transfer.
spin current, tunnelling transport, Doppler effect
## 1 Introduction
Tunnelling transport is a phenomenon that occurs between two closely situated media under the influence of an external bias. It is widely acknowledged as a pervasive nonequilibrium phenomenon that manifests across various fields of study. For example, electron tunnelling transport between a metallic probe and a conducting sample enabled electron tunnelling microscopy, an imaging method that achieves ultra-high resolution at the atomic scale [1]. In addition to conventional electron tunnelling, superconducting tunnelling junctions have also been demonstrated [2] and applied to high-sensitivity sensors for biological, chemical, and physical systems [3, 4, 5]. Engineering photon tunnelling transport between a sharply pointed tip and a sample made it possible to break the diffraction limit and record subwavelength, super-resolution optical images of the sample [6].
In the recently emerging field of spintronics, analogously to the preceding examples, spin tunnelling transport has been demonstrated. This entails the transfer of spin between two closely situated media. Various types of external control are adopted to induce imbalances between the two media. Saitoh et al. adopted microwave irradiation to drive tunnelling spin transport between a ferromagnetic film and a metallic medium [7]. Uchida et al. utilised the temperature difference between a ferromagnet and normal-metal wire(s) to trigger the spintronics analogue of the Seebeck effect (i.e. a current due to a temperature gradient) [8]. Johnson and Silsbee induced a chemical potential difference (spin accumulation) at the interface between a ferromagnetic and a paramagnetic metal to generate a spin current across the interface [9, 10].
Here, we are going to theoretically investigate another possibility to drive spin tunnelling transport. Our proposal is inspired by a series of studies on noncontact friction, which is a momentum transfer between
two closely positioned objects. Typically, in the theoretical study of noncontact friction, a 'probe-sample' type setup is considered, where the sample is moving at a constant speed \(v\), and the probe is placed near the sample. As they are sufficiently close to each other, fundamental excitations in the probe and the sample can mutually interact, leading to linear momentum transfer between them, which is nothing but a frictional force without any physical contact. Various excitations mediating the momentum transfer have been investigated: magnetostatic [11, 12, 13], electrostatic [14, 15, 16], and radiative fields [17, 18, 19]. In this work, we focus on spin angular momentum transfer rather than linear momentum transfer in such a 'probe-sample' setup.
Our setup comprises two ferromagnetic insulators, which are magnetised in the \(z\) direction, with a narrow vacuum gap in between (see FIG. 1). One of the two ferromagnets is moving at a constant speed while the other is at rest. We are going to investigate spin transfer from the moving to the stationary magnet. The amount of spin transfer can be evaluated by the time variation of the total spin in the stationary medium,
\[I_{\text{spin}}=\int\frac{\partial\left\langle S_{\text{R}}^{z}(x)\right\rangle }{\partial t}\,\text{d}x=\frac{1}{i\hbar}\int\left\langle[S_{\text{R}}^{z}(x), H]\right\rangle\text{d}x\,, \tag{1}\]
where we introduced spin operators \(S^{x,y,z}\) satisfying the angular momentum algebra (i.e. \([S^{j_{1}},S^{j_{2}}]\;=\;i\epsilon_{j_{1}j_{2}j_{3}}S^{j_{3}}\)), and \(H\) is the total Hamiltonian of the system given in the following. Note that the average \(\left\langle\ldots\right\rangle\) should be taken with respect to the total Hamiltonian. Recalling that a vacuum gap separates the two magnets composing our system, which are hence weakly interacting with each other, we can split the total Hamiltonian into three parts, \(H\,=\,H_{\text{L}}\,+\,H_{\text{R}}\,+\,H_{\text{int}}\), where we adopt a spin-exchange type tunnelling interaction between the two magnets for the interaction part,
\[H_{\text{int}}=\int\hbar J_{\text{int}}S_{\text{L}}^{+}(x;v)S_{\text{R}}^{-}( x)\,\text{d}x+\text{H.c.}, \tag{2}\]
where we have written the coupling strength \(J_{\text{int}}\) and defined the spin flip operators, \(S^{+}\;=\;S^{x}\,+\,iS^{y}\) and \(S^{-}\;=\;\{S^{+}\}^{\dagger}\). Note that we have explicitly written the sliding velocity in the argument of the spin operator for the left medium. For the moment, we adopt the interaction picture and will specify the bare Hamiltonian for each magnet later,
\[I_{\text{spin}}=\frac{1}{i\hbar}\int\left\langle[S_{\text{R}}^{z}(x),H_{\text {int}}]\right\rangle\text{d}x=2J_{\text{int}}\operatorname{Im}\int\left\langle S _{\text{L}}^{-}(x;v)S_{\text{R}}^{+}(x)\right\rangle\text{d}x\,. \tag{3}\]
Here, the expectation value on the right-hand side, \(\left\langle S_{\text{L}}^{-}S_{\text{R}}^{+}\right\rangle\), should be assessed with special consideration of the fact that our system is pushed out of equilibrium by the persistent, constant motion imposed on the left medium.
## 2 Work done by the sliding motion
In this section, we describe how the forced sliding motion drives the system out of equilibrium. According to the previous studies on noncontact friction, we can expect that there is finite linear momentum
Figure 1: The schematic image of the setup that we will analyse in this work. We have two ferromagnetic insulators which are closely placed and, hence, interacting with each other. One is moving at a constant speed \(v\) (red arrow). We shall focus on the fundamental excitations in the ferromagnetic insulators, magnons (indicated by solid black arrows), and analyse if the shearing motion drives the spin tunnelling transport (black dashed arrow).
transfer between the two magnets, resulting in a force. Working in the slow-velocity regime, we can assume that the force scales linearly with the velocity,
\[F_{\mathrm{f}}=-\gamma v, \tag{4}\]
where \(\gamma\,>\,0\) is the friction coefficient, which can be microscopically determined, as we will show later. As the external force \(F_{\mathrm{ex}}\) should be balanced with the frictional force to maintain the constant motion (\(F_{\mathrm{ex}}=-F_{\mathrm{f}}\)), the work done by the sliding motion per unit of time is
\[W=F_{\mathrm{ex}}\cdot v=\gamma v^{2}>0. \tag{5}\]
This implies that work is perpetually done by the constant motion, causing the system to continuously receive energy and deviate from equilibrium. In other words, we continuously pump energy into the system through the noncontact friction.
## 3 Temporal modulation of the tunnelling coupling by the constant motion
In this section, we shall study how the sliding motion enters the microscopic theory in order to confirm that the system is indeed driven out of equilibrium. As our left medium is moving at a constant velocity \(v\), the spin operator in the laboratory frame is related to the one in a reference frame where the left medium is at rest by applying a boost transformation. Since we work in the slow-velocity regime, we can safely adopt the Galilean boost instead of the Lorentz one (the relativistic correction can be dealt with just by replacing the Galilean transformation with the Lorentz one in the following) to write the spin operator for the moving medium,
\[S_{\mathrm{L}}(x;v)=S_{\mathrm{L}}(x-vt), \tag{6}\]
where the spin operator without the velocity in the argument, \(S_{\mathrm{L}}(x)\), is the one for the left medium at rest.
Respecting the translation symmetry, we employ the Fourier representation,
\[S_{\mathrm{L}}^{+}(x;v) =\int S_{\mathrm{L}k}^{+}e^{ik(x-vt)}\,\mathrm{d}k\,, \tag{7}\] \[S_{\mathrm{R}}^{+}(x) =\int S_{\mathrm{R}k}^{+}e^{ikx}\,\mathrm{d}k\,. \tag{8}\]
As a result, we can write the effective interaction Hamiltonian in the reciprocal space as
\[H_{\mathrm{int}}=\int\hbar J_{k}(t)S_{\mathrm{L}k}^{+}S_{\mathrm{R}k}^{-}\, \mathrm{d}k+\mathrm{H.c.},\quad J_{k}(t)=J_{\mathrm{int}}e^{-i\Delta\omega_{k }t}, \tag{9}\]
where the coupling constant is periodically modulated in time with the Doppler frequency \(\Delta\omega_{k}\,=\,kv\). Note that this recovers the conventional tunnelling coupling Hamiltonian if we set \(v\,=\,0\). We can view
Figure 2: Cyclic modulation of the overlapping between left and right excitations. For the excitations with a wavelength \(2\pi/k\), the overlap between the excitation is cyclically modulated with a period \(T=2\pi/(kv)\) due to the sliding motion.
the phase factor \(e^{-i\Delta\omega_{k}t}\) as a consequence of the cyclic modulation of overlapping between excitations in the left and right media (see FIG. 2). Let us focus on excitations in the left and right media at a given wavenumber \(k\). The right medium is at rest; hence, the waveform of the excitation on the right does not change in time. On the other hand, the left medium is moving at a constant velocity so that the excitation waveform alters in time. Consequently, the overlapping is cyclically modulated in time with a period of \(T\) = \(2\pi/(kv)\). The time dependence of the interaction Hamiltonian serves as the microscopic rationale behind the nonequilibrium nature of our system.
Applying a unitary transformation,
\[H\mapsto UHU^{\dagger}-i\hbar U\frac{\partial}{\partial t}U^{ \dagger}, \tag{10}\]
the periodic modulation of the coupling strength can be reconsidered as an effective potential as follows.
We can choose
\[U=\exp\left(+i\biggl{[}\int\Delta\omega_{k}S^{z}_{\rm Lk}\,{\rm d }k\biggr{]}t\right) \tag{11}\]
to remove the time dependence of the coupling strength and get the conventional tunnelling coupling,
\[H_{\rm int}\mapsto\int\hbar J_{\rm int}S^{+}_{\rm Lk}S^{-}_{\rm Rk}\,{\rm d}k+{\rm H.c.}. \tag{12}\]
In exchange for that, we have an effective potential on the left medium,
\[V:=-i\hbar U\frac{\partial}{\partial t}U^{\dagger}=\int\Delta \omega_{k}S^{z}_{\rm Lk}\,{\rm d}k\,. \tag{13}\]
Therefore, the effective total Hamiltonian can be written as \(H=H_{0}+H_{\rm int}\),
\[H_{0}=H_{\rm R}+H_{\rm L}^{\prime}, \tag{14}\] \[H_{\rm L}^{\prime}=H_{\rm L}+V, \tag{15}\] \[H_{\rm int}=\int\hbar J_{\rm int}S^{+}_{\rm Lk}S^{-}_{\rm Rk}\,{\rm d}k+{\rm H.c.}, \tag{16}\]
where the effects of sliding motion \(V\) are included in the unperturbed part. In our previous work [20], this Hamiltonian was phenomenologically introduced with the spin-wave approximation, and the motion-induced spin transfer was perturbatively evaluated.
Here, we make a comment on the Galilean invariance. No global coordinate translation can remove the system's motion; a local translation would be needed to go over to a reference frame where both magnets are at rest. That is why we expect that the system no longer possesses global Galilean invariance while it is still locally Galilean invariant. Indeed, the Galilean invariance is broken due to the interaction between the two magnets, \(H_{\rm int}\,\propto\,\int S^{+}_{\rm L}(x\,-\,vt)S^{-}_{\rm R}(x)\,{\rm d}x\), as in the case of noncontact friction [21]. If the interaction is turned off, each system has no way to detect the motion, and the Galilean invariance is preserved.
## 4 Spin tunnelling transport
In this section, we outline how to evaluate the motion-induced spin transfer with perturbation theory based on nonequilibrium Green's functions [20]. Working in the reciprocal domain, the quantity in question can be written as
\[I_{\rm spin}=2J_{\rm int}\,{\rm Im}\int\left\langle S^{-}_{\rm Lk}S^{+}_{\rm Rk}\right\rangle{\rm d}k\,, \tag{17}\]
where the expectation value should be calculated with the full Hamiltonian given in Equations (14) and (16). Adopting the Schwinger-Keldysh formalism [22, 23], we can write the integrand in Equation (17) as
\[\left\langle\mathcal{T}_{\mathrm{C}}S^{+}_{\mathrm{Rk}}(t_{1}^{+})S ^{-}_{\mathrm{Lk}}(t_{1}^{-})\right\rangle =\left\langle\mathcal{T}_{\mathrm{C}}S^{+}_{\mathrm{Rk}}(t_{1}^{+ })S^{-}_{\mathrm{Lk}}(t_{1}^{-})U_{\mathrm{C}}\right\rangle_{0}, \tag{18}\] \[U_{\mathrm{C}} =\mathcal{T}_{\mathrm{C}}\exp\biggl{(}\frac{1}{i\hbar}\int_{ \mathrm{C}}H_{\mathrm{int}}(t_{2})\,\mathrm{d}t_{2}\biggr{)}, \tag{19}\]
where \(\mathcal{T}_{\mathrm{C}}\) is the time-ordering operator on the Schwinger-Keldysh contour, the superscript \(+\) (\(-\)) at the argument stands for forward (backwards) branch, \(\left\langle\ldots\right\rangle_{0}\) is the average with the unperturbed Hamiltonian \(H_{0}\), and \(\int_{\mathrm{C}}\ldots\mathrm{d}t_{2}\) is the integration over the Schwinger-Keldysh contour. We expand \(U_{\mathrm{C}}\) in Equation (18) in powers of the tunnelling coupling \(J_{\mathrm{int}}\), retain up to the first order, and apply the Bloch-de Dominicis theorem and the Langreth theorem [24] to get
\[i\hbar^{2}J_{\mathrm{int}}\int_{-\infty}^{+\infty}\left(\chi^{\mathfrak{R}}_{\mathrm{Rk};12}\chi^{<}_{\mathrm{Lk};21}+\chi^{<}_{\mathrm{Rk};12}\chi^{\mathfrak{A}}_{\mathrm{Lk};21}\right)\mathrm{d}t_{2}\,, \tag{20}\]
where \(\chi^{\mathfrak{R},<,\mathfrak{A}}_{\mathrm{L(R)}k}\) denote the retarded, lesser, and advanced components of the nonequilibrium Green's functions for the left (right) medium,
\[\chi_{\mathrm{L(R)}k;12}=\frac{1}{i\hbar}\left\langle\mathcal{T} _{\mathrm{C}}S^{+}_{\mathrm{L(R)}k}(t_{1})S^{-}_{\mathrm{L(R)}k}(t_{2}) \right\rangle. \tag{21}\]
Applying the Fourier transformation to Equation (20), we can write
\[2i\hbar^{2}J_{\mathrm{int}}\int\mathrm{Im}\,\chi^{\mathfrak{R} }_{\mathrm{Rk}\omega}\,\mathrm{Im}\,\chi^{\mathfrak{R}}_{\mathrm{Lk}\omega} \delta f_{k\omega}\,\mathrm{d}\omega \tag{22}\]
where we defined the distribution difference function with the retarded and lesser components of the nonequilibrium Green's functions,
\[\delta f_{k\omega}=\frac{\chi^{<}_{\mathrm{Lk}\omega}}{2i\,\mathrm{Im}\,\chi^ {\mathfrak{R}}_{\mathrm{Lk}\omega}}-\frac{\chi^{<}_{\mathrm{Rk}\omega}}{2i\, \mathrm{Im}\,\chi^{\mathfrak{R}}_{\mathrm{Rk}\omega}}. \tag{23}\]
In our previous study [20], we focused on this distribution difference and studied the role it plays in the motion-induced spin transfer within the spin-wave approximation; here, we have not relied on the spin-wave approximation in the derivation, so the formula is a generalised version of the one obtained in the previous study.
From these considerations, we can write the total amount of the spin transfer as
\[I_{\mathrm{spin}}=4(\hbar J_{\mathrm{int}})^{2}\int\mathrm{Im} \,\chi^{\mathfrak{R}}_{\mathrm{Rk}\omega}\,\mathrm{Im}\,\chi^{\mathfrak{R}}_ {\mathrm{Lk}\omega}\delta f_{k\omega}\,\mathrm{d}k\,\mathrm{d}\omega\,. \tag{24}\]
It is clear from this expression that the spin transfer process is of second order in the tunnelling coupling and proportional to the spectral overlap \(\mathrm{Im}\,\chi^{\mathfrak{R}}_{\mathrm{R}k\omega}\,\mathrm{Im}\,\chi^{\mathfrak{R}}_{\mathrm{L}k\omega}\) and the distribution difference \(\delta f_{k\omega}\) between the left and right magnets. In the following, we shall discuss the role of the spectral overlap in the motion-induced spin transfer.
## 5 Role of Spectral Overlap
To discuss the spectral overlap function, we apply the spin-wave approximation that captures the fundamental excitations in the ferromagnetic insulators. The unperturbed Hamiltonian, which is responsible for the ferromagnets, is written in terms of bosonic operators according to the Holstein-Primakoff theory within the spin-wave approximation (e.g. \(S^{+(-)}_{\mathrm{Lk}}\approx\sqrt{2S_{0}}b^{(\dagger)}_{\mathrm{Lk}}\) and \(S^{z}_{\mathrm{Lk}}\approx S_{0}-b^{\dagger}_{\mathrm{Lk}}b_{\mathrm{Lk}}\) where \(S_{0}\) is the magnitude of the localised spin, and \(b_{\mathrm{Lk}}\) is a bosonic annihilation operator),
\[H_{\mathrm{R}} =\hbar\omega_{k}b^{\dagger}_{\mathrm{Rk}}b_{\mathrm{Rk}}, \tag{25}\] \[H^{\prime}_{\mathrm{L}} =\hbar(\omega_{k}-\Delta\omega_{k})b^{\dagger}_{\mathrm{Lk}}b_{ \mathrm{Lk}}, \tag{26}\]
where we substitute a conventional parabolic magnon dispersion \(\omega_{k}=Dk^{2}+\omega_{0}\) with the Zeeman energy \(\omega_{0}\). Recall that \(\Delta\omega_{k}\,=\,kv\) is the Doppler-induced effective potential. Note that we assumed the left and right media are made of the same material.
As we have specified the unperturbed Hamiltonian, we can explicitly write the spectral function for each magnet. Since we are working in the interaction picture, we can write
\[b_{\text{L(R)}k}(t)=e^{+i\frac{H_{\text{L(R)}}}{\hbar}t}\,b_{\text{L(R)}k}\,e^{-i\frac{H_{\text{L(R)}}}{\hbar}t}=b_{\text{L(R)}k}e^{-i\omega_{\text{L(R)}k}t}, \tag{27}\]
where we defined the magnon dispersion in the right magnet \(\omega_{\text{R}k}\,=\,\omega_{k}\) and the one in the left \(\omega_{\text{L}k}\,=\,\omega_{k}\,-\,\Delta\omega_{k}\). Therefore, the retarded component of the nonequilibrium Green's function can be explicitly written as
\[\chi^{\mathfrak{R}}_{\text{L(R)}k\omega}=\frac{2S_{0}/\hbar}{ \omega-\omega_{\text{L(R)}k}+i\Gamma}, \tag{28}\]
where we have phenomenologically introduced the spectral broadening factor \(\Gamma\).
Let us consider two extreme cases. First, in the 'pristine' limit (\(\Gamma\to 0\)), where the spectra are very sharp, they only overlap at \(\omega_{\text{R}k}\,=\,\omega_{\text{L}k}\), where the spin carrier has no momentum, \(k\,=\,0\), and the Doppler shift \(\Delta\omega_{k}\,=\,k\cdot v\) does not play any role; hence, there is no current induced by the sliding motion. Second, in the 'dirty' limit (\(\Gamma\,\rightarrow\,\infty\)), where the spectra are flattened, \(\chi^{\mathfrak{R}}\,\rightarrow\,0\), the spectral overlap vanishes, leading to the disappearance of the spin current (recall that the spin current is proportional to the spectral overlap). Thus, we can expect that there is some optimal condition where the induced spin current is maximised.
In order to confirm this qualitative discussion, we numerically evaluated the spin transfer formula (24), sweeping the spectral broadening factor \(\Gamma\). In FIG. 3, we show the amount of the motion-induced spin transfer \(I_{\text{spin}}\) as a function of the spectral broadening. We can verify that the spin current decreases when the broadening factor \(\Gamma\) becomes either too small or too large, and that there is a peak in between.
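The following Python sketch illustrates this qualitative behaviour by evaluating (24) on a grid, using the Lorentzian spectral functions of (25)-(28) in dimensionless units (\(\hbar=k_{\mathrm{B}}=1\)). The carrier-population difference is assumed to reduce to a difference of Bose functions evaluated at the Doppler-shifted and unshifted frequencies, and the parameter values are illustrative rather than those used for FIG. 3; the sweep over \(\Gamma\) is expected to reproduce the non-monotonic trend qualitatively.

```python
import numpy as np

D, w0, v, T, J_int = 1.0, 0.1, 0.02, 0.5, 0.01    # illustrative dimensionless parameters
k = np.linspace(-1.6, 1.6, 641)
w = np.linspace(0.05, 3.0, 2361)
K, W = np.meshgrid(k, w, indexing="ij")

def im_chi(W, wk, gamma):
    """Imaginary part of the retarded function (28), up to the 2 S_0 / hbar prefactor."""
    return -gamma / ((W - wk) ** 2 + gamma ** 2)

def n_bose(x):
    return 1.0 / np.expm1(x / T)

def spin_current(gamma):
    w_right = D * K ** 2 + w0                 # dispersion of the magnet at rest, (25)
    w_left = w_right - K * v                  # Doppler-shifted dispersion, (26)
    overlap = im_chi(W, w_right, gamma) * im_chi(W, w_left, gamma)
    delta_f = n_bose(W + K * v) - n_bose(W)   # assumed carrier-population difference
    integrand = 4 * J_int ** 2 * overlap * delta_f
    return np.trapz(np.trapz(integrand, w, axis=1), k)

for gamma in [3e-3, 1e-2, 3e-2, 1e-1, 3e-1, 1.0]:
    print(f"Gamma = {gamma:7.3f} -> I_spin ~ {spin_current(gamma):+.3e}")
```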
## 6 Microscopic theory for the work done by the sliding motion
In the preceding section, we employed the phenomenological description for the nonequilibrium nature of the system. Here, we provide a microscopic theory for the work done by the sliding motion within the
Figure 3: Effects of the spectral broadening on the spin tunnelling transport between the ferromagnetic insulators. In the 'pristine' limit (\(\Gamma\,\rightarrow\,0\)), only the \(k\,=\,0\) channel is active, where the Doppler effect plays no role. In the 'dirty' limit (\(\Gamma\,\rightarrow\,\infty\)), the spectra are flattened, and the spectral overlap vanishes. The broadening enlarges the spectral overlap and the spin tunnelling transport; however, it spoils the transport if it is too large. We used the following parameters to draw this line plot: \(D=532\text{ meV}\cdot\text{\AA}^{2};\omega_{0}\approx 0.12\text{ meV}(B_{0}=1.0\text{ T});\hbar J_{\text{int}}=50\text{ meV}\ll\hbar\omega_{k};v=1.0\text{ m/s}\).
spin-wave approximation that we employed to evaluate the spin transfer in the previous section. As the work done by the motion is provided by the momentum transfer between two magnets, we shall focus on the time variation of the momentum in the left magnet,
\[F_{\rm f}=\int\hbar k\frac{\partial}{\partial t}\left\langle b_{{\rm R}k}^{\dagger}b_{{\rm R}k}\right\rangle{\rm d}k=\int\hbar k\left\langle b_{{\rm L}k}^{\dagger}b_{{\rm R}k}\right\rangle{\rm d}k\,. \tag{29}\]
Repeating the same procedure that we used to perturbatively evaluate the spin transfer, we obtain
\[F_{\rm f}=-4(\hbar J_{\rm int})^{2}\int\hbar k\,{\rm Im}\,\tilde{ \chi}_{\rm L\hskip-1.0pt/\hskip-1.0ptk\omega}^{\mathfrak{R}}\,{\rm Im}\, \tilde{\chi}_{\rm R\hskip-1.0pt/\hskip-1.0ptk\omega}^{\mathfrak{R}}\delta f_{k \omega}\,{\rm d}k\,{\rm d}\omega\,. \tag{30}\]
where we defined \(\tilde{\chi}_{\mathrm{L(R)}k\omega}^{\mathfrak{R}}\,=\,\chi_{\mathrm{L(R)}k\omega}^{\mathfrak{R}}/(2S_{0})\). As the external force that keeps the left medium at a constant speed must balance the frictional force, we can write
\[F_{\rm ex}=-F_{\rm f}=4(\hbar J_{\rm int})^{2}\int\hbar k\,{\rm Im }\,\tilde{\chi}_{\rm R\hskip-1.0pt/\hskip-1.0ptk\omega}^{\mathfrak{R}}\,{\rm Im }\,\tilde{\chi}_{\rm L\hskip-1.0pt/\hskip-1.0ptk\omega}^{\mathfrak{R}}\delta f _{k\omega}\,{\rm d}k\,{\rm d}\omega\,, \tag{31}\]
Assuming the sliding velocity \(v\) is small, we expand the integrand and retain the first order in \(v\) to obtain \(F_{\rm ex}=-F_{\rm f}=\gamma v\) with a drag coefficient,
\[\gamma=4(\hbar J_{\rm int})^{2}\int\hbar k^{2}\,{\rm Im}\,\tilde{ \chi}_{k\omega}^{\mathfrak{R}}\,{\rm Im}\,\tilde{\chi}_{k\omega}^{\mathfrak{R }}n^{\prime}_{\rm B}(\omega_{k})\,{\rm d}k\,{\rm d}\omega\,, \tag{32}\]
where we defined \(\tilde{\chi}_{k\omega}^{\mathfrak{R}}\,=\,\tilde{\chi}_{\mathrm{L}k\omega}^{\mathfrak{R}}\big{|}_{v=0}\,=\,\tilde{\chi}_{\mathrm{R}k\omega}^{\mathfrak{R}}\), and \(n^{\prime}_{\rm B}\) is the derivative of the Bose distribution function. Note that we have already performed the bosonisation and thus explicitly evaluated the distribution difference, which yields the Bose distribution.
## 7 Discussion
As we evaluated the drag force in the previous section, we can expand the integrand of the spin current formula (24),
\[I_{\rm spin}=\int\left(\alpha_{1}kv+\alpha_{2}k^{2}v^{2}+\alpha _{3}k^{3}v^{3}+\cdots\right){\rm d}k\,{\rm d}\omega\,, \tag{33}\]
to confirm the motion-induced spin transfer exhibits the parabolic dependence on the sliding velocity, \(I_{\rm spin}\,\,\approx\,\,\sigma_{\rm s}^{(2)}v^{2}\), in the slow-velocity regime (\(v\,\,\ll\,\,1\)). Note that we defined \(\sigma_{\rm s}^{(2)}\,\,=\,\,\int\alpha_{2}k^{2}\,{\rm d}k\,{\rm d}\omega\). As can be seen in the expansion (33), the odd terms in \(v\) are also odd in \(k\), and the odd terms in \(k\) do not contribute after the integration over \(k\). That is why the first order in \(v\) does not contribute to the spin transfer, and the second order is the dominant contribution in the slow-velocity regime.
The parabolic dependence on the sliding velocity implies we can obtain a second harmonic generation of spin current in our setup, which is forbidden in bulk due to symmetry. Let us consider oscillating motion \(v\,=\,v_{0}\cos\Omega t\). Even though the velocity is no longer constant, the present theory can still be applied adiabatically when the acceleration is sufficiently small \(\dot{v}\ll 1\). As our spin current is proportional to \(v^{2}\), we will generate the rectified and second-harmonic components of the spin current,
\[I_{\rm spin}(t)\approx\frac{\sigma_{\rm s}^{(2)}v_{0}^{2}}{2}+\frac{\sigma_{\rm s}^{(2)}v_{0}^{2}}{2}\cos(2\Omega t). \tag{34}\]
## 8 Conclusions
In conclusion, we found that shearing motion drives a spin tunnelling current between two ferromagnetic insulators. We clarified that the shearing motion, in the presence of the interaction between the two magnets, breaks the global Galilean invariance and pumps energy into the system, driving it out of equilibrium and triggering transport phenomena. Taking into account the nonequilibrium nature of the system, the spin
tunnelling current has been perturbatively assessed with nonequilibrium Green's functions. The resulting spin current is controlled by the spectral overlap and the population difference between the two magnets. We revealed that moderate (excessive) spectral broadening enlarges (spoils) the spin current because it expands the spectral overlap (flattens the spectra).
**Acknowledgements** D.O. is supported by the JSPS Overseas Research Fellowship, by the Institution of Engineering and Technology (IET), and by Fundacao para a Ciencia e a Tecnologia and Instituto de Telecomunicacoes under project UIDB/50008/2020. This work was supported by the Priority Program of Chinese Academy of Sciences (Grant No. XDB28000000) and JSPS KAKENHI (Grant No. JP20H01863, No. JP21H04565, and No. JP21H01800).
|
2302.07322 | TRESTLE: Toolkit for Reproducible Execution of Speech, Text and Language
Experiments | The evidence is growing that machine and deep learning methods can learn the
subtle differences between the language produced by people with various forms
of cognitive impairment such as dementia and cognitively healthy individuals.
Valuable public data repositories such as TalkBank have made it possible for
researchers in the computational community to join forces and learn from each
other to make significant advances in this area. However, due to variability in
approaches and data selection strategies used by various researchers, results
obtained by different groups have been difficult to compare directly. In this
paper, we present TRESTLE (\textbf{T}oolkit for \textbf{R}eproducible
\textbf{E}xecution of \textbf{S}peech \textbf{T}ext and \textbf{L}anguage
\textbf{E}xperiments), an open source platform that focuses on two datasets
from the TalkBank repository with dementia detection as an illustrative domain.
Successfully deployed in the hackallenge (Hackathon/Challenge) of the
International Workshop on Health Intelligence at AAAI 2022, TRESTLE provides a
precise digital blueprint of the data pre-processing and selection strategies
that can be reused via TRESTLE by other researchers seeking comparable results
with their peers and current state-of-the-art (SOTA) approaches. | Changye Li, Weizhe Xu, Trevor Cohen, Martin Michalowski, Serguei Pakhomov | 2023-02-14T20:07:31Z | http://arxiv.org/abs/2302.07322v2 | # TRESTLE: Toolkit for Reproducible Execution of Speech, Text and Language Experiments
###### Abstract
_The evidence is growing that machine and deep learning methods can learn the subtle differences between the language produced by people with various forms of cognitive impairment such as dementia and cognitively healthy individuals. Valuable public data repositories such as TalkBank have made it possible for researchers in the computational community to join forces and learn from each other to make significant advances in this area. However, due to variability in approaches and data selection strategies used by various researchers, results obtained by different groups have been difficult to compare directly. In this paper, we present TRESTLE (**T**oolkit for **R**eproducible **E**xecution of **S**peech **T**ext and **L**anguage **E**xperiments), an open source platform that focuses on two datasets from the TalkBank repository with dementia detection as an illustrative domain. Successfully deployed in the hackallenge (Hackathon/Challenge) of the International Workshop on Health Intelligence at AAAI 2022, TRESTLE provides a precise digital blueprint of the data pre-processing and selection strategies that can be reused via TRESTLE by other researchers seeking comparable results with their peers and current state-of-the-art (SOTA) approaches._
## Introduction
In the "Last Words" letter to "Computational Linguistics" in 2008, Pedersen pointed out that the computational linguistics community was experiencing a reproducibility crisis [1]. In that letter, Pedersen provided strong arguments that all published computational linguistic research needs to be accompanied by working software to enable its replication in order to be credible, and that it was "unreasonable to expect that reproducibility be possible based on the description provided in a publication." Ten years later, in 2018, another group of researchers decided to follow up on Pedersen's "last words" to investigate the extent to which workers in computational linguistics were willing and able to share their code for the sake of reproducibility. Wieling et al. [2] surveyed 395 publications and found that the code was available either immediately or upon request for only one third of these papers. Furthermore, when they tried to replicate the results for a selection of 10 papers, they were only able to do so for six papers and obtained the exact same results as had been published for only one. These results highlight the magnitude of this persistent problem that is not unique to the computational linguistics community and has been noted in the machine learning (ML) [3], psychology [4], and biomedical natural language processing (NLP) [5, 6, 7] research fields as well.
The work presented in this paper addresses the broader problem of reproducibility by focusing on a specific subproblem of replicability as set forth by Cohen et al. [5] in at least one narrowly defined interdisciplinary area of research - computational approaches to characterizing changes in speech and language characteristics caused by cognitive impairment resulting from neurodegenerative conditions such as the Alzheimer's disease (AD).
This is an important area to address because AD is a debilitating condition with no known cure that affects every aspect of cognition, including language use. Over 50 million people have been diagnosed with AD dementia, and this number is anticipated to triple by 2050 [8, 9, 10]. Previous studies [11, 12] have demonstrated that machine learning methods can learn to distinguish between language from healthy controls and dementia patients; thus, automatic analysis of spoken language can potentially provide accurate, easy-to-use, safe, and cost-effective tools for monitoring AD-related cognitive markers. However, a persistent challenge in this work has been the difficulty involved in reproducing prior work and comparing results across studies on account of the use of different diagnosis-related subsets (i.e., probable vs. possible dementia), aggregation strategies (i.e., one vs. multiple transcripts per participant), performance metrics
and cross-validation protocols. This challenging issue is exacerbated by the fact that the space available for publication of results is typically highly limited and, even when a publication venue allows appendices, the description of the methods provided by authors can be highly variable and subject to misinterpretation and uncertainty when trying to reproduce the methods. Consistent with previous findings,2 some researchers provide code while others do not, and the code that is provided typically includes only the implementation of core machine learning methods and does not include scripts needed for data selection and exact execution of validation strategies.
To address this challenge, we developed TRESTLE (**T**oolkit for **R**eproducible **E**xecution of **S**peech **T**ext and **L**anguage **E**xperiments) for DementiaBank (DB), one of the most popular repositories hosting data that the computational linguistics and machine learning communities use to build state-of-the-art (SOTA) models for identifying subtle differences between the language of dementia patients and that of healthy controls1. In particular, TRESTLE supports data pre-processing for the Pitt corpus3 and other corpora such as transcripts from the Wisconsin Longitudinal Study (WLS)4 - both are formatted using the CHAT5 protocol.2
Footnote 1: To see a full list of publications that uses data from DementiaBank, see [https://dementia.talkbank.org/publications/bib.pdf](https://dementia.talkbank.org/publications/bib.pdf)
Footnote 2: For more details about CHAT protocol, please check the manual here:[https://talkbank.org/manuals/CHAT.pdf](https://talkbank.org/manuals/CHAT.pdf)
TRESTLE provides an opportunity for researchers to submit a manifest that includes the precise pre-processing parameters, data selection, and user-defined criteria for "dementia" and "control". Therefore, individuals can freely design their own pre-processing parameters and use _exactly the same_ data that their peers have provided, allowing for comparable and reproducible evaluation outcomes for their analytical models.
To the best of our knowledge, this is the first toolkit that provides the infrastructure to enable direct comparisons between experiments conducted on DementiaBank datasets. While it is currently designed and tested with data contained in the DementiaBank portion of the TalkBank16 repository3, it can be easily extended to other public datasets following the CHAT protocol to facilitate reproducibility and comparability of experimental results and the ease of establishing and improving the SOTA in the ML research community. The code for the toolkit is publicly available on GitHub4.
Footnote 3: [https://www.talkbank.org/](https://www.talkbank.org/)
Footnote 4: [https://github.com/LinguisticAnomalies/harmonized-toolkit](https://github.com/LinguisticAnomalies/harmonized-toolkit)
#### TRESTLE Design Overview
In theory, if researcher B intends to reproduce the results of methods developed by researcher A, all that researcher B would need to do is ask researcher A for a copy of the data used to obtain the results. In practice, there are many barriers to executing this scenario including the fact that the owners of even publicly available datasets typically do not allow individual researchers to redistribute their data. Therefore, if researcher A makes any modifications to the original data for the purposes of experimentation, these modifications remain with researcher A, as they are not typically propagated back to the original dataset. Researcher B wishing to replicate and improve upon A's results has to obtain the original data from the owner of the data and figure out how to make the same modifications to the original data as were made by researcher A. While researcher A typically does provide in a publication the information describing the data selection and modification decisions, researcher B still has to essentially reconstruct these modifications. Clearly, this situation is error-prone and not conducive to making rapid scientific progress.
The main motivation for creating TRESTLE stems from the need for a convenient and error-resistant way of communicating the details of a researcher's experimental design to other researchers so they can replicate the experimental conditions in order to test their own methods and compare results to those obtained by previous researchers. Motivated by this need for replicability, the key design feature of TRESTLE is the generation of a machine-readable manifest that captures all of the data selection and pre-processing decisions that were made while running an experiment on the supported datasets. The manifest is intended to be disseminated along with publishing the results of experiments and used as a blueprint to replicate the exact same experimental set up and conditions by others. The objective is to avoid the situation in which a group of researchers develops a new machine learning algorithm for discrimination between dementia cases and controls based on speech and language features, experimentally validates the algorithm on a dataset and publishes the results but another group is not able to reproduce their results because of either insufficient information provided in the publication or misinterpretation of the information or both. An even worse situation may arise where the results are replicated (e.g. same or similar metrics are obtained) but the experimental conditions
differ in some subtle ways. Both of these scenarios may lead to meaningless comparisons or significant difficulty in conducting meaningful comparisons, thereby hindering the research community's ability to build on each other's work.
TRESTLE is also designed to make pre-processing decisions as explicit as possible while providing the flexibility for researchers to add their own pre-processing scripts needed to replicate their results. The motivation for providing this functionality for TRESTLE is secondary to the main motivation for replicating the experimental conditions because pre-processing could be considered a part of one's methodology. For example, including pause fillers (um's and ah's) in training a neural model of speech affected by dementia may be viewed as a novel methodological decision that would contribute to better classification accuracy. As such, pre-processing in general may not lend itself well to standardization. However, in more complex scenarios in which pre-processing itself involves using statistical models or other tools with multiple user-controlled parameters (e.g., target audio sampling rate, noise reduction techniques, etc.) it is also important to capture these parameters precisely and explicitly and provide them together with any software code to subsequent researchers so as to enable them to reproduce these methods.
The parameters used during sound and text pre-processing/conversion are also stored in the manifest file. In addition to the generation of the manifest, TRESTLE comes with a set of standard pre-processing utilities for text and audio, as demonstrated in Figure 1. TRESTLE is divided into two sub-modules, a) pre-processing text data (Figure 1a), and b) pre-processing audio data (Figure 1b) that is fully aligned with the corresponding text transcript. Each sub-module contains a number of parameters that users can define in their own pre-processing manifest. Block 1 and Block 2 show the general flow of using TRESTLE for pre-processing text and audio samples, respectively.
**Block 1** General flow of TRESTLE **text** pre-processing sub-module. _Italic_ indicates inputs from users
Which dataset you are pre-processing? wls or db?: _db_
Where are the.cha files located?: _file-locations_
Remove 'clear throat'? (Y/N): \(y\)
Remove open parentheses e.g. (be)coming? (Y/N): \(y\)
Remove open square brackets eg. [: overflowing]? (Y/N): \(y\)
Remove disfluencies prefixed with '&'? (Y/N): \(y\)
Remove unintelligible words? (Y/N): \(y\)
Remove pauses eg. (.) or (..)? (Y/N): \(y\)
Remove forward slashes in square brackets? (Y/N): \(y\)
Remove noise indicators e.g. &=breath? (Y/N): \(y\)
Remove square brackets indicating an error code? (Y/N): \(y\)
Remove all non-alphanumeric characters? (Y/N): \(y\)
Replace multiple spaces with a single space? (Y/N): \(y\)
Capitalize the first character? (Y/N): \(y\)
Add period at the end of every sentence? (Y/N): \(y\)
Add newline at the end of every sentence? (Y/N): \(n\)
You data will be stored as.tsv file. Please enter the output path and file name for your pre-processed transcripts: _output.tsv_
Please stand by, your pre-processing script will be generated shortly...
Your text pre-processing json file has been generated!
Running text pre-processing script now...
Your dataset is now pre-processed!
Block 1 demonstrates the pre-processing features currently supported by TRESTLE. As illustrated in Block 3, the raw input CHAT (.cha) file contains several tags indicating participants' behavior during the interview. TRESTLE allows users to remove tags/indicators such as clear throat indicator, open parentheses or brackets, noise, disfluencies, non-words or pauses from the verbatim transcript, if desired. Furthermore, users can choose whether or not to capitalize the first character of each sentence, or add newline at the end of the sentence. Depending on the type of analysis the user intends to do, some or all of these extra-linguistic or para-linguistic elements may need to be either removed or used in the analysis, as demonstrated in several previous studies.17, 18, 19
Figure 1: TRESTLE design overview
Footnote 17: [http://sox.sourceforge.net/](http://sox.sourceforge.net/)
These binary user-controlled parameters are stored in the manifest file in JSON format. Other TRESTLE users can apply the same pre-processing parameters to raw transcripts by using this manifest file, or modify the manifest if comparability to previous work is not desired, giving authors the option to choose their own criteria while ensuring that the criteria are explicit and can subsequently be precisely replicated by others.
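To make the role of these flags concrete, the following sketch applies a few of them to a single utterance with regular expressions. The flag names, the function, and the example utterance are hypothetical illustrations of the pre-processing style described above, not TRESTLE's actual keys or API.

```
import json
import re

def clean_utterance(utterance, flags):
    """Apply a subset of binary pre-processing flags (as stored in a manifest) to one CHAT utterance."""
    if flags.get("remove_disfluencies"):        # tokens prefixed with '&', e.g. &uh
        utterance = re.sub(r"&\S+", " ", utterance)
    if flags.get("remove_pauses"):              # pause markers such as (.) or (..)
        utterance = re.sub(r"\(\.+\)", " ", utterance)
    if flags.get("remove_open_parentheses"):    # (be)coming -> becoming
        utterance = re.sub(r"\((\w+)\)", r"\1", utterance)
    if flags.get("remove_square_brackets"):     # annotations such as [: overflowing]
        utterance = re.sub(r"\[[^\]]*\]", " ", utterance)
    if flags.get("remove_non_alphanumeric"):
        utterance = re.sub(r"[^\w\s]", "", utterance)
    utterance = re.sub(r"\s+", " ", utterance).strip()   # collapse multiple spaces
    if flags.get("capitalize_first_character") and utterance:
        utterance = utterance[0].upper() + utterance[1:]
    if flags.get("add_period"):
        utterance += "."
    return utterance

# Flags as they might appear in a text pre-processing manifest (illustrative names only).
flags = json.loads("""{"remove_disfluencies": true, "remove_pauses": true,
                       "remove_open_parentheses": true, "remove_square_brackets": true,
                       "remove_non_alphanumeric": true, "capitalize_first_character": true,
                       "add_period": true}""")
print(clean_utterance("&uh the boy (.) is (be)coming [: falling] off the stool", flags))
# -> "The boy is becoming off the stool."
```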
**Block 2** General flow of TRESTLE **audio** pre-processing sub-module. _Italic_ indicates inputs from users
Which dataset you are pre-processing? wls or db?: _db_
Where are the.mp3 files located?: _file-locations_
Where do you want to store the trimmed audio segments? _audio-segments-locations_
Enter sample rate: _16000_
Feature extraction method, selecting from FFT or MFCC or NONE: _fft_
Enter FFT window size or number of MFCCs, 0 for NONE: 2
Scaling MFCC? y/n: \(n\)
Your audio pre-processing json file has been generated!
Running audio pre-processing script now...
Starting to convert.mp3 to.wav
Finished!
Starting to resample audio to target sample rate...
Finished!
Your dataset is now pre-processed!
Some other barriers to replicability stem from the variability in how raw audio data is pre-processed and prepared for ML. For example, the Pitt corpus audio data is in 16 bit, 16 kHz sampling rate (i.e., 256 kilobits/second bit rate) uncompressed WAVE format whereas the WLS data is in compressed MP3 format encoded at 44.1 kHz sampling rate but 124 kilobits/second bit rate - about half the bit rate of the Pitt corpus. It may be important for studies that use the audio from these datasets to convert the audio to a single specific format needed for analysis and with the understanding of the implications of any such conversion for resulting audio quality. In order to enable these conversions in TRESTLE, we included the Sound eXchange5 library for resampling audio samples. TRESTLE additionally supports feature extraction algorithms such as the Fourier transform (FT) and Mel-frequency cepstral coefficients (MFCC).
Footnote 5: [http://sox.sourceforge.net/](http://sox.sourceforge.net/)
These user-controlled parameters are then applied to each text or audio file from the Pitt or WLS datasets. The text sub-module merges all pre-processed utterance-level .cha transcripts into a .tsv file and saves it to the user-specified destination. Furthermore, when pre-processing a dataset in which the text and audio are fully aligned with each other (i.e., the Pitt corpus and part of the WLS dataset), the text sub-module writes a list of timestamps indicating the intervals of test-administrator speech to the corresponding .json file for further pre-processing in the audio sub-module. The audio sub-module converts audio files to a user-defined format (e.g. single-channel PCM waveform sampled at 16 kHz) and also generates utterance-level audio segments. When used together with the corresponding utterance-level transcripts from the text sub-module, TRESTLE enables follow-up applications such as automatic speech recognition (ASR).
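As a rough stand-in for the audio path, the snippet below resamples one recording to 16 kHz, writes it out as a single-channel WAV, and extracts MFCC features; it uses librosa and soundfile instead of TRESTLE's own SoX-based scripts, and the file names are placeholders.

```
import librosa
import soundfile as sf

TARGET_SR = 16000  # target sample rate taken from the audio manifest

# librosa loads the file as mono float32 and resamples to the requested rate.
waveform, sr = librosa.load("example_segment.mp3", sr=TARGET_SR, mono=True)

# Persist the resampled audio as a 16 kHz single-channel WAV file.
sf.write("example_segment_16k.wav", waveform, TARGET_SR)

# Optional feature extraction, mirroring the FFT/MFCC choice in Block 2.
mfcc = librosa.feature.mfcc(y=waveform, sr=TARGET_SR, n_mfcc=13)
print(mfcc.shape)  # (13, number_of_frames)
```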
**Datasets**
TRESTLE currently supports data pre-processing directly for two publicly available datasets from TalkBank: a) the Pitt corpus6, and b) the WLS corpus7. The details of the two datasets are included in Table 1, and a sample transcript from the Pitt corpus formatted in the CHAT protocol is shown in Block 3. These datasets, including raw audio, manual transcripts, linguistic annotations, and metadata with demographic, clinical and neuropsychological test characteristics, are publicly available from TalkBank.
Footnote 6: [https://demantia.talkbank.org/access/English/Pitt.html](https://demantia.talkbank.org/access/English/Pitt.html)
The Pitt corpus contains audio recordings and manually transcribed transcripts of neuropsychological tests including the "Cookie Theft" picture description task from the Boston Diagnostic Aphasia Examination [20]. In this task, participants were asked to describe everything they see occurring in Figure 2. Participant responses were audio recorded and subsequently transcribed verbatim. Participants were tested multiple times resulting in multiple transcripts per participant. In total, there are 242 recordings/transcripts from the 99 healthy controls and 257 recordings/transcripts from the 169 participants with AD-related diagnoses. Neurological examination results, including results of the Mini-Mental State Exam (MMSE) and Clinical Dementia Rating (CDR) are also included in the Pitt corpus.
The WLS is a longitudinal study of 694 men and 675 women who graduated from Wisconsin high schools in 1957; the participants were interviewed up to six times between 1957 and 2011. Cognitive evaluations and the "Cookie Theft" picture description task were introduced in the later rounds of the WLS interviews, and the resulting data are presented in the
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
\multicolumn{2}{|l|}{**Characteristics**} & **Pitt** & **WLS** \\ \hline \multicolumn{2}{|l|}{Age, mean (SD)} & 69.2 (8.9) & 70.4 (4.4) \\ \hline \multirow{2}{*}{Gender, \(n\) (\%)} & Male & 200 (39.2) & 694 (50.7) \\ \cline{2-4} & Female & 310 (60.8) & 675 (49.3) \\ \hline \multicolumn{2}{|l|}{Education, mean (SD)} & 12.5 (3.1) & 13.5 (3.1) \\ \hline \multicolumn{2}{|l|}{MMSE, mean (SD)} & 20.7 (7.4) & NA (NA) \\ \hline \end{tabular}
\end{table}
Table 1: Dataset description.
Figure 2: Cookie Theft stimulus
CHAT-formatted (.cha) files. All of the participants in the WLS were considered to be cognitively healthy upon entry into the study. Some may have developed dementia in later years; however, the neurological diagnostic information is not currently publicly available.
Defining the "dementia" and "control" categories is not entirely straightforward and creates a barrier to reproducibility even if the criteria are described. For example, typical studies involved with the Pitt corpus focus on 169 participants classified as having _possible_ or _probable_ AD based on clinical or pathological examination, as well as 99 healthy controls. However, 10 of 99 healthy controls later acquired a dementia-related diagnosis - 7 of 10 being diagnosed with probable AD and the remaining 3 having an indeterminate diagnostic status at baseline. This complicates data analysis since individuals' diagnostic statuses may change over time and how this change is treated in a given study may significantly affect the results. The paucity of neurological diagnoses in the WLS also complicates further data analysis. One way to categorize WLS participants into those with potential cognitive impairment and those without is to use the available verbal fluency neuropsychological test scores,21 as verbal fluency (ability to name words belonging to a semantic category) is significantly impaired in dementia and has been recommended for clinical use as a screening instrument for dementia.22 However, various verbal fluency score cutoffs for dementia have been proposed in the literature and different authors may follow the literature that they trust in selecting the cutoffs. It would not be reasonable to try to impose a single specific cutoff on all studies using the WLS data.
#### Results
We deployed TRESTLE at the Data Hackallenge8 of the International Workshop on Health Intelligence9, which was co-hosted at AAAI 2022. During the hackallenge, each group of participants used TRESTLE to generate specific subsets of the data along with the pre-processing TRESTLE manifest. Each group was instructed to select data samples from the Pitt or WLS set (or both) using the criteria provided in the corresponding metadata. Each group was also asked to label each selected data sample as "positive" ("dementia") or "negative" ("controls") based on their preferred criteria. Each group was also asked to develop an analytical method (pipeline) of their choosing for discriminating between those categories. Finally, each group was asked to select an evaluation strategy of their choosing for their analytical method. In the second phase of the hackallenge, each team was asked to evaluate the other group's pre-processing manifests and run their analysis pipeline using the data selection, category definition, and evaluation strategy information provided in the other group's manifest to replicate the other group's experimental design so that the results could be directly compared.
Footnote 8: [https://w3phiai2022.w3phi.com/hackathon.html](https://w3phiai2022.w3phi.com/hackathon.html)
We provided a baseline manifest, following the current SOTA on such tasks with text transcripts.23 For the baseline system, we fine-tuned BERT24 on the Alzheimer's Dementia Recognition through Spontaneous Speech (ADReSS)25 dataset, which is a subset of the Pitt corpus that is matched for age and gender. The baseline manifest with evaluation metrics is shown in Block 4. Our baseline accuracy and AUC were both 0.77.
Footnote 9: [https://w3phiai2022.w3phi.com/index.html](https://w3phiai2022.w3phi.com/index.html)
```
{ "pre_process": "scripts/text_process.json", # pointing to the user-defined text pre-processing parameters "data_uids":["001-2", "005-2", "006-4",...], # the sample list of ADReSS dataset, where \(-n\) represents the \(n\)-th visit "positive_uids": ["001-2", "005-2", "010-3", "018-0",...], # the sample list of "dementia" cases from ADReSS training and test set, where \(-n\) represents the \(n\)-th visit "training_uids": ["001-2", "005-2", "006-4",...], # the sample list of ADReSS training set, where \(-n\) represents the \(n\)-th visit "test_uids": ["035-1", "045-0", "049-1",...], # the sample list of ADReSS test set, where \(-n\) represents the \(n\)-th visit "method": "fine-tune BERT", # very short description of method "evaluation": {"ACC": 0.77, "AUC": 0.77} # evaluation metrics used for the reported method }
```
**Block 4** Sample of TRESTLE's baseline manifest in JSON. The full baseline manifest is available in TRESTLE's GitHub repository
In addition to our group, two other groups participated in the hackallenge. Both of these teams decided to use both text transcripts and audio recordings from the Pitt corpus and WLS; however, as we anticipated, the two groups had significantly different approaches to selecting data subsets, criteria for classification, and evaluation metrics and strategies. Table 2 summarizes the differences in data selection strategies between the two teams that made the final submission. Both teams successfully evaluated their methods on our manifest and on each other's manifests, and both outperformed our baseline model with their own models, as seen in Table 3.
## Discussion
The results of the hackallenge were encouraging as the teams were able to use TRESTLE to achieve directly comparable results to those of the other teams without having to request any additional information or code. One of the key advantages that this hackallenge experiment has demonstrated is the elimination of any uncertainty in comparing results. TRESTLE facilitates the ability to compare results across multiple studies by providing all the necessary context for doing so - including the cutoffs used to define diagnostic categories. If the categories are not defined the same way in two studies, then by definition, the results of these studies cannot be directly compared. TRESTLE provides the information necessary to make this determination unambiguously. The main purpose of the toolkit, however, is to enable researchers to replicate the experimental setup, especially the data selection, exactly as performed by another team so that the only difference between the studies is the classification algorithm.
TRESTLE as presented here has several limitations. First, it only supports data pre-processing for the Pitt corpus and WLS set and does not support pre-processing of the remaining data in DementiaBank. Secondly, the Pitt corpus and the WLS data are in American English, and many participants of these two studies are representative of White, non-Hispanic American men and women with an average of 12 years of education. As a result, TRESTLE currently has limited applicability to other ethnic groups and languages, though this may change as data from more diverse samples become available. Thirdly, TRESTLE runs two sub-modules using bash scripts; this may make TRESTLE more difficult to use for researchers who have less experience with programming. Finally, while the current iteration of TRESTLE only supports pre-processing of text and audio samples from the "Cookie Theft" picture description task of the Pitt and WLS datasets, the design of TRESTLE offers the flexibility to generalize to any CHAT-formatted corpus. We believe it can be further iterated and improved for broader data pre-processing of corpora that are hosted on TalkBank for various downstream linguistic or Natural Language Processing (NLP) tasks, including those involving conversational, childhood language, multi-language and clinical datasets. With our access to DementiaBank, we chose to focus our initial implementation of TRESTLE on the dementia-related corpus. Having shown the feasibility of our approach with these data, we plan to further develop TRESTLE to support more datasets and data formats.
\begin{table}
\begin{tabular}{|p{42.7pt}|p{28.5pt}|p{28.5pt}|} \hline
**Team** & **Dataset** & **Cutoff** \\ \hline Baseline & Pitt & ADReSS subset of Pitt corpus \\ \hline \multirow{2}{*}{Team 1} & Pitt & MMSE \(\leq\) 24 as dementia, otherwise healthy controls \\ \cline{2-3} & WLS & Verbal fluency score 16 for individuals aged \(<\) 60, 14 for age between 60 and 79, 12 for age \(\geq\) 79 as dementia group, otherwise healthy controls \\ \hline \multirow{2}{*}{Team 2} & Pitt & Diagnosis code 100 as dementia group, diagnosis code 800 as healthy controls \\ \cline{2-3} & WLS & Category fluency test score of 21 as cutoff \\ \hline \end{tabular}
\end{table}
Table 2: Criteria defined by the data hackallenge participants for the data selection.
\begin{table}
\begin{tabular}{|p{42.7pt}|p{28.5pt}|p{28.5pt}|p{28.5pt}|} \hline \multirow{2}{*}{**Team**} & \multicolumn{3}{c|}{**Performance**} \\ \cline{2-4} & Accuracy & AUC & F1 \\ \hline Baseline & 0.77 & 0.77 & NA \\ \hline Team 1 & 0.94 & 0.92 & 0.84 \\ \hline Team 2 & 0.84 & 0.92 & 0.77 \\ \hline \end{tabular}
\end{table}
Table 3: Best model performances from the data hackallenge participants. Note that these results should be compared in light of the differences in the cutoffs used to define categories, as shown in Table 2, and differences in the analytical model design. Please refer to the workshop proceedings [26] for more details.
## Conclusion
To address, at least in part, the pervasive challenge of reproducibility, we created TRESTLE, an application that provides researchers working on detection of speech and language characteristics of dementia with a way to replicate each other's experimental setup and explicitly communicate in a machine-readable fashion the parameters used in data pre-processing. TRESTLE was successfully used for the intended purpose in a hackallenge but clearly needs further development to enable wider adoption by the biomedical NLP and computational linguistic communities. Despite the limitations, TRESTLE also has a number of strengths. It provides the researcher with the ability to convey the details of their experiments (e.g. sample selection, category definition, exceptions) in a very transparent and reproducible fashion. This part of TRESTLE is not limited to the Pitt and WLS datasets and can be easily extended to any text and/or audio collection of data. While using the pre-processing modules requires some programming experience and these modules currently support only the Pitt and WLS datasets, they do encapsulate some of the standard practices, tools and methods for text and audio pre-processing. Last but not least, TRESTLE is an open-source package freely available on GitHub to the machine learning and all other communities to use and contribute to.
## Acknowledgement
This research was supported by grants from the National Institute on Aging (AG069792).
|
2301.08555 | Hybrid Open-set Segmentation with Synthetic Negative Data | Open-set segmentation can be conceived by complementing closed-set
classification with anomaly detection. Many of the existing dense anomaly
detectors operate through generative modelling of regular data or by
discriminating with respect to negative data. These two approaches optimize
different objectives and therefore exhibit different failure modes.
Consequently, we propose a novel anomaly score that fuses generative and
discriminative cues. Our score can be implemented by upgrading any closed-set
segmentation model with dense estimates of dataset posterior and unnormalized
data likelihood. The resulting dense hybrid open-set models require negative
training images that can be sampled from an auxiliary negative dataset, from a
jointly trained generative model, or from a mixture of both sources. We
evaluate our contributions on benchmarks for dense anomaly detection and
open-set segmentation. The experiments reveal strong open-set performance in
spite of negligible computational overhead. | Matej Grcić, Siniša Šegvić | 2023-01-19T11:02:44Z | http://arxiv.org/abs/2301.08555v3 | # Hybrid Open-set Segmentation
###### Abstract
Open-set segmentation is often conceived by complementing closed-set classification with anomaly detection. Existing dense anomaly detectors operate either through generative modelling of regular training data or by discriminating with respect to negative training data. These two approaches optimize different objectives and therefore exhibit different failure modes. Consequently, we propose the first dense hybrid anomaly score that fuses generative and discriminative cues. The proposed score can be efficiently implemented by updating any semantic segmentation model with translation-equivariant estimates of data likelihood and dataset posterior. Our design is a remarkably good fit for efficient inference on large images due to negligible computational overhead over the closed-set baseline. The resulting dense hybrid open-set models require negative training images that can be sampled either from an auxiliary negative dataset or from a jointly trained generative model. We evaluate our contributions on benchmarks for dense anomaly detection and open-set segmentation of traffic scenes. The experiments reveal strong open-set performance in spite of negligible computational overhead.
Open-set segmentation, Open-set recognition, Out-of-distribution detection, Anomaly detection, Semantic segmentation, Synthetic data, Normalizing flows
## 1 Introduction
High accuracy, fast inference and small memory footprint of modern neural networks [1, 2] steadily expand the horizon of downstream applications. Many exciting applications require advanced image understanding functionality provided by semantic segmentation [3, 4, 5]. These models associate each pixel with a class from a predefined taxonomy [6]. They can accurately segment two megapixel images in real-time on low-power embedded hardware [7, 8, 9]. However, standard training procedures assume a closed-world setup, which may raise serious safety issues in real-world deployments [10, 11]. For example, if a segmentation model misclassifies an unknown object (e.g. lost cargo) as road, the autonomous car may experience a serious accident [12]. Such hazards can be alleviated by complementing semantic segmentation with dense anomaly detection [13, 14]. The resulting open-set segmentation models [15] are better suited for real applications due to their ability to decline decisions in unknown scene parts.
Previous approaches for open-set segmentation assume either a generative or a discriminative perspective. Generative approaches are based on density estimation [16] or image resynthesis [17, 18, 19]. Discriminative approaches use classification confidence [20], dataset posterior [15] or Bayesian inference [21]. However, the two perspectives exhibit different failure modes. Generative anomaly detectors inaccurately disperse the probability volume [22, 23, 24, 25] or face the hazards of image resynthesis [17, 18]. On the other hand, discriminative anomaly detectors require training on negative content from some general-purpose auxiliary dataset [15, 18, 26]. Such training may involve an overlap between training negatives and test set anomalies. Hence, the evaluation may lead to over-optimistic performance estimates and surprising failures in production.
In this work, we combine the two perspectives by designing a hybrid anomaly detector. The proposed approach complements a chosen closed-set semantic segmentation model with an unnormalized dense data likelihood \(\hat{p}(\mathbf{x})\) and a dense dataset posterior \(P(d_{\mathrm{in}}|\mathbf{x})\). Fusion of these two outputs yields an effective yet efficient dense anomaly detector which we refer to as DenseHybrid. Both components of our anomaly detector require training with negative data [15, 18, 26, 27, 28]. We present a way to relieve that dependence by leveraging synthetic negative data sourced from a generative model [28, 29, 30, 31]. Consequently, our experiments evaluate performance with and without real negative training data.
This paper extends our preliminary conference report [32] by allowing our dense hybrid models to train without real negative data. We achieve that by generating synthetic negative samples with a jointly trained normalizing flow. Different from previous work [31], the normalizing flow does not receive the gradients from the training objectives for anomaly detection. Such design ensures correct convergence of the normalizing flow in view of a complex formulation of the anomaly score, and provides a stronger learning signal to the dataset posterior head. Our new experiments explore open-set performance without training on real negative data and compare our unnormalized density estimator with respect to a general-purpose generative module. Finally, we substantially revise our presentation by supplying a more comprehensive review of the related work and improved descriptions of our method.
Our consolidated work brings forth the following contributions. First, we propose the first hybrid anomaly detector that allows end-to-end training, translational equivariance, and pixel-level predictions. The proposed DenseHybrid
method combines unnormalized density and discriminative dataset posterior. Both of these two components involve minimal computational overhead and require training on negative data. Second, we extend our approach by allowing it to learn only on inlier images. This configuration leverages synthetic negative data that correspond to generated samples at the boundary of the inlier distribution. Third, we propose open-mIoU as a novel performance metric for open-set segmentation in safety-critical applications. The main strength of the novel metric is exact quantification of the gap between closed-set and open-set setups. Fourth, our DenseHybrid anomaly detector can be easily attached to any closed-set segmentation approach. The resulting open-set segmentation algorithm delivers very competitive performance on standard benchmarks in road-driving scenes with and without training on real negative data.
## 2 Related Work
The related work considers anomaly detection (Sec. 2.1 and Sec. 2.2), open-set recognition (Sec. 2.3), training open-set recognition on synthetic data (Sec. 2.4), as well as progression towards open-world recognition (Sec. 2.5).
### _Image-wide Anomaly Detection_
Detecting samples which deviate from the generative process of the training data is a decades-old problem [33]. In the machine learning community, this task is also known as anomaly detection, novelty detection and out-of-distribution (OOD) detection [13, 34]. Early image-wide approaches utilize max-softmax probability [34], input perturbations [35], ensembling [36] or Bayesian uncertainty [21]. More encouraging performance has been attained through discriminative training against real negative data [15, 27, 37, 38], adversarial attacks [39] or samples from appropriate generative models [28, 29, 31, 40]. Another line of work detects anomalies by estimating the likelihood. Surprisingly, this research reveals that anomalies may give rise to higher likelihood than inliers [22, 23, 25]. Generative models can mitigate this problem by sharing features with the primary discriminative model [41] and training on negative data [27].
### _Pixel-wise Anomaly Detection_
Image-wide anomaly detection can be adapted for dense prediction with variable success. Some of the existing image-wide approaches [41] are not applicable in dense prediction context, while others do not perform well [35] or involve excessive computational complexity [35, 36]. On the other hand, concepts such as discriminative training with negative data [27, 37, 42] are easily ported to dense prediction. Hence, several dense anomaly detectors are trained on mixed-content images obtained by pasting negatives (e.g. ImageNet, COCO, ADE20k) over regular training images [15, 18, 26]. Dataset posterior can be recovered by a dedicated head that shares features with the standard semantic segmentation head [15].
Anomalies can also be recognized in feature space [16]. However, this approach complicates detection of small objects due to subsampling and feature collapse [24]. Orthogonally, an anomaly detector can be implemented according to learned dissimilarity between the input and the resynthesized image [17, 18, 19, 43]. The resynthesis is performed by a generative model conditioned on the predicted labels. However, this approach is suitable only for uniform backgrounds such as roads [17], and for offline applications due to significant computational overhead. Besides dense anomaly detection in road driving scenes, some approaches consider applications in industrial facilities [44]. However, these setups are less relevant for our open-set algorithms since they do not involve the primary discriminative task.
Different than all previous work, we propose the first hybrid anomaly detector for dense prediction models. In comparison with previous approaches that build on dataset posterior [15, 27, 37], our method introduces synergy with likelihood evaluation. In comparison with approaches that recover dense likelihood [10], our method introduces joint hybrid training and efficient joint inference together with standard semantic segmentation. Our method is also related to joint energy-based models [45], since we also reinterpret logits as unnormalized joint likelihood. However, their method has to backprop through the intractable normalization constant and is therefore unsuitable for large resolutions and dense prediction. Our method completely avoids sampling by recovering unnormalized likelihood and training on negative data. Concurrent approaches [46, 47] consider only the generative component of our hybrid anomaly detector.
Fig. 1: Qualitative performance of the proposed DenseHybrid approach on standard datasets. Top: input images. Bottom: dense maps of the proposed anomaly score. Unknown pixels are assigned with higher anomaly scores designated in yellow. Such a highly accurate anomaly detector enables us to derive the open-set segmentation model.
### _Open-set recognition_
Open-set recognition assumes presence of test examples that transcend the training taxonomy. Such examples are also known as semantic anomalies [13]. During inference, the model has to recognize semantic anomalies and withhold (or reject) the decision [48]. The rejection mechanism can be implemented by restricting the shape of the decision boundary [49, 50]. This can be carried out by thresholding the distance from learned class centers in the embedding space [49, 51]. Recognition performance can be further improved through employing a stronger classifier [52, 53]. Alternatively, the rejection mechanism can emerge by complementing the classifier with an anomaly detector [14, 34, 35]. The anomaly detector then detects samples which do not belong to the known classes. We direct the reader to [54] for a comprehensive overview of open-set approaches.
Most open-set approaches quantify performance by separate evaluation of closed-set recognition and anomaly detection [55, 56, 10, 34]. However, such practice does not reveal degradation of discriminative predictions due to errors in anomaly detection [57, 58]. This is especially pertinent to dense prediction models where we can observe inlier and outlier pixels in the same image. Recent work proposes a solution for the related problem of semantic segmentation in adverse conditions [59]. Their uncertainty-aware UIoU metric takes into account prediction confidence as measured by the probability of the winning class. However, UIoU assumes that each pixel belongs to one of the K known classes, which makes it inapplicable for open-set recognition. Different than all previous work, our anomaly-aware open-IoU metric specializes for evaluation of open-set segmentation in presence of outliers. It takes into account both false positive semantic predictions at outliers as well as false negative semantic predictions due to false positive anomaly detection. Furthermore, the difference between mIoU and open-mIoU reveals the performance gap due to presence of outliers in the test set.
### _Synthetic data in open-set recognition_
Recent seminal approaches train open-set recognition models on synthetic negative data produced by a jointly trained generative adversarial network (GAN) [28, 29]. The GAN is trained to generate inlier data that give rise to low recognition scores for each known class [28]. However, GANs are biased towards limited distribution coverage [24]. Consequently, they are unlikely to span the whole space of possible outliers. Thus, more promising results were achieved by mixing real and synthetic negative samples [40].
Alternatively, GANs can be replaced with generative models that optimize likelihood in order to improve distributional coverage [24]. This task calls for efficient approaches that support fast sampling since joint training requires sample generation on the fly. This puts many interesting generative models, such as autoregressive PixelCNN and energy-based models, at a disadvantage. Normalizing flows are a great candidate for this role due to fast training and the capability to quickly generate samples at different resolutions [31]. Instead of targeting negative data, a generative model can also target negative features [40]. This can be carried out by modelling inlier features and sampling synthetic anomalies from low-likelihood regions of feature space [60]. Negative data have also been crafted by leveraging adversarial perturbations [39].
### _Beyond open-set recognition_
Anomalous images or pixels can be clustered into new semantic classes. This can be done in an incremental [61, 62] or zero/one/few-shot [63] setting. However, these approaches are still unable to compete with supervised learning on standard datasets. We direct the reader to [64] for a more detailed analysis of the pros and cons of low-shot learning.
## 3 Hybrid Score for Anomaly Detection
We propose a dense hybrid anomaly score that improves upon discriminative and generative anomaly detection (Sec. 3.1). The new hybrid anomaly score can be efficiently fused with a semantic classifier (Sec. 3.2).
We represent the input images with a random variable \(\underline{\mathbf{x}}\). Variable \(\underline{y}^{ij}\) denotes the corresponding label at the location \((i,j)\), while binary random variable \(\underline{d}^{ij}\) models whether a given pixel belongs to the inliers or outliers. We write \(d^{ij}_{\mathrm{in}}\) for inliers and \(d^{ij}_{\mathrm{out}}\) for outliers. We denote a realization of a random variable without the underline. Thus, \(P(y^{ij}|\mathbf{x})\) is a shortcut for \(P(\underline{y}^{ij}=y^{ij}|\underline{\mathbf{x}}=\mathbf{x})\). For brevity, we often omit spatial locations.
### _Hybrid Anomaly Detection for Dense Prediction_
Generative and discriminative approaches to anomaly detection exhibit different failure modes. Fig. 2 illustrates the shortcomings of both approaches on a toy problem. Blue dots designate inlier data. Green triangles designate the negative data used for training. Red squares denote anomalous test data. Discriminative detectors model dataset posterior \(P(d_{\mathrm{out}}|\mathbf{x})\). They fail if the negative training data does not cover the entire negative manifold (left) [27]. On the other hand, generative detectors which model \(p(\mathbf{x})\) tend to inaccurately distribute probability volume over the sample space [22, 23, 24] (center). We fuse discriminative and generative approaches into a hybrid detector that alleviates the aforementioned limitations (right).
We build our hybrid anomaly detector upon the discriminative dataset posterior \(P(d_{\mathrm{in}}|\mathbf{x})\) and the generative data likelihood \(p(\mathbf{x})\). We express a novel hybrid anomaly score as log-ratio between \(P(d_{\mathrm{out}}|\mathbf{x})=1-P(d_{\mathrm{in}}|\mathbf{x})\) and \(p(\mathbf{x})\):
\[s(\mathbf{x}):=\ln\frac{P(d_{\mathrm{out}}|\mathbf{x})}{p(\mathbf{x})}=\ln P(d _{\mathrm{out}}|\mathbf{x})-\ln p(\mathbf{x}). \tag{1}\]
We will further show that this formulation is especially suitable for dense predictions atop the dense classifier. There may be other effective formulations of \(s(\mathbf{x})\), which is an interesting direction for future work.
### _Efficient Implementation Atop Semantic Classifier_
Standard semantic classification can be viewed as a two-step procedure. Given an input image \(\mathbf{x}\), a deep feature extractor \(f_{\theta_{1}}\) computes an abstract representation \(\mathbf{z}\) also known as pre-logits. The computed pre-logits are projected into logits \(\mathbf{s}\), and activated by softmax. The softmax output is defined as class posterior probability \(P(y|\mathbf{x})\):
\[P(y|\mathbf{x}):=\mathrm{softmax}(\mathbf{s})_{y},\ \mathrm{where}\ \mathbf{s}=f_{\theta_{2}}( \mathbf{z}),\mathbf{z}=f_{\theta_{1}}(\mathbf{x}). \tag{2}\]
In practice, \(f_{\theta_{1}}\) is an encoder-decoder architecture common for semantic segmentation and \(f_{\theta_{2}}\) is a simple projection by means of 1x1 convolution. We extend this framework with dense data likelihood and discriminative dataset posterior.
Dense data likelihood can be conveniently derived atop dense classifier by re-interpreting logits as unnormalized joint probability of input and label [45]:
\[p(y,\mathbf{x})=\frac{1}{Z}\hat{p}(y,\mathbf{x}):=\frac{1}{Z}\exp\mathbf{s}_{ y},\ \mathrm{where}\ \ \mathbf{s}=f_{\theta_{2}}(\mathbf{z}). \tag{3}\]
\(Z\) denotes the corresponding normalization constant dependent only on model parameters. As usual, \(Z\) is finite but intractable, since it requires computing the unnormalized distribution for all realizations of \(\underline{y}\) and \(\underline{\mathbf{x}}\): \(Z=\sum_{\mathbf{x}}\sum_{y}\exp\mathbf{s}_{y}\). Throughout this work, we conveniently eschew the evaluation of \(Z\) in order to enable efficient training and inference.
We express the dense likelihood \(p(\mathbf{x})\) by marginalizing out \(\underline{y}\):
\[p(\mathbf{x})=\sum_{y}p(y,\mathbf{x})=\frac{1}{Z}\,\sum_{y}\hat{p}(y,\mathbf{ x})=\frac{1}{Z}\,\sum_{y}\exp\mathbf{s}_{y}. \tag{4}\]
Standard discriminative predictions are easily recovered through Bayes rule \(p(y,\mathbf{x})/p(\mathbf{x})\):
\[P(y|\mathbf{x})=\frac{p(y,\mathbf{x})}{\sum_{y^{\prime}}p(y^{\prime},\mathbf{ x})}=\frac{\exp\mathbf{s}_{y}}{\sum_{y^{\prime}}\exp\mathbf{s}_{y^{\prime}}}= \mathrm{softmax}(\mathbf{s})_{y}. \tag{5}\]
The normalization constant \(Z\) appears both in the numerator and denominator, and hence can be cancelled out. Reinterpretation of logits as unnormalized joint probability enables likelihood estimation atop a discriminative classifier task and even exploiting pretrained classifiers. Note that adding a constant value to the logits does not affect the standard classification but affects our framework since the value of \(p(\mathbf{x})\) changes. Hence, we use the extra degree of freedom in logits to express the data likelihood [45]. The same extra degree of freedom has been used to model a discriminator network in semi-supervised learning [65].
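The following short check makes the last remark concrete; it is a generic PyTorch sketch rather than code from the paper, and the particular logit values are arbitrary.

```
import torch

logits = torch.tensor([2.0, -1.0, 0.5])   # logits s for one pixel
shifted = logits + 3.0                     # add a constant to every logit

# The class posterior (5) is invariant to the shift...
print(torch.allclose(torch.softmax(logits, dim=0), torch.softmax(shifted, dim=0)))  # True

# ...but the unnormalized log-likelihood (4) is not: it moves by exactly the added constant.
print(torch.logsumexp(logits, dim=0).item())   # ~2.24
print(torch.logsumexp(shifted, dim=0).item())  # ~5.24
```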
We define the dataset posterior \(P(d_{\mathrm{in}}|\mathbf{x})\) as a non-linear transformation based on pre-logits \(\mathbf{z}\)[15]:
\[P(d_{\mathrm{in}}|\mathbf{x}):=\sigma(g_{\gamma}(\mathbf{z})). \tag{6}\]
In our case, the function \(g\) is BN-ReLU-Conv1x1 of pre-logits, followed by a sigmoid non-linearity.
We can now compute the proposed dense hybrid anomaly score (1) atop the classifier as:
\[s(\mathbf{x}) :=\ln P(d_{\mathrm{out}}|\mathbf{x})-\ln\hat{p}(\mathbf{x})+\ln Z \tag{7}\] \[\cong\ln P(d_{\mathrm{out}}|\mathbf{x})-\ln\hat{p}(\mathbf{x}). \tag{8}\]
We can neglect \(Z\) since ranking performance [34] is invariant to monotonic transformations such as taking a logarithm or adding a constant. Note that the logarithmic function re-scales the unnormalized \(\hat{p}(\mathbf{x})\) and \(P(d_{\mathrm{out}}|\mathbf{x})\) on approximately the same scale, equalizing the influence of both components in the final decision. The resulting formulation (7) is especially well suited for dense prediction due to minimal overhead and translation equivariance.
Figure 3 illustrates dense inference with the proposed hybrid open-set setup. RGB input is fed to a hybrid dense model which produces pre-logit activations \(\mathbf{z}\) and logits \(\mathbf{s}\). We activate the closed-set class posterior \(P(y|\mathbf{x})\) with softmax and the unnormalized data log-likelihood \(\ln\hat{p}(\mathbf{x})\) via log-sum-exp operator (designated in green). A distinct head \(g\) transforms pre-logits \(\mathbf{z}\) into the dataset posterior \(P(d_{\mathrm{out}}|\mathbf{x})\) (designated in yellow). The anomaly score \(s(\mathbf{x})\) is a log ratio between the latter two outputs. The resulting anomaly map is thresholded and fused with the discriminative output into the final dense open-set recognition map.
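A minimal PyTorch sketch of this inference path is shown below. It assumes a generic closed-set backbone that already produces per-pixel pre-logits and logits, so the module, tensor shapes, and channel counts are placeholders rather than the authors' released implementation.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseHybridHead(nn.Module):
    """Dataset-posterior head g (BN-ReLU-Conv1x1, Eq. (6)) plus the hybrid score of Eq. (8)."""
    def __init__(self, num_features):
        super().__init__()
        self.g = nn.Sequential(nn.BatchNorm2d(num_features), nn.ReLU(inplace=True),
                               nn.Conv2d(num_features, 1, kernel_size=1))

    def forward(self, prelogits, logits):
        class_posterior = torch.softmax(logits, dim=1)   # P(y|x), Eq. (5)
        log_p_hat = torch.logsumexp(logits, dim=1)       # ln p_hat(x), Eq. (4)
        logit_din = self.g(prelogits).squeeze(1)         # pre-sigmoid P(d_in|x), Eq. (6)
        log_p_dout = F.logsigmoid(-logit_din)            # ln P(d_out|x), numerically stable
        anomaly_score = log_p_dout - log_p_hat           # hybrid score s(x), Eq. (8)
        return class_posterior, anomaly_score

# Placeholder shapes: 256 pre-logit channels, 19 classes, a 64x128 crop.
head = DenseHybridHead(num_features=256)
prelogits = torch.randn(1, 256, 64, 128)
logits = torch.randn(1, 19, 64, 128)
posterior, score = head(prelogits, logits)
print(posterior.shape, score.shape)  # torch.Size([1, 19, 64, 128]) torch.Size([1, 64, 128])
```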
## 4 Open-set Training with DenseHybrid
Our open-set approach complements an arbitrary closed-set segmentation model with the DenseHybrid anomaly detector. We propose a novel training setup that eschews the intractable normalization constant by introducing negative data to the generative learning objective (Sec. 4.1). The same negative data are used to train the dataset posterior. We relax dependence on real negatives by sampling a suitably trained normalizing flow (Sec. 4.2).
Fig. 2: Anomaly detection on a toy dataset. The discriminative approach (left) models the dataset posterior. It fails if the negative training dataset fails to cover all modes of the test anomalies. The generative approach (middle) models the data likelihood. It may assign high likelihoods to test anomalies [22] due to over-generalization [24]. The hybrid approach attains a synergy between discriminative and generative modelling.
### _Open-set training with real negative data_
The multi-task model from Fig. 3 requires joint fine-tuning of three dense prediction heads: i) closed-set class posterior \(P(y|\mathbf{x})\), ii) unnormalized data likelihood \(\hat{p}(\mathbf{x})\)[45], and iii) dataset posterior \(P(d_{\mathrm{in}}|\mathbf{x})\)[15]. The class-posterior head requires a discriminative loss over the inlier dataset \(D_{\mathrm{in}}\):
\[L_{\mathrm{cls}}(\theta) =\mathbb{E}_{\mathbf{x},y\in D_{\mathrm{in}}}[-\ln P(y|\mathbf{ x})] \tag{9}\] \[=-\mathbb{E}_{\mathbf{x},y\in D_{\mathrm{in}}}[\mathbf{s}_{y}- \ln\sum_{y^{\prime}}\exp\mathbf{s}_{y^{\prime}}]. \tag{10}\]
Training unnormalized likelihood is a daunting task since backpropagation through \(p(\mathbf{x})\) involves intractable integration over all possible images [66, 67]. Previous solutions are based on MCMC sampling [45]; however, this is not feasible in our setup due to high-resolution inputs and dense prediction. We eschew the normalization constant by optimizing the likelihood both in inlier and outlier pixels:
\[L_{\mathbf{x}}(\theta) =\mathbb{E}_{\mathbf{x}\in D_{\mathrm{in}}}[-\ln p(\mathbf{x})]-\mathbb{E}_{\mathbf{x}\in D_{\mathrm{out}}}[-\ln p(\mathbf{x})] \tag{11}\] \[=-\mathbb{E}_{\mathbf{x}\in D_{\mathrm{in}}}[\ln\hat{p}(\mathbf{x})]+\ln Z+\mathbb{E}_{\mathbf{x}\in D_{\mathrm{out}}}[\ln\hat{p}(\mathbf{x})]-\ln Z\] (12) \[=-\,\mathbb{E}_{D_{\mathrm{in}}}\left[\ln\sum_{i}\exp(\mathbf{s}_{i})\right]+\,\mathbb{E}_{D_{\mathrm{out}}}\left[\ln\sum_{i}\exp(\mathbf{s}_{i})\right] \tag{13}\]
As before, \(\mathbf{s}\) stands for logits computed by \(f_{\theta}\). Note that the normalization constant \(Z\) cancels out due to training on outliers. In practice, we use a simplified loss that corresponds to an upper bound of the above expression (\(L_{\mathbf{x}}^{\mathrm{UB}}\geq L_{\mathbf{x}}\)):
\[L_{\mathbf{x}}^{\mathrm{UB}}(\theta)=-\,\mathbb{E}_{\mathbf{x},y\in D_{ \mathrm{in}}}[\mathbf{s}_{y}]+\,\mathbb{E}_{\mathbf{x}\in D_{\mathrm{out}}}[ \ln\sum_{i}\exp(\mathbf{s}_{i})]. \tag{14}\]
Proof can be easily derived by recalling that log-sum-exp is a smooth upper bound of the max function. Thus, our upper bound \(L_{\mathbf{x}}^{\mathrm{UB}}\) leverages the following inequalities:
\[\ln\sum_{i}\exp\mathbf{s}_{i}\geq\max_{i}\mathbf{s}_{i}\geq\mathbf{s}_{y}. \tag{15}\]
Comparison of the discriminative loss (9) and the generative upper bound (14) reveals that the standard classification loss is well aligned with the upper bound in inlier pixels. Recall that training data likelihood only on inliers [45, 66] would require MCMC sampling, which is infeasible in our context. Unnormalized likelihood could also be trained through score matching [67]. However, this would preclude hybrid modelling due to having to train on noisy inputs. Consequently, it appears that the proposed training approach is a method of choice in our context.
The dataset-posterior head \(P(d_{\mathrm{in}}|\mathbf{x})\) requires a discriminative loss that distinguishes the inlier dataset \(D_{\mathrm{in}}\) from the outlier dataset \(D_{\mathrm{out}}\)[15]:
\[L_{\mathbf{d}}(\theta,\gamma)=-\mathbb{E}_{\mathbf{x}\in D_{ \mathrm{in}}}[ \ln P(d_{\mathrm{in}}|\mathbf{x})]\\ -\mathbb{E}_{\mathbf{x}\in D_{\mathrm{out}}}[\ln(1-P(d_{\mathrm{ in}}|\mathbf{x}))]. \tag{16}\]
Our final compound loss aggregates \(L_{\mathrm{cls}}\), \(L_{\mathbf{x}}^{\mathrm{UB}}\) and \(L_{\mathbf{d}}\):
\[L(\theta,\gamma) =-\mathbb{E}_{\mathbf{x},y\in D_{\mathrm{in}}}[\ln P(y|\mathbf{ x})+\ln P(d_{\mathrm{in}}|\mathbf{x})]\\ -\beta\cdot\mathbb{E}_{\mathbf{x}\in D_{\mathrm{out}}}[\ln(1-P(d_{ \mathrm{in}}|\mathbf{x}))-\ln\hat{p}(\mathbf{x})]. \tag{17}\]
Hyperparameter \(\beta\) controls the impact of negative data on the primary classification task. Note that we omit the first term from \(L_{\mathbf{x}}^{\mathrm{UB}}\) (14) in inlier pixels since it is implicitly enforced through optimization of \(L_{\mathrm{cls}}\) (9).
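For concreteness, the compound loss (17) can be sketched as follows; the tensor layout, the void-label handling, and the choice of \(\beta\) are illustrative assumptions rather than the reference implementation.

```python
import torch
import torch.nn.functional as F

def compound_loss(logits, d_in_logit, labels, outlier_mask, beta):
    """Sketch of the compound loss (17) on a mixed-content image batch.

    logits:       (B, K, H, W) closed-set logits s (softmax yields P(y|x))
    d_in_logit:   (B, H, W)    dataset-posterior logit (sigmoid yields P(d_in|x))
    labels:       (B, H, W)    semantic labels, assumed valid at inlier pixels
    outlier_mask: (B, H, W)    1 at pasted negative pixels
    beta:         weight of the negative-data terms
    """
    outlier = outlier_mask.bool()
    inlier = ~outlier
    labels_safe = labels.clone()
    labels_safe[outlier] = 0  # dummy label; its contribution is masked out
    # inlier terms: -ln P(y|x) and -ln P(d_in|x)
    ce = F.cross_entropy(logits, labels_safe, reduction="none")
    bce_in = F.binary_cross_entropy_with_logits(
        d_in_logit, torch.ones_like(d_in_logit), reduction="none")
    loss_in = (ce + bce_in)[inlier].mean()
    # outlier terms: -ln(1 - P(d_in|x)) and +ln p_hat(x)
    bce_out = F.binary_cross_entropy_with_logits(
        d_in_logit, torch.zeros_like(d_in_logit), reduction="none")
    lse = torch.logsumexp(logits, dim=1)
    loss_out = (bce_out + lse)[outlier].mean()
    return loss_in + beta * loss_out
```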
Figure 4 illustrates the described procedure for training open-set segmentation models. We prepare mixed-content training images \(x^{\prime}\) by pasting negative patches \(\mathbf{x}^{-}\) into regular training images \(\mathbf{x}^{+}\):
\[\mathbf{x}^{\prime}=(\mathbf{1}-\mathbf{m})\cdot\mathbf{x}^{+}+\mathrm{pad}(\mathbf{x}^{-},\mathbf{m}),\quad\mathbf{x}^{-}\in D_{\mathrm{out}}. \tag{18}\]
Note that here we leverage real negative images \(x^{-}\in D_{\mathrm{out}}\). We consider synthetic negatives in the subsequent subsection. The binary mask \(\mathbf{m}\) identifies negative pixels within the mixed-content image \(\mathbf{x}^{\prime}\). Negative pixels are labelled as \(d_{\mathrm{out}}\) while positive pixels are labelled as \(d_{\mathrm{in}}\). Semantic labels of negative pixels are set to void.
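A minimal sketch of the pasting operation (18) follows; the random placement of the patch and the tensor shapes are our own assumptions.

```python
import torch

def paste_negative(x_pos, x_neg):
    """Paste a negative patch into an inlier image and build the mask m (Eq. 18).

    x_pos: (C, H, W) inlier training image x^+
    x_neg: (C, h, w) negative patch x^-, with h <= H and w <= W
    Returns the mixed-content image x' and the binary mask m (1 = negative pixel).
    """
    _, H, W = x_pos.shape
    _, h, w = x_neg.shape
    top = int(torch.randint(0, H - h + 1, (1,)))
    left = int(torch.randint(0, W - w + 1, (1,)))
    mask = torch.zeros(1, H, W, dtype=x_pos.dtype)
    mask[:, top:top + h, left:left + w] = 1.0
    padded_neg = torch.zeros_like(x_pos)            # pad(x^-, m): zero outside the patch
    padded_neg[:, top:top + h, left:left + w] = x_neg
    x_mixed = (1.0 - mask) * x_pos + padded_neg
    return x_mixed, mask.squeeze(0)
```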
The resulting mixed-content image \(\mathbf{x}^{\prime}\) is fed to the desired semantic segmentation model that produces pre-logits \(\mathbf{z}\) and logits \(\mathbf{s}\). We recover the class posterior \(P(y|\mathbf{x})\) by activating logits with softmax. We recover the unnormalized log-likelihood \(\ln\hat{p}(\mathbf{x})\) by processing logits with log-sum-exp. We recover the dataset posterior \(P(d_{\mathrm{in}}|\mathbf{x})\) by processing pre-logit activations with the standard BN-ReLU-Conv\(1\times 1\) unit. The compound training loss \(L(\theta,\gamma)\) (17) aggregates class-discriminative loss \(L_{\mathrm{cls}}\) (9), generative loss \(L_{\mathbf{x}}^{\mathrm{UB}}\) (14) and dataset-discriminative loss \(L_{\mathbf{d}}\) (16).
### _Open-set training with synthetic negative data_
Anomaly detectors can avoid biased predictions by replacing real negative training data with samples of a suitable generative model [28, 30, 31, 68]. The generative model has to be trained to generate synthetic samples that encompass the border of the inlier distribution [28]. The required learning signal can be derived from discriminative predictions [28, 30, 31] or provided by an adversarial module [39]. Hence, replacing real negative data with synthetic counterparts requires joint training of the generative model. We choose a normalizing flow [69] for this task due to its exceptional distributional coverage and its ability to quickly generate samples of varying spatial dimensions [70]. We train the normalizing flow \(p_{\zeta}\) according to a weighted sum of two loss terms: \(L_{\mathrm{mle}}\) and \(L_{\mathrm{jsd}}\).
The data term \(L_{\mathrm{mle}}\) corresponds to negative log-likelihood of random crops from inlier images \(\mathbf{x}^{+}\):
\[L_{\mathrm{mle}}(\zeta)=-\mathbb{E}_{\mathbf{x}^{+}\in D_{\mathrm{in}}}[\ln p_{ \zeta}(\text{crop}(\mathbf{x}^{+}))]. \tag{19}\]
Fig. 3: The proposed open-set segmentation approach. Our anomaly score is the log-ratio of dense data likelihood and discriminative dataset posterior. Both outputs are derived from the standard dense classifier. We formulate open-set segmentation by complementing the closed-set segmentation map with the thresholded anomaly score.
The crop notation mirrors the pad notation from (18). Random crops vary in spatial resolution. This term aligns the generative distribution with the distribution of the training data. It encourages coverage of the entire inlier distribution under the condition that the generative model has sufficient capacity.
The boundary-attraction term \(L_{\mathrm{jsd}}\)[70] corresponds to negative Jensen-Shannon divergence between the class-posterior and the uniform distribution at all generated pixels. This term pushes the generative distribution towards the periphery of the inlier distribution where the class posterior should be unclear. Note that gradients of this term must propagate through the entire semantic model in order to reach the normalizing flow. Hence, the flow is penalized when the generated sample yields high softmax confidence. This signal pushes the generative distribution away from high-density regions of the input space [28].
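The following sketch computes the boundary term at generated pixels as a Jensen-Shannon divergence between the class posterior and the uniform distribution; treating it as a penalty on confident posteriors is our reading of the description above, and the exact sign and weighting conventions follow [70].

```python
import torch
import torch.nn.functional as F

def boundary_jsd(logits_at_generated):
    """Jensen-Shannon divergence between the per-pixel class posterior and the
    uniform distribution, averaged over pixels generated by the flow.

    logits_at_generated: (N, K) class logits at synthetic negative pixels.
    Small values mean the posterior is close to uniform, i.e. the samples lie
    where the classifier is uncertain.
    """
    p = F.softmax(logits_at_generated, dim=1)
    u = torch.full_like(p, 1.0 / p.shape[1])
    m = 0.5 * (p + u)
    kl_pm = (p * (p.clamp_min(1e-12).log() - m.log())).sum(dim=1)
    kl_um = (u * (u.log() - m.log())).sum(dim=1)
    return (0.5 * (kl_pm + kl_um)).mean()
```

Because the logits are computed on pixels produced by the flow, gradients of this term propagate through the segmentation model back to the flow parameters, as noted above.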
The total loss of the normalizing flow modulates the contribution of the boundary term with the hyperparameter \(\lambda\):
\[L(\zeta)=L_{\mathrm{mle}}(\zeta)+\lambda\cdot L_{\mathrm{jsd}}(\zeta;\theta) \tag{20}\]
Optimization of this loss forces the generative distribution to encompass all modes of the inlier distribution. Note that our normalizing flow can never match the diversity of images from a real dataset such as COCO or ADE20k. It would be unreasonable to expect a generative model to draw a sofa after training on Cityscapes. Still, if the flow succeeds in learning the boundary of the inlier distribution well, then DenseHybrid will be inclined to recognize all off-distribution samples as anomalies [28].
Details of the training procedure are again illustrated in Figure 4. We sample the normalizing flow by i) selecting a random spatial resolution (H,W) from a predefined interval, ii) sampling a random latent representation \(\mathbf{z}\sim\mathcal{N}(0,\mathrm{I}_{HW})\), and iii) feeding \(\mathbf{z}\) to the flow so that \(\mathbf{x}^{-}=h_{c}^{-1}(\mathbf{z})\). We again craft a mixed-content image \(\mathbf{x}^{\prime}\) by pasting the synthesized negative patch \(\mathbf{x}^{-}\) into the regular training image \(\mathbf{x}^{+}\) according to (18). We perform the forward pass, determine \(L_{\mathrm{cls}}\), \(L_{\mathbf{d}}\), \(L_{\mathbf{x}}\), and recover the training gradients by backpropagation. Of course, gradient of \(L_{\mathrm{jsd}}\) is propagated all the way to the normalizing flow. We now take the deleted inlier patch \(\mathbf{x}^{+}_{s}\), perform inference with the normalizing flow (\(\mathbf{z}=h_{c}(\mathbf{x}^{+}_{s})\)) and accumulate gradients of \(L_{\mathrm{mle}}\) before performing a model-wide parameter update.
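The sampling step can be sketched as below; the `flow.inverse` interface and the latent layout are hypothetical placeholders for the DenseFlow model rather than its actual API.

```python
import random
import torch

def sample_negative_patch(flow, min_side=32, max_side=128, channels=3):
    """Sketch of drawing a synthetic negative patch at a random resolution.

    `flow` is assumed to expose an `inverse(z)` method that maps a latent tensor
    of shape (1, C, H, W) to image space; this interface is a placeholder.
    """
    h = random.randint(min_side, max_side)
    w = random.randint(min_side, max_side)
    z = torch.randn(1, channels, h, w)   # z ~ N(0, I)
    x_neg = flow.inverse(z)              # x^-; kept attached to the graph so that
    return x_neg.squeeze(0)              # L_jsd gradients can reach the flow
```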
## 5 Experimental setup
We describe benchmarks and datasets used for the evaluation of DenseHybrid in dense anomaly detection and open-set segmentation experiments (Sec. 5.1). We propose a new metric to adequately quantify the gap between the open-set and closed-set performance (Sec. 5.2). Also, we present the main implementation details of our solution (Sec. 5.3).
### _Benchmarks and Datasets_
We evaluate performance on standard benchmarks for dense anomaly detection and open-set segmentation. Fishyscapes [10] considers urban scenarios on a subset of LostAndFound [12] and on Cityscapes validation images with pasted anomalies (FS Static). SegmentMeIfYouCan (SMIYC) [56] collects carefully selected images from the real world and groups them with respect to the anomaly size into AnomalyTrack (large) and ObstacleTrack (small). Moreover, the benchmark includes a selection of images of the LostAndFound dataset [12] in which the lost objects do not correspond to the Cityscapes taxonomy. Unfortunately, both benchmarks supply only binary labels, which makes them inappropriate for evaluating open-set performance. Hence, we report only anomaly detection performance on these benchmarks. We also validate performance on Cityscapes while reinterpreting a subset of ignore classes as the unknown class [40].
StreetHazards [71] is a synthetic dataset created with CARLA virtual environment. The simulated environment enables smooth anomaly injection and low-cost label extraction. Consequently, the dataset contains K+1 labels, making it suitable for measuring open-set recognition performance.
### _Measuring open-set performance_
Previous work evaluates open-set segmentation through anomaly detection [12, 56] and closed-set segmentation
Fig. 4: The two training procedures for the proposed open-set training with DenseHybrid. We construct mixed-content images by pasting negatives into inlier images. The negative training data can be sampled either from an auxiliary real dataset (Sec. 4.1) or from a jointly trained normalizing flow (Sec. 4.2). Mixed-content images are fed to the open-set model with three dense outputs: closed-set class posterior, unnormalized likelihood, and dataset posterior. Outputs are optimized according to the compound loss (17). In the case of synthetic negatives, the normalizing flow optimizes the loss (20).
[10]. The reported drop in closed-set performance is usually negligible and is explained by the allocation of model capacity for anomaly detection. However, we will show that the impact of anomalies on segmentation performance can be clearly characterized only in the open-set setup. More precisely, we shall take into account false positive semantic predictions at anomalies as well as false negative semantic predictions due to false anomaly detections.
We propose a novel evaluation procedure for open-set segmentation. Our procedure starts by thresholding the anomaly score so that it yields 95% TPR anomaly detection on held-out data. This is equivalent to the 5th percentile of inlier scores. Then, we override the classification in pixels which score higher than the threshold. This yields a recognition map with \(K+1\) labels. We assess open-set segmentation performance according to a novel metric that we term open-mIoU. We compute open-IoU for the \(k\)-th class as follows:
\[\text{open-IoU}_{k}=\frac{\text{TP}_{k}}{\text{TP}_{k}+\text{FP}_{k}^{\text{ os}}+\text{FN}_{k}^{\text{os}}},\quad\text{where} \tag{21}\]
\[\text{FP}_{k}^{\text{os}}=\sum_{i=1,i\neq k}^{K+1}\text{FP}_{k}^{i},\quad\text {FN}_{k}^{\text{os}}=\sum_{i=1,i\neq k}^{K+1}\text{FN}_{k}^{i}. \tag{22}\]
Different from the standard IoU formulation, open-IoU takes into account false positives and false negatives caused by applying imperfect anomaly detectors at open-set pixels. In particular, a prediction of class \(k\) at an outlier pixel (false negative anomaly detection) counts as a false positive for class \(k\). Furthermore, a prediction of class \(K+1\) at a pixel labelled as class \(k\) (false positive anomaly detection) counts as a false negative for class \(k\). Note that we still average open-IoU over the \(K\) inlier classes. Thus, a recognition model with perfect anomaly detection gets assigned the same performance as in the closed world. Note that this property would not be preserved if we averaged open-IoU over \(K+1\) classes. Hence, a comparison between closed-set mIoU and open-set open-mIoU quantifies the gap between open and closed-set performance. Some experiments report the \(F_{1}\) score averaged over \(K+1\) classes [40, 57]. However, m\(F_{1}\) cannot be used to quantify the performance gap.
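The evaluation can be sketched as follows; the array layout, the ignore label, and the assumption that larger scores indicate anomalies are ours.

```python
import numpy as np

def open_miou(sem_pred, anomaly_score, labels, num_classes, threshold, ignore=255):
    """Sketch of the open-mIoU evaluation (Eqs. 21-22).

    sem_pred:      (N,) closed-set predictions in [0, K-1]
    anomaly_score: (N,) dense anomaly scores (larger = more anomalous, assumed)
    labels:        (N,) ground truth in [0, K-1], K marks anomalies, `ignore` is skipped
    threshold:     anomaly-score threshold, e.g. chosen for 95% TPR on held-out data
    """
    K = num_classes
    pred = sem_pred.copy()
    pred[anomaly_score > threshold] = K                  # override with the (K+1)-th label
    valid = labels != ignore
    pred, gt = pred[valid], labels[valid]
    cm = np.bincount((K + 1) * gt + pred, minlength=(K + 1) ** 2).reshape(K + 1, K + 1)
    tp = np.diag(cm)[:K]
    fp = cm.sum(axis=0)[:K] - tp   # includes class-k predictions at anomalous pixels
    fn = cm.sum(axis=1)[:K] - tp   # includes anomaly predictions at class-k pixels
    open_iou = tp / np.maximum(tp + fp + fn, 1)
    return open_iou.mean()          # averaged over the K inlier classes
```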
Figure 5 compares the closed-set (left) and open-set (right) evaluation protocols. Imperfect anomaly detection impacts recognition performance through increased false positive semantics (designated in yellow) and false negative semantics (designated in red). The difference between closed-set mIoU and open-mIoU reveals the performance drop due to inaccurate anomaly detection.
Measuring performance according to open-mIoU requires datasets with K+1 labels. Collecting and annotating a dataset with such taxonomy requires substantial resources. Currently, only StreetHazards [71] offers this opportunity.
### _Implementation Details_
The proposed approach can be easily applied to any pre-trained semantic segmentation baseline: the only requirement is access to pre-logit features and dense logits. We append an additional branch \(g_{\gamma}\), which is in our case BN-ReLU-Conv\(1\times 1\), to compute the discriminative dataset posterior. We obtain the unnormalized likelihood as the sum of exponentiated logits. We fine-tune the resulting open-set models on mixed-content images with pasted negative ADE20k instances (Sec. 4.1) or synthetic negative patches (Sec. 4.2).
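As a sketch, the extra branch and the inference-time anomaly score can be written as follows; module and variable names are our own, and only the structure (BN-ReLU-Conv\(1\times 1\) on pre-logits, log-sum-exp on logits, log-ratio score) follows the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DatasetPosteriorHead(nn.Module):
    """BN-ReLU-Conv1x1 branch g_gamma attached to pre-logit features."""
    def __init__(self, in_channels):
        super().__init__()
        self.head = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, 1, kernel_size=1))

    def forward(self, pre_logits):                  # (B, C, H, W)
        return self.head(pre_logits).squeeze(1)     # per-pixel dataset-posterior logit

def hybrid_anomaly_score(logits, d_in_logit):
    """s(x) = ln(1 - P(d_in|x)) - ln p_hat(x); larger values flag anomalies."""
    log_p_out = F.logsigmoid(-d_in_logit)           # ln(1 - sigmoid(a)) = logsigmoid(-a)
    log_p_hat = torch.logsumexp(logits, dim=1)      # unnormalized data log-likelihood
    return log_p_out - log_p_hat
```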
In the case of SMIYC, we fine-tune LDN-121 [75] for 10 epochs on images from Cityscapes [76], Vistas [77] and Wilddash [55]. In the case of Fishyscapes, we use DeepLabV3+ with WideResNet38 [78]. We fine-tune the model for 10 epochs on Cityscapes. In the case of StreetHazards, we train LDN-121 for 120 epochs in the closed-world setting and then fine-tune the open-set model on mixed-content images.
Configurations that do not rely on real negative data leverage synthetic data of varying resolution as generated by DenseFlow-45-6 [69]. All such experiments pre-train DenseFlow with the standard MLE loss on \(64\times 64\) crops from road-driving images prior to joint learning. Our fine-tuning experiments last less than 24h on RTX A5000 GPU. Our source code is publicly available [79].
## 6 Experimental results
We evaluate DenseHybrid performance in dense anomaly detection (Sec. 6.1) and open-set segmentation (Sec. 6.2) experiments, after training with and without real negative data. We also ablate the design choices (Sec. 6.3), explore the influence of distance (Sec. 6.4), and present the computational requirements of the proposed module (Sec. 6.5).
### _Dense Anomaly Detection in Open-set Setups_
Table I presents dense anomaly detection performance on SMIYC [56] and Fishyscapes [16]. We include our model trained on real negative data (DenseHybrid) as well as our model trained on synthetic negatives (SynDenseHybrid). DenseHybrid outperforms contemporary approaches on both AnomalyTrack and ObstacleTrack by a wide margin. Also, it achieves the best \(\text{FPR}_{95}\) on LostAndFound-noKnown. Similarly, it delivers the best performance on Fishyscapes LostAndFound and the best \(\text{FPR}_{95}\) on Static.
SynDenseHybrid outperforms all previous methods that do not train on real negative data on ObstacleTrack and LostAndFound-noKnown. In the case of AnomalyTrack, it is outperformed only by image resynthesis [17], which requires
Fig. 5: We extend closed-set performance evaluation (left) with a novel open-set metric (right). Open-IoU takes into account false positive semantics at anomalies as well as false negative semantics due to false anomaly detections. The proposed open-mIoU metric quantifies recognition performance in presence of anomalies.
significant computational overhead. Also, SynDenseHybrid achieves the best performance on all but one metric of Fishyscapes and the second-best AP on Fishyscapes Static. As in the case of training on real negative data, the hybrid anomaly detector achieves the best performance on Fishyscapes with the exception of AP on Static. Note that the presented performance evaluation uses standard performance metrics of the particular datasets. Our performance metrics on Fishyscapes LostAndFound would increase if we considered only the road pixels as in [19]. The rightmost column of the table indicates that our fine-tuning protocol exerts a negligible impact on closed-set performance. However, the next section will show that the impact of anomaly detection on final recognition performance is more significant than what can be measured with closed-set metrics.
Figure 6 shows synthetic negatives produced by the training setup from Sec. 4.2. Samples vary in spatial resolution and lack meaningful visual concepts. Yet, training our open-set model on such samples yields only slightly worse performance than when training on real negative data.
Table II presents performance on Road Anomaly [17] and on validation subsets of Fishyscapes. The top section presents methods which do not train on real negative data. The bottom section presents methods which train on real negative data. Our method performs competitively with respect to the previous works in both setups.
We validate our method by considering a subset of Cityscapes void classes as the unknown class [40]. More precisely, we consider all void classes except 'unlabeled', 'ego vehicle', 'rectification border', 'out of roi' and 'license plate' as unknowns during validation. Table III compares performance according to the AUROC (AUC) metric. SynDenseHybrid outperforms all previous works. Most notably, it outperforms the previous state of the art [40] by three percentage points. To offer a fair comparison with previous work, we do not report results when training on real negative data since such data was not used in related work [40, 80].
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \multirow{2}{*}{Method} & \multicolumn{4}{c}{SegmentMelfYouCan [56]} & \multicolumn{4}{c}{Fishyscapes [10]} \\ & \(\begin{array}{c}\text{Aux}\\ \text{data}\end{array}\) & \(\begin{array}{c}\text{Img}\\ \text{rsrf}\end{array}\) & \(\begin{array}{c}\text{AnomalyTrack}\\ \text{AP}\end{array}\) & \(\begin{array}{c}\text{ObstacleTrack}\\ \text{AP}\end{array}\) & \(\begin{array}{c}\text{LASF-noKnown}\\ \text{PS Laf}\end{array}\) & \(\begin{array}{c}\text{FS Static}\\ \text{CS val}\end{array}\) \\ \hline Image Resyn. [17] & ✗ & ✓ & **52.3** & **25.9** & 37.7 & 47 & 57.1 & 8.8 & 5.7 & 48.1 & 29.6 & 27.1 & 81.4 \\ Road Inpaint. [72] & ✗ & ✓ & - & - & 54.1 & 47.1 & 82.9 & 35.8 & - & - & - & - & - \\ Max softmax [34] & ✗ & ✗ & 28.0 & 72.1 & 15.7 & 16.6 & 30.1 & 33.2 & 1.8 & 44.9 & 12.9 & 39.8 & 80.3 \\ MC Dropout [21] & ✗ & ✗ & 28.9 & 69.5 & 4.9 & 50.3 & 36.8 & 35.6 & - & - & - & - & - \\ ODIN [35] & ✗ & ✗ & 33.1 & 71.7 & 22.1 & 15.3 & 52.9 & 30.0 & - & - & - & - & - \\ SML [73] & ✗ & \multicolumn{1}{c}{✗} & - & - & - & - & - & - & 31.7 & 21.9 & 52.1 & 20.5 & - \\ Embed. Dens. [10] & ✗ & ✗ & 37.5 & 70.8 & 0.8 & 46.4 & 61.7 & 10.4 & 4.3 & 47.2 & 62.1 & 17.4 & 80.3 \\ JSRNet [19] & ✗ & ✗ & 33.6 & 43.9 & 28.1 & 28.9 & 74.2 & 6.6 & - & - & - & - & - \\ SynDenseHybrid (ours) & ✗ & ✗ & **51.5** & **33.2** & **64.0** & **0.6** & **78.8** & **1.1** & **51.8** & **11.5** & 54.7 & **15.5** & 79.9 \\ SynBoost [18] & ✓ & ✓ & 56.4 & 61.9 & 71.3 & 3.2 & 81.7 & 4.6 & 43.2 & 15.8 & 72.6 & 18.8 & 81.4 \\ Prior Entropy [74] & ✓ & ✗ & - & - & - & - & - & 34.3 & 47.4 & 31.3 & 84.6 & 70.5 \\ OOD Head [42] & ✓ & ✗ & - & - & - & - & - & 31.3 & 19.0 & **96.8** & **0.3** & 79.6 \\ Void Classifier [10] & ✓ & ✗ & 36.6 & 63.5 & 10.4 & 41.5 & 4.8 & 47.0 & 10.3 & 22.1 & 45.0 & 19.4 & 70.4 \\ Dirichlet prior [74] & ✓ & ✗ & - & - & - & - & - & 34.3 & 47.4 & 84.6 & 30.0 & 70.5 \\ DenseHybrid (ours) & ✓ & ✗ & **78.0** & **9.8** & **87.1** & **0.2** & 78.7 & **2.1** & **43.9** & **6.2** & 72.3 & **5.5** & **81.0** \\ \end{tabular}
\end{table} TABLE I: Anomaly detection performance on SegmentMeIfYouCan [56]. Aux data denotes training on real negatives, while Img rsyn. denotes reliance on image resynthesis.
\begin{table}
\begin{tabular}{l c c c c c} Model & \multicolumn{2}{c}{RA} & \(\begin{array}{c}\text{PS Laf}\\ \text{AP}\end{array}\) & \(\begin{array}{c}\text{FS Static}\\ \text{AP}\end{array}\) & \(\begin{array}{c}\text{AP}\end{array}\) & \(\begin{array}{c}\text{PS Laf}\\ \text{AP}\end{array}\) & \(\begin{array}{c}\text{FS Static}\\ \text{AP}\end{array}\) \\ \hline MSP [34] & 15.7 & 71.4 & 4.6 & 40.6 & 19.1 & 24.0 \\ ML [71] & 19.0 & 70.5 & 14.6 & 42.2 & 38.6 & 18.3 \\ SML [73] & 25.8 & 49.7 & 36.6 & 14.5 & 48.7 & 16.8 \\ SynDrCP [43] & 24.9 & 64.7 & 6.5 & 46.0 & 23.2 & 34.0 \\ Density [10] & - & - & 4.1 & 22.3 & - & - \\ SynDenseHybrid & **35.1** & **37.2** & **59.5** & **10.4** & **49.0** & **10.3** \\ SynBoost [18] & 38.2 & 64.8 & 60.6 & 31.0 & **66.4** & 25.6 \\ OOD head [15] & - & - & 45.7 & 24.0 & - & - \\ Energy [38] & 19.5 & 70.2 & 16.1 & 41.8 & 41.7 & 17.8 \\ DenseHybrid & **63.9** & **43.2** & **60.5** & **6.0** & 63.1 & **4.2** \\ \end{tabular}
\end{table} TABLE II: Performance of DenseHybrid on Road Anomaly and Fishyscapes val. DenseHybrid delivers strong performance when trained with and without real negative images.
Fig. 6: Synthetic negatives produced by a normalizing flow trained as described in Sec. 4.2. These samples are pasted into training crops instead of real negative images (instances from ADE20k). We sample the normalizing flow at different resolutions in order to mimic real-world anomalies which vary in size.
### _Open-set Segmentation_
We recover open-set segmentation by fusing a closed-set segmentation with properly thresholded dense anomaly detection (Fig. 3). Such a model detects anomalous regions, while also correctly classifying inlier parts of the scene. We measure open-set performance on the StreetHazards dataset according to mean \(F_{1}\) (\(\overline{F_{1}}\)) score and the proposed open-mIoU (\(\overline{\text{oIoU}}\)) metric. We partition the test subset into two folds which correspond to the two test cities, t5 and t6. We set the anomaly score threshold in order to obtain 95% TPR on t5, and measure open-mIoU on t6. Subsequently, we switch the folds and measure open-mIoU on t5. We compute the overall open-mIoU by weighting these two measurements according to the number of images in the two folds. Table IV presents performance evaluation on StreetHazards. The left part of the table considers anomaly detection while the right part of the table considers closed-set and open-set segmentation performance. Our method outperforms contemporary approaches in anomaly detection both with and without training on real negative data. Furthermore, our method achieves the best open-set performance (columns \(\overline{\text{oIoU}}\) and \(\overline{F_{1}}\)) despite a lower closed-set segmentation score (\(\overline{\text{IoU}}\) column). The performance drop between closed-set and open-set can be quantified as the difference between \(\overline{\text{IoU}}\) and \(\overline{\text{oIoU}}\) ("Gap" column). Our method achieves the smallest performance gap of around 18 percentage points. Nevertheless, an ideal anomaly detector would achieve equal open-set and closed-set metrics. Hence, we conclude that even the state-of-the-art anomaly detectors are still insufficient for delivering closed-set performance in open-set setups. Researchers should strive to further close this gap in order to improve the safety of recognition systems in the real world.
We integrated [38, 84] into our code base by following publicly available implementations. For the energy fine-tuning [38], we found that the optimal hyperparameters for the dense setup are \(m_{in}=-15\) and \(m_{out}=-5\). ReAct [84] delivers the best results when the method-specific hyperparameter c = 0.99. Note that [39] also reports performance on StreetHazards; however, they aim to detect classification errors instead of anomalies.
Figure 7 visualises qualitative open-set segmentation performance on StreetHazards test. Our hybrid anomaly detector accurately combines dense anomaly detection (second row) with closed-set segmentation and delivers open-set segmentation (third row). We also show the energy-based approach [38] which yields more false positives (fourth row).
### _Ablating Components of our Hybrid Detector_
Table V validates components of our hybrid anomaly detection approach on Fishyscapes val. The top two sections compare our hybrid anomaly detector (7) with its generative and discriminative components - \(\hat{p}(\mathbf{x})\) and \(P(d_{\text{in}}|\mathbf{x})\) when training on real and synthetic negative data, respectively. We observe that the hybrid detector outperforms unnormalized density which outperforms dataset posterior. We observe the same qualitative behaviour when training on real and synthetic negative data. Interestingly, the synergistic effect of compound hybrid detection is larger in the case of synthetic negatives. This finding suggests that our hybrid formulation can compensate for incomplete coverage of the out-of-distribution manifold in test images.
The bottom section replaces our unnormalized likelihood with the likelihood of pre-logits as estimated by a normalizing flow. The flow is applied point-wise to obtain dense likelihood, similar to embedding density [10]. This can also be viewed as a generalization of a previous image-wide open-set approach [41] to dense predictions. We still train on negative data in an end-to-end fashion in order to make the two generative components comparable. The resulting model behaves similarly to embedding density [10] -- good performance on FS Static and somewhat poorer performance on FS LostAndFound. Formulating dense likelihood
\begin{table}
\begin{tabular}{l c c c c c c c} \multirow{2}{*}{Method} & \multicolumn{3}{c}{Anomaly} & \multicolumn{3}{c}{Cls.} & \multicolumn{3}{c}{Open-set} \\ & AP & \(\mathrm{FPR}\) & AUC & \(\overline{\text{IoU}}\) & \(\overline{\text{F}_{1}}\) & \(\overline{\text{oIoU}}\) & Gap \\ \hline SynthCP [43] & 9.3 & 28.4 & 88.5 & - & - & - & - \\ Dropout [21] & 7.5 & 79.4 & 69.9 & - & - & - & - \\ TRAD [38] & 7.2 & 25.3 & 89.2 & - & - & - & - \\ SO+H [31] & 12.7 & 25.2 & 91.7 & 59.7 & - & - & - \\ DML [51] & 14.7 & 17.3 & 93.7 & - & - & - & - \\ MSP [34] & 7.5 & 27.9 & 90.1 & 65.0 & 46.4 & 35.1 & 29.9 \\ ODIN [35] & 7.0 & 28.7 & 90.0 & 65.0 & 41.6 & 28.8 & 36.2 \\ ReAct [84] & 10.9 & 21.2 & 92.3 & 62.7 & 46.4 & 34.0 & 28.7 \\ SynDnsHyb & **19.7** & **17.4** & **93.9** & 61.3 & **50.6** & **37.3** & **24.0** \\ Energy [38] & 12.9 & 18.2 & 93.0 & 63.3 & 50.4 & 42.7 & 29.9 \\ OE [27] & 14.6 & 17.7 & 94.0 & 61.7 & 56.1 & 43.8 & 17.9 \\ OH [42] & 19.7 & 56.2 & 88.8 & **66.6** & - & 33.9 & 32.7 \\ OOrthMSP [15] & 18.8 & 30.9 & 89.7 & **66.6** & - & 43.6 & 23.0 \\ DenseHybrid & **30.2** & **13.0** & **95.6** & 63.0 & **59.7** & **45.8** & **17.2** \\ \end{tabular}
\end{table} TABLE IV: Performance evaluation on StreetHazards [71]. We evaluate anomaly detection (Anomaly), closed-set segmentation (Cls.), open-set segmentation (Open-set), and the open-set gap (Gap). Our DenseHybrid delivers competitive open-set performance.
Fig. 7: Qualitative open-set segmentation performance on StreetHazards. DenseHybrid (rows 2 and 3) has more accurate open-set performance compared to the energy-based approach [38] (row 4), as denoted with red rectangles. Zoom in for a better view.
with unnormalized density (4) delivers more consistent performance than a point-wise normalizing flow on top of latent representation.
### _Impact of Depth on Detection Performance_
Road driving scenes typically involve a wide range of depth. Hence, we explore the anomaly detection performance at different ranges from the camera in order to gain a better insight into the performance of different methods. We perform these experiments on LostAndFound test [12] since it allows us to compute the depth in each ground pixel. Due to errors in the provided disparity maps, we perform our analysis up to 50 meters from the camera. Table VI indicates that DenseHybrid achieves accurate results even at large distances from the vehicle. We observe that SynBoost [18] is better than our approach at the shortest range. However, the computational complexity of image resynthesis precludes real-time deployment of such approaches [17, 18, 43] on present hardware as we show next.
### _Inference speed_
Table VII compares computational overheads of prominent anomaly detectors on two-megapixel images. All measurements are averaged over 200 runs on RTX3090. DenseHybrid involves a negligible computational overhead of 0.1 GFLOPs and 2.8ms. These experiments indicate that image resynthesis is not applicable for real-time inference on present hardware.
## 7 Conclusion
Discriminative and generative approaches to anomaly detection assume different failure modes. We propose to achieve synergy between these two approaches by fusing the dataset posterior with unnormalized data likelihood. We refer to the resulting method as DenseHybrid since its low computational overhead and translational equivariance are especially well suited for the dense prediction context. DenseHybrid eschews the evaluation of the intractable normalization constant by leveraging negative training data. It can be trained either on real negative data sourced from some general-purpose dataset, or on synthetic negative data generated by a jointly trained normalizing flow. Finally, it can be easily attached to any closed-set segmentation approach in order to attain open-set competence. DenseHybrid yields competitive performance on the standard benchmarks for dense anomaly detection and open-set segmentation. We evaluate open-set segmentation performance according to a novel open-mIoU metric that quantifies the performance gap between closed-set and open-set conditions. Ablation experiments confirm the contributions of both components of hybrid anomaly detection. Suitable directions for future work include extending DenseHybrid towards open-set panoptic segmentation as well as towards further reducing the performance gap between closed-set and open-set setups.
## 8 Limitations
It may seem that our method should be able to generate samples, since likelihood evaluation is a standard feature of generative models (except GANs). However, sample generation with unnormalized distributions requires MCMC sampling, which cannot be performed at large resolutions with a dense loss, at least not with known techniques. Still, our hybrid open-set model delivers competitive performance even without the ability to generate samples. Also, the variety and quality of synthetic samples are limited by the capacity of the generative model, which will be mitigated by advances in GPU design.
## Acknowledgments
This work has been supported by Croatian Science Foundation grant IP-2020-02-5851 ADEPT, by NVIDIA Academic Hardware Grant Program, as well as by European Regional Development Fund grants KK.01.1.1.01.0009 DATACROSS and KK.01.2.1.02.0119 A-Unit.
\begin{table}
\begin{tabular}{l|c c c c c c} Range & \multicolumn{2}{c}{MSP [34]} & \multicolumn{2}{c}{ML [71]} & \multicolumn{2}{c}{SynBoost [18]} & \multicolumn{2}{c}{DH (ours)} \\ & AP & FPR & AP & FPR & AP & FPR & AP & FPR \\ \hline
5-10 & 28.7 & 16.4 & 76.1 & 5.4 & **93.7** & **0.2** & 90.7 & 0.3 \\
10-15 & 28.8 & 29.7 & 73.9 & 16.2 & 78.7 & 17.7 & **89.8** & **1.1** \\
15-20 & 26.0 & 28.8 & 78.2 & 5.9 & 76.9 & 25.0 & **92.9** & **0.6** \\
20-25 & 25.1 & 44.2 & 69.6 & 12.8 & 70.0 & 23.3 & **89.1** & **1.4** \\
25-30 & 29.0 & 41.3 & 72.6 & 9.5 & 65.6 & 18.8 & **89.5** & **1.4** \\
30-35 & 26.2 & 47.8 & 70.2 & 10.0 & 58.5 & 27.4 & **87.7** & **2.5** \\
35-40 & 29.6 & 44.7 & 71.0 & 9.8 & 59.8 & 25.4 & **85.0** & **3.7** \\
40-45 & 31.7 & 43.2 & 74.0 & 9.8 & 60.0 & 25.8 & **85.6** & **4.7** \\
45-50 & 33.7 & 45.3 & 73.9 & 11.0 & 53.3 & 29.9 & **82.1** & **6.3** \\ \end{tabular}
\end{table} TABLE VI: Anomaly detection performance at different distances from camera.
\begin{table}
\begin{tabular}{l c c c} Anomaly detector & Neg. & FS L\&F & FS Static \\ & data & AP & FPR & AP & FPR \\ \hline Disc. \((1-P(d_{\text{in}}|\mathbf{x}))\) & 46.5 & 38.3 & 53.5 & 30.9 \\ Gen. \(\hat{p}(\mathbf{x})\) & Real & 58.2 & 7.3 & 58.0 & 5.3 \\ Hyb. \((1-P(d_{\text{in}}|\mathbf{x}))/\hat{p}(\mathbf{x})\) & & 60.5 & 6.0 & 63.1 & 4.2 \\ Disc. \((1-P(d_{\text{in}}|\mathbf{x}))\) & & 30.9 & 61.0 & 29.4 & 71.5 \\ Gen. \(\hat{p}(\mathbf{x})\) & Syn. & 52.8 & 13.1 & 35.8 & 11.1 \\ Hyb. \((1-P(d_{\text{in}}|\mathbf{x}))/\hat{p}(\mathbf{x})\) & & 59.5 & 10.4 & 49.0 & 10.3 \\ Gen. \(p(\mathbf{x})\) & Real & 5.7 & 58.9 & 61.7 & 7.6 \\ Hyb. \((1-P(d_{\text{in}}|\mathbf{x}))/p(\mathbf{x})\) & & 6.5 & 46.1 & 65.1 & 6.5 \\ \end{tabular}
\end{table} TABLE V: Validation of hybrid anomaly detection on Fishyscapes val. Hybrid anomaly detection outperforms its generative and discriminative components. This behaviour is consistent in models trained on real and synthetic negative data, as well as for different generative components. |
2301.07666 | DDS: Decoupled Dynamic Scene-Graph Generation Network | Scene-graph generation involves creating a structural representation of the
relationships between objects in a scene by predicting subject-object-relation
triplets from input data. However, existing methods show poor performance in
detecting triplets outside of a predefined set, primarily due to their reliance
on dependent feature learning. To address this issue we propose DDS -- a
decoupled dynamic scene-graph generation network -- that consists of two
independent branches that can disentangle extracted features. The key
innovation of the current paper is the decoupling of the features representing
the relationships from those of the objects, which enables the detection of
novel object-relationship combinations. The DDS model is evaluated on three
datasets and outperforms previous methods by a significant margin, especially
in detecting previously unseen triplets. | A S M Iftekhar, Raphael Ruschel, Satish Kumar, Suya You, B. S. Manjunath | 2023-01-18T17:20:08Z | http://arxiv.org/abs/2301.07666v1 | # DDS: Decoupled Dynamic Scene-Graph Generation Network
###### Abstract
Scene-graph generation involves creating a structural representation of the relationships between objects in a scene by predicting subject-object-relation triplets from input data. However, existing methods show poor performance in detecting triplets outside of a predefined set, primarily due to their reliance on dependent feature learning. To address this issue we propose DDS- a decoupled dynamic scene-graph generation network- that consists of two independent branches that can disentangle extracted features. The key innovation of the current paper is the decoupling of the features representing the relationships from those of the objects, which enables the detection of novel object-relationship combinations. The DDS model is evaluated on three datasets and outperforms previous methods by a significant margin, especially in detecting previously unseen triplets.
Activity detection, compositional learning, zero-shot learning, scene graphs, scene understanding.
## I Introduction
A Dynamic Scene-Graph (DSG) provides a graph structure representing the relationships among different objects in a scene. It aims to create the scene-graph by predicting relationship triplets composed of \(\langle subject,object,relationship\rangle\) at each frame of an input video. This acts as a foundational block for various computer vision tasks [1, 2]. Current DSG generation systems [3, 4, 5] operate in a constrained setting where the possible triplets are predefined for a given set of relationships and objects. However, in a more realistic deployment scenario, it is likely that the network will encounter triplets that it has not seen before. Therefore, a system should be able to transfer the learned concepts of relationships and objects to compose unseen triplets. Our analysis (Table I) shows that the state-of-the-art (SOTA) method performs poorly in detecting these unseen triplets. This poor performance is mainly attributed to the learning of highly dependent feature representations of relationships and objects. The proposed decoupled dynamic scene-graph (DDS) addresses this issue.
Fig. 1 shows the core idea of the proposed DDS network. This architecture utilizes two different branches to learn decoupled features for relationships and objects. As shown in the figure, DDS learns the concept of 'ride', 'on', 'person', 'bicycle', and 'bed' from the training examples of a 'person riding a bike' and a 'dog on the bed' independently. The decoupled design makes DDS look into different spatial regions for relationships and objects. These learned concepts are transferred to successfully detect the unseen triplet \(\langle dog,bicycle,ride\rangle\), see Section V-C for more details.
Existing works [3, 4, 5] for generating DSGs follow a two-stage process. First, an object detector localizes objects irrespective of their relationship status. In the second stage, different attention-based [3, 4] and graph-based [5] methods utilize features extracted from the previously localized objects to predict relationships. In this process, the extracted features of the objects are fed throughout the network for relationship detection. As a result, these methods learn to associate a relationship only with a particular combination of objects and hence perform poorly to predict unseen triplets.
In contrast, DDS ensures the learning of discriminative spatio-temporal cues for relationships and objects. Fig. 2 shows the overview of our architecture. It consists of two separate branches: the relation and the object branch. We chose a Transformer based encoder-decoder [6] architecture for these branches with two different sets of queries. Moreover, a novel
Fig. 1: _Diagram to show the concept learning and transferring in DDS. By focusing on different spatial regions, DDS learns the concept of relationships (ride, on) and objects (person, bicycle, bed) independently. In the lower section of the diagram, we show how these learned concepts are transferred and utilized to detect the unseen triplet \(\langle dog,bicycle,ride\rangle\)._
temporal decoder is added to embed temporal information into the queries. These separate sets of queries focus on learning generalized representations for relationships and objects from differently encoded feature maps in both temporal and spatial domains. This is significantly better than the existing works, where the same object features are used for both object and relationship detection. Also, unlike previous works, DDS does not depend on off-the-shelf object detectors.
Our proposed model is thoroughly evaluated on the Action-Genome [7] dataset for DSG generation, where it achieves significant performance gains compared to the SOTA models. Additionally, we evaluate DDS on the task of static scene-graph (SSG) generation on the HICO-DET [8] dataset and unusual SSG generation on the UnRel [9] dataset, where DDS outperforms all the existing models in both datasets. Finally, the proposed design choices are evaluated in an extensive ablation study.
## II Related Works
DDS is built on the previously developed works in SSG and DSG generation. This section is used to review the literature in the mentioned areas along with additional relevant publications on scene-graph generation under the compositional setting.
### _Static Scene-Graph (SSG) Generation_
SSG generation was proposed by [10] for the task of image retrieval. An extensive literature exists [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26] in this area. The initial works [21, 22, 23, 24, 25, 26] rely heavily on two-stage (object detection and then scene-graph generation) structures. A few of these works utilize different recurrent neural network (RNN) variants [33, 34], while other prominent works focus on graph structures [20, 25, 26] with attention mechanisms. Also, many authors utilize prior knowledge [26, 29, 35] (e.g. semantic knowledge, statistical heuristics) for SSG generation. Despite recent improvements in SSG generation, these methods are heavily constrained by their reliance on object detection quality, as noted in [36].
Modern works [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46] in SSG generation focus on utilizing a one-stage Transformer based architecture to deal with the aforementioned issues. These approaches mainly focus on human-object interaction (HOI) detection. HOI detection is a subtask of SSG generation where the relationships among objects are limited to interaction verbs [3], such as hold, work, and talk. These methods employ one-stage Transformer based encoder-decoder architectures following the architecture from DETR [6]. These architectures rely on set-based predictions to generate SSG. Among these works, Qpic [38] uses a single encoder-decoder model while CDN [42] extends Qpic by using sequential decoding of objects and relationships. Additionally, MSTR [41] enables the use of multi-scale feature maps in these networks. Another concurrent work, SSRT [36], refines the overall architecture with spatial and semantic support features. Moreover, a recent line of research heavily exploits the usage of very large-scale semantic knowledge engines (e.g. CLIP [47]) [36, 43, 46]. Also, a few works propose to utilize different types of post-processing steps [46] for the task. Apart from the obvious limitation of these works being unable to utilize temporal dependencies, they perform poorly when detecting unseen triplets. With the decoupled multi-branch design, we significantly differ from these works by using separate sets of queries for relationship and object detection.
### _Dynamic Scene-Graph (DSG) Generation_
DSG is an extension of the SSG where the scene-graph is created for videos. This process is harder since temporal cues need to be utilized [3, 4, 5]. Current works in this area have two-stage architectures following the initial works on SSG. Among these works, STTran [5] utilizes a temporal decoder based on the self-attention mechanism. DSGAP [4] expands STTran with an anticipatory pre-training paradigm. On the other hand, HORT [3] utilizes a multi-branch design with different types of Transformers.
Both STTran and HORT use similar features for relationship and object detection. These features come from the object bounding boxes predicted by off-the-shelf object detectors. STTran concatenates pair-wise object features to use as relationship features whereas HORT pools relationship features from a joint box of object pairs. However, using similar features for relationship and object detection forces the learning of relationships and objects to be dependent on each other. Therefore, to ensure generalized learning, we focus on learning the features independently.
### _Compositionality in Scene-Graph Generation_
Creating new compositions from base known concepts during inference is known as compositional zero-shot learning (CZSL) under the compositional setting [48, 49, 50, 51, 52]. In this paper, we utilize this setting to evaluate our model. Kato et al. [53] introduce CZSL in SSG generation with an embedding-based model. Many following works [54, 55, 56] adapt different object-affordance ideas. These works assume there exist common relationships between the subjects and the objects. This work does not have such limited assumptions, and as a result, is able to generate scene graphs even when the relationships are very unusual (See Table IV).
## III Method
This section describes the developed model for the problem of dependent feature learning in DSG generation that makes current models perform poorly in detecting unseen relationship triplets. This work proposes a multi-branch network that learns distinct feature representations for relationships and objects to improve upon this limitation. Before going into the details of the architecture, the problem will be formulated in detail first.
### _Problem Formulation_
Given an input video \(\mathbf{V}=\{\mathbf{I}_{1},\mathbf{I}_{2},\ldots,\mathbf{I}_{t},\ldots,\mathbf{I}_{T}\}\) with \(T\) frames, the task in DSG generation is to predict a set of relationship triplets \(\{R_{1},R_{2},\ldots,R_{N_{M}}\}\) at every frame of the video. Every frame has \(N_{M}\) relationship triplets. Each relationship triplet can be represented by \(\langle s,o,r_{so}\rangle\). Here,
\(s\) and \(o\) refer to the subject and the object, and are represented by bounding boxes and category labels. \(r_{so}\) is the relationship between \(s\) and \(o\). In a single frame \(\mathbf{I}_{t}\), \(s\) and \(o\) can have multiple relations, as shown in the sample input-output pair in Figure 2.
In this paper, the main goal is to predict relationship triplets under the compositional setting. In this setting, the test set contains triplets that are not present in the training set. Consider having in total \(N_{o}\) objects and \(N_{r}\) relations. The training set has \(N_{s}\) relationship triplets composed from the mentioned \(N_{o}\) objects and \(N_{r}\) relationships. On the other hand, the test set has \(N_{u}\) unseen triplets not present in the training set in addition to the \(N_{s}\) seen triplets, where all the unseen triplets are composed of the same \(N_{o}\) objects and \(N_{r}\) relationships.
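As a lightweight illustration of the prediction format described above, a per-frame scene graph can be stored as a list of triplets; the field names below are our own.

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[float, float, float, float]   # (x1, y1, x2, y2), normalized coordinates

@dataclass
class RelationshipTriplet:
    """One <subject, object, relationship> prediction for a single frame.

    The same subject-object pair may appear in several triplets,
    one per predicted relationship.
    """
    subject_label: int
    subject_box: Box
    object_label: int
    object_box: Box
    relationship: int
    score: float = 1.0

FrameGraph = List[RelationshipTriplet]     # one list per video frame
```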
### _Technical Overview_
The proposed work adopts a one-stage approach for DSG generation compared to the current two-stage methods [3, 4, 5] as the former [37, 38, 39, 40] have shown impressive performance in creating SSG. However, these image-based works present poor generalization capabilities. Therefore, we propose a network that uses a different set of queries with two branches: the relation branch and the object branch. Each branch follows a Transformer-like encoder-decoder architecture. Fig. 2 shows a diagram of the model, where a convolutional neural network (CNN) extracts features from the input frame, and those are encoded differently by the object and the relation encoders. Each spatio-temporal decoder takes encoded features from their respective encoder in addition to two types of inputs: queries for the current frame (\(\mathbf{o}\mathbf{Q}_{t}\), \(\mathbf{r}\mathbf{Q}_{t}\)) and the embeddings (\(\mathbf{o}\mathbf{E}_{t-1}\), \(\mathbf{r}\mathbf{E}_{t-1}\)) propagated from the previous frame. As the encoded features differ for each branch, the queries learn decoupled features for relationships and objects. The decoder outputs are the learned object and relation spatio-temporal embeddings. These embeddings are sent to the relation and the object heads for final predictions. Moreover, these embeddings are propagated to the next frame of the video.
### _Feature Extraction & Encoders_
Consider a frame \(\mathbf{I}_{t}\in\mathbb{R}^{N_{C}\times H\times W}\) at time \(t\) of the input video \(\mathbf{V}\). Here, \(N_{C},H,W\) are the number of channels, height, and width of the frame \(\mathbf{I}_{t}\). DDS uses a CNN as its backbone \(\mathbf{B}\) (e.g. ResNet-50 [57]) to extract features \(\mathbf{B}(\mathbf{I}_{t})\in\mathbb{R}^{N_{C^{\prime}}\times H^{\prime}\times W^{\prime}}\) from the input frame. Then, a \(1\times 1\) convolution is used to reduce the channel dimension from \(N_{C^{\prime}}\) to \(d\). After that, a flattening operation is performed and a fixed positional embedding is added to \(\mathbf{B}(\mathbf{I}_{t})\), as in existing works [37, 38, 39, 40], to get the feature map \(\mathbf{F}_{t}\in\mathbb{R}^{(H^{\prime}W^{\prime})\times d}\). These embeddings express each spatial position of the feature map in high dimensions [6]. DDS uses \(\mathbf{F}_{t}\) as a common feature for both the relation and the object branches.
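A sketch of this feature-extraction step is given below; the positional-embedding function is passed in as a stand-in, and the layer choices are assumptions consistent with the description rather than the exact DDS configuration.

```python
import torch
import torch.nn as nn
import torchvision

class FrameFeatureExtractor(nn.Module):
    """Sketch: backbone features -> 1x1 projection -> flatten -> add positional embedding."""
    def __init__(self, d_model=256):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        self.body = nn.Sequential(*list(resnet.children())[:-2])   # keep the conv trunk
        self.input_proj = nn.Conv2d(2048, d_model, kernel_size=1)  # N_C' -> d

    def forward(self, frame, pos_embed_fn):
        feat = self.input_proj(self.body(frame))        # (B, d, H', W')
        pos = pos_embed_fn(feat)                        # fixed positional embedding, same shape
        return (feat + pos).flatten(2).transpose(1, 2)  # F_t: (B, H'W', d)
```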
Both of the network's branches have an encoder comprising of stacked multi-head self-attention layers [58] with a feed-forward network (FFN). The output of the encoders are two
Fig. 2: _Overview of DDS’s architecture. Given an input frame \(\mathbf{I}_{t}\), features are extracted by the backbone. These features are fed to the object and the relation branch. These decoupled branches consist of an encoder and a spatio-temporal decoder. Encoders from both branches encode the feature maps differently and send them to the decoders. Each spatio-temporal decoder takes a set of queries (object/relation) along with the previous frame’s embeddings (shown by the red arrow). The output of the spatio-temporal decoders are learned embeddings. These learned embeddings are fed to the object and the relation heads to predict relationship triplets._
separate feature maps,
\[\mathbf{r}\mathbf{F}_{t}=\text{Relation Encoder}(\mathbf{F}_{t}) \tag{1}\] \[\mathbf{o}\mathbf{F}_{t}=\text{Object Encoder}(\mathbf{F}_{t}) \tag{2}\]
Here, \(\mathbf{r}\mathbf{F}_{t}\) and \(\mathbf{o}\mathbf{F}_{t}\) refer to differently encoded versions of the feature map \(\mathbf{F}_{t}\). Our spatio-temporal decoders will utilize these feature maps for decoupled learning.
### _Spatio-Temporal Decoders_
DDS's spatio-temporal decoders convert a set of learnable queries into output embeddings. This transformation occurs in two stages. In the first stage, the current frame's queries attend to the previous frame's output embeddings to aggregate temporal information. In the second stage, these aggregated queries gather information from the encoded feature maps of the current frame. The proposed multi-branch design ensures discriminative feature learning for the queries of each branch. Each decoder consists of two small components: temporal and spatial decoder. Please see Fig. 3 for reference.
**Temporal Decoders:** These decoders allow queries to leverage temporal dependencies. Each temporal decoder takes two sets as inputs: the current frame's queries and the embeddings from the previous frame. For frame \(\mathbf{I}_{t}\), the current frame's relation and object query sets are defined as \(\mathbf{r}\mathbf{Q}_{t}\in\mathbb{R}^{N_{q}\times d}\) and \(\mathbf{o}\mathbf{Q}_{t}\in\mathbb{R}^{N_{q}\times d}\). Every query is a \(d\)-dimensional vector, and every branch has \(N_{q}\) queries. Embeddings from the previous frame for the relation and the object branches are denoted by \(\mathbf{r}\mathbf{E}_{t-1}\in\mathbb{R}^{N_{q}\times d}\) and \(\mathbf{o}\mathbf{E}_{t-1}\in\mathbb{R}^{N_{q}\times d}\), and are marked with a red arrow in Fig. 3.
Temporal decoders are made of stacked multi-head cross-attention layers [6] with an FFN network. The cross-attention in the temporal decoders allows the current frame's queries to select what to learn from the previous frame's embeddings. The outputs of the temporal decoders are the temporally aggregated queries \(\mathbf{r}\mathbf{A}_{t}\), \(\mathbf{o}\mathbf{A}_{t}\). They are fed to their respective spatial decoders. In the case of the first frame of a video, the temporal decoders directly output \(\mathbf{r}\mathbf{Q}_{t}\) and \(\mathbf{o}\mathbf{Q}_{t}\) as \(\mathbf{r}\mathbf{A}_{t}\) and \(\mathbf{o}\mathbf{A}_{t}\) without passing them through the cross-attention and
Fig. 3: _Design of the spatio-temporal decoders. Every spatio-temporal decoder is composed of a temporal and a spatial decoder. Each decoder converts a different set of queries into learned embeddings while ensuring decoupled learning in each branch._
FFN blocks as there is no previous frame in this case.
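A sketch of one temporal decoder layer follows; the layer sizes and the residual/normalization placement are our own assumptions in the spirit of [6].

```python
import torch
import torch.nn as nn

class TemporalDecoderLayer(nn.Module):
    """Sketch: current-frame queries cross-attend to the previous frame's embeddings."""
    def __init__(self, d_model=256, n_heads=8, dim_ff=2048):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, dim_ff), nn.ReLU(inplace=True), nn.Linear(dim_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, queries, prev_embeddings=None):
        # queries: (B, N_q, d); prev_embeddings: (B, N_q, d) or None for the first frame
        if prev_embeddings is None:
            return queries                                   # pass-through, no temporal context
        attended, _ = self.cross_attn(queries, prev_embeddings, prev_embeddings)
        x = self.norm1(queries + attended)
        return self.norm2(x + self.ffn(x))                   # temporally aggregated queries A_t
```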
**Spatial Decoders:** The architecture of the spatial decoders is similar to the standard Transformer decoder [6]. These decoders consist of both self-attention and cross-attention layers along with FFN networks. Each decoder takes the encoded feature maps (\(\mathbf{r}\mathbf{F}_{t}\) or \(\mathbf{o}\mathbf{F}_{t}\)) along with the aggregated queries of the temporal decoders (\(\mathbf{r}\mathbf{A}_{t}\) or \(\mathbf{o}\mathbf{A}_{t}\)) from their respective branch as inputs. Also, these decoders take learnable positional embeddings. These embeddings for the relation and the object branch are defined as \(\mathbf{r}\mathbf{PE}\in\mathbb{R}^{N_{q}\times d}\), \(\mathbf{o}\mathbf{PE}\in\mathbb{R}^{N_{q}\times d}\).
\[\mathbf{r}\mathbf{E}_{t}=\text{Relation Decoder}(\mathbf{r}\mathbf{F}_{t},\mathbf{r}\mathbf{PE},\mathbf{r}\mathbf{A}_{t}) \tag{3}\] \[\mathbf{o}\mathbf{E}_{t}=\text{Object Decoder}(\mathbf{o}\mathbf{F}_{t},\mathbf{o}\mathbf{PE},\mathbf{o}\mathbf{A}_{t}) \tag{4}\]
The outputs of the decoders are the learned spatio-temporal embeddings. They are used in the object and the relation heads to make the final relationship triplet predictions. Also, each spatial decoder's output is fed to the next frame's temporal decoder as previous embeddings.
Both spatial and temporal decoders keep the separation in feature learning for relationships and objects using different encoded features, set of queries, and previous embeddings. As a result, the output embeddings from the decoders are decoupled generalized representations.
### _Object Heads_
The output embeddings from the object spatio-temporal decoder, \(\mathbf{o}\mathbf{E}_{t}\), are fed to four different FFNs. For input frame \(\mathbf{I}_{t}\), these FFNs predict subject bounding boxes, \(\mathbf{s}\mathbf{B}_{t}\in[0,1]^{N_{q}\times 4}\), object bounding boxes, \(\mathbf{o}\mathbf{B}_{t}\in[0,1]^{N_{q}\times 4}\), subject prediction vectors, \(\mathbf{s}\mathbf{P}_{t}\in[0,1]^{N_{q}\times N_{o}}\), and object prediction vectors, \(\mathbf{o}\mathbf{P}_{t}\in[0,1]^{N_{q}\times N_{o}}\). Here, \(N_{q}\) is the number of queries, and \(N_{o}\) is the total number of objects.
### _Relation Heads_
Like the object heads, the learned output embeddings, \(\mathbf{r}\mathbf{E}_{t}\), are fed to two FFNs that produce as output the relation prediction vectors, \(\mathbf{r}\mathbf{P}_{t}\in[0,1]^{N_{q}\times N_{r}}\), and relation region bounding boxes, \(\mathbf{r}\mathbf{B}_{t}\in[0,1]^{N_{q}\times 4}\). Here, \(N_{q}\) is the number of queries, and \(N_{r}\) is the total number of relationships under consideration. Notice that the relation region bounding box is defined as the union between the subject and object bounding boxes.
### _Inference_
We compose \(N_{q}\) relationship pairs by one-to-one matching of \(\mathbf{s}\mathbf{B}_{t}\) and \(\mathbf{o}\mathbf{B}_{t}\). One-to-one matching refers to matching the \(q\)-th prediction from \(\mathbf{s}\mathbf{B}_{t}\) with the \(q\)-th prediction from \(\mathbf{o}\mathbf{B}_{t}\). Moreover, for every prediction vector in \(\mathbf{s}\mathbf{P}_{t}\) and \(\mathbf{o}\mathbf{P}_{t}\), the maximum confidence score is used to create \(\mathbf{s}\mathbf{P}_{tmax}\in[0,1]^{N_{q}}\) and \(\mathbf{o}\mathbf{P}_{tmax}\in[0,1]^{N_{q}}\) and the corresponding index is used to determine the category label for each of the bounding boxes. For every composed relationship pair, the final relation score prediction vectors are calculated as:
\[\mathbf{r}\mathbf{P}_{tfinal}=\mathbf{r}\mathbf{P}_{t}*\mathbf{s}\mathbf{P}_{tmax}*\mathbf{o}\mathbf{P}_{tmax} \tag{5}\]
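A sketch of this composition step is shown below; tensor names follow the notation above, while the dictionary output format is our own.

```python
import torch

def compose_triplets(sP, oP, rP, sB, oB):
    """Sketch of the inference step (Eq. 5) for one frame.

    sP, oP: (N_q, N_o) subject / object class probabilities
    rP:     (N_q, N_r) relationship probabilities
    sB, oB: (N_q, 4)   subject / object boxes, matched one-to-one by query index
    """
    s_conf, s_label = sP.max(dim=1)                   # sP_tmax and subject labels
    o_conf, o_label = oP.max(dim=1)                   # oP_tmax and object labels
    r_final = rP * (s_conf * o_conf).unsqueeze(1)     # rP_tfinal = rP * sP_tmax * oP_tmax
    return {"subject_box": sB, "object_box": oB,
            "subject_label": s_label, "object_label": o_label,
            "relation_scores": r_final}
```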
### _Training_
For training DDS, we utilize losses similar to Qpic [38]. This loss calculation implicitly binds the two sets of queries from the relation and the object branch. The loss calculation happens in two stages:
In the first stage, we find a bipartite matching between the predictions and the ground truths. First, the total prediction set for the input frame \(\mathbf{I}_{t}\), \(\mathbf{P}_{t}=\{\mathbf{s}\mathbf{B}_{t},\mathbf{o}\mathbf{B}_{t},\mathbf{s}\mathbf{P}_{t},\mathbf{o}\mathbf{P}_{t},\mathbf{r}\mathbf{B}_{t},\mathbf{r}\mathbf{P}_{t}\}\), is generated. This yields \(N_{q}\) predictions (equal to the number of queries in each branch). \(N_{q}\) is chosen in such a way that it is always greater than the number of ground truths per frame. We pad the ground truths with \(\phi\) (no relationship triplet) so that the ground truth set \(\mathbf{G}_{t}\) also has \(N_{q}\) elements. One important detail to note here is that there are three kinds of ground truth bounding boxes: subject bounding boxes, object bounding boxes, and relation regions. Ground truth relation regions refer to the union bounding boxes between the subject and the object bounding boxes that have relations, and are only used during the training phase. Next, each element in \(\mathbf{P}_{t}\) is matched with an element from the ground truth set \(\mathbf{G}_{t}\). The matching cost matrix is \(\mathbf{C}\in\mathbb{R}^{N_{q}\times N_{q}}\). Any element \((i,j)\) in this matrix refers to the cost of matching the \(i^{th}\) element from \(\mathbf{P}_{t}\) with the \(j^{th}\) element from \(\mathbf{G}_{t}\) and is defined as,
\[\mathbf{C}^{(i,j)}=\eta_{b}(\mathbf{C}^{(i,j)}_{sb}+\mathbf{C}^{(i,j)}_{ob}+\mathbf{C}^{(i,j)}_{rb})+\eta_{o}\mathbf{C}^{(i,j)}_{o}+\eta_{r}\mathbf{C}^{(i,j)}_{r} \tag{6}\]
\(\mathbf{C}^{(i,j)}_{sb},\mathbf{C}^{(i,j)}_{ob},\mathbf{C}^{(i,j)}_{rb}\) are the subject bounding box, the object bounding box and the relation region matching costs, \(\mathbf{C}^{(i,j)}_{o}\) is the object label matching cost, and \(\mathbf{C}^{(i,j)}_{r}\) is the relation label matching cost between the \(i^{th}\) element from \(\mathbf{P}_{t}\) and the \(j^{th}\) element from \(\mathbf{G}_{t}\). These costs are calculated following [38]. \(\eta_{b},\eta_{o},\eta_{r}\) are fixed hyper-parameters. The Hungarian matching algorithm [6] is used to find the optimal matching between the predictions and the ground truths using this cost matrix. After this matching, every prediction is associated with a ground truth. Next, the following loss is calculated for training the network:
\[\mathcal{L}=\lambda_{g}\mathcal{L}_{GIOU}+\lambda_{l}\mathcal{L}_{L1}+\lambda_{o}\mathcal{L}_{obj}+\lambda_{r}\mathcal{L}_{rel}, \tag{7}\]
Here, \(\mathcal{L}_{GIOU}\) and \(\mathcal{L}_{L1}\) are the generalized intersection over union (gIOU) and L1 box regression losses for the predicted subject bounding boxes, object bounding boxes, and relation regions. \(\mathcal{L}_{obj}\) is the cross-entropy loss for subject and object label predictions. \(\mathcal{L}_{rel}\) is the binary cross-entropy loss for the relationship label predictions. \(\lambda_{o}\), \(\lambda_{g}\), \(\lambda_{l}\), and \(\lambda_{r}\) are the corresponding hyper-parameters.
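To make the first (matching) stage concrete, the following sketch shows how the cost matrix of Eq. (6) could be assembled from the individual cost terms and solved with an off-the-shelf Hungarian solver; the function signature and the placeholder weights are assumptions rather than the authors' code.

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_predictions(c_sb, c_ob, c_rb, c_obj, c_rel,
                      eta_b=1.0, eta_o=1.0, eta_r=1.0):
    """Hypothetical sketch of the bipartite matching based on Eq. (6).

    Each argument is an (N_q, N_q) cost matrix between predictions (rows)
    and padded ground truths (columns); the eta weights are placeholders.
    """
    cost = eta_b * (c_sb + c_ob + c_rb) + eta_o * c_obj + eta_r * c_rel
    pred_idx, gt_idx = linear_sum_assignment(cost)  # Hungarian algorithm
    # Each prediction i is now associated with ground truth j for the loss.
    return list(zip(pred_idx, gt_idx))
```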
Note that some of the datasets [7, 8] fix the subject to be human. In this case, the subject prediction vectors and subject bounding boxes are not used for loss calculation.
## IV Experiments
### _Experimental Setup_
We evaluate DDS's performance on the Action Genome (AG) [7] dataset. Moreover, we report our model's performance on the SSG generation datasets HICO-DET [8] and UnRel [9]. As these datasets only contain images, each sample is treated as a single-frame video. The datasets used are described in more detail below:
**Action Genome (AG) [7]:** This dataset is built on top of the Charades [67] dataset, provides frame-level annotations, and is extensively used in the literature for DSG generation. It has 36 distinct object classes and 25 relationship classes. The object classes are common household items such as doors, windows, and cups, and have a total (train and test set) of \(476,229\) bounding boxes. The relationship classes are divided into 3 distinct sub-types: (1) _Attention_ relationship denotes if the subject is looking at an object. (2) _Spatial_ relationship denotes the spatial location of the object with respect to the subject, for example, above, or below. (3) _Contacting_ relationship represents how the object is being contacted by the subject. In total, AG provides \(1,715,568\) instances of the mentioned classes contained in \(135,484\) subject-object pairs. Every subject-object pair can have multiple relations. Also, on AG the subject class is always human.
Originally, the AG dataset provided \(7,464\) videos with \(166,785\) frames in the training set and \(1,737\) videos with \(54,371\) frames in the test set. The original training set contains \(530\) relationship triplets. All these relationship triplets are present in the test set. We refer to this setting as the fully-supervised setting. As the main interest in this paper is to evaluate DDS's performance in the compositional setting, a new training split of the data is created. This new proposed training set contains \(6,784\) videos with \(146,517\) frames containing \(421\) relationship triplets. The original test set is not changed. It contains \(499\) relationship triplets, of which \(80\) are not present in our new training set.
**HICO-Det [8]:** This dataset has \(80\) objects and \(117\) relationship classes. The relationships are limited to interactions such as holding, working, etc. In the literature, this dataset is used for evaluating SSG generation performance under compositional setting [54, 55, 30, 56]. DDS's performance is reported following the RF (Rare First) protocol provided by [30]. This protocol has \(37,328\) images in the training set with \(480\) relationship triplets. The test set has \(9,552\) images with \(480\) seen relationship triplets and \(120\) unseen relationship triplets.
**UnRel [9]:** This dataset provides extremely unusual SSG triplets, for example \(\langle elephant,bike,riding\rangle\). It has \(4000\) training and \(1000\) test images with \(100\) objects and \(70\) relationships. The original train/test split provided by the authors already constitutes a compositional setting, where the test set has \(65\) unseen relationship triplets. The training set contains \(4000\) seen relationship triplets.
### _Evaluation Metrics_
Following existing works [3, 4, 5, 7], we report our performance on the AG dataset with the Recall@K metric, where \(K=[20,50]\). We utilize the most challenging SGDet [7] protocol to report our performance. In this protocol, the network needs to detect relationship triplets along with subject and object bounding boxes. There can be multiple relationship triplets between a subject-object pair. Moreover, mAP (mean average precision) is used to report performance on the UnRel and HICO-Det datasets, similar to current works [9, 54, 56, 66]. Here, performances are reported in three categories: unseen (only unseen relationship triplets), seen (only seen relationship triplets), and full (all relationship triplets) [30]. For all datasets, a prediction triplet from DDS is considered correct if the subject and object bounding boxes have at least \(0.5\) Intersection over Union (IoU) with the ground truth bounding boxes and the subject, object, and relationship labels match the ground truth labels.
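For concreteness, a minimal sketch of this correctness criterion is given below; the box format and dictionary keys are assumptions made purely for illustration.

```
def box_iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def triplet_is_correct(pred, gt, iou_thr=0.5):
    """A predicted triplet matches a ground-truth triplet if both boxes have
    IoU >= 0.5 and the subject, object, and relation labels all agree."""
    return (box_iou(pred["sub_box"], gt["sub_box"]) >= iou_thr
            and box_iou(pred["obj_box"], gt["obj_box"]) >= iou_thr
            and pred["sub_label"] == gt["sub_label"]
            and pred["obj_label"] == gt["obj_label"]
            and pred["rel_label"] == gt["rel_label"])
```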
### _Implementation details_
ResNet-50 [57] is used as the CNN backbone. Both temporal decoders inside the spatio-temporal decoders have a single layer. We follow Qpic's [38] setup for the encoders. We use \(6\) layers for the object spatial decoder and \(3\) layers for the relation spatial decoder. All loss coefficients in equation 6 and equation 7 are set as in [38]. The number of queries in each branch is \(64\). Each query is a vector of size \(256\). The model is trained with the AdamW [68] optimizer. We initialize the parameters of DDS from DETR [6] trained on the COCO [69] object detection dataset. The initial learning rate for the backbone network is \(10^{-6}\) and for the rest of the network is \(10^{-5}\).
When training on the AG dataset, we drop the learning rate by a factor of \(10\) every \(40\) epochs and utilize a batch size of \(128\). DDS processes the frames of a single video sequentially. We utilize scale augmentation as in [6]. Input frames are resized such that the shortest side is at least 480 and at most 800 pixels, and the longest side is at most 800.
On the other datasets [8, 9], the learning rate is dropped by a factor of \(10\) every \(60\) epochs with a batch size of \(16\). We use a scale augmentation scheme similar to the one used for AG, except that the longest side of the resized image is chosen as \(1333\). The training schedule is selected based on the convergence of losses. Upon acceptance, we will publicly release our trained models and code.
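A minimal sketch of such an optimizer and step-decay schedule in PyTorch is shown below; the parameter-group split by name and the helper itself are assumptions, while the learning rates and the decay factor follow the values stated above (with the drop interval set to 40 epochs for AG and 60 for the other datasets).

```
import torch

def build_optimizer(model, backbone_keyword="backbone", lr_drop_epochs=40):
    """Hypothetical training setup: 1e-6 for the backbone, 1e-5 elsewhere,
    with the learning rate divided by 10 every `lr_drop_epochs` epochs."""
    backbone_params, other_params = [], []
    for name, param in model.named_parameters():
        (backbone_params if backbone_keyword in name else other_params).append(param)
    optimizer = torch.optim.AdamW(
        [{"params": backbone_params, "lr": 1e-6},
         {"params": other_params, "lr": 1e-5}])
    scheduler = torch.optim.lr_scheduler.StepLR(
        optimizer, step_size=lr_drop_epochs, gamma=0.1)
    return optimizer, scheduler
```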
## V Results & Analysis
In this section, we first compare DDS's performance with the SOTA models in Section V-A. Next, a detailed study on the impact of different components of our network is presented in Section V-B. Finally, in Section V-C qualitative results from our model are provided.
### _Comparison with the SOTA models_
On the AG [7] dataset, we report DDS's performance in Table I under the compositional setting. In this setting, there is \(\sim 12\%\) less training data with \(80\) unseen relationship triplets. We retrain the SOTA model STTran [5] in the mentioned setting for comparison. It is important to note that, among the three recent DSG generation models [3, 4, 5], only STTran's code is publicly available, which limits our capacity to evaluate the other models. DDS outperforms STTran at all recall levels in both unseen and seen relationship triplet detection. In particular, for detecting unseen triplets, DDS achieves a \(4-24\) times improvement over the SOTA model, which shows the generalization power of DDS.
Additionally, for a fair comparison with other models, we train DDS on the full training set of the AG dataset and report performance in Table II. Here, all methods up to VCTree [12] are SSG generation methods implemented by [5]. As expected, with the same training data as the other models in Table II, DDS achieves SOTA performance on all metrics.
In Table III and Table IV, DDS outperforms all existing methods on the HICO-Det [8] and UnRel [9] datasets. Among these methods, [65] and [55] follow a Transformer based encoder-decoder architecture. The proposed network shows superior performance to both of these models. In summary, DDS outperforms existing works in both seen and unseen SSG generation (full category) on HICO-Det by \(5\%\) and on UnRel by \(33\%\).
### _Ablation Studies_

**Multi-branch Design:** We first validate our decoupled multi-branch design. The performances are reported in Table V. The base network has a single branch, where one set of queries is used for both relationship and object detection. As row 1 of Table V shows, the performance of the single-branch base network is poor, especially in the unseen category. Next, a multi-branch network is created by using two spatio-temporal decoders. Both decoders get the same encoded features. However, the relation branch in this case does not perform relation region prediction. A performance improvement over the single branch is observed for this design. Similarly, with the gradual introduction of two encoders and relation region prediction, the performance keeps increasing. In particular, all these components yield a significant improvement in the unseen category compared to the seen category and thus show the impact of decoupled learning on unseen relationship triplet predictions. We also provide a qualitative analysis of DDS and the base network in Section V-C.
**Relation Region Ground Truths:** As noted in Section III-F, the relation branch only produces relation region predictions during training. Ground truth relation regions are required for this training. However, ground truth relation regions are not strictly defined by the provided annotations, such as subject and object bounding boxes. Therefore, we experiment with different settings to generate ground truth relation regions for subject and object pairs that have relationships:
* Mixture: In this case, the intersection of the subject and object bounding boxes is selected as the relation region if the boxes have an IoU greater than a threshold \(\theta\). In any other case, the relation region is defined as their union box. Different values of \(\theta\) are tested.
* Union box: Here, the union of the subject and object bounding boxes is defined as the relation region.
The performances of the network with different relation region ground truths are shown in Table VII. With union boxes, DDS performs best (last row). This may be due to the fact that the union box guarantees the inclusion of the spatial location of the relation and therefore it can be very helpful in detecting
Fig. 4: Qualitative results of DDS for predicting unusual relationship triplets in UnRel [9] dataset. The subject bounding box is green and the object bounding box is red. _Our base network (single branch) fails to detect these marked triplets._ For both networks, we utilize top-20 predictions per sample.
non-contact relationship triplets (e.g., the subject looks at an object).
**Share Queries:** DDS utilizes two different sets of queries for the relation and the object branches. We also test DDS's performance by sharing the queries between the branches with two different strategies:
* DDS with o to r: In this case, the output of the object spatio-temporal decoder is fed as input relation queries to the relation spatio-temporal decoder.
* DDS with r to o: In this case, the output of the relation spatio-temporal decoder is fed as input object queries to the object spatio-temporal decoder.
The performances are reported in Table VI. Without sharing the queries DDS performs the best, especially in unseen categories. This matches our hypothesis on the importance of decoupled learning by utilizing two different sets of queries. We also notice a significant performance drop even in seen classes when we share queries from the relation to the object branch (2nd row). This shows object queries have more generalization ability than the relation queries.
**Spatial Decoders:** We test different numbers of layers for the spatial decoders. The results are shown in Table VIII. DDS performs poorly in the extreme cases (first and last row). This is expected, as DDS underfits when the number of layers in the relation branch decreases (first row) and overfits as it increases (last row). Also, the best performance in the unseen category arises when the object decoder has more layers than the relation decoder. Given that the object spatial decoder decodes two entities (subject, object), it is reasonable that it requires more layers than the relation spatial decoder.
### _Qualitative Results_
This section compares DDS's performance with our base network, a single-branch network with one encoder and one spatio-temporal decoder (details in Section V-B). This comparison is made on the UnRel [9] dataset, which has very unusual unseen relationship triplets. Fig. 4 illustrates samples where DDS is successful while our base network completely misses the relationship triplets (predicted bounding boxes do not match the ground truth). We utilize the \(20\) most confident predictions for each sample from both networks for this visualization.
We compare the attention maps from DDS and the base network to further analyze our improved performances. Fig. 5 presents attention maps for the samples where both DDS and the base network have correct predictions. The attention maps are of the queries that predict the marked subject and object bounding boxes from the last layer of the spatio-temporal decoders. We overlap attention maps from both our spatio-temporal decoders to get the final attention map. As can be seen, although both networks have correct predictions, DDS's attention maps cover the correct spatial region. In contrast, the base network with only one spatio-temporal decoder has attention on a very random portion of the object and the subject.
## VI Conclusion
This paper proposes a multi-branch decoupled network for DSG generation. The DDS network is composed of two encoder-decoder based Transformer branches. This design allows independent learning of objects and relationships, thus enabling DDS to detect unseen relationship triplets. The effectiveness of DDS is demonstrated through extensive experiments, with DDS achieving SOTA performance on three benchmark datasets. Moreover, the conducted ablation studies provide the motivation for and show the significance of the different components of DDS. However, while successful compared to existing works, the quantitative results (Tables II, III, IV) show room for improvement in detecting unseen relationship triplets. Future research will focus on improving DDS for better generalized DSG generation.
## Acknowledgments
This research is partially supported by the following grants: US Army Research Laboratory (ARL) under agreement number W911NF2020157; and by NSF award SI2-SSI #1664172. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of US Army Research Laboratory (ARL) or the U.S. Government.
Fig. 5: Performance analysis of DDS over the base network. The subject bounding box is green and the object bounding box is red. The attention maps are visualized from the last layer of the spatio-temporal decoder. |
2303.09032 | Conditionally Optimistic Exploration for Cooperative Deep Multi-Agent
Reinforcement Learning | Efficient exploration is critical in cooperative deep Multi-Agent
Reinforcement Learning (MARL). In this work, we propose an exploration method
that effectively encourages cooperative exploration based on the idea of
sequential action-computation scheme. The high-level intuition is that to
perform optimism-based exploration, agents would explore cooperative strategies
if each agent's optimism estimate captures a structured dependency relationship
with other agents. Assuming agents compute actions following a sequential order
at \textit{each environment timestep}, we provide a perspective to view MARL as
tree search iterations by considering agents as nodes at different depths of
the search tree. Inspired by the theoretically justified tree search algorithm
UCT (Upper Confidence bounds applied to Trees), we develop a method called
Conditionally Optimistic Exploration (COE). COE augments each agent's
state-action value estimate with an action-conditioned optimistic bonus derived
from the visitation count of the global state and joint actions of preceding
agents. COE is performed during training and disabled at deployment, making it
compatible with any value decomposition method for centralized training with
decentralized execution. Experiments across various cooperative MARL benchmarks
show that COE outperforms current state-of-the-art exploration methods on
hard-exploration tasks. | Xutong Zhao, Yangchen Pan, Chenjun Xiao, Sarath Chandar, Janarthanan Rajendran | 2023-03-16T02:05:16Z | http://arxiv.org/abs/2303.09032v2 | # Conditionally Optimistic Exploration for Cooperative Deep Multi-Agent Reinforcement Learning
###### Abstract
Efficient exploration is critical in cooperative deep Multi-Agent Reinforcement Learning (MARL). In this paper, we propose an exploration method that efficiently encourages cooperative exploration based on the idea of the theoretically justified tree search algorithm UCT (Upper Confidence bounds applied to Trees). The high-level intuition is that to perform optimism-based exploration, agents would achieve cooperative strategies if each agent's optimism estimate captures a structured dependency relationship with other agents. At each node (i.e., action) of the search tree, UCT performs optimism-based exploration using a bonus derived by conditioning on the visitation count of its parent node. We provide a perspective to view MARL as tree search iterations and develop a method called Conditionally Optimistic Exploration (COE). We assume agents take actions following a sequential order, and consider nodes at the same depth of the search tree as actions of one individual agent. COE computes each agent's state-action value estimate with an optimistic bonus derived from the visitation count of the state and joint actions taken by agents up to the current agent. COE is adaptable to any value decomposition method for centralized training with decentralized execution. Experiments across various cooperative MARL benchmarks show that COE outperforms current state-of-the-art exploration methods on hard-exploration tasks.
## 1 Introduction
In recent years multi-agent reinforcement learning (MARL) has drawn much attention and has shown high potential to be applied to various real-world scenarios, such as transportation (Seow et al., 2009), robotics (Perrusquia et al., 2021), and autonomous driving (Shalev-Shwartz et al., 2016). Cooperative MARL is a multi-agent learning setting where the objective is to train multiple agents that can cooperate to maximize the expected return defined by the same reward function shared across all agents. There are several major challenges posed by this setting, such as credit assignment, scalability, non-stationarity, and partial observability. To address those challenges, Bernstein et al. (2002) propose the Centralized Training with Decentralized Execution (CTDE) learning paradigm. In this paradigm, information is shared across agents during training, guiding the learning of individual agents' policies and promoting cooperation, while agents are still able to run independently during decentralized execution.
One important line of research in CTDE is value decomposition, a general approach upon which many cooperative exploration methods build. Value decomposition learns a centralized action-value function that can be factorized into the individual utility function (i.e., individual Q-function) of each agent. To ensure the centralized policy is aligned with individual policies, Son et al. (2019) propose the Individual-Global-Max (IGM) principle that guarantees consistency between global and local greedy actions. A common approach to value decomposition is to learn a mixing network that computes the centralized action value from the utilities of all agents. Depending on the specific way to satisfy IGM, different methods have been introduced, including VDN (Sunehag et al., 2017), QMIX (Rashid et al., 2018), QTRAN (Son et al., 2019), and QPLEX (Wang et al., 2020).
Cooperative exploration adds another level of difficulty to the exploration challenge. In cooperative MARL, agents need to cooperatively explore the joint state-action space as the optimal joint policy may require a high degree of collaboration among them. In addition, there may exist different types of cooperative strategies associated with a task. Sound cooperative exploration methods should be able to identify the optimal strategy from potentially many sub-optimalities. For instance, if the task for a group of robots is to move
desks and chairs to another location, the optimal strategy is that, at each time, several agents together lift a heavy desk that a single agent cannot lift, while each remaining agent carries one chair. In this task, delivering items either only collectively or only separately is sub-optimal, even though either way the agents achieve a cooperative strategy. Therefore cooperative exploration is challenging in cooperative MARL, especially when the reward signals are sparse. Although directed exploration strategies have been widely studied in multi-armed bandit and single-agent RL settings, they fail to account for cooperation among agents. Moreover, it is not straightforward to adapt single-agent methods to cooperative MARL, due to the exponentially large state-action space and multi-agent credit assignment. The popular \(\varepsilon\)-greedy strategy has been shown to be ineffective in complex MARL coordination tasks (Wang et al., 2020).
Some recent works encourage cooperative exploration in MARL settings by maximizing the correlation among agents' behaviour, which trains each agent's policy to account for influences from other agents, so that agents achieve effective collaborative exploration behaviour. Correlation maximization is often realized by maximizing the mutual information (MI) between some quantities that can determine or reflect agents' behaviours, such as the trajectory history of each agent. Utilizing this idea, several works have been proposed and have empirically outperformed the value decomposition baselines across various benchmark tasks (Jaques et al., 2019; Mahajan et al., 2019; Wang et al., 2019; Kim et al., 2020; Li et al., 2021). However, two major issues may prohibit agents from learning the optimal joint strategy. First, optimizing the MI quantity for every pair of agents is not scalable, because the computation required to optimize all MI losses grows as the number of agents increases. Second, agents could learn different types of cooperative strategies, so a high degree of collaboration alone does not guarantee high performance. As pointed out by Li et al. (2022), simply maximizing the MI may not lead to high returns because agents may learn a sub-optimal joint strategy, regardless of how strongly correlated they are.
In this work, we revisit the idea of UCT exploration (Kocsis and Szepesvari, 2006), proposed in a perfect-information game setting where the game state is accessible at all nodes, and introduce it to encourage cooperative exploration in MARL. Our insight is simple: if each agent's optimism estimate encodes a structured dependency relationship with other agents, then by performing optimism-based exploration, agents are guided to explore cooperative strategies. Assuming that at each timestep agents take actions according to a sequential order, the action execution sequence can be viewed as a path from the root to a leaf of a tree. At each node of the tree, the preceding agent's taken action can be considered the parent node of the current agent. We can then perform optimism-based exploration by computing the upper confidence bounds of each action for the current agent, conditioned on its parent node's visitation count. We disable exploration after training to obtain decentralized agents for execution. We first review the basic background on the MARL setting and the UCT algorithm. Then, we describe how UCT exploration can be applied to the MARL setting to encourage cooperative exploration. We build COE on commonly used value decomposition methods, and use the hash-based counting technique (Tang et al., 2017) to enable counting visitations in continuous state-action domains. Our empirical results on various benchmark domains show that our method is more effective than well-known baselines in exploration-challenging tasks, and matches baseline performance in general MARL tasks.
## 2 Background
### Dec-POMDP
We model the cooperative multi-agent task as a Dec-POMDP (Decentralized Partially Observable Markov Decision Process) (Oliehoek and Amato, 2016), which is formally defined as a tuple \(G=\langle\mathcal{S},\mathcal{A},P,R,\Omega,O,n,\gamma\rangle\), where \(\mathcal{S}\) is the global state space, \(\mathcal{A}\) is the action space, \(\Omega\) is the observation space, \(n\) is the number of agents in the environment, and \(\gamma\in[0,1]\) is the discount factor. At each timestep \(t\), on state \(s\in\mathcal{S}\) each agent \(i\in\mathcal{N}\equiv\{1,\ldots,n\}\) takes an action \(a_{i}\in\mathcal{A}\). The joint action \(\mathbf{a}=[a_{i}]_{i=1}^{n}\in\boldsymbol{\mathcal{A}}\equiv\mathcal{A}^{n}\) leads to the next state \(s^{\prime}\) sampled from the transition probability \(P(s^{\prime}|s,\mathbf{a}):\mathcal{S}\times\boldsymbol{\mathcal{A}}\times \mathcal{S}\rightarrow[0,1]\), and obtains a global reward \(r\) according to the reward function \(R(s,\mathbf{a}):\mathcal{S}\times\boldsymbol{\mathcal{A}}\rightarrow\mathbb{R}\) shared across all agents. Each agent \(i\) has a local policy \(\pi_{i}(a_{i}|s):\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\). Based on the joint policy \(\boldsymbol{\pi}\equiv[\pi_{i}]_{i=1}^{n}\), the joint action-value function is defined as \(Q_{\boldsymbol{\pi}}(s,\mathbf{a})=\mathbb{E}_{\boldsymbol{\pi}}[\sum_{t=0}^{ \infty}\gamma^{t}r_{t}|s,\mathbf{a}]\). The objective is to find a joint policy that maximizes the action-value function.
We consider the partially observable setting, where each agent \(i\) does not observe the global state \(s\), and instead only has access to a local observation \(o_{i}\in\Omega\) drawn from the observation function \(O(s,i):\mathcal{S}\times\mathcal{N}\rightarrow\Omega\). Hence each agent \(i\) maintains its action-observation history \(\tau_{i}\in T\equiv(\Omega\times\mathcal{A})^{*}\), on which it can condition its policy \(\pi_{i}(a_{i}|\tau_{i}):T\times\mathcal{A}\rightarrow[0,1]\). With agent \(i\) observing the next observation \(o_{i}^{\prime}\), the updated next history is represented by \(\tau_{i}^{\prime}=\tau_{i}\cup\{o_{i}^{\prime}\}\). We denote the joint history by \(\boldsymbol{\tau}\equiv[\tau_{i}]_{i=1}^{n}\in\mathbf{T}\equiv T^{n}\), and similarly the joint next history by \(\boldsymbol{\tau}^{\prime}\equiv[\tau_{i}^{\prime}]_{i=1}^{n}\).
### UCT
UCT (Upper Confidence bounds applied to Trees) (Kocsis and Szepesvari, 2006) is a tree search algorithm commonly used in Monte-Carlo Tree Search for perfect-information games. In UCT, node selection is treated as a multi-armed bandit problem, where at each node its children nodes correspond to the arms, and the Upper Confidence Bound (UCB) bandit algorithm [1] is used to select the child node with the highest upper confidence. In particular, considering a sequence of node selections from the root to a leaf of a search tree as a trajectory at one timestep, at each depth the child node \(i\) with the highest upper confidence bound is selected:
\[B_{i}=X_{i}+c\sqrt{\frac{2\log(p)}{n_{i}}}, \tag{1}\]
where \(X_{i}\) is the empirical mean of the rewards that have been obtained by trajectories going through node \(i\), \(c\) is a constant controlling the scale of exploration, \(n_{i}\) and \(p\) are the number of times node \(i\) and its parent node have been visited, respectively. Intuitively, conditioned on preceding agents' actions, at the current node actions that have been taken fewer times will have a higher exploration bonus, hence UCT tends to take action combinations that are under-explored or promising actions with higher reward estimates. When the trajectory is completed, a reward is received at the leaf. The visitation count and reward estimate of each selected node are updated accordingly. The UCT paper provides a regret analysis of the UCT algorithm, proving that its expected regret is upper bounded by \(O(\log t)\), where \(t\) is the number of trajectories/timesteps.
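As a small illustration (not taken from the original paper), the UCT selection rule of Eq. (1) can be written as follows, assuming the per-child statistics are stored in simple arrays.

```
import math

def uct_select(child_values, child_counts, parent_count, c=1.0):
    """Pick the child with the highest upper confidence bound, Eq. (1).

    child_values: empirical mean reward X_i of each child
    child_counts: visit count n_i of each child
    parent_count: visit count p of the parent node
    """
    best, best_bound = None, float("-inf")
    for i, (x, n) in enumerate(zip(child_values, child_counts)):
        # Unvisited children get an infinite bound so they are tried first.
        bound = float("inf") if n == 0 else x + c * math.sqrt(2 * math.log(parent_count) / n)
        if bound > best_bound:
            best, best_bound = i, bound
    return best
```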
## 3 Related Work
**Single-agent exploration.** Exploration strategies have been extensively studied in single-agent deep RL settings. [1] provide a thorough literature survey of advanced exploration methods. In recent years, the category of bonus-based methods has been commonly applied to solve hard exploration tasks. Based on the Optimism in the Face of Uncertainty (OFU) principle, the high-level idea is to capture some notion of uncertainty or novelty, and augment the extrinsic reward the environment emits with an intrinsic reward that quantifies the uncertainty. For instance, count-based methods [1, 12, 13] measure novelty through the number of times the agent observes a state-action tuple. Houthooft et al. (2016) propose an intrinsic bonus based on the maximization of information gain about the agent's belief of the environment dynamics. Despite their recent success, simply applying bonus-based methods to MARL does not guarantee effective exploration. As the reward signal is shared across all agents, adding one additional centralized intrinsic reward may still be inefficient for learning structured cooperation due to the multi-agent credit-assignment challenge. Other successful exploration approaches like BootstrappedDQN [1] are unscalable in MARL because of the exponentially large state-action space.
**Multi-agent exploration.** A recent branch of research proposes to drive multi-agent exploration by promoting collaboration among agents through the maximization of the correlation or influence of agents. The correlation is commonly realized by the mutual information (MI) of quantities that define or reflect agents' behaviour, such as the trajectory history of each agent. For instance, MAVEN [14] learns a hierarchical policy to produce a latent variable that encodes the information about the joint policy, and maximizes the MI between this latent variable and the joint trajectories to encourage the correlation of agents' behaviour. Some other methods try to promote collaboration by maximizing pairwise MI between every two agents. For instance, EITI [22] maximizes the MI between one agent's transition and the other's state-action. VM3-AC [15] maximizes the MI between two agents' policy distributions. Pairwise MI is hard to scale to scenarios with a large number of agents, because the computation grows with the number of agents. Li et al. (2022) claim that one important downside of MI-based methods is the fact that a strong correlation does not necessarily correspond to high-return collaboration, especially when there exist multiple sub-optimal highly-cooperative strategies associated with the given task. Aside from MI-based methods, there are other approaches based on different intuitions. VACL [1] leverages variational inference and automatic curriculum learning to solve sparse-reward cooperative MARL challenges. Since this method only aims to solve goal-conditioned problems, it is not generally applicable. EMC [11] utilizes a value decomposition method to implicitly capture influence among agents and uses the prediction errors of individual Q-value functions as intrinsic rewards. Our method also tries to capture agent-wise dependency to guide exploration. Different from MI maximization or EMC, our method captures structured inter-dependency through each agent's conditional optimism estimate and performs optimism-based exploration.
**Action Conditioned Training.** As the learning objective in CTDE is to obtain decentralized agents for execution, previous works commonly assume agents take actions independently and simultaneously, even during the centralized training phase. A few recent works explicitly consider inter-dependency and cooperation learned through sequential action selection, where each agent's policy is conditioned on preceding agents' joint action. MACPF [22] learns a dependent joint policy and its independent counterpart by maximum-entropy RL [13]. ACE [11] is a Q-learning method that models the multi-agent MDP as a single-agent MDP by making the bootstrap target dependent on subsequent agents' actions. Leveraging the multi-agent advantage decomposition theorem [10], Multi-Agent Transformer (MAT) [23] casts MARL into a sequence modeling problem and uses a transformer architecture to map agents' observation sequences to agents' optimal action sequences. These methods consider action conditioning to increase the
expressiveness of the joint policy, hence improving its performance. Our method leverages action conditioning from a different perspective: predecessors' actions reflect dependency among agents, therefore can be used to adjust the optimism level to achieve efficient cooperative exploration.
## 4 Method: conditionally optimistic exploration (COE)
In this section, we introduce our UCT-inspired method Conditionally Optimistic Exploration (COE) to effectively drive exploration in cooperative deep MARL. We describe how we can view cooperative MARL as a sequence of tree search iterations. We then discuss the challenges to directly applying UCT to MARL. Then we present approaches to address these issues, which concludes details of our COE method.
### Multi-agent exploration as UCT
We first formulate action selection at each timestep of MARL as a bandit-based tree search procedure. We consider the following sequential decision-making scheme: at each timestep \(t\) all \(n\) agents take actions sequentially following some arbitrary but fixed pre-determined order. Without loss of generality, we use the identities of agents as the order, i.e., agent \(i\in\mathcal{N}\equiv\{1,\ldots,n\}\) is the \(i\)-th agent to select its action. Figure 1 depicts the formulation of MARL as UCT at each timestep \(t\), where the tree is a partially shown binary tree for the simplicity of illustration. To construct the tree structure, first the state \(s^{t}\) is the root node. Each agent \(i\) has \(k\) actions, corresponding to the \(k\) children nodes of the parent node at depth \(i-1\). Each node at depth \(i\) represents an intermediate stage to which the action sequence of agents \(\{1,\ldots,i\}\) sequentially transitions. When agent \(n\) takes its action, the action sequence reaches a leaf node, where the environment emits a reward to all agents, and transitions to the next state \(s^{t+1}\), which is the root node of the next tree.
By applying the UCT tree search algorithm to each tree, we model the cooperative MARL exploration as a sequence of UCT procedures. For action selection, conditioned on predecessors' joint action, denoted by \(a_{<i}\), each agent \(i\) estimates an action-value function \(Q_{i}(s,a_{i}|a_{<i})\). Augmenting the Q-value by an optimistic bonus conditioned on prior agents' actions, we have the upper confidence bound, denoted as \(B_{i}(s,a_{i}|a_{<i})\). It is worth noting that the Q-value estimate is generalized across states and preceding agents' joint actions using neural network function approximators, which removes the need to maintain an empirical reward estimate at every node in every tree. At depth \(i\), agent \(i\) uses the same Q-value function to take action, no matter which subtree the corresponding node is in.
It should be noted that there is a distinction between the conventional tree search problem setting and the cooperative MARL setting. In UCT, the game state information is accessible at all nodes, but our Dec-POMDP setting assumes partial observability. Accessing full state information enables agents to estimate action values based on the same global state and predecessors' actions, while in our setting we cannot have such estimates. CTDE also requires agents to act independently at execution time, without following any sequential order or conditioning policies on other agents' actions. As an approximate implementation, we build the conditional count module on value decomposition methods, and disable exploration after training to obtain decentralized agents. We empirically test QMIX with conditional optimism in Section 5.
### COE algorithm
We first briefly describe a common value decomposition learning paradigm. Then we present how we utilize conditional counts to drive optimistic exploration.
Each agent \(i\) has an independent Q-network \(Q_{i}^{ind}(\tau_{i},a_{i};\phi_{i})\) parameterized by \(\phi_{i}\). A mixing network \(\text{Mixer}(\cdot;\theta)\) parameterized by \(\theta\) is used to compute the joint Q-values:
\[Q_{jt}^{ind}(\boldsymbol{\tau},\mathbf{a})=\text{Mixer}([Q_{i}^{ind}(\tau_{i },a_{i})]_{i=1}^{N},s;\theta). \tag{2}\]
Individual agent's action-value networks \(Q_{i}^{ind}\) and the mixing network Mixer are trained by minimizing the mean-squared temporal-difference error:
\[\mathcal{L}^{ind}([\phi_{i}]_{i=1}^{N},\theta)=\mathbb{E}_{\mathcal{D}}[(Q_{jt}^{ ind}(\boldsymbol{\tau},\mathbf{a})-y^{ind})^{2}] \tag{3}\]
where \(y^{ind}=(r+\gamma\max_{\mathbf{a}^{\prime}}(Q_{jt}^{ind}(\boldsymbol{\tau}^{ \prime},\mathbf{a}^{\prime})))\) is the update target, and \(\mathcal{D}\) is the replay buffer containing trajectory data collected by \(Q_{i}^{ind}\)'s. It is worth noting that by IGM principle the greedy actions selected by \(Q_{i}^{ind}\)'s are the same actions \(Q_{jt}^{ind}\) would have taken. As centralized training backpropagates the global reward signal to learn the individual utilities \(Q_{i}^{ind}\)'s, value factorization implements an implicit multi-agent credit assignment that enables each agent to grasp the inter-dependency among all utilities.
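A compact sketch of this value decomposition training step (Eqs. (2)-(3)) is given below; it assumes generic `agent_nets` and `mixer` modules and a pre-collected batch, and is meant only to illustrate the data flow rather than reproduce the authors' implementation.

```
import torch
import torch.nn.functional as F

def value_decomposition_loss(agent_nets, mixer, target_agent_nets, target_mixer,
                             batch, gamma=0.99):
    """One TD update for Eqs. (2)-(3): mix per-agent utilities into Q_jt and
    regress it onto r + gamma * max_a' Q_jt(tau', a')."""
    # Chosen-action utilities Q_i(tau_i, a_i) for every agent.
    q_taken = torch.stack(
        [net(batch["obs"][i]).gather(-1, batch["actions"][i]).squeeze(-1)
         for i, net in enumerate(agent_nets)], dim=-1)
    q_jt = mixer(q_taken, batch["state"])                      # Eq. (2)

    with torch.no_grad():
        # Per-agent greedy utilities realize the joint max under the
        # IGM/monotonicity assumption of the value decomposition method.
        q_next = torch.stack(
            [net(batch["next_obs"][i]).max(-1).values
             for i, net in enumerate(target_agent_nets)], dim=-1)
        target = batch["reward"] + gamma * target_mixer(q_next, batch["next_state"])

    return F.mse_loss(q_jt, target)                            # Eq. (3)
```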
Figure 1: Modelling of MARL as Tree Search Procedure and Application of the UCT Algorithm.
Building on top of the value decomposition skeleton, we incorporate count-based optimism in both action selection and learning. During action selection, each agent \(i\) takes greedy actions with respect to its optimistic action-value
\[a_{i}\!=\!\arg\max_{a_{i}^{\prime}}\Bigg{\{}Q_{i}(\tau_{i},a_{i}^{\prime})\!+\!c _{\text{act}}\sqrt{\frac{2\log(N(s,a_{<i}))}{N(s,a_{<i},a_{i}^{\prime})}} \Bigg{\}}, \tag{4}\]
where \(a_{<i}\) represents the joint actions taken by agents prior to agent \(i\), and \(c_{\text{act}}\in\mathbb{R}_{+}\) is a hyper-parameter controlling the scale of optimism. Note that counting is performed in the global state space thanks to centralized training.
Moreover, we augment the global reward and the bootstrapped target each with a bonus term, such that the update target becomes
\[y\!=\!\left(r(s,\mathbf{a})+\frac{c_{\text{ew}}}{\sqrt{N(s, \mathbf{a})}}\right)+\] \[\gamma\max_{\mathbf{a}^{\prime}}\text{Mixer}\!\left(\!\left[Q_{i }(\tau_{i}^{\prime},a_{i}^{\prime})\!+\!\frac{c_{\text{boot}}}{\sqrt{N(s^{ \prime},a_{<i}^{\prime},a_{i}^{\prime})}}\right]_{i=1}^{N}\right)\!, \tag{5}\]
where \(c_{\text{ew}},c_{\text{boot}}\in\mathbb{R}_{+}\) are hyper-parameters controlling the scale of the optimistic bias in reward and bootstrapped target, respectively. These two bonus terms are added for two major reasons. First, we intend to maintain long-term optimism in the Q-functions. The acting-time optimism decreases as agents take actions, but unlike bandit or tabular MDP methods, COE's Q-value estimate is updated at a relatively slower rate due to the nature of gradient updates of neural networks. To encourage COE to explore persistently, the augmentation to the bootstrap target allows the Q-value itself to encode optimism through TD loss update. Second, since the bootstrap target is defined based on the Q-value estimates of the next states, the optimistic bootstrap target also captures uncertainty from subsequent agents and future timesteps. The idea of learning optimistic Q-values is originally proposed by Jin et al. (2018, 2020), Yang et al. (2020) and extended to deep RL by Rashid et al. (2020).
With the count-based optimism introduced, the complete learning algorithm is presented in Algorithm 1. During decentralized execution, the optimistic bonuses, although decayed to negligible magnitude, are disabled, and agents take independent actions according to \(Q_{i}^{ind}\)'s only.
```
Initialize parameters \(\mathbf{\phi},\theta\)
Visitation count \(N(s,\mathbf{a})\gets 0,\ \forall(s,\mathbf{a})\in\mathcal{S}\times\mathbf{\mathcal{A}}\)
Replay buffer \(\mathcal{D}\leftarrow\{\}\)
for each episode \(m=1,\dots,M\) do
  for each environment timestep \(t=1,\dots,T\) do
    for agent \(i=1,\dots,n\) do
      Select action \(a_{i}^{t}\) according to Equation (4)
    end for
    \(N(s^{t},\mathbf{a}^{t})\gets N(s^{t},\mathbf{a}^{t})+1\)
    \(s^{t+1}\sim P(s^{\prime}|s^{t},\mathbf{a}^{t})\), \(r^{t}=r(s^{t},\mathbf{a}^{t})\)
    \(\mathcal{D}\leftarrow\mathcal{D}\cup\{(s^{t},\mathbf{a}^{t},r^{t},s^{t+1})\}\)
    Perform a gradient update on Equation (3)
  end for
end for
```
**Algorithm 1** Conditionally Optimistic Exploration
To apply COE to deep MARL tasks, we need to approximate counts in high-dimensional or continuous state space. In our experiments we use the SimHash method (Tang et al., 2017) that projects states to a lower-dimensional feature space before counting. We record the visitation count for the tuple of the state \(s\) and all agents' joint action \(\mathbf{a}\), denoted by \(N(s,\mathbf{a})\). For each agent \(i\), the count up to its action \(a_{i}\) satisfies \(N(s,a_{<i},a_{i})=\sum_{a_{i+1}}N(s,a_{<i},a_{i},a_{i+1})=\sum_{a_{>i}}N(s,a_{<i },a_{i},a_{>i})\), where \(a_{<i}\) and \(a_{>i}\) denote the joint actions taken by preceding and subsequent agents of \(i\), respectively. This relationship shows that we can obtain any count up to \(a_{i}\) by summing up the counts of joint actions that overlap \(a_{<i}\) at state \(s\). Details about SimHash counting are presented in Appendix A.
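The following is a minimal sketch of SimHash-style counting in the spirit of Tang et al. (2017): a fixed random projection maps a state vector to a short binary code, counts are stored per (code, joint action), and prefix counts are recovered by summation as described above. The dimensionality, class structure, and variable names are illustrative assumptions.

```
import numpy as np
from collections import defaultdict

class SimHashCounter:
    """Count visits of (state, joint action) via a k-bit random projection hash."""

    def __init__(self, state_dim, k=16, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.standard_normal((k, state_dim))  # fixed Gaussian projection
        self.counts = defaultdict(int)

    def _code(self, state):
        # Sign pattern of the projected state gives a k-bit binary code.
        return tuple((self.A @ np.asarray(state, dtype=float) > 0).astype(int))

    def update(self, state, joint_action):
        self.counts[(self._code(state), tuple(joint_action))] += 1

    def count_prefix(self, state, action_prefix):
        """N(s, a_<i, a_i): sum the counts of all joint actions extending the prefix."""
        code, p = self._code(state), tuple(action_prefix)
        return sum(c for (s_code, a), c in self.counts.items()
                   if s_code == code and a[:len(p)] == p)
```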
## 5 Experiments
In this section, we evaluate COE on two sets of cooperative MARL tasks across three commonly used benchmarks: 1) sparse-reward tasks that specifically pose the cooperative exploration challenge, and 2) tasks that generally assess MARL methods' ability for effective coordination. Empirical results show that COE achieves higher sample efficiency and performance than other state-of-the-art approaches in sparse-reward tasks, and matches their performance in general cooperative tasks. We also present ablation studies to demonstrate the effectiveness of conditional optimism and COE's compatibility with common MARL methods. As a sanity check, we examine conditional optimism in a didactic multi-agent bandit problem.
### Evaluation Setup
We evaluate all algorithms on nine tasks over three benchmark environments. The tasks can be categorized into two sets according to their challenges: (1) Challenging sparse-reward tasks focused on efficient exploration. This includes SparseTag and Sparse Spread from the Multi-agent Particle Environments (MPE) (Lowe et al., 2017; Mordatch and Abbeel, 2018), and four tasks with different configurations from the Level-Based Foraging (LBF) environments (Albrecht and Ramamoorthy, 2015; Christiano et al., 2020; Papoudakis et al., 2020). (2) Tasks that generally assess multi-agent coordination. This includes Adversary in MPE, and an easy task 2s-vs-1sc and a hard task 3s-vs-5z in StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al.,
2019). Note that LBF tasks and Adversary are fully observable, whereas SMAC and other MPE domains are partially observable environments. More detailed descriptions of the environments and the evaluation protocol can be found in Appendix B and Appendix C, respectively.
To promote fair comparisons, we build all methods on the QMIX agents with the same agent and mixer network architectures. For the same reason we implement the canonical version of all methods where the only additional component is the exploration module unless otherwise specified. We follow the same protocol presented by Papoudakis et al. (2020) to optimize hyperparameters. Specifically we sweep hyperparameters on one task of each environment with three random seeds, and run the best configuration for all tasks in the respective environment with five seeds for the final experiments. Appendix F explains hyperparameter optimization in more detail.
### Performance
We evaluate COE and compare it with the following state-of-the-art baselines in the experiments: (i) QMIX (Rashid et al., 2018): \(\varepsilon\)-greedy QMIX with linearly annealed epsilon schedule; (ii) EMC (Zheng et al., 2021); (iii) MAVEN (Mahajan et al., 2019): combined with annealing \(\varepsilon\)-greedy. Empirical results show that COE outperforms all baselines in the sparse-rewards domains well-known for exploration challenges, and matches strong baseline performance in general multi-agent tasks.
Table 1 summarizes the average returns for the four algorithms in all nine tasks. We highlight the maximum average return in bold. We perform a two-sample t-test (Snedecor and Cochran, 1980) with a significance level \(0.05\) between the best performing algorithm and each other algorithm in each task. We mark the return values with an asterisk if the corresponding algorithm achieves a performance level that is not statistically significantly different from the highest performance. Difficult exploration tasks are shown in bold. The same table also reports the average win-rates in SMAC tasks as it is a common practice in MARL literature. The table summarizing maximum returns over training is presented in Appendix D.
The results in Table 1 and Figure 2 show that COE significantly outperforms other baselines in sparse-reward tasks that require efficient exploration. Particularly COE has higher sample efficiency in difficult LBF domains. In the
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & **Tasks Algs.** & COE & EMC & MAVEN & QMIX \\ \hline \multirow{3}{*}{MPE} & Adversary & \(17.77\pm 0.71\) & \(16.73\pm 0.83\) & \(19.57\pm 0.51\) & \(18.20\pm 0.56\) \\ & **Sparse Tag** & \(0.65\pm 0.09\) & \(0.43\pm 0.06\) & \(0.01\pm 0.00\) & \(0.40\pm 0.05\) \\ & **Sparse Spread** & \(0.79\pm 0.09\) & \(0.41\pm 0.18\) & \(0.10\pm 0.14\) & \(0.29\pm 0.05\) \\ \hline \multirow{4}{*}{LBF} & **10x10-3p-3f** & \(0.71\pm 0.05\) & \(0.68\pm 0.03\)* & \(0.16\pm 0.06\) & \(0.49\pm 0.01\) \\ & **15x15-3p-3f** & \(0.20\pm 0.02\) & \(0.12\pm 0.02\) & \(0.03\pm 0.00\) & \(0.08\pm 0.01\) \\ & **15x15-4p-3f** & \(0.41\pm 0.06\) & \(0.25\pm 0.07\) & \(0.04\pm 0.01\) & \(0.19\pm 0.02\) \\ & **15x15-4p-5f** & \(0.30\pm 0.02\) & \(0.23\pm 0.04\) & \(0.04\pm 0.00\) & \(0.15\pm 0.02\) \\ \hline \multirow{4}{*}{SMAC} & 2s-vs-1sc & \(17.83\pm 0.16\)* & \(17.88\pm 0.74\)* & \(17.78\pm 1.26\)* & \(18.21\pm 0.39\) \\ & 3s-vs-5z & \(16.03\pm 1.58\) & \(9.66\pm 2.62\) & \(14.11\pm 2.36\)* & \(11.74\pm 1.87\) \\ \cline{2-6} & 2s-vs-1sc (win rate) & \(0.79\pm 0.01\) & \(0.83\pm 0.04\) & \(0.82\pm 0.08\)* & \(0.83\pm 0.02\) \\ & 3s-vs-5z (win rate) & \(0.45\pm 0.09\) & \(0.08\pm 0.14\) & \(0.29\pm 0.12\)* & \(0.13\pm 0.11\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average Returns and 95% Confidence Interval for All Four Algorithms, and Average Win-rates for SMAC Tasks.
Figure 2: Episodic Returns and 95% Confidence Interval for All Algorithms in All Tasks except Adversary, with Sparse-Reward Tasks Marked Bold.
early exploration stage, all algorithms gain performance slowly, resulting in indistinguishable learning curves. As time progresses, COE makes improvements much faster than the baselines. The sample efficiency improvement leads to higher final and overall return values. In relatively easier exploration tasks SparseTag, SparseSpread, and Foraging-10x10-3p-3f, COE's outperformance is not as large as it is in the hard tasks. Some other baselines also learn strong policies in these tasks. Since all algorithms are built on the same QMIX agent, overall the results in sparse-reward domains demonstrate the effectiveness of conditional-optimism-guided exploration.
In the general MARL coordination tasks, COE has similar performance as the baselines. Adversary is evidently the easiest task among all tested tasks, where all algorithms quickly converge to the optimal policy at almost identical speed. In the hard _3s-vs-5z_ task in SMAC, COE shows better sample efficiency and final performance in terms of the mean episodic returns. This trend is similar to the trends we observe in sparse-reward tasks, although in this task the outperformance is not statistically significant. These results indicate that COE is not only an effective approach to hard exploration tasks; it is also a strong algorithm generally applicable to common MARL domains.
### Ablations
COE consists of two major components, namely the independent Q-value functions learned through centralized training, and the conditional optimism. In order to have a better understanding of COE, we test several ablation variants to evaluate these two components' contribution to performance gain. Results suggest that conditional optimism plays a dominant role in performance improvement. Compared to dependent Q-values conditioned on predecessors' actions, independent Q-values learned through value decomposition also work well with conditional optimism despite partial observability.
To evaluate the contributions of _independent_ Q-values in COE, we test the following ablation variants that learn _conditional_ Q-values: (1) COE-Cond-IQ: We apply conditional optimism to IQL. Each agent simultaneously learns an independent Q-network and a dependent Q-network that takes in predecessors' actions as extra inputs without centralized training. The dependent network selects actions during training. Two nets are trained on separate TD losses using the same replay batches. After training the independent network is responsible for decision-making at execution time. This variant directly mimics UCT in MARL without considering each agent's partial observability issue. (2) COE-Cond-CQ: We add a QMIX mixer to COE-Cond-IQ to enable centralized Q-value training. The same mixer computes the centralized Q-value \(Q_{i\,t}^{ind}\) for independent networks and \(Q_{i\,t}^{dep}\) for dependent networks. This variant ignores the potential issue that within individual Q-values the inter-dependency among agents captured by centralized training and that captured by action conditioning may not be aligned with each other.
To evaluate the contributions of _conditional_ optimism, we propose the following ablation variants that use _non-conditional_ optimism: (1) UCB-Ind: Similar to COE, each agent performs UCB-based exploration except that optimism is not conditioned on others' actions. (2) UCB-Cen: Agents receive UCB optimism only through the intrinsic reward \(\frac{c_{\text{max}}}{\sqrt{N(s,\mathbf{a})}}\) during centralized training.
We follow the same evaluation protocol described in Section 5.1 to conduct experiments. The average returns of the ablations are summarized in Table 2. Appendix D and Appendix E present the learning curves and a more detailed introduction to the ablations, respectively. Results show that COE has a similar performance as COE-Cond-CQ and COE-Cond-IQ in the majority of tested tasks. COE-Cond-IQ performs relatively worse in MPE tasks, but better in LBF tasks. This may be attributed to the partial observability issue: since LBF is fully observable, COE-Cond-IQ becomes a more legitimate adoption of UCT to cooperative MARL. COE-Cond-CQ matches COE's performance in MPE and SMAC. Although it underperforms COE in three LBF tasks, COE-Cond-CQ is still competitive and matches EMC's performance in LBF. These results suggest that conditional optimism boosts sample efficiency and overall performance with different Q-value estimation approaches.
On the other hand, UCB-Ind underperforms COE in hard LBF tasks and SMAC tasks. It also has a large variance across random seeds in SMAC tasks. UCB-Cen matches COE in half of the tasks, but it also suffers from large variances. Through these comparisons, we observe conditional optimism guides more steady performance improvement.
### Didactic Problem
A rigorous application of UCT in MARL requires learning each agent's state-action value conditioned on earlier agents' state-action pairs. Due to partial observability in MARL that prevents access to global states, we use value decomposition as an approximate implementation of conditional value estimation and empirically show its effectiveness in previous sections. In this section, we provide a sanity check on the bandit problem to re-demonstrate conditional optimism is important whereas conditional value estimation is unnecessary.
We consider the cooperative multi-agent multi-armed stochastic bandit problem, where the reward is based on the joint action of a group of agents. Suppose we have \(n\) agents, each with \(k\) actions. In our didactic Bernoulli bandit problem, only one out of \(k^{n}\) joint actions is optimal with distribution \(\mathcal{B}(p=0.9)\), and all other joint actions are sub-optimal with distribution \(\mathcal{B}(p=p_{0})\), where the sub-optimality value \(p_{0}\) is an environment hyper-parameter. For each bandit instance, a uniformly sampled joint action from all combinations is set to be optimal.
We test four UCB-based algorithm variants: (1) DepRew-DepOpt: it performs UCT exploration as in Equation (1), where both reward estimates and count-based optimism are dependent on prior agents' joint action; (2) IndRew-DepOpt: each agent maintains its reward estimates independently, but the optimism is dependent on predecessors; (3) IndRew-IndOpt: both reward estimates and optimism are independent; (4) UCB-Cen: one UCB learner whose action space is the cartesian product of all agents' action set. This variant is merely for performance comparison because it is not factorizable to decentralized agents.
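For illustration, the sketch below simulates the IndRew-DepOpt variant (independent reward estimates, optimism conditioned on predecessors' actions) on this didactic bandit; the reward model follows the description above, while the code organisation and hyper-parameter defaults are assumptions rather than the exact experimental setup.

```
import math, random
from collections import defaultdict

def run_ind_rew_dep_opt(n_agents=8, k=3, p0=0.3, steps=5000, seed=0):
    """Toy Bernoulli bandit: one optimal joint action pays B(0.9), all others B(p0).
    Each agent keeps independent reward means but an optimism bonus conditioned
    on the joint action of the agents acting before it."""
    rng = random.Random(seed)
    optimal = tuple(rng.randrange(k) for _ in range(n_agents))
    means = [[0.0] * k for _ in range(n_agents)]       # independent reward estimates
    pulls = [[0] * k for _ in range(n_agents)]
    cond_counts = defaultdict(int)                      # visits of joint-action prefixes
    hits = 0
    for t in range(1, steps + 1):
        joint = []
        for i in range(n_agents):
            parent = max(cond_counts[tuple(joint)], 1)
            def ucb(a):
                n = cond_counts[tuple(joint + [a])]
                return float("inf") if n == 0 else \
                    means[i][a] + math.sqrt(2 * math.log(parent) / n)
            joint.append(max(range(k), key=ucb))
        p = 0.9 if tuple(joint) == optimal else p0
        r = 1.0 if rng.random() < p else 0.0
        hits += tuple(joint) == optimal
        for i, a in enumerate(joint):                   # update statistics
            pulls[i][a] += 1
            means[i][a] += (r - means[i][a]) / pulls[i][a]
            cond_counts[tuple(joint[: i + 1])] += 1
        cond_counts[()] += 1
    return hits / steps                                 # fraction of optimal pulls
```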
We run experiments on a bandit problem with \(8\) agents and \(3\) actions of each agent over \(50\) seeds, and report the performance of all four algorithms for different sub-optimality settings in Figure 3. We evaluate algorithms with two metrics, the expected regret, which is preferably lower, and the percentage of optimal joint action being selected, which is preferably higher. Results show that both DepRew-DepOpt and IndRew-DepOpt quickly converge to the optimal policy across different sub-optimality settings. This suggests that in this bandit task, conditional optimism robustly drives efficient cooperative exploration, regardless of whether the reward estimates are learned independently. IndRew-IndOpt, on the other hand, is inefficient to identify the optimal joint action and has high variances across random seeds. These results highlight the significance of conditional count-based optimism, and its dominant role over the action-value estimates in coordinated exploration. In general, these results are consistent with MARL ablation results from Section 5.3.
## 6 Conclusions
In this paper, we draw the connection between cooperative multi-agent reinforcement learning (MARL) and tree search. Inspired by the tree search algorithm UCT, we propose a multi-agent exploration method Conditionally Optimistic Exploration (COE), that utilizes the sequential decision-making scheme and visitation count conditioned on previous agents' actions. Empirical results show that our method significantly outperforms state-of-the-art MARL baselines in sparse-reward hard-exploration tasks, and matches their performance in general coordination tasks.
One limitation of our method is that it may require a large amount of memory due to storing visitation counts of state-action tuples during training, which makes our method hard to scale to tasks with very large state-action space. An interesting future work is to utilize neural network density models to estimate pseudo-counts. Training such a model would require more computation but the model itself only occupies constant memory, which could eliminate potentially high memory usage.
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline & **Tasks Algs.** & COE & COE-Cond-IQ & COE-Cond-CQ & UCB-Ind & UCB-Cen \\ \hline \multirow{3}{*}{MPE} & Adversary & \(17.77\pm 0.71\) & \(15.46\pm 0.68\) & \(18.99\pm 0.26\) & \(17.07\pm 0.37\) & \(17.15\pm 0.82\) \\ & **Sparse Tag** & \(0.65\pm 0.09\) & \(0.07\pm 0.01\) & \(0.83\pm 0.17\) & \(0.52\pm 0.13\) & \(0.49\pm 0.13\) \\ & **Sparse Spread** & \(0.79\pm 0.09\) & \(0.36\pm 0.05\) & \(0.54\pm 0.19\) & \(0.59\pm 0.31\) & \(0.75\pm 0.16\) \\ \hline \multirow{4}{*}{LBF} & **10x10-3p-3f** & \(0.71\pm 0.05\) & \(0.76\pm 0.04\) & \(0.68\pm 0.01\) & \(0.67\pm 0.07\) & \(0.64\pm 0.02\) \\ & **15x15-3p-3f** & \(0.20\pm 0.02\) & \(0.19\pm 0.02\) & \(0.12\pm 0.05\) & \(0.15\pm 0.03\) & \(0.13\pm 0.05\) \\ & **15x15-4p-3f** & \(0.41\pm 0.06\) & \(0.47\pm 0.06\) & \(0.24\pm 0.06\) & \(0.23\pm 0.05\) & \(0.16\pm 0.10\) \\ & **15x15-4p-5f** & \(0.30\pm 0.02\) & \(0.23\pm 0.04\) & \(0.14\pm 0.02\) & \(0.23\pm 0.07\) & \(0.27\pm 0.06\) \\ \hline \multirow{4}{*}{SMAC} & 2s-vs-1sc & \(17.83\pm 0.16\) & \(16.67\pm 1.56\) & \(18.64\pm 0.48\) & \(11.09\pm 7.27\) & \(15.77\pm 1.45\) \\ & 3s-vs-5z & \(16.03\pm 1.58\) & \(16.85\pm 1.55\) & \(17.01\pm 0.74\) & \(11.03\pm 3.03\) & \(13.36\pm 3.41\) \\ \cline{2-7} & 2s-vs-1sc (win rate) & \(0.79\pm 0.01\) & \(0.66\pm 0.13\) & \(0.87\pm 0.04\) & \(0.47\pm 0.35\) & \(0.66\pm 0.07\) \\ & 3s-vs-5z (win rate) & \(0.45\pm 0.09\) & \(0.56\pm 0.15\) & \(0.57\pm 0.06\) & \(0.19\pm 0.18\) & \(0.25\pm 0.21\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average Returns and 95% Confidence Interval for All Ablations, and Average Win-rates for SMAC Tasks.
Figure 3: Mean Performance and Standard Error for Bandit Problem with \(8\) Agents and \(3\) Actions Each.
## Acknowledgements
We acknowledge the computational resources provided by the Digital Research Alliance of Canada. Janarthanan Rajendran acknowledges the support of the IVADO postdoctoral fellowship. Sarath Chandar acknowledges the support of the Canada CIFAR AI Chair program and an NSERC Discovery Grant.
|
2302.01747 | Approximation by Egyptian Fractions and the Weak Greedy Algorithm | Let $0 < \theta \leqslant 1$. A sequence of positive integers
$(b_n)_{n=1}^\infty$ is called a weak greedy approximation of $\theta$ if
$\sum_{n=1}^{\infty}1/b_n = \theta$. We introduce the weak greedy approximation
algorithm (WGAA), which, for each $\theta$, produces two sequences of positive
integers $(a_n)$ and $(b_n)$ such that
a) $\sum_{n=1}^\infty 1/b_n = \theta$;
b) $1/a_{n+1} < \theta - \sum_{i=1}^{n}1/b_i < 1/(a_{n+1}-1)$ for all
$n\geqslant 1$;
c) there exists $t\geqslant 1$ such that $b_n/a_n \leqslant t$ infinitely
often.
We then investigate when a given weak greedy approximation $(b_n)$ can be
produced by the WGAA. Furthermore, we show that for any non-decreasing $(a_n)$
with $a_1\geqslant 2$ and $a_n\rightarrow\infty$, there exist $\theta$ and
$(b_n)$ such that a) and b) are satisfied; whether c) is also satisfied depends
on the sequence $(a_n)$. Finally, we address the uniqueness of $\theta$ and
$(b_n)$ and apply our framework to specific sequences. | Hung Viet Chu | 2023-02-01T15:49:14Z | http://arxiv.org/abs/2302.01747v2 | # Underapproximation by Egyptian Fractions and the Weak Greedy Algorithm
###### Abstract.
Let \(0<\theta\leqslant 1\). A sequence of positive integers \((b_{n})_{n=1}^{\infty}\) is called a weak greedy underapproximation of \(\theta\) if \(\sum_{n=1}^{\infty}1/b_{n}=\theta\). We introduce the weak greedy underapproximation algorithm (WGUA), which, for each \(\theta\), produces two sequences of positive integers \((a_{n})\) and \((b_{n})\) such that
1. \(\sum_{n=1}^{\infty}1/b_{n}=\theta\);
2. \(1/a_{n+1}<\theta-\sum_{i=1}^{n}1/b_{i}<1/(a_{n+1}-1)\) for all \(n\geqslant 1\);
3. there exists \(t\geqslant 1\) such that \(b_{n}/a_{n}\leqslant t\) infinitely often.
We then investigate when a given weak greedy underapproximation \((b_{n})\) can be produced by the WGUA. Furthermore, we show that for any increasing \((a_{n})\) with \(a_{1}\geqslant 2\) and \(a_{n}\to\infty\), there exist \(\theta\) and \((b_{n})\) such that a) and b) are satisfied; whether c) is also satisfied depends on the sequence \((a_{n})\). Finally, we address the uniqueness of \(\theta\) and \((b_{n})\) and apply our framework to specific sequences.
Key words and phrases:Egyptian fraction; greedy algorithm; sequences 2020 Mathematics Subject Classification: 11A67, 11B99
## 1. Introduction
Throughout this paper, let \(\theta\) denote a number in \((0,1]\) and let \(G:(0,1]\to\mathbb{N}_{\geqslant 2}\) be the function
\[G(\theta)\ =\ \left\lfloor\frac{1}{\theta}\right\rfloor+1;\]
that is, \(G(\theta)\) gives the unique positive integer \(a\geqslant 2\) such that
\[\frac{1}{a}\ <\ \theta\ \leqslant\ \frac{1}{a-1}.\]
An _Egyptian_ fraction is a fraction of the form \(1/n\) for some positive integer \(n\). We consider the problem of representing \(\theta\) as an infinite sum of Egyptian fractions. One natural method is the _greedy underapproximation algorithm_ (GUA), which constructs a sequence of positive integers \((a_{n})_{n=1}^{\infty}\) recursively as follows: \(a_{1}=G(\theta)\geqslant 2\); supposing that \(a_{1},\ldots,a_{n}\) have been constructed, let
\[a_{n+1}\ =\ G\left(\theta-\sum_{i=1}^{n}\frac{1}{a_{n}}\right).\]
By [13, (3)], the sequence \((a_{n})\) is strictly increasing and particularly, satisfies
\[a_{1}\geqslant 2\text{ and }a_{n+1}\ \geqslant\ a_{n}^{2}-a_{n}+1. \tag{1.1}\]
Since by construction,
\[\theta-\sum_{i=1}^{n}\frac{1}{a_{n}}\;\leqslant\;\frac{1}{a_{n+1}-1}\;\to\;0,\]
we have
\[\sum_{n=1}^{\infty}\frac{1}{a_{n}}=\theta.\]
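The GUA is straightforward to implement with exact rational arithmetic. The following is a minimal sketch, not taken from the paper; the input \(\theta=5/6\) and the number of computed terms are arbitrary choices for illustration.

```python
from fractions import Fraction
from math import floor

def G(theta):
    # G(theta) = floor(1/theta) + 1, the unique a >= 2 with 1/a < theta <= 1/(a-1)
    return floor(1 / Fraction(theta)) + 1

def gua(theta, n_terms=6):
    """First n_terms of the greedy underapproximation a_1, a_2, ... of theta."""
    terms, remainder = [], Fraction(theta)
    for _ in range(n_terms):
        a = G(remainder)
        terms.append(a)
        remainder -= Fraction(1, a)           # remainder stays strictly positive
    return terms

print(gua(Fraction(5, 6)))   # [2, 4, 13, 157, ...]; successive terms roughly square, cf. (1.1)
```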
According to [14, Theorem 5], if \(\theta=p/q\), where \(p,q\) are positive integers such that \(p\) divides \(q+1\), then the GUA produces the best approximations, i.e., the \(n\)-term approximation \(\sum_{i=1}^{n}1/a_{i}\) outperforms any other \(n\)-term underapproximation using Egyptian fractions. This generalizes a result in [13, 12, 14]. The proof involves a useful inequality established in [1] (see also [14].) However, such optimality does not hold for general \(\theta\) (see [14, Section 5].)
The goal of this paper is to investigate a weak version of the GUA, which is inspired by the so-called _(weak) thresholding greedy algorithm_ (TGA) in the area of functional analysis. We describe the (weak) TGA briefly. Let \(X\) be an infinite-dimensional, complete, normed vector space. Assume further that \(X\) has a basis \(\mathcal{B}=(e_{n})_{n=1}^{\infty}\) so that every vector \(x\in X\) can be represented by a formal series \(\sum_{n=1}^{\infty}a_{n}e_{n}\), where \(a_{n}\) are scalars. In order to form an \(m\)-term approximation of \(x\), the TGA chooses \(m\) largest coefficients \(a_{n}\) in modulus. Formally, let \(A\subset\mathbb{N}\) verify \(|A|=m\) and
\[\min_{n\in A}|a_{n}|\;\geqslant\;\max_{n\notin A}|a_{n}|. \tag{1.2}\]
Then the TGA produces the \(m\)-term approximation \(\sum_{n\in A}a_{n}e_{n}\). It is not always true that approximations produced by this method converge to the original vector \(x\) as \(m\) grows. In fact, Konyagin and Temlyakov [10] called a basis _quasi-greedy_ if these approximations converge to the desired \(x\). Meanwhile, Temlyakov [11] introduced a weaker version of the TGA, called the weak TGA (WTGA), which is more flexible in forming approximating sums. In particular, fixing a number \(t\in(0,1]\), the WTGA considers sets \(A\) satisfying \(|A|=m\) and
\[\min_{n\in A}|a_{n}|\;\geqslant\;t\max_{n\notin A}|a_{n}|. \tag{1.3}\]
Clearly, (1.3) is weaker than (1.2). In other words, the WTGA chooses the "largest coefficients up to a constant." Surprisingly, the flexibility of the WTGA does not affect convergence: a basis is quasi-greedy under the TGA if and only if it is quasi-greedy under the WTGA (see [11, Section 1.5].)
Inspired by the aforementioned interactions between the TGA and the WTGA, we introduce the _weak greedy underapproximation algorithm_ (WGUA) as a companion of the GUA. The idea is that at the \(n\)th step of our weak algorithm, we pick \(a_{n}\) based on the "greedy choice up to a constant". Specifically, fix \(t\in\mathbb{R}_{\geqslant 1}\) and an infinite set \(\Lambda\subset\mathbb{N}\). For each \(\theta\in(0,1]\), we define the \((t,\Lambda)\)-WGUA as follows: let \(a_{1}=G(\theta)\). Choose \(b_{1}\geqslant a_{1}\). Additionally, we require \(b_{1}\leqslant ta_{1}\) if \(1\in\Lambda\). Assuming that \(a_{1},b_{1},\ldots,a_{n},b_{n}\) have been defined, we let
\[a_{n+1}\;=\;G\left(\theta-\sum_{i=1}^{n}\frac{1}{b_{i}}\right). \tag{1.4}\]
Choose \(b_{n+1}\geqslant a_{n+1}\). Additionally, we require \(b_{n+1}\leqslant ta_{n+1}\) if \(n+1\in\Lambda\). We see that the \((t,\Lambda)\)-WGUA generalizes the GUA by simply setting \(t=1\) and \(\Lambda=\mathbb{N}\).
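As an illustration, here is a minimal sketch (not part of the paper) of the special case \(\Lambda=\mathbb{N}\) with \(b_{n}=\lceil ta_{n}\rceil\), i.e., the algorithm \(\mathcal{G}(t)\) studied in Section 2; the inputs \(t=4/3\) and \(\theta=19/48\) are only illustrative.

```python
from fractions import Fraction
from math import ceil, floor

def G(theta):
    # G(theta) = floor(1/theta) + 1
    return floor(1 / Fraction(theta)) + 1

def weak_gua(theta, t=Fraction(4, 3), n_terms=8):
    """The G(t) special case of the WGUA: a_n is the greedy index, b_n = ceil(t * a_n)."""
    a_seq, b_seq, remainder = [], [], Fraction(theta)
    for _ in range(n_terms):
        a = G(remainder)
        b = ceil(t * a)          # weak choice: within the factor t of the greedy one
        a_seq.append(a)
        b_seq.append(b)
        remainder -= Fraction(1, b)
    return a_seq, b_seq, remainder

a_seq, b_seq, rem = weak_gua(Fraction(19, 48))
print(b_seq)      # [4, 10, 30, ...]; the 2-term example for 19/48 in Section 2 instead takes its last term greedily
print(float(rem)) # theta minus the partial sum after 8 terms (tends to 0)
```

Setting `t = Fraction(1)` reduces the sketch to the GUA.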
**Definition 1.1**.: An infinite sequence of positive integers \((b_{n})_{n=1}^{\infty}\) is called a _weak greedy underapproximation_ of \(\theta\) if \(\sum_{n=1}^{\infty}1/b_{n}=\theta\) and for all \(n\geqslant 1\),
\[G\left(\theta-\sum_{i=1}^{n-1}\frac{1}{b_{i}}\right)\ \leqslant\ b_{n}. \tag{1.5}\]
Inequality (1.5) indicates that a term \(b_{n}\) is not necessarily picked by the greedy algorithm. Attentive readers may notice that (1.5) is superfluous. Indeed, suppose that for some \(N\),
\[b_{N}\ <\ G\left(\theta-\sum_{i=1}^{N-1}\frac{1}{b_{i}}\right)\ =:\ a_{N}.\]
Then
\[\sum_{i=1}^{N}\frac{1}{b_{i}}\ =\ \sum_{i=1}^{N-1}\frac{1}{b_{i}}+\frac{1}{b_{ N}}\ \geqslant\ \sum_{i=1}^{N-1}\frac{1}{b_{i}}+\frac{1}{a_{N}-1}\ \geqslant\ \theta,\]
which contradicts that \(\sum_{n=1}^{\infty}1/b_{n}=\theta\).
We describe the paper's structure. In Section 2, we show that the WGUA satisfies the minimal requirement for an algorithm to be sensible; that is, for every \(\theta\), the sequence \((b_{n})\) produced by the WGUA satisfies
\[\sum_{n=1}^{\infty}\frac{1}{b_{n}}\ =\ \theta. \tag{1.6}\]
This is the analog of the relation between the TGA and the WTGA. Moreover, we compute the growth rate of the sequence \((b_{n})\) produced by the \((t,\Lambda)\)-WGUA when \(\Lambda=\mathbb{N}\) and \(b_{n}=\lceil ta_{n}\rceil\) (Proposition 2.2.)
In Section 3, we carry out a deeper study of the two sequences \((a_{n})\) and \((b_{n})\) produced by the WGUA. According to Section 2, if \((b_{n})\) is produced by the WGUA applied to \(\theta\), then \((b_{n})\) is a weak greedy underapproximation of \(\theta\). We shall show that the converse is not true: there exist \(\theta\) and \((b_{n})\) such that \(\sum_{n=1}^{\infty}1/b_{n}=\theta\), but \((b_{n})\) cannot be produced by the WGUA. To do so, we observe that \(\theta,(a_{n}),(b_{n})\) produced by the WGUA have three properties
* \(\sum_{n=1}^{\infty}1/b_{n}=\theta\) (see Section 2);
* (1.4) holds for all \(n\);
* there exists \(t\geqslant 1\) such that \(b_{n}/a_{n}\leqslant t\) infinitely often.
As we shall see, condition c) guarantees the convergence (1.6). However, even when \(\theta\), \((a_{n})\), and \((b_{n})\) verify a) and b), they do not necessarily satisfy c). As a result, in such cases, \((b_{n})\) cannot be produced by the WGUA. We then go further to characterize the situation when c) does not hold (see Proposition 3.2.)
Next, we consider the following question: given an increasing sequence \((a_{n})\) with \(a_{1}\geqslant 2\) and \(a_{n}\to\infty\), are there \(\theta\in(0,1]\) and \((b_{n})\) such that a) and b) hold? According to [13, Corollary 3], the answer is positive if \(a_{n+1}\geqslant a_{n}^{2}-a_{n}+1\), in which case, \(\theta=\sum_{n=1}^{\infty}1/a_{n}\) and \(b_{n}=a_{n}\) for all \(n\geqslant 1\). By explicit construction, we answer
the aforementioned question in the affirmative for any increasing sequence \((a_{n})\) with \(a_{1}\geqslant 2\) and \(a_{n}\to\infty\) (see Theorem 3.5 and its Corollary 3.6.)
Section 4 gives necessary and sufficient conditions for when a sequence \((a_{n})\) gives unique \(\theta\) and \((b_{n})\) (Corollary 4.2 and Proposition 4.3.) Finally, Section 5 applies the framework from previous sections to particular sequences including geometric progressions, arithmetic progressions, and the Fibonacci sequence.
## 2. Convergence of the Wgua
The minimal requirement we want the WGUA to satisfy is convergence, which is confirmed by the following proposition.
**Proposition 2.1**.: _If \((b_{n})_{n=1}^{\infty}\) is obtained from the \((t,\Lambda)\)-WGUA applied to \(\theta\), then_
\[\sum_{n=1}^{\infty}\frac{1}{b_{n}}\ =\ \theta.\]
Proof.: Let \((a_{n}),(b_{n})\) be the two sequences produced by the \((t,\Lambda)\)-WGUA applied to \(\theta\): for each \(n\geqslant 1\),
\[a_{n}\ =\ G\left(\theta-\sum_{i=1}^{n-1}\frac{1}{b_{i}}\right);\ \text{ equivalently,}\ 0\ <\ \frac{1}{a_{n}}\ <\ \theta-\sum_{i=1}^{n-1}\frac{1}{b_{i}}\ \leqslant\ \frac{1}{a_{n}-1}. \tag{2.1}\]
Hence, \((a_{n})\) is increasing. It suffices to prove that \((a_{n})\) is unbounded. Suppose otherwise that there is some \(M\) such that \(a_{n}\leqslant M\) for all \(n\). Then \(b_{n}\leqslant Mt\) infinitely often, which implies that \(\sum_{n=1}^{\infty}1/b_{n}=\infty\), contradicting (2.1).
Next, we consider a special case of the general \((t,\Lambda)\)-WGUA by requiring that \(\Lambda=\mathbb{N}\) and for all \(n\), \(b_{n}=\lceil ta_{n}\rceil\). Let us denote this algorithm by \(\mathcal{G}(t)\). Suppose that we use \(\mathcal{G}(t)\) to obtain an \(n\)-term underapproximation \(\sum_{i=1}^{n}1/c_{i}\) of \(\theta\). Then a logical choice is to have \(c_{i}=b_{i}=\lceil ta_{i}\rceil\) for all \(1\leqslant i\leqslant n-1\), while \(c_{n}=a_{n}\). (It makes no sense if we do not choose the last term \(c_{n}\) greedily.) An approximation by \(\mathcal{G}(4/3)\) may outperform the GUA. We borrow an example from [10]. The GUA gives \(1/3+1/17\) as a \(2\)-term underapproximation of \(19/48\), while \(\mathcal{G}(4/3)\) gives \(1/4+1/7\). We have
\[\frac{1}{3}+\frac{1}{17}\ <\ \frac{1}{4}+\frac{1}{7}\ <\ \frac{19}{48}.\]
By definition, \(\mathcal{G}(1)\) is the greedy underapproximation algorithm. There is an interesting difference between \(t=1\) and \(t>1\). If \((b_{n})\) is obtained by \(\mathcal{G}(1)\) applied to \(\theta\), then [10, (3)] gives
\[\frac{b_{n+1}}{b_{n}}\ \geqslant\ b_{n}-1+\frac{1}{b_{n}}.\]
Since \(\lim_{n\to\infty}b_{n}=\infty\), we get \(\lim_{n\to\infty}b_{n+1}/b_{n}=\infty\). However, the limit is finite when \(t>1\) as the following proposition shows.
**Proposition 2.2**.: _If \((b_{n})_{n=1}^{\infty}\) is the sequence from \(\mathcal{G}(t)\) applied to \(\theta\), then_
\[\lim_{n\to\infty}\frac{b_{n+1}}{b_{n}}\ =\ \begin{cases}t/(t-1)&\text{ if }t>1,\\ \infty&\text{ if }t=1.\end{cases}\]
Before proving Proposition 2.2, we record an important inequality addressing the relation between \((a_{n})\) and \((b_{n})\) produced by the WGUA. For each \(n\geqslant 1\), we have
\[\frac{1}{a_{n+1}}\;<\;\theta-\sum_{i=1}^{n}\frac{1}{b_{i}}\;=\;\left(\theta-\sum _{i=1}^{n-1}\frac{1}{b_{i}}\right)-\frac{1}{b_{n}}\;\leqslant\;\frac{1}{a_{n}- 1}-\frac{1}{b_{n}},\]
and
\[\frac{1}{a_{n+1}-1}\;\geqslant\;\theta-\sum_{i=1}^{n}\frac{1}{b_{i}}\;=\; \left(\theta-\sum_{i=1}^{n-1}\frac{1}{b_{i}}\right)-\frac{1}{b_{n}}\;>\;\frac{ 1}{a_{n}}-\frac{1}{b_{n}}.\]
Hence,
\[\frac{1}{a_{n}}-\frac{1}{a_{n+1}-1}\;<\;\frac{1}{b_{n}}\;<\;\frac{1}{a_{n}-1}- \frac{1}{a_{n+1}},\forall n\in\mathbb{N}. \tag{2.2}\]
Proof of Proposition 2.2.: The case \(t=1\) is explained right before Proposition 2.2. Let \(t>1\). The right side of (2.2) yields
\[\frac{1}{a_{n+1}}\;<\;\frac{1}{a_{n}-1}-\frac{1}{b_{n}}\;=\;\frac{1}{a_{n}-1} -\frac{1}{\left\lceil ta_{n}\right\rceil}\;<\;\frac{1}{a_{n}-1}-\frac{1}{ta_{ n}+1},\forall n\in\mathbb{N}.\]
Therefore,
\[\frac{1}{a_{n+1}}\;<\;\frac{(t-1)a_{n}+2}{(ta_{n}+1)(a_{n}-1)}\;\Longrightarrow \;\frac{a_{n+1}}{a_{n}}\;>\;\frac{\left(t+\frac{1}{a_{n}}\right)\left(1-\frac {1}{a_{n}}\right)}{t-1+\frac{2}{a_{n}}}. \tag{2.3}\]
The left side of (2.2) yields
\[\frac{1}{a_{n+1}-1}\;>\;\frac{1}{a_{n}}-\frac{1}{b_{n}}\;=\;\frac{1}{a_{n}}- \frac{1}{\left\lceil ta_{n}\right\rceil}\;\geqslant\;\frac{1}{a_{n}}-\frac{1 }{ta_{n}}.\]
Hence,
\[\frac{a_{n+1}}{a_{n}}\;<\;\frac{t}{t-1}+\frac{1}{a_{n}}. \tag{2.4}\]
From (2.3) and (2.4), we obtain that \(\lim_{n\to\infty}a_{n+1}/a_{n}=t/(t-1)\). Since \(b_{n}=\left\lceil ta_{n}\right\rceil\), we have the desired conclusion.
## 3. The range of the WGUA
In this section, we address the question of whether every weak greedy underapproximation can be obtained from the WGUA. The boundedness condition on the WGUA requires that for some \(t\geqslant 1\), \(b_{n}/a_{n}\leqslant t\) infinitely often, which guarantees the convergence of \(\sum_{n=1}^{\infty}1/b_{n}\) to the desired \(\theta\) (see the proof of Proposition 2.1.) However, there exist \(\theta\) and \((b_{n})\) such that if \((a_{n})\) satisfies (1.4), then \(\lim_{n\to\infty}b_{n}/a_{n}=\infty\). By studying such a situation, we know more about the sequence \((a_{n})\) (see Corollary 3.3.) First, consider the following example.
**Example 3.1**.: For \(n\in\mathbb{N}\), let \(b_{n}=n(n+2)\) and \(\theta=3/4\). It is easy to check that \(\sum_{n=1}^{\infty}1/b_{n}=\theta\). We claim that if \((a_{n})\) satisfies (1.4), then \(a_{n}=n+1\). Indeed, it suffices to show that
\[\left\lfloor\left(\frac{3}{4}-\sum_{i=1}^{n-1}\frac{1}{i(i+2)}\right)^{-1}\right\rfloor\;=\;n,\forall n\in\mathbb{N}.\]
We have
\[\left\lfloor\left(\frac{3}{4}-\sum_{i=1}^{n-1}\frac{1}{i(i+2)}\right)^{-1}\right\rfloor \ =\ \left\lfloor\left(\sum_{i=1}^{\infty}\frac{1}{i(i+2)}-\sum_{i=1}^{n-1}\frac{1}{i(i+2)}\right)^{-1}\right\rfloor\] \[\ =\ \left\lfloor\left(\sum_{i=n}^{\infty}\frac{1}{i(i+2)}\right)^{-1}\right\rfloor\] \[\ =\ \left\lfloor\left(\frac{1}{2}\left(\frac{1}{n}+\frac{1}{n+1}\right)\right)^{-1}\right\rfloor\ \text{by telescoping}\] \[\ =\ \left\lfloor n+\frac{n}{2n+1}\right\rfloor\ =\ n.\]
Hence, \(a_{n}=n+1\) and \(b_{n}/a_{n}\to\infty\).
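A quick numerical check of this example (not part of the paper) confirms that the greedy index from (1.4) equals \(n+1\) while \(b_{n}/a_{n}\) keeps growing:

```python
from fractions import Fraction
from math import floor

theta, partial = Fraction(3, 4), Fraction(0)
for n in range(1, 9):
    a_n = floor(1 / (theta - partial)) + 1    # the greedy index from (1.4)
    b_n = n * (n + 2)
    print(n, a_n, b_n, round(b_n / a_n, 2))   # a_n = n + 1, ratio b_n / a_n increases without bound
    partial += Fraction(1, b_n)
```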
The sequences \((a_{n})\) and \((b_{n})\) in Example 3.1 do not have \(b_{n}/a_{n}\) infinitely often bounded. In other words, a weak greedy underapproximation does not necessarily come from the WGUA. The next proposition provides a characterization of this situation.
**Proposition 3.2**.: _Let \((b_{n})_{n=1}^{\infty}\) be a weak greedy underapproximation of \(\theta\) and \((a_{n})_{n=1}^{\infty}\) satisfy (1.4). The following are equivalent_
* _for all_ \(t\geqslant 1\)_,_ \(\{n:b_{n}/a_{n}\leqslant t\}\) _is finite._
* \(\lim_{n\to\infty}a_{n+1}/a_{n}=1\)_._
**Corollary 3.3**.: _Let \((b_{n})_{n=1}^{\infty}\) be a weak greedy underapproximation of \(\theta\) and \((a_{n})_{n=1}^{\infty}\) satisfy (1.4). Then \((a_{n})_{n=1}^{\infty}\) and \((b_{n})_{n=1}^{\infty}\) are obtained from the WGUA if and only if for some \(\varepsilon>0\), \(a_{n+1}>(1+\varepsilon)a_{n}\) infinitely often._
Proof of Proposition 3.2.: i)\(\implies\)ii): Since \((a_{n})\) is increasing, it suffices to show that for all \(\varepsilon>0\), there exists \(N\) such that \(a_{n+1}/a_{n}<1+\varepsilon\) for all \(n>N\). Choose \(M\) sufficiently large such that \(M/(M-1)<1+\varepsilon/2\). By i), there exists \(N\) such that for all \(n>N\), \(b_{n}>Ma_{n}\) and \(1/a_{n}<\varepsilon/2\). By (2.2),
\[\frac{1}{a_{n+1}-1}\ >\ \frac{1}{a_{n}}-\frac{1}{b_{n}}\ >\ \frac{1}{a_{n}}- \frac{1}{Ma_{n}}\ =\ \frac{M-1}{M}\frac{1}{a_{n}},\forall n>N,\]
which gives
\[\frac{a_{n+1}}{a_{n}}\ <\ \frac{M}{M-1}+\frac{1}{a_{n}}\ <\ 1+\varepsilon, \forall n>N.\]
ii)\(\implies\)i): We prove by contrapositive. Choose \(t\geqslant 1\) and suppose that \(b_{n}/a_{n}\leqslant t\) infinitely often. Let \(A\) be the infinite set \(\{n:b_{n}/a_{n}\leqslant t\}\). By (2.2), we have
\[\frac{1}{a_{n+1}}\ <\ \frac{1}{a_{n}-1}-\frac{1}{b_{n}}\ \leqslant\ \frac{1}{a_{n}-1}-\frac{1}{ta_{n}}, \forall n\in A.\]
Trivial calculations give
\[\frac{a_{n+1}}{a_{n}}\ >\ \frac{t(a_{n}-1)}{ta_{n}-(a_{n}-1)}\ =\ \frac{a_{n}-1}{(a_{n}-1)-\frac{a_{n}-1}{t}+1}\ =\ \frac{1}{1-\frac{1}{t}+\frac{1}{a_{n}-1}}, \forall n\in A.\]
If \(t=1\), then \(a_{n+1}/a_{n}>a_{n}-1\) for all \(n\in A\). That \(a_{n}\to\infty\) implies that \(a_{n+1}/a_{n}\geqslant 2\) infinitely often, making ii) fail. If \(t>1\), choose \(N\) sufficiently large such that for \(n>N\), \(a_{n}>2t+1\). Then for all \(n\in A\) and \(n>N\),
\[\frac{a_{n+1}}{a_{n}}\ >\ \frac{1}{1-\frac{1}{t}+\frac{1}{2t}}\ =\ \frac{1}{1- \frac{1}{2t}},\]
which contradicts ii).
**Remark 3.4**.: If we replace the hypothesis "\(\sum_{n=1}^{\infty}1/b_{n}=\theta\)" in Proposition 3.2 by "\(\sum_{n=1}^{\infty}1/b_{n}<\theta\)", both i) and ii) in Proposition 3.2 hold. Indeed, if \(\theta-\sum_{n=1}^{\infty}1/b_{n}=:c>0\), then
\[a_{n}\ :=\ G\left(\theta-\sum_{i=1}^{n-1}\frac{1}{b_{i}}\right)\ \leqslant\ G(c),\]
so \((a_{n})\) is bounded.
We state and prove the last result in this section.
**Theorem 3.5**.: _Let \((a_{n})_{n=1}^{\infty}\subset\mathbb{N}\) be increasing such that \(a_{1}\geqslant 2\) and \(a_{n}\to\infty\). There exist \(\theta\in(0,1)\) and \((b_{n})_{n=1}^{\infty}\) such that_
\[\sum_{n=1}^{\infty}\frac{1}{b_{n}}\ =\ \theta,\]
_and for every \(n\geqslant 1\),_
\[a_{n}\ =\ G\left(\theta-\sum_{i=1}^{n-1}\frac{1}{b_{i}}\right).\]
Proof.: Since \(a_{n}\to\infty\), we can form the infinite set \(A\subset\mathbb{N}\) such that \(n\in A\) if and only if \(a_{n+1}-a_{n}\geqslant 1\). In other words, \(A\) contains all the indices immediately before a jump in \((a_{n})\). Write \(A=\{n_{1},n_{2},n_{3},\ldots\}\), where \(n_{1}<n_{2}<n_{3}<\cdots\). Note that \(a_{n_{j}}<a_{n_{j+1}}\) for all \(j\). We obtain the sequence \((b_{n})\) by first constructing all the \(b_{n}\) for \(n\in A\) then constructing the rest.
Step 1: for each \(j\geqslant 1\), choose \(b_{n_{j}}\) such that
\[\frac{a_{n_{j}}a_{n_{j+1}}}{a_{n_{j+1}}-a_{n_{j}}}-1-\frac{2a_{n_{j}}-1}{a_{n_ {j+1}}-a_{n_{j}}}\ <\ b_{n_{j}}\ <\ \frac{a_{n_{j}}a_{n_{j+1}}}{a_{n_{j+1}}-a_{n_{j}}}, \tag{3.1}\]
which can be done since the distance between the two ends is greater than \(1\). Note that (3.1) is equivalent to
\[\frac{1}{a_{n_{j}}}-\frac{1}{a_{n_{j+1}}}\ <\ \frac{1}{b_{n_{j}}}\ <\ \frac{1}{a_{n_{j}}-1}-\frac{1}{a_{n_{j+1}}-1}. \tag{3.2}\]
It follows that for each \(j\geqslant 1\),
\[\frac{1}{a_{n_{j}}}\ <\ \sum_{i=j}^{\infty}\frac{1}{b_{n_{i}}}\ <\ \frac{1}{a_{n_{j}}-1}. \tag{3.3}\]
Step 2: Due to (3.3), we can choose a sequence of positive numbers \((\theta_{j})_{j=1}^{\infty}\) satisfying
\[\frac{1}{a_{n_{j}}}\ <\ \sum_{i=j}^{\infty}\frac{1}{b_{n_{i}}}+\theta_{j}\ <\ \frac{1}{a_{n_{j}}-1}.\]
Let \(n_{0}=0\). For each \(j\geqslant 1\), set \(b_{n_{j-1}+1}=b_{n_{j-1}+2}=\ldots=b_{n_{j}-1}=N_{j}\), where \(N_{j}\) is sufficiently large such that
\[\frac{n_{j}-n_{j-1}-1}{N_{j}}\;<\;\min\left\{\frac{\theta_{1}}{2^{j}},\frac{ \theta_{2}}{2^{j-1}},\ldots,\frac{\theta_{j}}{2}\right\}.\]
Step 3: Set \(\theta:=\sum_{n=1}^{\infty}\frac{1}{b_{n}}\). We claim that \(\theta\in(0,1)\). We have
\[\sum_{n=1}^{\infty}\frac{1}{b_{n}} =\;\sum_{j=1}^{\infty}\frac{1}{b_{n_{j}}}+\sum_{j=1}^{\infty}\sum _{i=n_{j-1}+1}^{n_{j}-1}\frac{1}{b_{i}}\] \[=\;\sum_{j=1}^{\infty}\frac{1}{b_{n_{j}}}+\sum_{j=1}^{\infty} \frac{n_{j}-n_{j-1}-1}{N_{j}}\] \[<\;\sum_{j=1}^{\infty}\frac{1}{b_{n_{j}}}+\sum_{j=1}^{\infty} \frac{\theta_{1}}{2^{j}}\] \[=\;\sum_{j=1}^{\infty}\frac{1}{b_{n_{j}}}+\theta_{1}\;<\;\frac{1 }{a_{n_{1}}-1}\;\leqslant\;1.\]
Step 4: Finally, we need to verify that
\[\frac{1}{a_{n}}\;<\;\sum_{i=n}^{\infty}\frac{1}{b_{i}}\;\leqslant\;\frac{1}{a _{n}-1},\forall n\geqslant 1.\]
Fix \(n\geqslant 1\) and choose \(j\) such that \(n_{j-1}<n\leqslant n_{j}\). By (3.3), we have
\[\sum_{i=n}^{\infty}\frac{1}{b_{i}}\;\geqslant\;\sum_{i=n_{j}}^{\infty}\frac{1 }{b_{i}}\;\geqslant\;\sum_{i=j}^{\infty}\frac{1}{b_{n_{i}}}\;>\;\frac{1}{a_{n _{j}}}\;=\;\frac{1}{a_{n}}.\]
On the other hand,
\[\sum_{i=n}^{\infty}\frac{1}{b_{i}}\;\leqslant\;\sum_{i=j}^{\infty}\frac{1}{b_{n_{i}}}+\sum_{i=j}^{\infty}\sum_{n=n_{i-1}+1}^{n_{i}-1}\frac{1}{b_{n}}\] \[=\;\sum_{i=j}^{\infty}\frac{1}{b_{n_{i}}}+\sum_{i=j}^{\infty}\frac{n_{i}-n_{i-1}-1}{N_{i}}\] \[<\;\sum_{i=j}^{\infty}\frac{1}{b_{n_{i}}}+\sum_{i=j}^{\infty}\frac{\theta_{j}}{2^{i+1-j}}\] \[=\;\sum_{i=j}^{\infty}\frac{1}{b_{n_{i}}}+\theta_{j}\;<\;\frac{1}{a_{n_{j}}-1}\;=\;\frac{1}{a_{n}-1}.\]
This completes our proof.
**Corollary 3.6**.: _Let \((a_{n})_{n=1}^{\infty}\subset\mathbb{N}\) be increasing with \(a_{1}\geqslant 2\) and \(a_{n}\to\infty\). Then \(\lim_{n\to\infty}a_{n+1}/a_{n}\neq 1\) is equivalent to the existence of \(\theta\in(0,1)\) and \((b_{n})_{n=1}^{\infty}\) such that \((a_{n})_{n=1}^{\infty}\) and \((b_{n})_{n=1}^{\infty}\) are the sequences obtained from the WGUA applied to \(\theta\)._
Proof.: Use Proposition 3.2 and Theorem 3.5.
**Remark 3.7**.: Observe that (3.2) is stronger than (2.2). This observation is important in studying the uniqueness of \(\theta\) and \((b_{n})\) in the next section.
## 4. Uniqueness of \(\theta\) and \((b_{n})\)
Thanks to Theorem 3.5, we know the existence of \(\theta\) and \((b_{n})\) given any increasing sequence \((a_{n})\) with \(a_{1}\geqslant 2\) and \(a_{n}\to\infty\). We now give sufficient and necessary conditions for when \((a_{n})\) determines \(\theta\) and \((b_{n})\) uniquely. By step 2 in the proof of Theorem 3.5, a necessary condition is that \((a_{n})\) must be strictly increasing. We can then eliminate step 2 in constructing the sequence \((b_{n})\) because \(A=\mathbb{N}\). We claim further that \(a_{n+1}-a_{n}\geqslant 2\) for all \(n\in\mathbb{N}\). Indeed, suppose \(a_{N+1}-a_{N}=1\) for some \(N\). We rewrite (3.1) as
\[a_{N+1}a_{N}-2a_{N}\;\leqslant\;b_{N}\;\leqslant\;a_{N}a_{N+1}. \tag{4.1}\]
There are at least \(2a_{N}+1\) choices of \(b_{N}\), so \(\theta\) and \((b_{n})\) are not unique. (Note that we allow equalities in (4.1) because the construction in the proof of Theorem 3.5 still works if we allow equalities in finitely many (3.1).)
Moreover, \((b_{n})\) must satisfy (2.2). The following proposition tells us precisely when (2.2) determines \((b_{n})\) unequivocally.
**Proposition 4.1**.: _Let \((a_{n})_{n=1}^{\infty}\) be increasing such that \(a_{1}\geqslant 2\) and \(a_{n}\to\infty\). Then \((b_{n})_{n=1}^{\infty}\) is uniquely determined by (2.2) if and only if_
\[a_{n+1}-2\;\geqslant\;a_{n}\;\geqslant\;2,\forall n\geqslant 1,\]
_and for each \(n\), one of the following holds_
* \(a_{n+1}-a_{n}-1\) _divides_ \(a_{n}^{2}\)_, and_ \[a_{n+1}\;\geqslant\;\frac{\sqrt{3}}{2}\sqrt{4a_{n}^{2}-4a_{n}+3}+2a_{n}-\frac{ 1}{2};\]
* \(a_{n+1}-a_{n}-1\) _does not divide_ \(a_{n}^{2}\)_, and_ \[\left\lfloor\frac{a_{n}^{2}}{a_{n+1}-a_{n}-1}\right\rfloor\;\leqslant\;\frac{ (a_{n}-1)^{2}}{a_{n+1}-a_{n}+1}.\]
Proof of Proposition 4.1.: By (2.2), \((b_{n})\) is uniquely determined if and only if each of the intervals
\[I_{n}\;:=\;\left(\frac{(a_{n}-1)a_{n+1}}{a_{n+1}-a_{n}+1},\frac{a_{n}(a_{n+1} -1)}{a_{n+1}-a_{n}-1}\right)\]
contains exactly one positive integer. It is easy to verify that there always exists one largest integer in \(I_{n}\), called \(k_{n}\). In order that \(I_{n}\) contains no other integers, we need
\[k_{n}-\frac{(a_{n}-1)a_{n+1}}{a_{n+1}-a_{n}+1}\;\leqslant\;1. \tag{4.2}\]
We obtain a formula for \(k_{n}\) depending on whether \(a_{n}(a_{n+1}-1)/(a_{n+1}-a_{n}-1)\) is an integer or not.
Case 1: if \(a_{n}(a_{n+1}-1)/(a_{n+1}-a_{n}-1)\in\mathbb{N}\), then
\[k_{n}\;=\;\frac{a_{n}(a_{n+1}-1)}{a_{n+1}-a_{n}-1}-1.\]
Hence, (4.2) is equivalent to
\[\frac{a_{n}(a_{n+1}-1)}{a_{n+1}-a_{n}-1}-\frac{(a_{n}-1)a_{n+1}}{a_{n+1}-a_{n}+1 }\;\leqslant\;2.\]
Equivalently,
\[a_{n+1}^{2}-(4a_{n}-1)a_{n+1}+(a_{n}^{2}+a_{n}-2)\;\geqslant\;0,\]
giving
\[a_{n+1}\;\geqslant\;2a_{n}+\frac{\sqrt{3}}{2}\sqrt{4a_{n}^{2}-4a_{n}+3}-\frac {1}{2}.\]
Case 2: if \(a_{n}(a_{n+1}-1)/(a_{n+1}-a_{n}-1)\notin\mathbb{N}\), then
\[k_{n}\;=\;\left\lfloor\frac{a_{n}(a_{n+1}-1)}{a_{n+1}-a_{n}-1}\right\rfloor\; =\;a_{n}+\left\lfloor\frac{a_{n}^{2}}{a_{n+1}-a_{n}-1}\right\rfloor.\]
Hence, (4.2) is equivalent to
\[\left(a_{n}+\left\lfloor\frac{a_{n}^{2}}{a_{n+1}-a_{n}-1}\right\rfloor\right) -\left(\frac{(a_{n}-1)^{2}}{a_{n+1}-a_{n}+1}+(a_{n}-1)\right)\;\leqslant\;1,\]
giving
\[\left\lfloor\frac{a_{n}^{2}}{a_{n+1}-a_{n}-1}\right\rfloor\;\leqslant\;\frac{ (a_{n}-1)^{2}}{a_{n+1}-a_{n}+1}.\]
**Corollary 4.2** (Sufficient condition for uniqueness).: _Let \((a_{n})_{n=1}^{\infty}\) be increasing with \(a_{1}\geqslant 2\) and \(a_{n}\to\infty\). If_
* \(a_{n+1}-2\geqslant a_{n}\geqslant 2\) _for all_ \(n\)_, and_
* _for each_ \(n\geqslant 1\)_, one of the following holds_
* \(a_{n+1}-a_{n}-1\) _divides_ \(a_{n}^{2}\)_, and_ \[a_{n+1}\;\geqslant\;\frac{\sqrt{3}}{2}\sqrt{4a_{n}^{2}-4a_{n}+3}+2a_{n}-\frac {1}{2};\]
* \(a_{n+1}-a_{n}-1\) _does not divide_ \(a_{n}^{2}\)_, and_ \[\left\lfloor\frac{a_{n}^{2}}{a_{n+1}-a_{n}-1}\right\rfloor\;\leqslant\;\frac{ (a_{n}-1)^{2}}{a_{n+1}-a_{n}+1},\]
_then there exist unique \(\theta\in(0,1]\) and \((b_{n})_{n=1}^{\infty}\) such that_
\[\sum_{n=1}^{\infty}\frac{1}{b_{n}}\;=\;\theta, \tag{4.3}\]
_and for every \(n\geqslant 1\),_
\[a_{n}\;=\;G\left(\theta-\sum_{i=1}^{n-1}\frac{1}{b_{i}}\right). \tag{4.4}\]
Proof.: Theorem 3.5 guarantees the existence of \(\theta\) and \((b_{n})\). Suppose that there exists another pair \((\theta^{\prime},(b_{n}^{\prime}))\) different from \((\theta,(b_{n}))\). Then for some \(N\), \(b_{N}\neq b_{N}^{\prime}\), both of which must verify (2.2). This contradicts Proposition 4.1.
Next, we establish a necessary condition for the uniqueness of \(\theta\) and \((b_{n})\) by requiring the inequalities
\[\frac{a_{n}a_{n+1}}{a_{n+1}-a_{n}}-1-\frac{2a_{n}-1}{a_{n+1}-a_{n}}\;\leqslant\;b _{n}\;\leqslant\;\frac{a_{n}a_{n+1}}{a_{n+1}-a_{n}} \tag{4.5}\]
to determine exactly one solution \(b_{n}\). Again, (4.5) is slightly different from (3.2) as we allow equalities, because the construction in the proof of Theorem 3.5 still works if equalities appear in finitely many (3.1).
**Proposition 4.3** (Necessary condition for uniqueness).: _Let \((a_{n})_{n=1}^{\infty}\) be increasing with \(a_{1}\geqslant 2\) and \(a_{n}\to\infty\). Suppose that there exist unique \(\theta\in(0,1)\) and \((b_{n})_{n=1}^{\infty}\) that satisfy (4.3) and (4.4), then for all \(n\geqslant 1\), we have_
\[a_{n+1}\;\geqslant\;a_{n}+2,\]
\[(a_{n+1}-a_{n})\;\text{does not divide $a_{n}a_{n+1}$},\]
_and_
\[\left\lfloor\frac{a_{n}^{2}}{a_{n+1}-a_{n}}\right\rfloor\;<\;\frac{(a_{n}-1)^ {2}}{a_{n+1}-a_{n}}. \tag{4.6}\]
Proof.: That \(a_{n+1}\geqslant a_{n}+2\) is due to the discussion at the beginning of this section. We find a sufficient and necessary condition for (4.5) to have exactly one solution \(b_{n}\). If \(a_{n+1}-a_{n}\) divides \(a_{n}a_{n+1}\), then \(I_{n}:=[\frac{a_{n}a_{n+1}}{a_{n+1}-a_{n}}-1-\frac{2a_{n}-1}{a_{n+1}-a_{n}}, \frac{a_{n}a_{n+1}}{a_{n+1}-a_{n}}]\) contains at least two integers because
\[\frac{a_{n}a_{n+1}}{a_{n+1}-a_{n}}-\left(\frac{a_{n}a_{n+1}}{a_{n+1}-a_{n}}-1 -\frac{2a_{n}-1}{a_{n+1}-a_{n}}\right)\;>\;1.\]
If \(a_{n+1}-a_{n}\) does not divide \(a_{n}a_{n+1}\), then the largest integer in \(I_{n}\) is
\[\left\lfloor\frac{a_{n}a_{n+1}}{a_{n+1}-a_{n}}\right\rfloor,\]
and \(I_{n}\) contains exactly one integer if and only if
\[\left\lfloor\frac{a_{n}a_{n+1}}{a_{n+1}-a_{n}}\right\rfloor-\left(\frac{a_{n} a_{n+1}}{a_{n+1}-a_{n}}-1-\frac{2a_{n}-1}{a_{n+1}-a_{n}}\right)\;<\;1.\]
Equivalently,
\[\left\lfloor\frac{a_{n}^{2}}{a_{n+1}-a_{n}}\right\rfloor\;<\;\frac{(a_{n}-1)^ {2}}{a_{n+1}-a_{n}}.\]
This completes our proof.
**Corollary 4.4**.: _Let \((a_{n})_{n=1}^{\infty}\) be increasing with \(a_{1}\geqslant 2\) and \(a_{n}\to\infty\). Suppose that there exist unique \(\theta\in(0,1)\) and \((b_{n})_{n=1}^{\infty}\) that satisfy (4.3) and (4.4), then for all \(n\geqslant 1\),_
* \(a_{n+1}-a_{n}\) _divides none of_ \((a_{n}-1)^{2},a_{n}^{2},a_{n}a_{n+1}\)_;_
* \(3a_{n}<a_{n+1}\)_._
Proof.: i) By Proposition 4.3, \(a_{n+1}-a_{n}\) does not divide \(a_{n}a_{n+1}\). By (4.6), \(a_{n+1}-a_{n}\) does not divide \(a_{n}^{2}\). Also by (4.6), \(a_{n+1}-a_{n}\) does not divide \((a_{n}-1)^{2}\). Indeed, supposing otherwise, we have
\[\left\lfloor\frac{a_{n}^{2}}{a_{n+1}-a_{n}}\right\rfloor\;=\;\left\lfloor\frac {(a_{n}-1)^{2}+2a_{n}-1}{a_{n+1}-a_{n}}\right\rfloor\;=\;\frac{(a_{n}-1)^{2}}{ a_{n+1}-a_{n}}+\left\lfloor\frac{2a_{n}-1}{a_{n+1}-a_{n}}\right\rfloor,\]
contradicting (4.6).
ii) We write (4.6) as
\[\left\lfloor\frac{a_{n}^{2}}{a_{n+1}-a_{n}}\right\rfloor\;<\;\frac{a_{n}^{2}}{a_{ n+1}-a_{n}}-\frac{2a_{n}-1}{a_{n+1}-a_{n}}.\]
Hence,
\[\frac{2a_{n}-1}{a_{n+1}-a_{n}}\;<\;1,\]
which gives \(a_{n+1}\geqslant 3a_{n}\). However, \(a_{n+1}\) cannot be \(3a_{n}\). Otherwise, we obtain from (4.6) that
\[\left\lfloor\frac{a_{n}^{2}}{2a_{n}}\right\rfloor\;<\;\frac{(a_{n}-1)^{2}}{2a _{n}}.\]
By i), \(\lfloor a_{n}^{2}/(2a_{n})\rfloor=(a_{n}-1)/2\). Hence,
\[\frac{a_{n}-1}{2}\;<\;\frac{(a_{n}-1)^{2}}{2a_{n}}\;\Longrightarrow\;a_{n}<1,\]
a contradiction.
## 5. Applications to particular sequences
In this section, we look at sequences \((a_{n})\) of special forms and find \((b_{n})\) that satisfies (3.2). We use specific sequences in [10] as examples.
### Geometric progressions
Let \(a,r\in\mathbb{N}\) with \(a\geqslant 2\) and \(r\geqslant 2\). Let \((a_{n})\) be the sequence
\[a,ar,ar^{2},ar^{3},\ldots.\]
By Corollary 3.6, \((a_{n})\) can be obtained from the WGUA applied to some \(\theta\).
If \(r-1\) divides \(a\), then the sequence \(b_{n}=ar^{n}/(r-1)-1\) satisfies (3.2) and
\[\theta\;=\;\sum_{n=1}^{\infty}\frac{1}{ar^{n}/(r-1)-1}.\]
For example, take \(a=2,r=3\) to have
\[\begin{cases}a_{n}&=\quad 2\cdot 3^{n-1}\quad(\text{\rm A008776}),\\ b_{n}&=\quad 3^{n}-1\quad(\text{\rm A024023}),\\ \theta&\approx\quad 0.68215.\end{cases}\]
If \(r-1\) does not divide \(a\), then the sequence \(b_{n}=\lfloor ar^{n}/(r-1)\rfloor\) satisfies (3.2) and
\[\theta\;=\;\sum_{n=1}^{\infty}\frac{1}{\lfloor ar^{n}/(r-1)\rfloor}.\]
For example, take \(a=2,r=4\) to have
\[\begin{cases}a_{n}&=\quad 2^{2n-1}\quad(\text{\rm A004171}),\\ b_{n}&=\quad\lfloor 2^{2n+1}/3\rfloor=2(4^{n}-1)/3\quad(\text{\rm A020988}),\\ \theta&\approx\quad 0.63165.\end{cases}\]
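A short numerical check (not part of the paper) of the case \(a=2\), \(r=3\): the series indeed sums to roughly \(0.68215\), and (3.2) holds for the first few terms.

```python
# a = 2, r = 3 (so r - 1 divides a): a_n = 2 * 3**(n-1), b_n = 3**n - 1
theta = sum(1.0 / (3**n - 1) for n in range(1, 60))
print(round(theta, 5))                        # approximately 0.68215

for n in range(1, 6):                         # spot-check inequality (3.2)
    a_n, a_next, b_n = 2 * 3**(n - 1), 2 * 3**n, 3**n - 1
    assert 1/a_n - 1/a_next < 1/b_n < 1/(a_n - 1) - 1/(a_next - 1)
print("(3.2) holds for n = 1..5")
```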
### Arithmetic progressions
Let \(a,d\in\mathbb{N}\) with \(a\geqslant 2\) and \(d\geqslant 1\). Let \((a_{n})\) be the sequence
\[a,a+d,a+2d,a+3d,\dots.\]
By Corollary 3.6, \((a_{n})\) cannot be obtained from the WGUA applied to some \(\theta\).
If \(d\) divides \(a^{2}\), then
\[b_{n}\ =\ \frac{(a+(n-1)d)(a+nd)}{d}-1\ =\ \frac{a^{2}}{d}+(2n-1)a+n(n-1)d-1.\]
satisfies (3.2) and
\[\theta\ =\ \sum_{n=1}^{\infty}\left(\frac{a^{2}}{d}+(2n-1)a+n(n-1)d-1\right)^ {-1}.\]
For example, take \(a=2\) and \(d=1\) to have
\[\begin{cases}a_{n}&=\quad n+1,\\ b_{n}&=\quad n^{2}+3n+1\quad(\text{\small{A028387}}),\\ \theta&=\quad\pi\tan\left(\frac{\sqrt{5}\pi}{2}\right)/\sqrt{5}\ \approx\ 0.54625.\end{cases}\]
If \(d\) does not divide \(a^{2}\), then
\[b_{n}\ =\ \left\lfloor\frac{(a+(n-1)d)(a+nd)}{d}\right\rfloor\]
satisfies (3.2) and
\[\theta\ =\ \sum_{n=1}^{\infty}\left\lfloor\frac{(a+(n-1)d)(a+nd)}{d}\right\rfloor ^{-1}.\]
For example, take \(a=3\) and \(d=2\) to have
\[\begin{cases}a_{n}&=\quad 2n+1,\\ b_{n}&=\quad 2n^{2}+4n+1\quad(\text{\small{A056220}}),\\ \theta&=\quad\left(-2-\sqrt{2}\pi\cot\left(\frac{\pi}{\sqrt{2}}\right)\right) /4\ \approx\ 0.34551.\end{cases}\]
### Fibonacci sequence
The Fibonacci sequence is defined as \(F_{0}=0\), \(F_{1}=1\), and \(F_{n}=F_{n-1}+F_{n-2}\) for \(n\geqslant 2\). Define \(a_{n}=F_{n+1}\) for \(n\geqslant 1\). Then
\[\frac{1}{a_{n}}-\frac{1}{a_{n+1}}\ =\ \frac{1}{F_{n+1}}-\frac{1}{F_{n+2}}\ =\ \frac{F_{n}}{F_{n+1}F_{n+2}}.\]
Using (3.2), we choose \(b_{1}=3\) and for \(n>1\), choose
\[b_{n}\ =\ \left\lfloor\frac{F_{n+1}F_{n+2}}{F_{n}}\right\rfloor\] \[=\ \left\lfloor\frac{F_{n}F_{n+1}+F_{n-1}F_{n+1}+F_{n}^{2}+F_{n-1}F_{n}}{F_{n}}\right\rfloor\] \[=\ \left\lfloor\frac{F_{n}F_{n+1}+F_{n}^{2}+(-1)^{n}+F_{n}^{2}+F_{n-1}F_{n}}{F_{n}}\right\rfloor\ \text{by Cassini's identity}\] \[=\ \begin{cases}F_{n+3}-1&\text{ if $n$ is odd},\\ F_{n+3}&\text{ if $n$ is even}.\end{cases}\] |
2308.14477 | Medical needle tip tracking based on Optical Imaging and AI | Deep needle insertion to a target often poses a huge challenge, requiring a
combination of specialized skills, assistive technology, and extensive
training. One of the frequently encountered medical scenarios demanding such
expertise includes the needle insertion into a femoral vessel in the groin.
After the access to the femoral vessel, various medical procedures, such as
cardiac catheterization and extracorporeal membrane oxygenation (ECMO) can be
performed. However, even with the aid of Ultrasound imaging, achieving
successful insertion can necessitate multiple attempts due to the complexities
of anatomy and tissue deformation. To address this challenge, this paper
presents an innovative technology for needle tip real-time tracking, aiming for
enhanced needle insertion guidance. Specifically, our approach revolves around
the creation of scattering imaging using an optical fiber-equipped needle, and
uses Convolutional Neural Network (CNN) based algorithms to enable real-time
estimation of the needle tip's position and orientation during insertion
procedures. The efficacy of the proposed technology was rigorously evaluated
through three experiments. The first two experiments involved rubber and bacon
phantoms to simulate groin anatomy. The positional errors averaged 2.3±1.5mm
and 2.0±1.2mm, and the orientation errors averaged 0.2±0.11rad and
0.16±0.1rad. Furthermore, the system's capabilities were validated through
experiments conducted on a fresh porcine phantom mimicking more complex
anatomical structures, yielding positional accuracy of 3.2±3.1mm and
orientational accuracy of 0.19±0.1rad. Given the average femoral arterial
radius of 4 to 5mm, the proposed system is demonstrated with a great potential
for precise needle guidance in femoral artery insertion procedures. In
addition, the findings highlight the broader potential applications of the
system in the medical field. | Zhuoqi Cheng, Simon Lyck Bjært Sørensen, Mikkel Werge Olsen, René Lynge Eriksen, Thiusius Rajeeth Savarimuthu | 2023-08-28T10:30:08Z | http://arxiv.org/abs/2308.14477v2 | # Medical needle tip tracking based on Optical Imaging and AI
###### Abstract
Medical needle tip tracking plays an important role in various needle-based procedures such as biopsy, tumor ablation, intravenous access and deep brain stimulation. Accurate, real-time tracking of the needle tip ensures precise targeting, minimizing tissue damage and enhancing procedural efficiency.
In recent years, significant efforts [1] have been dedicated to the needle tip tracking technologies. These advancements can be grouped into three categories.
1. **Image-Based Modalities: Trans-Illumination and Imaging Processing** One of the most commonly used techniques is based on trans-illumination imaging processing. Machine learning algorithms have been widely explored for detecting the needle tip from fluoroscopic or ultrasound images.
2. **Sensor-Based Approaches: Integration of Optical and ElectroMagnetic Sensors** Apart from methods relying on images, sensor-driven strategies have gained traction and exhibited strong efficacy. These methods encompass the integration of optical or ElectroMagnetic (EM) sensors at the needle tip, enabling direct tracking of its position and orientation. Alternatively, the needle tip's location can be approximated by analyzing needle deflection. Prior studies have employed needle kinematics modeling to devise such estimation methods.
3. **Needle Shape Estimation: Fiber Bragg Grating (FBG) Sensor or Strain Gauge** Another related technique involves reconstructing the needle shape directly using Fiber Bragg Grating (FBG) sensors or strain gauges [2].
In this abstract, we introduce an innovative approach to track the needle tip as it enters opaque materials. Our method employs a fiber optic needle with an exposed tip, producing a scattering image on the tissue surface. An external camera captures this image, and we utilize machine learning-driven image processing to precisely determine the needle tip's spatial position.
## Materials and Methods
### System setup
The proposed setup integrates a fiber optic needle and a camera to enable needle tip tracking, as depicted in Fig. 1. A fiber optic needle is created by inserting an optic fiber within the needle lumen, exposing its end at the tip. Utilizing an 850 nm lighting source, a corresponding camera with an 850 nm filter is positioned 60 nm above the phantom surface. Camera exposure time is set at 20 ms, with ambient light removed during data acquisition. The system employs supervised learning, necessitating ground truth values. These are captured using an EM sensor, comprised of a coil and a magnet sensor. The magnet sensor attaches to the needle hub (Fig. 1), while the coil is secured to the platform. Through pivot calibration, the needle tip's position relative to the coil's coordinates is determined. The entire system operates within the Robot Operating System (ROS) framework, facilitating concurrent collection of scattering images and needle tip spatial data.
### Preprocessing
Pre-processing significantly impacts neural network performance by enhancing feature extraction, although neural networks can also handle image variability through techniques such as batch normalization and input centering.
In this study, we emphasize leveraging structural information, particularly position categories _(x, y, z)_ for normalization guidance. These categories define data and normalization ranges, optimizing input data quality for model training.
Fig. 1: The system setup includes a fiber optic needle for delivering lighting at its tip and a camera for capturing scattering image. An EM sensor is used for obtaining the needle tip position as ground truth value. |
2305.05586 | RLocator: Reinforcement Learning for Bug Localization | Software developers spend a significant portion of time fixing bugs in their
projects. To streamline this process, bug localization approaches have been
proposed to identify the source code files that are likely responsible for a
particular bug. Prior work proposed several similarity-based machine-learning
techniques for bug localization. Despite significant advances in these
techniques, they do not directly optimize the evaluation measures. We argue
that directly optimizing evaluation measures can positively contribute to the
performance of bug localization approaches. Therefore, In this paper, we
utilize Reinforcement Learning (RL) techniques to directly optimize the ranking
metrics. We propose RLocator, a Reinforcement Learning-based bug localization
approach. We formulate RLocator using a Markov Decision Process (MDP) to
optimize the evaluation measures directly. We present the technique and
experimentally evaluate it based on a benchmark dataset of 8,316 bug reports
from six highly popular Apache projects. The results of our evaluation reveal
that RLocator achieves a Mean Reciprocal Rank (MRR) of 0.62, a Mean Average
Precision (MAP) of 0.59, and a Top 1 score of 0.46. We compare RLocator with
two state-of-the-art bug localization tools, FLIM and BugLocator. Our
evaluation reveals that RLocator outperforms both approaches by a substantial
margin, with improvements of 38.3% in MAP, 36.73% in MRR, and 23.68% in the Top
K metric. These findings highlight that directly optimizing evaluation measures
considerably contributes to performance improvement of the bug localization
problem. | Partha Chakraborty, Mahmoud Alfadel, Meiyappan Nagappan | 2023-05-09T16:19:33Z | http://arxiv.org/abs/2305.05586v3 | # RLocator: Reinforcement Learning for Bug Localization
###### Abstract
Software developers spend a significant portion of time fixing bugs in their projects. To streamline this process, bug localization approaches have been proposed to identify the source code files that are likely responsible for a particular bug. Prior work proposed several similarity-based machine-learning techniques for bug localization. Despite significant advances in these techniques, they do not directly optimize the evaluation measures. We argue that directly optimizing evaluation measures can positively contribute to the performance of bug localization approaches.
Therefore, in this paper, we utilize Reinforcement Learning (RL) techniques to directly optimize the ranking metrics. We propose RLocator, a Reinforcement Learning-based bug localization approach. We formulate RLocator using a Markov Decision Process (MDP) to optimize the evaluation measures directly. We present the technique and experimentally evaluate it based on a benchmark dataset of 8,316 bug reports from six highly popular Apache projects. The results of our evaluation reveal that RLocator achieves a Mean Reciprocal Rank (MRR) of 0.62, a Mean Average Precision (MAP) of 0.59, and a Top 1 score of 0.46. We compare RLocator with two state-of-the-art bug localization tools, FLIM and BugLocator. Our evaluation reveals that RLocator outperforms both approaches by a substantial margin, with improvements of 38.3% in MAP, 36.73% in MRR, and 23.68% in the Top K metric. These findings highlight that directly optimizing evaluation measures considerably contributes to performance improvement of the bug localization problem.
Reinforcement Learning, Bug Localization, Deep Learning
## 1 Introduction
Software bugs are an inevitable part of software development. Developers spend one-third of their time debugging and fixing bugs [1]. After a bug report/issue has been filed, the project team identifies the source code files that need to be inspected and modified to address the issue. However, manually locating the files responsible for a bug is expensive (in terms of time and resources), especially when there are a lot of files and bug reports. Moreover, the number of bugs reported is often higher than the number of available developers [2]. Consequently, the fix-time and maintenance costs rise when the customer satisfaction rate decreases [3].
Bug Localization is a method that refers to identifying the source code files where a particular bug originated. Given a bug report, bug localization approaches utilize the textual information in the bug report, and the project source code files to shortlist the potentially buggy files. Prior work has proposed various Information Retrieval-based Bug Localization (IRBL) approaches to help developers speed up the debugging process (e.g., Deeplicocator [4], CAST [5], KGBugLocator [6], BL-GAN [7]).
One common theme among these approaches is that they follow a similarity-based approach to localize bugs. Such techniques measure the similarity between bug reports and the source code files. For estimating similarity, they use various methods such as cosine distance [8], Deep Neural Networks (DNN) [9], and Convolutional Neural Networks (CNN) [5]. Then, they rank the source code files based on their similarity score. In the training phase of these approaches, the model learns to optimize the similarity metrics. In contrast, in the testing phase, the model is tested with ranking metrics (e.g., Mean Reciprocal Rank (MRR) or Mean Average Precision (MAP)).
While most of these approaches showed promising performance, they optimize a metric that indirectly represents the performance metrics. Prior studies [10, 11, 12, 13] found that direct optimization of evaluation measures substantially contributes to performance improvement of ranking problems. Direct optimization is also efficient compared to optimizing indirect metrics [13]. Hence, we argue that it is challenging for the solutions proposed by prior studies to sense how a wrong prediction would affect the performance evaluation measures [10]. In other words, if we use the retrieval metrics (e.g., MAP) in the training phase, the model will learn how each prediction will impact the evaluation metrics. A wrong prediction will change the rank of the source code file and ultimately impact the evaluation metrics.
Reinforcement Learning (RL) is a sub-category of machine learning methods where labeled data is not required. In RL, the model is not trained to predict a specific value. Instead, the model is given a signal about a right or wrong choice in training [14]. Based on the signal, the model updates its decision. This allows RL to use evaluation measures such as MRR and MAP in the training phase and directly optimize the evaluation metrics. Moreover, because of using MRR/MAP as a signal instead of a label, the problem of overfitting will be less prevalent. Markov Decision Process (MDP) is a foundational element of RL. MDP is a mathematical framework that allows the formalization of discrete-time decision-making problems [15]. Real-world problems
often need to be formalized as MDP to apply RL.
In this paper, we present _RLocator_, an RL technique for localizing software bugs in source code files. We formulate RLocator into an MDP. In each step of the MDP, we use MRR and MAP as signals to guide the model to optimal choice. We evaluate RLocator on a benchmark dataset of six Apache projects and find that, compared with existing state-of-the-art bug localization techniques, RLocator achieves substantial performance improvement.
The main contributions of our work are as follows:
* We present RLocator, an RL-based software bug localization approach. The key technical novelty of RLocator is using RL for bug localization, which includes formulating the bug localization process into an MDP.
* We evaluate RLocator on a benchmark dataset of six Apache projects. Across all studied projects, it achieves an MRR of up to 0.62, a MAP of 0.47 - 0.59, and a Top 1 of 0.38 - 0.46. Moreover, we compare its performance with state-of-the-art bug localization approaches. Regarding MAP, MRR, and Top K, RLocator surpasses FLIM [16] by margins of 38.3%, 36.73%, and 23.68%, respectively. Additionally, RLocator surpasses BugLocator [3] by margins of 56.86%, 41.51%, and 26.32% in terms of MAP, MRR, and Top K, respectively.
**Paper organization.** The rest of the paper is organized as follows. Section 2 discusses the necessary background of this study. In Section 3, we present our approach, RLocator. Section 4 discusses the dataset and the evaluation metric used in the study. Section 5 presents the results of the experiments. Section 6 situates our work with respect to the literature on bug localization. Section 7 explains the threats to the validity of our results. Finally, in Section 8, we conclude and outline possible future work.
## 2 Background
In this section, we describe terms related to the bug localization problem, which we use throughout our study. Also, we present an overview of reinforcement learning.
### _Bug Localization System_
A typical bug localization system utilizes several sources of data, e.g., bug reports, stack traces, and logs, to identify the responsible source code files. One particular challenge of the system is that the bug report contains natural language, whereas source code files are written in a programming language.
Typically, bug localization systems identify whether a bug report relates to a source code file. To do so, the system extracts features from both the bug report and the source code files. Previous studies used techniques such as N-gram [17, 18] and Word2Vec [19, 20] to extract features (embedding) from bug reports and source code files. Other studies (e.g., Devlin et al. [21]) introduced the transformer-based model BERT which has achieved higher performance than all the previous techniques. One of the reasons transformer-based models perform better in extracting textual features is that the transformer uses multi-head attention, which can utilize long context while generating embedding. Previous studies have proposed a multi-modal BERT model [22] for programming languages, which can extract features from both bug reports and source code files.
Figure 1 represents a sample bug report1 from the AspectJ project. A bug report mainly contains information related to unexpected behavior and how to reproduce it. It typically includes a bug id, title, textual description of the bug, and the version of the codebase where the bug exists. The bug report may also include example code, a stack trace, or logs. A bug localization system retrieves all the source code files from a source code repository at that particular version. For example, assume we have 100 source code files in a repository in a specific version. After retrieving 100 files from that version, the system will estimate the relevance between the bug report and each of the 100 files. The relevance can be measured in several ways. For example, a naive system can check how many words of the bug report exist in each source code file. A sophisticated system can compare embeddings. A BERT-based system may generate an embedding for each of the 100 files and the bug report. Then it can measure the relevance between them by cosine distance [23]. After relevance estimation, the system scales the relevance of all the files. To compare the relevance of different files, some metrics, such as cosine distance, already provide us with a scaled value. Thus, the system can skip this step in some cases. After scaling the relevance, the system ranks the files based on their relevance. The ranked list of files is the final output of a bug localization system that developers will use.
Footnote 1: [https://bugs.eclipse.org/bugs/showbug.cgi?id=134471](https://bugs.eclipse.org/bugs/showbug.cgi?id=134471)
### _Reinforcement Learning_
In Reinforcement Learning (RL), the agent interacts with the environment through observation. Formally, an observation
Fig. 1: Sample bug report.
is called "State," \(S\). In each state, at time \(t\), \(S_{t}\), the agent takes action \(A\) based on its understanding of the state. Then, the environment provides feedback/reward \(\Re\) and transfers the agent into a new state \(S_{t+1}\). The agent's strategy to determine the best action, which will eventually lead to the highest cumulative reward, is referred to as _policy_[24, 14].
The cumulative reward (until the goal/end) that an agent can get if it takes a specific action in a certain state is called _Q value_. The function that is used to estimate the _Q value_ is often referred as _Q function_ or _Value function_.
In RL, an agent starts its journey from a starting state and then goes forward by picking the appropriate action. The journey ends in a pre-defined end state. The journey from start to end state is referred to as _episode_.
From a high level, we can divide the state-of-the-art RL algorithms into two classes. The first is the model-free algorithms, where the agent has no prior knowledge about the environment. The agent learns about the environment by interacting with the environment. The other type is the model-based algorithm. In a model-based algorithm, the agent uses the reward prediction from the model instead of interacting with the environment.
The bug localization task is quite similar to the model-free environment as we cannot predict/identify the buggy files without checking the bug report and source code files (without interacting with the environment). Thus, we use model-free RL algorithms in this study. Two popular variants of model-free RL algorithms are:
* _Value Optimization_: The agent tries to learn the Q value function in value optimization approaches. The agent keeps the Q value function in memory and updates it gradually. It consults the Q value function in a particular state and picks the action that will give the highest value (reward). An example of the value optimization-based approach is Deep Q Network (DQN) [14].
* _Policy Optimization_: In the Policy optimization approach, the agent tries to learn the mapping between the state and the action that will result in the highest reward. The agent will pick the action based on the mapping in a particular state. An example of the policy optimization-based approach is Advantage Actor-Critic (A2C) [25, 14].
A2C is a policy-based algorithm where the agent learns an optimized policy to solve a problem. In Actor-Critic, the actor model picks actions, while the future return (reward) of an action is estimated using the critic model. The actor model uses the critic model to pick the best action in any state. Advantage Actor-Critic subtracts a baseline value from the return at each timestep. A2C with entropy adds the entropy of the action probability distribution to the loss of the actor model. As a result, in the gradient descent step, the model tries to maximize the entropy of the learned policy. Maximizing entropy ensures that the agent assigns almost equal probabilities to actions with similar returns.
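To make the loss described above concrete, the following is a minimal, generic sketch of one A2C update with an entropy bonus. It is not RLocator's actual model: the state/action encoding used by RLocator is described later, and the network sizes, entropy weight, and discount factor below are placeholders.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

STATE_DIM, N_ACTIONS, BETA, GAMMA = 16, 5, 0.01, 0.99   # placeholder sizes and weights

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))
critic = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def a2c_update(state, reward, next_state, done):
    """One advantage actor-critic step with an entropy bonus on the policy."""
    dist = Categorical(logits=actor(state))
    action = dist.sample()
    value = critic(state).squeeze(-1)
    with torch.no_grad():
        target = reward + GAMMA * critic(next_state).squeeze(-1) * (1.0 - done)
    advantage = (target - value).detach()            # baseline-subtracted return
    actor_loss = -(dist.log_prob(action) * advantage).mean() - BETA * dist.entropy().mean()
    critic_loss = (target - value).pow(2).mean()
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()
    return action

# toy call with random tensors, just to show the shapes involved
s, s2 = torch.randn(1, STATE_DIM), torch.randn(1, STATE_DIM)
a2c_update(s, torch.tensor([1.0]), s2, torch.tensor([0.0]))
```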
## 3 Rlocator: Reinforcement Learning for Bug Localization
In this section, we discuss the steps we follow for designing the RLocator approach. First, we explain (in Section 3.1) a filtration process required for the approach. Then, we explain (in Section 3.2) the formulation steps of our design of RLocator. We present the overview of our approach in Figure 2.
### _Filtering the number of source code files_
The inputs to our bug localization tool are bug reports and source code files associated with a particular version of a project repository. Software projects maintain a repository for their bugs or issues (e.g., Jira, Github, Bugzilla). The first component, the bug report, can be retrieved from those issue repositories. We use the bug report to obtain the second component (i.e., source code) by identifying the project version affected by the bug. Typically, each bug report is associated with a version or commit SHA of the project repository. After identifying the buggy version, we collect all source code files from the specific version of the code repository. The number of source code files in different versions of the repository can be different. However, due to the nature of RL, we cannot pass a variable number of source code files to the RL model. We overcome this limitation by sending a fixed number (\(k\)) of the most relevant source code files for a bug report to the RL model.
To identify the relevant files, we use ElasticSearch (ES). ES is a search engine based on the Lucene project: a distributed, open-source search and analytics engine for all data types, including text. It analyzes and indexes words/tokens for textual matching and uses BM25 to rank the files matching a query. We use the ES index to identify the top \(k\) source code files related to a bug report. Following the study by Liu et al. [26] (who used ES in the context of code search), we build an ES index from the source code files and then query the index using the bug report as the query. We then pick the first \(k\) files, i.e., those with the highest textual similarity to the bug report. Note that the goal of bug localization is to rank the relevant files as close to the 1\({}^{st}\) position as possible; hence, metrics like MAP and MRR measure the performance of bug localization techniques. One may ask why we do not simply rely on ES to rank the relevant files; we find that the MAP and MRR obtained with ES alone are poor. Our RL-based technique aims to rerank the output from ES to obtain higher MAP and MRR scores.
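As an illustration of this step, the sketch below indexes the files and retrieves the top \(k\) candidates. The setup is assumed, not taken from the replication package: a local ES instance, a `source_files` mapping from file paths to file contents, and a `bug_report_text` string.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")      # hypothetical local instance

# Index each source code file of the buggy version; ES ranks matches with BM25 by default.
for path, code in source_files.items():
    es.index(index="source-files", document={"path": path, "content": code})
es.indices.refresh(index="source-files")

# Query the index with the bug report text and keep the k most similar files.
k = 31
hits = es.search(index="source-files",
                 query={"match": {"content": bug_report_text}},
                 size=k)["hits"]["hits"]
top_k_files = [h["_source"]["path"] for h in hits]
```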
One limitation of ES is that, in some cases, none of the \(k\) most relevant source code files it returns is actually relevant. When there are no relevant files among the first \(k\) files, we cannot use them for training/testing RLocator. Also, including such data can mislead developers, as the ranking produced by the model will not be meaningful: we would merely be reranking \(k\) irrelevant files. Therefore, we first identify the cases where ES returns no relevant files in the top \(k\) files. To do so, we build an XGBoost-based binary classifier model [27]. We pass the bug report and the top \(k\) relevant files returned by ES to the model. The model aims to predict whether a textual similarity-based ranking system (like ES) can rank the relevant source code files within the top \(k\) files or not.
To build the model, we study the most important features associated with the prediction task. We consult the related literature on information retrieval [29, 30, 32] and bug report classification [28] for feature selection. The list of computed features is presented in Table I. For our dataset, we calculate the selected features and train the model using 10-fold cross-validation. The results show that our classifier model has a precision of 0.78, a recall of 0.93, and an F1-score of 0.85. Additionally, the model is able to correctly classify 91% of the dataset (i.e., whether relevant source code files will appear in the top \(k\) files returned by ES).
When XGBoost predicts that a relevant file will be in the top \(k\) files, we apply RLocator to those bug reports; this means RLocator operates on 91% of the dataset. We rerank the \(k\) most relevant files so that the MAP and MRR metrics are optimized.
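A minimal sketch of this filtering classifier is shown below, assuming a feature matrix `X` built from the Table I features and labels `y` indicating whether ES placed a relevant file in its top \(k\) results; the hyper-parameter values are illustrative, not the ones used in the paper.

```python
import xgboost as xgb
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import precision_score, recall_score, f1_score

# X: one row per bug report with the Table I features; y: 1 if ES puts a relevant file in its top k.
clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")

pred = cross_val_predict(clf, X, y, cv=10)                    # 10-fold cross-validation
print("precision:", precision_score(y, pred))
print("recall:   ", recall_score(y, pred))
print("F1:       ", f1_score(y, pred))

clf.fit(X, y)
# Feature importance of type "weight", as reported in Section 5.
print(clf.get_booster().get_score(importance_type="weight"))
```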
### _Formulation of RLocator_
After choosing the \(k\) most relevant files for each bug report and filtering out bug reports where we are unlikely to find a relevant file in the top \(k\), we pass them to RLocator. Following prior studies [33, 34], we formulate RLocator as a Markov Decision Process (MDP) by dividing the ranking problem into a sequence of decision-making steps. A general RL model can be represented by a tuple \(\langle S,A,\tau,\Re,\pi\rangle\), composed of states, actions, transition, reward, and policy, respectively. Figure 2 shows an overview of our RLocator approach. Next, we describe the formulation of each component of RLocator.
**States:**\(S\) is the set of states. The RL model moves from one state to another state until it reaches the end state. To form the states of the MDP, we apply the following steps:
**Input:** The input of our MDP comprises two components: a bug report and the top \(k\) relevant source code files (documents) of a project repository. We use CodeBERT [22], a transformer-based BERT model, for converting the text to embeddings. An embedding represents the text (source code/bug report) in a multi-dimensional space. We opted for a BERT-based model as it can utilize long context. In our case, the source code files are long, so a method declared far away from its usage may not be captured by other embedding techniques (e.g., Word2Vec); because of the long context, BERT can utilize that information. Moreover, Word2Vec generates an embedding per word, which is not updated at inference time based on the word's context. In contrast, the BERT model generates an embedding for a sequence and can therefore utilize context at inference time too. In source code files, the use of a variable depends on the scope: the same variable name can be declared again in another scope and may refer to different objects. As the BERT model generates embeddings for sequences, the embedding will be different for each use of the variable. CodeBERT has been trained on natural language-programming language pairs and accepts input in both programming and natural languages. In Figure 2, the embedding model generates the embeddings of the source code files \(F_{1},F_{2},...\), and \(F_{k}\), and the embedding of the bug report \(R\).
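The sketch below illustrates one plausible way to produce such embeddings with the public `microsoft/codebert-base` checkpoint; the pooling strategy and the `bug_report_text`/`top_k_files_text` variables are our assumptions, as the paper does not specify these details.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(text: str) -> torch.Tensor:
    """Return a fixed-size CodeBERT embedding for a bug report or a source file."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs).last_hidden_state       # (1, seq_len, 768)
    return out.mean(dim=1).squeeze(0)                 # mean-pool to a 768-dimensional vector

# Concatenate each file embedding with the bug-report embedding to form E_1, ..., E_k.
e_report = embed(bug_report_text)
candidates = [torch.cat([embed(code), e_report]) for code in top_k_files_text]
```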
**Concatenation:** After we obtain the embedding for the source codes and the bug report, we concatenate them. Given our example in Figure 2, we concatenate the em
\begin{table}
\begin{tabular}{p{56.9pt}|p{113.8pt}|p{113.8pt}} \hline
**Feature** & **Description** & **Rationale** \\ \hline
Bug Report Length & Length of the bug report. & Fan et al. [28] found that it is hard to localize bugs using a short bug report. A short bug report contains little information about the bug, so it is hard for ElasticSearch to retrieve the source code file responsible for the bug. \\ \hline
Source Code Length & Length of the source code file. & Fan et al. [28] found that calculating textual similarity is challenging for long files. The length of the source code may therefore affect the performance of ElasticSearch. \\ \hline
Stack Trace & Whether the bug report contains a stack trace. & Fan et al. [28] found that stack traces in bug reports can help the debugging process as they may contain useful information. The availability of stack traces may improve the performance of ElasticSearch. \\ \hline
Similarity & Ratio of similar tokens between a bug report and source code files. & Similarity indicates the amount of helpful information in the bug report. We calculate the similarity based on the equation presented in Section 5. \\ \hline
\end{tabular}
\end{table} TABLE I: Description and rationale of the selected features.
Fig. 2: Bug Localization as Markov Decision Process.
bedding of \(F_{1},F_{2},...\), and \(F_{k}\) with the embedding of \(R\) individually. This step leads us to obtain the corresponding concatenated embedding \(E_{1},E_{2},...\), and \(E_{k}\), as shown in Figure 2.
Note that each state of the MDP comprises two lists: a candidate list and a ranked list. The candidate list contains the concatenated embeddings; as shown in our example in Figure 2, it contains \(E_{1},E_{2},...\), and \(E_{k}\). In the candidate list, the concatenated embeddings (file embedding and bug report embedding joined together) are initially in random order. The other list is the ranked list of source code files, ordered by their relevance to the bug report \(R\). Initially (at \(State_{1}\)), the candidate list is full and the ranked list is empty. In each state transition, the model moves one embedding from the candidate list to the ranked list based on its probability of being responsible for the bug. In the final state, the ranked list is full and the candidate list is empty. We describe the process of selecting and ranking a file in detail in the next step.
**Actions:** We define _Actions_ in our MDP as selecting a file from the candidate list and moving it to the ranked list. Suppose that at timestep \(t\) the RL model picks the embedding \(E_{1}\); then the rank of that particular file will be \(t\). In Figure 2, at timestep 1 the model picks the concatenated embedding of file \(F_{2}\), so the rank of \(F_{2}\) will be \(1\). As we move one file from the candidate list to the ranked list at each timestep, the total number of files equals both the number of states and the number of actions. For identifying the potentially best action at any timestep \(t\), we use a deep learning (DL) model (indicated as _Ranking Model_ in Figure 2), which is composed of a Convolutional Neural Network (CNN) followed by a Long Short-Term Memory (LSTM) [35]. First, we need to identify the optimal embedding (in terms of relevance) and add it to the ranked list of embeddings. Following previous studies [36, 5, 37], we use a CNN to capture two relations: the relation between the source code embedding and the bug report embedding, and the relation across the different concatenated embeddings. At each timestep, the model should pick the potentially best embedding, so it must know the relative relevance of the different source code file embeddings to the bug report. Our ranking model also takes into account one of the restrictions set by the environment, which we call _state awareness_: if a file is selected at \(State_{i}\), it cannot be selected again in a later state (i.e., \(State_{i+j};j\geq 1\)). We use an LSTM to make the model aware of previous actions; the LSTM cell [38] helps the model select files while remembering the files it has already selected.
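The following PyTorch sketch shows one way such a CNN-plus-LSTM ranking model could be organised; the layer sizes and the masking mechanism are our assumptions, since the paper does not publish the architecture at this level of detail.

```python
import torch
import torch.nn as nn

class RankingModel(nn.Module):
    """CNN over the candidate embeddings followed by an LSTM for state awareness."""
    def __init__(self, embed_dim=1536, hidden=256):
        super().__init__()
        # CNN captures relations across the k concatenated (file, bug report) embeddings.
        self.conv = nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1)
        # LSTM carries information about earlier selections across timesteps.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)              # one score (action logit) per candidate

    def forward(self, cand, lstm_state=None, selected_mask=None):
        # cand: (batch, k, embed_dim) concatenated embeddings E_1, ..., E_k
        h = torch.relu(self.conv(cand.transpose(1, 2))).transpose(1, 2)   # (batch, k, hidden)
        h, lstm_state = self.lstm(h, lstm_state)
        logits = self.head(h).squeeze(-1)                                 # (batch, k)
        if selected_mask is not None:                                     # state awareness:
            logits = logits.masked_fill(selected_mask, float("-inf"))     # no file picked twice
        return logits, lstm_state
```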
**Transition:**\(\tau(S,A)\) is a function \(\tau:S\times A\rightarrow S\) which maps a state \(s_{t}\) into a new state \(s_{t+1}\) in response to the selected action \(a_{t}\). Choosing an action \(a_{t}\) means removing a file from the candidate list and placing it in the ranked list.
**Reward:** A reward is a value provided to the RL agent as feedback on their action. We refer to a reward received from one action as _return_. The RL technique signals the agent about the appropriate action in each step through the _Reward Function_, which can be modeled using the retrieval metrics. Thus, the RL agent can learn to optimize the retrieval metrics through the reward function. We consider two important factors in the ranking evaluation: the position of the relevant files and the distance between relevant files in the ranked list of embedding. We incorporated both factors in designing the reward function shown below.
\[\Re(S,A)=\frac{M\cdot\mathit{file\ relevance}}{\log_{2}(t+1)\cdot\mathit{distance}(S)},\quad\text{if }A\text{ is an action that has not been selected before} \tag{1}\]
\[\Re(S,A)=-\log_{2}(t+1),\quad\text{otherwise} \tag{2}\]
\[\mathit{distance}(S)=\text{Avg.}(\text{Distance between currently picked subsequent related files}) \tag{3}\]
In Equations 1 and 2, \(t\) is the timestep, \(S\) is the state and \(A\) is the action. Mean reciprocal rank (MRR) measures the average reciprocal rank of the relevant files. In Equation 1, the term \(\frac{file\;relevance}{\log_{2}(t+1)}\) is the reciprocal-rank component that drives MRR. The use of a logarithmic function in the equation is motivated by previous studies [39, 40], which found that it leads to a stable loss. When the relevant files are ranked higher, the average precision tends to be higher. To encourage the reinforcement learning system to rank relevant files high and close together, we introduce a punishment that grows with the distance between two relevant files. By imposing this punishment on the agent, we incentivize it to place relevant files at higher ranks, which in turn contributes to the Mean Average Precision (MAP).
We illustrate the reward functions with an example below. Suppose the process reaches State \(S_{6}\), the currently picked concatenated embeddings are \(E_{1},E_{2},E_{3},E_{4},E_{5},E_{6}\), and their relevancy to the bug report is \(\langle 0,0,1,0,1,1\rangle\). This means that the embeddings (or files) ranked in the \(3^{rd}\), \(5^{th}\), and \(6^{th}\) positions are relevant to the bug report. The positions of the relevant files are \(\langle 3,5,6\rangle\), and the distances between them are \(\langle 1,0\rangle\). Hence, \(distance(S_{6})=Avg.\langle 1,0\rangle=0.5\). If the agent picks a new relevant file, we reward the agent \(M\) times the reciprocal rank of the file divided by the distance between the already picked related files. In our example, the last picked file \(E_{6}\) has relevancy \(1\). Thus, we have the following values for Equation 1: \(distance(S_{6})=0.5\); \(\log_{2}(6+1)=2.8074\); \(file\;relevance=1\). Note that \(M\) is a hyper-parameter. We find that \(M=3\) results in the highest reward for our RL model. We identify the best value for \(M\) by experimenting with different values (1, 3, 6, and 9). Figure 3 shows the resulting reward-episode graph using
Fig. 3: Effect of M in the reward-episode graph.
different values of \(M\). Hence, given \(M=3\), the value of the reward function will be \(\Re(S,A)=\frac{3\cdot 1}{2.8074\cdot 0.5}=2.14\). The reward can vary between \(\sim 0\) and \(M\). A higher value of the reward function indicates a better action by the model. Finally, in the case of an optimal ranking, \(distance(S)\) would be zero; we handle this case by assigning a value of \(1\) to \(distance(S)\).
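A small Python sketch of Equations 1-3 (our reading of the formulas; the `picked_relevant_positions` bookkeeping is an assumed helper) reproduces the worked example above.

```python
import math

M = 3  # hyper-parameter value selected in the paper

def reward(t, relevance, picked_relevant_positions):
    """Reward for the action at timestep t (Equations 1-3).

    relevance: 1 if the newly picked file is relevant, 0 otherwise.
    picked_relevant_positions: ranks (1-based) of the relevant files picked so far.
    """
    if relevance == 0:
        return -math.log2(t + 1)                                        # Equation 2
    gaps = [b - a - 1 for a, b in zip(picked_relevant_positions,
                                      picked_relevant_positions[1:])]   # Equation 3
    distance = sum(gaps) / len(gaps) if gaps else 1.0
    distance = distance if distance > 0 else 1.0                        # optimal ranking: use 1
    return (M * relevance) / (math.log2(t + 1) * distance)              # Equation 1

# Worked example: t = 6, relevant files at ranks <3, 5, 6>, so distance(S_6) = 0.5.
print(round(reward(6, 1, [3, 5, 6]), 2))   # 2.14
```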
Finally, note that we limit the number of actions to 31. According to our formulation, the number of states equals the number of actions, so the number of states is also limited to a maximum of 31. Like any deep learning system, the prediction space of a reinforcement learning agent cannot be variable. In our formulation, the number of actions equals the number of source code files, which varies between bug reports. As we cannot have a variable number of actions, we must fix the number of actions to \(k\). We can select any value of \(k\) that fits in memory. Recall that we formulate a _state_ using the embeddings of all the source code files concatenated with the embedding of the bug report; a state composed of the concatenated embeddings of a large number of source code files is therefore large. In this study, we use an Nvidia V100 16 GB GPU. We conducted tests with different numbers of actions and found that when the number of actions reached 32, the training scripts began to fail with an out-of-memory error, because the state size became too large to fit into the GPU memory. To resolve this issue, we limit the number of actions to \(k=31\) to keep the state size under control. While we could potentially increase the number of actions with more memory, our current resources limit us to 31 actions. In Section 3.1, we mention that we select the top \(k\) relevant source code files and pass them to RLocator. According to our formulation, we have 31 states; as the number of files equals the number of states, we select the top 31 (\(k=31\)) most relevant source code files from ES and pass them to RLocator.
## 4 Dataset and Evaluation Measures
In this section, we discuss the dataset used to train and evaluate our model (section 4.1). Then, we present the evaluation metrics we use for evaluation (section 4.2).
### _Dataset_
In our experiment, we evaluate our approach on six real-world open-source projects [41], which are commonly used as benchmark datasets in bug localization studies [16, 6]. Also, prior work (e.g., [42]) showed that this dataset contains the lowest number of false positive or false negative cases among the datasets [43, 44, 3, 45, 46] in the literature. Previous studies evaluate their approaches on each project separately. Thus, in this study, we follow the same approach and train our RLocator model separately for each of the six projects.
The dataset contains bug reports of six Apache projects (AspectJ, Birt, Eclipse Platform UI, JDT, SWT, Tomcat). Table II shows descriptive statistics on the datasets. The dataset contains metadata of the bug reports, i.e., bug id, bug description, report timestamp, commit SHA of the fixing commit, and the paths of the buggy source code files. As mentioned in Section 2.1, each bug report is associated with a commit SHA/version. We use the commit SHA to identify the repository version where the bug resides (the commit just before the bug-fixing commit). Once the version containing the bug is identified from the commit SHA, we collect all the relevant source code files from that version. This approach guarantees that the bug localization setup closely resembles real-world bug localization, as the bug-fixing source code is excluded from the bug localization process. In Section 3, we state that we use only 91% of the data for training and testing. To prepare the train and test datasets, we sort the dataset in ascending order of the bug report dates and divide it into a 60:40 split. Previous studies [47] have used a 60:20:20 setting for training, validation, and testing. To shorten the training duration we do not use a validation set, and instead use the validation portion for testing. We use the first 60% of the data for training and the remaining 40% for testing.
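A minimal sketch of this chronological split (file and column names are placeholders, not the dataset's actual schema):

```python
import pandas as pd

bugs = pd.read_csv("bug_reports.csv")               # hypothetical export with a 'report_date' column
bugs = bugs.sort_values("report_date")              # ascending order of bug report dates
split = int(len(bugs) * 0.6)
train, test = bugs.iloc[:split], bugs.iloc[split:]  # first 60% for training, last 40% for testing
```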
### _Evaluation Measures_
The dataset proposed by Ye et al. [41] provides ground truth associated with each bug report. The ground truth contains the path of the file in the project repository that has been modified to fix a particular bug. To evaluate RLocator performance, we use the ground truth and analyze the experimental results based on three criteria, which are widely adopted in bug localization studies [3, 5, 6, 7, 8].
* **Mean Reciprocal Rank (MRR):** To identify the average rank of the relevant file in the retrieved files set, we adopted the Mean Reciprocal Rank. MRR is the average reciprocal rank of the source code files for all the bug reports. We present the equation for calculating MRR below, where \(A\) is the set of bug reports. \[MRR=\frac{1}{|A|}\sum_{A}\frac{1}{Least\ rank\ of\ the\ relevant\ files}\] (4) Suppose we have two bug reports, \(report_{1}\) and \(report_{2}\). For each bug report, the bug localization model will rank six files. For \(report_{1}\) the ground truth of the retrieved files are \([0,0,1,0,1,0]\) and for \(report_{2}\) the ground truth of the retrieved files are \([1,0,0,0,0,1]\). In this case, the least rank of relevant files is 3 and 1, respectively, for \(report_{1}\) and \(report_{2}\). Now, the \(MRR=\frac{1}{2}(\frac{1}{3}+\frac{1}{1})=0.67\)
* **Mean Average Precision (MAP):** To consider the case where a bug is associated with multiple source code
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Project** & **\# of Bug Reports** & **Avg. \# of Buggy Files per Bug** \\ \hline
AspectJ & 593 & 4.0 \\
Birt & 6,182 & 3.8 \\
Eclipse UI & 6,495 & 2.7 \\
JDT & 6,274 & 2.6 \\
SWT & 4,151 & 2.1 \\
Tomcat & 1,056 & 2.4 \\ \hline \hline
\end{tabular}
\end{table} TABLE II: Dataset statistics.
files, we adopt Mean Average Precision. It provides a measure of the quality of the retrieval [3, 48]. MRR considers only the best rank of the relevant files; in contrast, MAP considers the ranks of all the relevant files in the retrieved list. Thus, MAP is more descriptive and less biased than MRR. Precision measures how noisy the retrieval is: calculating the precision over the first two retrieved files gives precision@2. To compute the average precision for a bug report, we compute precision@1, precision@2, ..., precision@k and average the precision values at the ranks where relevant files appear. After calculating the average precision for each bug report, we take the mean of these average precisions to obtain the MAP.
\[MAP=\frac{1}{|A|}\sum_{A}AvgPrecision(Report_{i}) \tag{5}\]
We show the MAP calculation for the previous example of two bug reports. The average precision for \(report_{1}\) and \(report_{2}\) is 0.37 and 0.67, respectively. So, the \(MAP=\frac{1}{2}(0.37+0.67)=0.52\).
* **Top K:** For a fair comparison with prior studies [37, 36] and to present a straightforward view of performance, we calculate Top K. Top K measures the overall ranking performance of the bug localization model: it indicates the percentage of bug reports for which at least one buggy source file appears among the top K positions in the ranked list generated by the bug localization tool. Following previous studies (e.g., [37, 36]), we consider three values of K: 1, 5, and 10. A short sketch of how these three measures are computed is given after this list.
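The sketch below illustrates how MRR, MAP, and Top K can be computed from per-report relevance lists; it reproduces the numbers of the worked examples above (this is our illustrative code, not the evaluation script of the replication package).

```python
def mrr(relevance_lists):
    """relevance_lists: one 0/1 list per bug report, ordered by the tool's ranking.
    Each list is assumed to contain at least one relevant file (as after the XGBoost filter)."""
    return sum(1.0 / (rel.index(1) + 1) for rel in relevance_lists) / len(relevance_lists)

def average_precision(rel):
    hits, ap = 0, 0.0
    for rank, r in enumerate(rel, start=1):
        if r:
            hits += 1
            ap += hits / rank          # precision at each rank where a relevant file appears
    return ap / max(hits, 1)

def mean_ap(relevance_lists):
    return sum(average_precision(rel) for rel in relevance_lists) / len(relevance_lists)

def top_k(relevance_lists, k):
    return sum(1 for rel in relevance_lists if any(rel[:k])) / len(relevance_lists)

reports = [[0, 0, 1, 0, 1, 0], [1, 0, 0, 0, 0, 1]]            # the two example bug reports
print(round(mrr(reports), 2), round(mean_ap(reports), 2))     # 0.67 0.52
```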
## 5 RLocator Performance
We evaluate RLocator on the hold-out dataset using the metrics described in Section 4.2. As there is no prior RL-based bug localization tool, we compare RLocator with two state-of-the-art bug localization tools, BugLocator and FLIM. We choose these approaches because their replication packages are available and accessible, which facilitates our comparison.
BugLocator uses IR based approach, whereas FLIM is a deep neural network-based method. A short description of both approaches has been presented below.
* BugLocator [3]: an IR-based tool that utilizes a vector space model to identify the potentially responsible source code files by estimating the similarity between source code file and bug report.
* FLIM [16]: a deep-learning-based model that utilizes a large language model like CodeBERT.
We use the original implementations of BugLocator [3] and FLIM [16] for determining their performance.
FBL-BERT [8] is one of the more recent bug localization techniques. However, we do not compare against it since FBL-BERT performs bug localization at the changeset level. If we were to take their technique (which is available in the arXiv version of their paper) and apply it to our dataset, which is at the file level, we would be disadvantaging FBL-BERT: it is not meant to work with long documents like source code files and works best at the changeset level. Hence, replicating FBL-BERT and comparing it with our technique would unfairly suggest that our technique is better. Since the granularity is different, we do not compare RLocator against FBL-BERT.
There are a few other studies (e.g., DeepLoc [47], CAST [5], KGBugLocator [6], BL-GAN [7], biXnet [49], and Cheng et al. [50]) that proposed deep learning-based bug localization approaches. However, these studies do not provide a replication package containing code or pre-trained models. Still, those approaches use the same projects to evaluate the performance of the proposed tools. Since we are unable to find any replication packages and/or models directly from the respective papers, we do not provide any further comparison or discussion in our paper. However, to provide comprehensive information, we include a table in our online appendix [51], displaying their performance alongside that of RLocator.
### _Retrieval performance_
As previously mentioned, we utilize \(k\) (=31) relevant files in RLocator. Note that with \(k\)=31 and using ES as described in Section 3.1, we can rerank files using RLocator for 91% of the bug reports in our dataset. Table III presents the performance of RLocator along with the performance of the examined studies considering 91% of the data and 100% of the data. Note that RLocator is not designed to utilize 100% of the data, since it cannot rerank the files for a bug report that has no relevant files among the top \(k\) files. Thus, for the cases where RLocator is not applicable (no relevant files identified by the XGBoost model), we estimate the performance of RLocator by assuming that the contribution of such a case is zero. For example, suppose we have three bug reports: for the 1\({}^{st}\) bug report the ranks of the relevant files are three and five, for the \(2^{nd}\) bug report the ranks are five and seven, and for the 3\({}^{rd}\) bug report the XGBoost model predicts that there will be no relevant file in the top \(K\) files. The MRR for this scenario is \(\frac{1}{3}(\frac{1}{3}+\frac{1}{5}+0)\approx 0.18\). As we use zero instead of the reciprocal ranks when calculating the Mean Reciprocal Rank (MRR) (or MAP), the estimated performance is the worst possible performance, i.e., a lower bound on RLocator's performance. We choose such an overly conservative approach to avoid overestimating the effectiveness of our technique.
**Table III showcases that RLocator achieves better performance than BugLocator and FLIM in both MRR and MAP across all studied projects when using the 91% data.** On the 91% data, RLocator outperforms FLIM by 5.56-38.3% in MAP and 3.77-36.73% in MRR. Regarding Top K, the performance improvement is up to 23.68%, 15.22%, and 11.32% in terms of Top 1, Top 5, and Top 10, respectively. Compared to BugLocator, RLocator achieves a performance improvement of 12.5-56.86% and 9.8-41.51% in terms of MAP and MRR, respectively. Regarding Top K, the performance improvement is up to 26.32%, 41.3%, and 35.85% in terms of Top 1, Top 5, and Top 10, respectively.
When we consider 100% of the data, RLocator has better MAP results than FLIM in three out of the six projects (AspectJ, Birt, and JDT) by 6.82-34.21%, is equal to FLIM in one project (Tomcat), and is worse than FLIM in two projects (Eclipse Platform UI and SWT) by 2-14%. In terms of MRR, RLocator
is better than FLIM in two projects (AspectJ and Birt) by 10-31.71% and worse than FLIM in the remaining four projects (Eclipse Platform UI, JDT, SWT, and Tomcat) by 6-18%. In terms of Top K, RLocator ranks 4.29-12.5% more bugs in the top 10 positions than FLIM in two projects; in the remaining four projects, FLIM ranks more bugs in the top 10 positions, by between 2.74% and 34.48%. When comparing RLocator with BugLocator on the 100% data along MAP, we find that RLocator is better in five of the six projects and similar in just the Tomcat project. With respect to MRR, RLocator is better than BugLocator in all six projects. In terms of Top K, RLocator ranks more bugs than BugLocator in the top 10 positions, with an improvement ranging between 12.07% and 39.58%.
It is important to note that MAP provides a more balanced view than MRR and Top K, since it accounts for all the files that are related to a bug report and not just one file. Additionally, our technique is optimized to give more accurate results for most of the bug reports rather than less accurate results on average for all the bug reports. Thus, **by looking at the MAP data for the 91%, we can see that RLocator performs better than the state-of-the-art techniques in all projects. Even if we consider 100% of the data, RLocator is still better than other techniques in the majority of the projects.** Only with 100% of the data and MRR as the evaluation metric does RLocator not outperform the state-of-the-art in most projects.
Across all projects, we observe that RLocator performs the lowest on the Birt project. Interestingly, besides RLocator, FLIM also achieves its lowest performance there (in both MRR and MAP). Compared to the average performance on the 91% data, RLocator's performance drops by nearly 10.47%, 11.71%, and 41.42% in terms of MAP, MRR, and Top 10, respectively. Nevertheless, compared with FLIM, the performance is 38.3%, 36.73%, and 11.32% better for MAP, MRR, and Top 10, respectively. With respect to BugLocator, the performance is 36.17%, 20.41%, and 35.85% better in terms of MAP, MRR, and Top 10, respectively. Several factors can contribute to such a performance drop [52] on the Birt project, e.g., the quality of the bug reports, the source code length, and the amount of information in the bug report. As _similarity_ is one of the important criteria for text retrieval-based bug localization systems [53], we estimate the amount of helpful information by calculating _similarity_ according to Equation 6. The similarity metric measures the ratio of similar tokens between source code files and bug reports. A high similarity indicates that the bug report contains much potentially helpful information to localize the bug.
\[Similarity=\frac{Bug\ Report\ Tokens\cap File\ Tokens}{\#\ of\ Unique\ Tokens\ in\ Bug\ Report} \tag{6}\]
We find that the median similarity for the Birt project is 0.29, the lowest among the six projects, which indicates that the quality of the bug reports (i.e., their similarity to the files) can explain the performance drop of RLocator on the Birt project.
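For illustration, Equation 6 can be computed with a token-overlap ratio as sketched below; the exact tokenizer used in the paper is not specified, so the regular expression here is an assumption.

```python
import re

def similarity(bug_report: str, source_file: str) -> float:
    """Equation 6: fraction of unique bug-report tokens that also occur in the source file."""
    tokenize = lambda text: set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", text.lower()))
    report_tokens, file_tokens = tokenize(bug_report), tokenize(source_file)
    if not report_tokens:
        return 0.0
    return len(report_tokens & file_tokens) / len(report_tokens)
```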
For using RLocator effectively in real-world scenarios, we use an XGBoost model (Section 3.1). The XGBoost model filters out the bug reports where the relevant files do not appear in the top \(k\) (=31) files. To provide richer insights into our analysis and understand the characteristics of the bug reports and the associated 31 retrieved files, we compute the importance of our selected features mentioned in Table I. XGBoost offers a built-in module to calculate feature importance. The importance score indicates how much each feature contributes to the model. We report the
Fig. 4: Feature importance of classifier model.
\begin{table}
\begin{tabular}{l|l|c|c|c|c|c|c|c|c|c|c} \hline \hline
\multirow{2}{*}{**Project**} & \multirow{2}{*}{**Model**} & \multicolumn{2}{c|}{**Top 1**} & \multicolumn{2}{c|}{**Top 5**} & \multicolumn{2}{c|}{**Top 10**} & \multicolumn{2}{c|}{**MAP**} & \multicolumn{2}{c}{**MRR**} \\ \cline{3-12}
 & & **91\%** & **100\%** & **91\%** & **100\%** & **91\%** & **100\%** & **91\%** & **100\%** & **91\%** & **100\%** \\ \hline
\multirow{3}{*}{AspectJ} & RLocator & 0.46 & **0.40** & **0.69** & 0.63 & **0.75** & **0.70** & **0.56** & **0.46** & **0.59** & **0.50** \\ \cline{2-12}
 & BugLocator & 0.36 & 0.28 & 0.50 & 0.45 & 0.36 & 0.51 & 0.33 & 0.31 & 0.49 & 0.48 \\ \cline{2-12}
 & FLIM & **0.51** & 0.36 & 0.55 & 0.60 & 0.72 & 0.67 & 0.41 & 0.35 & 0.47 & 0.45 \\ \hline
\multirow{3}{*}{Birt} & RLocator & **0.38** & **0.25** & **0.46** & **0.41** & **0.53** & **0.48** & **0.47** & **0.38** & **0.49** & **0.41** \\ \cline{2-12}
 & BugLocator & 0.28 & 0.15 & 0.27 & 0.21 & 0.34 & 0.29 & 0.30 & 0.30 & 0.39 & 0.38 \\ \cline{2-12}
 & FLIM & 0.29 & 0.18 & 0.39 & 0.34 & 0.47 & 0.42 & 0.29 & 0.25 & 0.31 & 0.28 \\ \hline
\multirow{3}{*}{Eclipse Platform UI} & RLocator & 0.45 & 0.37 & 0.69 & 0.63 & 0.78 & 0.73 & **0.54** & 0.42 & **0.59** & 0.50 \\ \cline{2-12}
 & BugLocator & 0.45 & 0.33 & 0.54 & 0.49 & 0.63 & 0.58 & 0.29 & 0.30 & 0.38 & 0.35 \\ \cline{2-12}
 & FLIM & **0.48** & **0.41** & **0.72** & **0.67** & **0.80** & **0.75** & 0.51 & **0.48** & 0.52 & **0.53** \\ \hline
\multirow{3}{*}{JDT} & RLocator & **0.44** & 0.33 & **0.67** & **0.61** & 0.78 & 0.75 & **0.51** & **0.44** & **0.53** & 0.45 \\ \cline{2-12}
 & BugLocator & 0.34 & 0.21 & 0.51 & 0.45 & 0.60 & 0.55 & 0.22 & 0.20 & 0.31 & 0.28 \\ \cline{2-12}
 & FLIM & 0.40 & **0.35** & 0.65 & 0.60 & **0.82** & **0.77** & 0.42 & 0.41 & 0.51 & **0.49** \\ \hline
\multirow{3}{*}{SWT} & RLocator & 0.40 & 0.30 & 0.57 & 0.51 & 0.63 & 0.58 & **0.48** & 0.42 & **0.51** & 0.44 \\ \cline{2-12}
 & BugLocator & 0.37 & 0.25 & 0.50 & 0.45 & 0.56 & 0.51 & 0.42 & 0.40 & 0.46 & 0.43 \\ \cline{2-12}
 & FLIM & **0.51** & **0.37** & **0.70** & **0.65** & **0.83** & **0.78** & 0.43 & **0.43** & 0.48 & **0.50** \\ \hline
\multirow{3}{*}{Tomcat} & RLocator & 0.46 & 0.39 & 0.61 & 0.55 & 0.73 & 0.68 & **0.59** & **0.47** & **0.62** & 0.51 \\ \cline{2-12}
 & BugLocator & 0.40 & 0.29 & 0.43 & 0.38 & 0.55 & 0.50 & 0.31 & 0.27 & 0.37 & 0.35 \\ \cline{2-12}
 & FLIM & **0.51** & **0.42** & **0.70** & **0.65** & **0.76** & **0.71** & 0.52 & **0.47** & 0.59 & **0.60** \\ \hline \hline
\end{tabular}
\end{table} TABLE III: RLocator performance.
default type of importance, "weight". A higher value of this metric implies that the feature is more important for generating a prediction. Figure 4 illustrates the significance of our selected features. It demonstrates that the most crucial feature is Similarity, closely followed by Source Code Length and Bug Report Length. These findings suggest that the similarity of a bug report plays a critical role in text-based search systems. Furthermore, it is well known that low-quality bug reports can impede developers [54]. Therefore, these insights can assist users in composing high-quality bug reports, ultimately leading to reduced resolution time.
### _Entropy Ablation Analysis: Impact of Entropy on RLocator performance_
In RL, entropy refers to the unpredictability of an agent's actions. Low entropy indicates a predictable policy, while high entropy represents a more random and robust policy. While learning the policy, an RL agent tends to repeat actions that previously resulted in positive rewards. The agent may become stuck in a local optimum by exploiting learned actions instead of exploring new ones and finding a higher global optimum. This is where entropy becomes useful: we can use entropy to encourage exploration and avoid getting stuck in local optima [55]. Because of this, entropy has become very popular in the design of RL approaches such as A2C [56]. In our proposed model (Section 5.1), we use A2C with entropy to train RLocator, aiming to rank relevant files closer to each other. As entropy is part of the objective, the gradient descent process will try to maximize the entropy. Entropy increases if the model identifies different actions as the best in the same state. However, those actions must select a relevant file; otherwise, the reward decreases. Thus, if there are multiple relevant files in a state, the entropy-regularized A2C model will assign almost the same probability to the actions that select those relevant files. This means that when the state is revisited, a different one of these actions will likely be selected each time. This probability assignment leads to a higher MAP.
The observed ability of RLocator to achieve a higher MAP can be attributed to two factors: 1) the way we design our reward function, given that we define a function that aims to encourage a higher MAP; and 2) the inclusion of entropy, as entropy regularization is assumed to enable the model to achieve a higher MAP. Hence, to provide a better understanding of the impact of entropy on RLocator, we conduct a further experiment in which we train a new model that does not use entropy, and compare its performance to that of our previous models (i.e., A2C with entropy). Due to resource (time and GPU) limitations, we limit this evaluation to half of the projects in our dataset, i.e., AspectJ, Birt, and Eclipse Platform UI. We observe a similar trend in those three projects and thus believe our results will follow a similar trend in the remaining projects. Table IV presents the performance of both models (i.e., A2C only vs. A2C with entropy). From the table, we find that the MRR and MAP of the models without entropy are lower than those of the A2C with entropy models. In terms of MAP, the A2C with entropy models outperform the A2C models by 27.78-34.04%. In MRR and Top 10, the A2C with entropy model achieves higher performance by 11.86-13.56% and 18.87-36%, respectively. These results indicate that entropy can substantially contribute to the model performance regarding MAP, MRR, and Top K. Moreover, this shows that the use of entropy encourages the RL agent to explore possible alternative policies [57]; thus, it has a higher chance of finding a better policy for the given environment than the plain A2C model.
## 6 Related Work
The work most related to our study falls into studies on bug localization techniques. In the following, we discuss the related work and reflect on how the work compares with ours.
A plethora of work has studied how developers localize bugs [58, 59, 60, 54]. For example, Böhme et al. [58] studied how developers debug and found that the most popular technique for localizing a bug is forward reasoning, where developers go through each computational step of a failing test case to identify the location. Zimmermann et al. [54] studied the characteristics of a good bug report and found that test cases and stack traces are among the most important criteria that make a good bug report. While these studies focused on developers' manual localization of bugs, our approach examines the automation of the bug localization process for developers.
Several studies offered test case coverage-based solutions for bug localization [61, 62, 63]. Vancsics et al. [61] proposed a count-based spectrum instead of a hit-based spectrum in the Spectrum-Based Fault Localization (SBFL) tool. GRACE [62] proposed gated graph neural network-based representation learning to improve the SBFL technique. However, these studies mainly utilize test cases to localize bugs, whereas our approach (RLocator) focuses on the bug report for bug localization.
There have been several efforts to assess the impact of query reformulation on the performance of existing bug localization tools [64, 65]. For example, Rahman et al. [64] found that instead of using the full bug report as a full-text query, a reformulated query with some additional expansion performs better. BugLocator [3] used a revised Vector Space Model (rVSM) to estimate the textual similarity between bug reports and source code files.
A few studies incorporated the information of program structures such as the Program Dependence Graph (PDG), Data Flow Graph (DFG) [66], and Abstract Syntax Tree
\begin{table}
\begin{tabular}{l|l|c|c|c|c|c} \hline
**Project** & **Model** & **Top 1** & **Top 5** & **Top 10** & **MAP** & **MRR** \\ \hline
\multirow{2}{*}{AspectJ} & A2C & 0.27 & 0.39 & 0.48 & 0.40 & 0.52 \\ \cline{2-7}
 & A2C with Entropy & 0.46 & 0.69 & 0.75 & 0.56 & 0.59 \\ \hline
\multirow{2}{*}{Birt} & A2C & 0.21 & 0.30 & 0.43 & 0.31 & 0.42 \\ \cline{2-7}
 & A2C with Entropy & 0.38 & 0.46 & 0.53 & 0.47 & 0.49 \\ \hline
\multirow{2}{*}{Eclipse Platform UI} & A2C & 0.25 & 0.38 & 0.51 & 0.39 & 0.51 \\ \cline{2-7}
 & A2C with Entropy & 0.45 & 0.69 & 0.78 & 0.54 & 0.59 \\ \hline
\end{tabular}
\end{table} TABLE IV: RLocator performance with and without Entropy for A2C.
(AST) for learning source code representation [4, 47, 66, 5]. For example, CAST [5] used AST of the source code to extract the semantic information and then used Word2Vec to project the source code and the bug report in the same embedding space. They used a CNN model that measures the similarity between a bug report and source code. The model ranks the file based on the calculated similarity. Hyloc [9] incorporated the techniques of IR-based bug localization with deep learning. It concatenated the TF-IDF vector of the source code with repository and file-level metadata.
Other studies applied deep learning-based approaches to bug localization [67, 6, 49]. DEMOB [67] used attention on ELMo [68] embeddings, whereas KGBugLocator [6] used attention on graph embeddings. BL-GAN [7] offered a generative adversarial network-based solution for bug localization. BiXnet [49] utilized a text semantic graph and a code property graph together to localize bugs. One major drawback of biXnet is its reliance on hierarchical structure to estimate similarity: source code is hierarchical, but bug reports are not, so using a deep graph model to estimate similarity may not be useful. Other studies focused on associating commits with bug reports [69, 8]. For example, FBL-BERT [8] used CodeBERT embeddings to estimate the similarity between source code files and the changesets of a commit, and ranks suspicious commits based on this similarity. FLIM [16] also used CodeBERT embeddings to estimate similarity; however, FLIM works on function-level bug localization.
Our work differs from previous studies on bug localization since we propose our approach (RLocator) based on deep reinforcement learning. The key insight behind RLocator is directly optimizing the evaluation measures, while prior work focused on similarity-based approaches. We formulate the bug localization problem as a Markov Decision Process (MDP) to optimize the evaluation measures directly. We experimentally evaluate it on a benchmark dataset of 8,316 bug reports from six highly popular Apache projects. Our results demonstrate that direct optimization of the evaluation measures considerably improves performance on the bug localization problem.
## 7 Threats to Validity
RLocator has a number of limitations as well. We identify them and discuss how to overcome the limitations below.
**Internal Validity.** One limitation of our approach is that we are not able to utilize 9% of our dataset due to the limitations of text-based search. One may point out that we exclude the bug reports where we do not perform well. However, the XGBoost model in our approach automatically identifies them, and we would rather not localize the source code files for these bug reports than localize them incorrectly. Hence, developers need to rely on their manual analysis only for this 9%. Moreover, as a measure of full transparency, we estimate the lower bound of RLocator's performance on 100% of the data and show that the difference is negligible.
**External Validity.** The main threat to external validity lies in the selection of projects for evaluation. We evaluate RLocator on a limited number of bugs, which may raise concerns about the generalizability of our results. However, we evaluate RLocator on six real-world open-source projects that are used by prior studies [4, 5, 67, 6, 9, 70], and those projects come from different domains and use varied development styles. Additionally, to identify the reason for the better performance of the A2C with entropy model, we train the A2C (without entropy) model for only three of the six projects. Training models for all six projects is resource (time and GPU) intensive; for example, end-to-end training and evaluation of an A2C (without entropy) model takes 4 days on an Nvidia V100 16 GB GPU. The results for the three projects (AspectJ, Birt, Eclipse Platform UI) follow the same trend, so we can assume that the remaining projects' performance would follow the same trend.
**Construct Validity.** Finally, our evaluation measures might be one threat to construct validity. The evaluation measures may not completely reflect real-world situations. The threat is mitigated by the fact that the used evaluation measures are well-known [8, 16, 41, 3] and best available to measure and compare the performance of information retrieval-based bug localization tools.
## 8 Conclusion
In this paper, we propose RLocator, a reinforcement learning (RL)-based technique to rank the source code files where a bug may reside, given the bug report. The key contribution of our study is the formulation of the bug localization problem as a Markov Decision Process (MDP), which helps us optimize the evaluation measures directly. We evaluate RLocator on 8,316 bug reports and find that it performs better than the state-of-the-art techniques when using MAP as the evaluation measure. On the 91% bug report dataset, RLocator outperforms prior tools in all the projects in terms of both MAP and MRR. When using 100% of the data, RLocator outperforms all prior approaches in four of the six projects using MAP and in two of the six projects using MRR. RLocator can be used along with other bug localization approaches to improve performance. Our results show that RL is a promising avenue for future exploration when it comes to advancing state-of-the-art techniques for bug localization. Future research can explore the application of the latest reinforcement learning algorithms in the domain of bug localization.
## 9 Data Availability
To foster future research in the field, we make a replication package comprising our dataset and code publicly available [51]. |
2307.03956 | The annealed parabolic Anderson model on a regular tree | We study the total mass of the solution to the parabolic Anderson model on a
regular tree with an i.i.d. random potential whose marginal distribution is
double-exponential. In earlier work we identified two terms in the asymptotic
expansion for large time of the total mass under the quenched law, i.e.,
conditional on the realisation of the random potential. In the present paper we
do the same for the annealed law, i.e., averaged over the random potential. It
turns out that the annealed expansion differs from the quenched expansion. The
derivation of the annealed expansion is based on a new approach to control the
local times of the random walk appearing in the Feynman-Kac formula for the
total mass. In particular, we condition on the backbone to infinity of the
random walk, truncate and periodise the infinite tree relative to the backbone
to obtain a random walk on a finite subtree with a specific boundary condition,
employ the large deviation principle for the empirical distribution of Markov
renewal processes on finite graphs, and afterwards let the truncation level
tend to infinity to obtain an asymptotically sharp asymptotic expansion. | Frank den Hollander, Daoyi Wang | 2023-07-08T11:21:26Z | http://arxiv.org/abs/2307.03956v1 | # The annealed parabolic Anderson model
###### Abstract
We study the total mass of the solution to the parabolic Anderson model on a regular tree with an i.i.d. random potential whose marginal distribution is double-exponential. In earlier work we identified two terms in the asymptotic expansion for large time of the total mass under the _quenched law_, i.e., conditional on the realisation of the random potential. In the present paper we do the same for the _annealed law_, i.e., averaged over the random potential. It turns out that the annealed expansion differs from the quenched expansion. The derivation of the annealed expansion is based on a _new approach_ to control the local times of the random walk appearing in the Feynman-Kac formula for the total mass. In particular, we condition on the backbone to infinity of the random walk, truncate and periodise the infinite tree relative to the backbone to obtain a random walk on a finite subtree with a specific boundary condition, employ the large deviation principle for the empirical distribution of _Markov renewal processes_ on finite graphs, and afterwards let the truncation level tend to infinity to obtain an asymptotically sharp asymptotic expansion.
**MSC2010:** 60H25, 82B44, 05C80.
**Keywords:** Parabolic Anderson model, Feynman-Kac formula, regular tree, double-exponential random potential, backbone of random walk, annealed Lyapunov exponent, variational formula.
**Acknowledgment:** The research in this paper was supported by the Netherlands Organisation for Scientific Research through NWO Gravitation Grant NETWORKS-024.002.003.
###### Contents
* 1 Introduction and main results
* 1.1 Background and motivation
* 1.2 The PAM on a graph
* 1.2.1 Notations and definitions
* 1.2.2 Assumption on the potential
* 1.2.3 Variational formula
* 1.2.4 Reformulation
* 1.3 The PAM on an unrooted regular tree: annealed total mass for large times and key variational formula
* 1.4 Discussion
* 2 Proof of the main theorem: lower bound
* 2.1 Killing and lower variational formula
* 2.2 Limit of the lower variational formula
* 3 Proof of the main theorem: upper bound
* 3.1 Backbone, projection, periodisation and upper variational formula
* 3.1.1 Backbone
* 3.1.2 Projection
* 3.1.3 Periodisation
* 3.1.4 Upper variational formula
* 3.2 Limit of the upper variational formula
* A Large deviation principle for the local times of Markov renewal processes
* B Sojourn times: cumulant generating functions and Legendre tranforms
* B.1 General observations
* B.2 Exponential sojourn time
* B.3 Non-exponential sojourn time
* C Analysis of the variational problem on the infinite regular tree
* C.1 Two properties
* C.2 Proof of the two properties
* D Large deviation estimate for the local time away from the backbone
* E Analysis of the upper variational formula
* E.1 Identification of the rate function for the local times on the truncated tree
* E.2 Limit of the upper variational formula
## 1 Introduction and main results
Section 1.1 provides background and motivation, Section 1.2 lists notations, definitions and assumptions, Section 1.3 states the main theorems, while Section 1.4 places these theorems in their proper context.
### Background and motivation
The _parabolic Anderson model_ (PAM) is the Cauchy problem
\[\partial_{t}u(x,t)=\Delta_{\mathscr{X}}u(x,t)+\xi(x)u(x,t),\qquad t>0,\,x\in \mathscr{X}, \tag{1.1}\]
where \(t\) is time, \(\mathscr{X}\) is an ambient space, \(\Delta_{\mathscr{X}}\) is a Laplace operator acting on functions on \(\mathscr{X}\), and \(\xi\) is a random potential on \(\mathscr{X}\). Most of the literature considers the setting where \(\mathscr{X}\) is either \(\mathbb{Z}^{d}\) or \(\mathbb{R}^{d}\) with \(d\geq 1\), starting with the foundational papers [7], [8], [6] and further developed through a long series of follow-up papers (see the monograph [14] and the survey paper [1] for an overview). More recently, other choices for \(\mathscr{X}\) have been considered as well:
* _Deterministic graphs_ (the complete graph [4], the hypercube [2]).
* _Random graphs_ (the Galton-Watson tree [11], [12], the configuration model [11]).
Much remains open for the latter class.
The main target for the PAM is a description of _intermittency_: for large \(t\) the solution \(u(\cdot,t)\) of (1.1) concentrates on well-separated regions in \(\mathscr{X}\), called _intermittent islands_. Much of the literature focusses on a detailed description of the size, shape and location of these islands, and on the profiles of the potential \(\xi(\cdot)\) and the solution \(u(\cdot,t)\) on them. A special role is played by the case where \(\xi\) is an i.i.d. random potential with a _double-exponential_ marginal distribution
\[\mathrm{P}(\xi(0)>u)=\mathrm{e}^{-\mathrm{e}^{u/\varrho}},\qquad u\in\mathbb{R}, \tag{1.2}\]
where \(\varrho\in(0,\infty)\) is a parameter. This distribution turns out to be critical, in the sense that the intermittent islands neither grow nor shrink with time, and represents a class of its own.
In the present paper we consider the case where \(\mathscr{X}\) is an _unrooted regular tree_\(\mathcal{T}\). Our focus will be on the asymptotics as \(t\to\infty\) of the total mass
\[U(t)=\sum_{x\in\mathcal{T}}u(x,t).\]
In earlier work [11], [12] we were concerned with the case where \(\mathscr{X}\) is a _rooted Galton-Watson tree_ in the _quenched_ setting, i.e., almost surely with respect to the random tree and the random potential. This work was restricted to the case where the random potential is given by (1.2) and the offspring distribution of the Galton-Watson tree has support in \(\mathbb{N}\backslash\{1\}\) with a sufficiently thin tail. In the present paper our focus will be on the _annealed_ setting, i.e., averaged over the random potential. We derive two terms in the asymptotic expansion as \(t\to\infty\) of the average total mass
\[\langle U(t)\rangle=\sum_{x\in\mathcal{T}}\langle u(x,t)\rangle,\]
where \(\langle\cdot\rangle\) denotes expectation with respect to the law of the random potential. It turns out that the annealed expansion _differs_ from the quenched expansion, even though the same variational formula plays a central role in the two second terms.
The derivation in the annealed setting forces us to follow a different route than in the quenched setting, based on various approximations of \(\mathcal{T}\) that are more delicate than the standard approximation of \(\mathbb{Z}^{d}\) (see [10, Chapter VIII]). This is the reason why we consider regular trees rather than Galton-Watson trees, to which we hope to return later. A key tool in the analysis is the large deviation principle for the empirical distribution of _Markov renewal processes_ on finite graphs derived in [15], which is recalled in Appendix A.
### The PAM on a graph
#### 1.2.1 Notations and definitions
Let \(G=(V,E)\) be a _simple connected undirected_ graph, either finite or countably infinite, with a designated vertex \(\mathcal{O}\) called the root. Let \(\Delta_{G}\) be the Laplacian on \(G\), i.e.,
\[(\Delta_{G}f)(x)=\sum_{\begin{subarray}{c}y\in V:\\ \{x,y\}\in E\end{subarray}}[f(y)-f(x)],\qquad x\in V,\,f\colon\,V\to\mathbb{R},\]
which acts along the edges of \(G\). Let \(\xi:=(\xi(x))_{x\in V}\) be a random potential attached to the vertices of \(G\), taking values in \(\mathbb{R}\). Our object of interest is the non-negative solution of the Cauchy problem with localised initial condition,
\[\begin{array}{lll}\partial_{t}u(x,t)&=&(\Delta_{G}u)(x,t)+\xi(x)u(x,t),&x \in V,\,t>0,\\ u(x,0)&=&\delta_{\mathcal{O}}(x),&x\in V.\end{array} \tag{1.3}\]
The quantity \(u(x,t)\) can be interpreted as the amount of mass at time \(t\) at site \(x\) when initially there is unit mass at \(\mathcal{O}\). The total mass at time \(t\) is \(U(t)=\sum_{x\in V}u(x,t)\). The total mass is given by the _Feynman-Kac formula_
\[U(t)=\mathbb{E}_{\mathcal{O}}\left(\mathrm{e}^{\int_{0}^{t}\xi(X_{s})\mathrm{ d}s}\right), \tag{1.4}\]
where \(X=(X_{t})_{t\geq 0}\) is the continuous-time random walk on the vertices \(V\) with jump rate \(1\) along the edges \(E\), and \(\mathbb{P}_{\mathcal{O}}\) denotes the law of \(X\) given \(X_{0}=\mathcal{O}\). Let \(\langle\cdot\rangle\) denote expectation with respect to \(\xi\). The quantity of interest in this paper is the average total mass at time \(t\):
\[\langle U(t)\rangle=\left\langle\mathbb{E}_{\mathcal{O}}\left(\mathrm{e}^{ \int_{0}^{t}\xi(X_{s})\mathrm{d}s}\right)\right\rangle. \tag{1.5}\]
#### 1.2.2 Assumption on the potential
Throughout the paper we assume that the random potential \(\xi=(\xi(x))_{x\in V}\) consists of i.i.d. random variables with a marginal distribution whose cumulant generating function
\[H(u)=\log\left\langle\mathrm{e}^{u\xi(\mathcal{O})}\right\rangle \tag{1.6}\]
satisfies the following:
**Assumption 1.1**.: **[Asymptotic double-exponential potential]**
There exists a \(\varrho\in(0,\infty)\) such that
\[\lim_{u\to\infty}uH^{\prime\prime}(u)=\varrho. \tag{1.7}\]
\(\spadesuit\)
**Remark 1.2**.: **[Double-exponential potential]** A special case of (1.7) is when \(\xi({\cal O})\) has the double-exponential distribution in (1.2), in which case
\[H(u)=\log\Gamma(\varrho u+1)\]
with \(\Gamma\) the gamma function.
\(\spadesuit\)
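A one-line check of this identity: under (1.2) the random variable \(Y=\mathrm{e}^{\xi(\mathcal{O})/\varrho}\) is standard exponential, so

\[\left\langle\mathrm{e}^{u\xi(\mathcal{O})}\right\rangle=\left\langle Y^{\varrho u}\right\rangle=\int_{0}^{\infty}y^{\varrho u}\,\mathrm{e}^{-y}\,\mathrm{d}y=\Gamma(\varrho u+1).\]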
By Stirling's approximation, (1.7) implies
\[H(u)=\varrho u\log(\varrho u)-\varrho u+o(u),\qquad u\to\infty. \tag{1.8}\]
Assumption 1.1 is more than enough to guarantee existence and uniqueness of the non-negative solution to (1.3) on any discrete graph with at most exponential growth (as can be inferred from the proof in [7], [8] for the case \(G=\mathbb{Z}^{d}\)). Since \(\xi\) is assumed to be i.i.d., we have from (1.5) that
\[\langle U(t)\rangle=\mathbb{E}_{\cal O}\left(\exp\bigg{[}\sum_{x\in V}H(\ell_ {t}(x))\bigg{]}\right), \tag{1.9}\]
where
\[\ell_{t}(x)=\int_{0}^{t}\mathds{1}\{X_{s}=x\}\,\mathrm{d}s,\qquad x\in V,\,t \geq 0,\]
is the local time of \(X\) at vertex \(x\) up to time \(t\).
#### 1.2.3 Variational formula
The following _characteristic variational formula_ is important for the description of the asymptotics of \(\langle U(t)\rangle\). Denote by \({\cal P}(V)\) the set of probability measures on \(V\). For \(p\in{\cal P}(V)\), define
\[I_{E}(p)=\sum_{\{x,y\}\in E}\left(\sqrt{p(x)}-\sqrt{p(y)}\,\right)^{2},\qquad J _{V}(p)=-\sum_{x\in V}p(x)\log p(x), \tag{1.10}\]
and set
\[\chi_{G}(\varrho)=\inf_{p\in{\cal P}(V)}[I_{E}(p)+\varrho J_{V}(p)],\qquad \varrho\in(0,\infty). \tag{1.11}\]
The first term in (1.11) is the quadratic form associated with the Laplacian, which is the large deviation rate function for the _empirical distribution_
\[L_{t}=\frac{1}{t}\int_{0}^{t}\delta_{X_{s}}\,\mathrm{d}s=\frac{1}{t}\sum_{x \in V}\ell_{t}(x)\delta_{x}\in{\cal P}(V) \tag{1.12}\]
(see e.g. [10, Section IV]). The second term in (1.11) captures the second order asymptotics of \(\sum_{x\in V}H(tp(x))\) as \(t\to\infty\) via (1.8) (see e.g. [10, Section VIII]).
#### 1.2.4 Reformulation
The following lemma pulls the leading order term out of the expansion and shows that the second order term is controlled by the large deviation principle for the empirical distribution.
**Lemma 1.3**.: **[Key object for the expansion]** _If \(G=(V,E)\) is finite, then_
\[\langle U(t)\rangle=\mathrm{e}^{H(t)+o(t)}\,\mathbb{E}_{\mathcal{O}}\left( \mathrm{e}^{-\varrho tJ_{V}(L_{t})}\right),\qquad t\to\infty.\]
_where \(J_{V}\) is the functional in (1.10) and \(L_{t}\) is the empirical distribution in (1.12)._
Proof.: Because \(\sum_{x\in V}\ell_{t}(x)=t\), we can rewrite (1.9) as
\[\langle U(t)\rangle =\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V}H(\ell_{t }(x))\bigg{]}\right)\] \[=\mathrm{e}^{H(t)}\,\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{\{}t \sum_{x\in V}\frac{1}{t}\left[H(\frac{\ell_{t}(x)}{t}t)-\frac{\ell_{t}(x)}{t}H (t)\right]\bigg{\}}\right).\]
Assumption 1.1 implies that \(H\) has the following scaling property (see [6]):
\[\lim_{t\to\infty}\frac{1}{t}[H(ct)-cH(t)]=\varrho c\log c\quad\text{uniformly in }c\in[0,1].\]
Applying this with \(c=\ell_{t}(x)/t\) and summing over the finitely many \(x\in V\), the exponent in the second line of the display above equals \(\varrho t\sum_{x\in V}\frac{\ell_{t}(x)}{t}\log\frac{\ell_{t}(x)}{t}+o(t)=-\varrho tJ_{V}(L_{t})+o(t)\), and the claim follows.
### The PAM on an unrooted regular tree: annealed total mass for large times and key variational formula
In this section we specialise to the case where \(G=\mathcal{T}=(V,E)\), an unrooted regular tree of degree \(d+1\) with \(d\geq 2\) (see Fig. 1). The main theorem of our paper is the following expansion.
**Theorem 1.4**.: **[Growth rate of the total mass]** _For any \(d\geq 4\), subject to Assumption 1.1,_
\[\frac{1}{t}\log\langle U(t)\rangle=\varrho\log(\varrho t)-\varrho-\chi_{ \mathcal{T}}(\varrho)+o(1),\qquad t\to\infty, \tag{1.13}\]
_where \(\chi_{\mathcal{T}}(\varrho)\) is the variational formula in (1.11) with \(G=\mathcal{T}\)._
Figure 1: An unrooted tree with degree \(3\) (= offspring size \(2\)).
The proof of Theorem 1.4 is given in Sections 2-3 and makes use of technical computations collected in Appendices A-E.
The main properties of the key quantity
\[\chi_{\mathcal{T}}(\varrho)=\inf_{p\in\mathcal{P}(V)}[I_{E}(p)+\varrho J_{V}(p)], \qquad\varrho\in(0,\infty), \tag{1.14}\]
are collected in the following theorem (see Fig. 2).
**Theorem 1.5**.: **[Properties of the variational formula]** _For any \(d\geq 2\) the following hold: (a) The infimum in (1.14) may be restricted to the set_
\[\mathcal{P}^{\downarrow}_{\mathcal{O}}(V)=\big{\{}p\in\mathcal{P}(V)\colon \operatorname{argmax}p=\mathcal{O},\,p\text{ is non-increasing in the distance to }\mathcal{O}\big{\}}. \tag{1.15}\]
_(b) For every \(\varrho\in(0,\infty)\), the infimum in (1.14) restricted to \(\mathcal{P}^{\downarrow}_{\mathcal{O}}(V)\) is attained, every minimiser \(\bar{p}\) is such that \(\bar{p}>0\) on \(V\), and \(\partial S_{R}=\sum_{x\in\partial B_{R}(\mathcal{O})}\bar{p}(x)\), \(R\in\mathbb{N}_{0}\), satisfies_
\[\sum_{R\in\mathbb{N}_{0}}\partial S_{R}\log(R+1)\leq\frac{d+1}{\varrho},\]
_where \(B_{R}(\mathcal{O})\) is the ball of radius \(R\) centred at \(\mathcal{O}\). (c) The function \(\varrho\mapsto\chi_{\mathcal{T}}(\varrho)\) is strictly increasing and globally Lipschitz continuous on \((0,\infty)\), with_
\[\lim_{\varrho\downarrow 0}\chi_{\mathcal{T}}(\varrho)=d-1,\qquad\lim_{\varrho \rightarrow\infty}\chi_{\mathcal{T}}(\varrho)=d+1.\]
The proof of Theorem 1.5 is given in Appendix C (see Fig. 2).
### Discussion
**1.** Theorem 1.4 identifies the scaling of the total mass up to and including terms that are exponential in \(t\). The first two terms in the right-hand side of (1.13) are the same as those of \(\frac{1}{t}H(t)\) (recall (1.8)). The third term is a correction that comes from the cost for \(X\) in the Feynman-Kac formula in (1.4) to create an _optimal local time profile_ somewhere in \(\mathcal{T}\), which is captured by the minimiser(s) of the variational formula in (1.14).
**2.** For the quenched model on a _rooted Galton-Watson tree_\(\mathcal{GW}\) we found in [11], [12] that
\[\frac{1}{t}\log U(t)=\varrho\log\left(\frac{\varrho t\vartheta}{\log\log t} \right)-\varrho-\chi(\varrho)+o(1),\qquad t\to\infty,\qquad\text{P}\times \mathfrak{P}\text{-a.s.}, \tag{1.16}\]
where P is the law of the potential, \(\mathfrak{P}\) is the law of \(\mathcal{GW}\), \(\vartheta\) is the logarithm of the mean of the offspring distribution, and
\[\chi(\varrho)=\inf_{\mathcal{S}\subset\mathcal{GW}}\chi_{\mathcal{S}}(\varrho) \tag{1.17}\]
with \(\chi_{\mathcal{S}}(\varrho)\) given by (1.11) and the infimum running over all subtrees of \(\mathcal{GW}\). This result was shown to be valid as soon as the offspring distribution has support in \(\mathbb{N}\backslash\{1\}\) (i.e., all degrees are at least 3) and has a sufficiently thin tail. The extra terms in (1.16) come from the cost for \(X\) in the Feynman-Kac formula in (1.4) to travel in a time of order \(o(t)\) to an optimal finite subtree with an optimal profile of the potential, referred to as _intermittent islands_, located at a distance of order \(\varrho t/\log\log t\) from \(\mathcal{O}\), and to subsequently spend most of its time on that subtree. In this cost the parameter \(\vartheta\) appears, which is absent in (1.13). It was shown in [11] that if \(\varrho\geq 1/\log(d_{\min}+1)\), with \(d_{\min}\) the minimum of the support of the offspring distribution, then the infimum in (1.17) is attained at the unrooted regular tree with degree \(d_{\min}+1\), i.e., the _minimal unrooted regular tree contained in \(\mathcal{GW}\)_, for which \(\vartheta=\log d_{\min}\). Possibly the bound on \(\varrho\) is redundant.
**3.** In view of Lemma 1.3 and the fact that Assumption 1.1 implies (1.8), we see that the proof of Theorem 1.4 amounts to showing that, on \(\mathcal{T}=(V,E)\),
\[\lim_{t\to\infty}\frac{1}{t}\log\mathbb{E}_{\mathcal{O}}\left(\mathrm{e}^{- \varrho tJ_{V}(L_{t})}\right)=-\chi_{\mathcal{T}}(\varrho).\]
We achieve this by deriving asymptotically matching upper and lower bounds. These bounds are obtained by truncating \(\mathcal{T}\) outside a ball of radius \(R\), to obtain a finite tree \(\mathcal{T}_{R}\), deriving the \(t\to\infty\) asymptotics for finite \(R\), and letting \(R\to\infty\) afterwards. For the lower bound we can use the standard truncation technique based on _killing_\(X\) when it exits \(\mathcal{T}_{R}\) and applying the large deviation principle for the empirical distribution of _Markov processes_ on finite graphs derived in [3]. For the upper bound, however, we cannot use the standard truncation technique based on _periodisation_ of \(X\) beyond radius \(R\), because \(\mathcal{T}\) is an expander graph (see [14, Chapter IV] for a list of known techniques on \(\mathbb{Z}^{d}\) and \(\mathbb{R}^{d}\)). Instead, we follow a route in which \(\mathcal{T}\) is approximated in successive stages by a version of \(\mathcal{T}_{R}\) with a _specific boundary condition_, based on monitoring \(X\) relative to its backbone to infinity. This route allows us to use the large deviation principle for the empirical distribution of _Markov renewal processes_ on finite graphs derived in [15], but we need the condition \(d\geq 4\) to _control_ the specific boundary condition in the limit as \(R\to\infty\) (see Remark E.1 for more details). The reason why the approximation of \(\mathcal{T}\) by finite subtrees is successful is precisely because in the parabolic Anderson model the _total mass tends to concentrate on intermittent islands_.
**4.** Theorem 1.5 shows that, modulo translations, the optimal strategy for \(L_{t}\) as \(t\to\infty\) is to be close to a minimiser of the variational formula in (1.14) restricted to \(\mathcal{P}_{\mathcal{O}}^{\downarrow}(V)\). Any minimiser is centred at \(\mathcal{O}\), strictly positive everywhere, non-increasing in the distance to \(\mathcal{O}\), and rapidly tending to zero. The following questions remain open:
1. Is the minimiser \(\bar{p}\) unique modulo translation?
2. Does \(\bar{p}(x)\) satisfy \(\lim_{|x|\to\infty}|x|^{-1}\log\bar{p}(x)=-\infty\), with \(|x|\) the distance between \(x\) and \(\mathcal{O}\)?
3. Is \(\bar{p}\) radially symmetric?
4. Is \(\varrho\mapsto\chi_{\mathcal{T}}(\varrho)\) analytic on \((0,\infty)\)?
We expect the answer to be yes for (1) and (2), and to be no for (3) and (4).
## 2 Proof of the main theorem: lower bound
In this section we prove the lower bound in Theorem 1.4, which is standard and straightforward. In Section 2.1 we obtain a lower bound in terms of a variational formula by _killing_ the random walk when it exits \(\mathcal{T}_{R}\). In Section 2.2 we derive the lower bound of the expansion by letting \(R\to\infty\) in the variational formula.
### Killing and lower variational formula
Fix \(R\in\mathbb{N}\). Let \(\mathcal{T}_{R}\) be the subtree of \(\mathcal{T}=(V,E)\) consisting of all the vertices that are within distance \(R\) of the root \(\mathcal{O}\) and all the edges connecting them. Put \(V_{R}=V(\mathcal{T}_{R})\) and \(E_{R}=E(\mathcal{T}_{R})\). Let \(\tau_{R}=\inf\{t\geq 0\colon X_{t}\notin V_{R}\}\) denote the first time that \(X\) exits \(\mathcal{T}_{R}\). It follows from (1.9) that
\[\langle U(t)\rangle\geq\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V _{R}}H(\ell_{t}(x))\bigg{]}\mathds{1}\big{\{}\tau_{R}>t\big{\}}\right).\]
Since \(\mathcal{T}_{R}\) is finite, Lemma 1.3 gives
\[\langle U(t)\rangle\geq\mathrm{e}^{H(t)+o(t)}\,\mathbb{E}_{\mathcal{O}}\left[ \mathrm{e}^{-\varrho tJ_{V}(L_{t})}\mathds{1}\big{\{}\tau_{R}>t\big{\}}\right]\]
with \(J_{V}\) the functional defined in (1.10). As shown in [5] (see also [8]), the family of sub-probability distributions \(\mathbb{P}_{\mathcal{O}}(L_{t}\in\cdot\,,\tau_{R}>t)\), \(t\geq 0\), satisfies the LDP on \(\mathcal{P}^{R}(V)=\{p\in\mathcal{P}(V)\colon\operatorname{supp}(p)\subset V_{R}\}\) with rate \(t\) and with rate function \(I_{E}\) defined in (1.10). This is the _standard_ LDP for the empirical distribution of _Markov processes_. Therefore, by Varadhan's Lemma,
\[\lim_{t\to\infty}\frac{1}{t}\log\mathbb{E}_{\mathcal{O}}\left[\mathrm{e}^{- \varrho tJ_{V}(L_{t})}\mathds{1}\big{\{}\tau_{R}>t\big{\}}\right]=-\chi_{R}^{ -}(\varrho)\]
with
\[\chi_{R}^{-}(\varrho)=\inf_{p\in\mathcal{P}^{R}(V)}[I_{E}(p)+\varrho J_{V}(p)], \tag{2.1}\]
where we use that \(p\mapsto J_{V}(p)\) is bounded and continuous (in the discrete topology) on \(\mathcal{P}^{R}(V)\). Note that
\[\lim_{t\to\infty}\frac{1}{t}\log\mathbb{P}_{\mathcal{O}}(\tau_{R}>t)=-\inf_{p \in\mathcal{P}^{R}(V)}I_{E}(p)<0,\]
which is non-zero because any \(p\in\mathcal{P}^{R}(V)\) is non-constant on \(V\). The expression in (2.1) is the same as (1.11) with \(G=\mathcal{T}\), except that \(p\) is restricted to \(V_{R}\).
### Limit of the lower variational formula
Clearly, \(R\mapsto\chi_{R}^{-}(\varrho)\) is non-increasing. To complete the proof of the lower bound in Theorem 1.4, it remains to show the following.
**Lemma 2.1**.: \(\limsup_{R\to\infty}\chi_{R}^{-}(\varrho)\leq\chi_{\mathcal{T}}(\varrho)\)_._
Proof.: Pick any \(p\in\mathcal{P}(V)\) such that \(I_{E}(p)<\infty\) and \(J_{V}(p)<\infty\). Let \(p^{{}^{R}}\) be the projection of \(p\) onto \(V_{R}\), i.e.,
\[p^{{}^{R}}(x)=\left\{\begin{array}{ll}p(x),&x\in\operatorname{int}(V_{R}), \\ \sum_{y\geq x}p(y),&x\in\partial V_{R},\end{array}\right.\]
where \(y\geq x\) means that \(y\) is an element of the progeny of \(x\) in \(\mathcal{T}\). Since \(p^{{}^{R}}\in\mathcal{P}^{R}(V)\), we have from (2.1) that \(\chi_{R}^{-}(\varrho)\leq I_{E}(p^{{}^{R}})+\varrho J_{V}(p^{{}^{R}})\). Trivially, \(\lim_{R\to\infty}I_{E}(p^{{}^{R}})=I_{E}(p)\) and \(\lim_{R\to\infty}J_{V}(p^{{}^{R}})=J_{V}(p)\), and so we have \(\limsup_{R\to\infty}\chi_{R}^{-}(\varrho)\leq I_{E}(p)+\varrho J_{V}(p)\). Since this bound holds for arbitrary \(p\in\mathcal{P}(V)\) with \(I_{E}(p)<\infty\) and \(J_{V}(p)<\infty\), the claim follows from (1.11).
## 3 Proof of the main theorem: upper bound
In this section we prove the upper bound in Theorem 1.4, which is more laborious and requires a more delicate approach than the standard periodisation argument used on \(\mathbb{Z}^{d}\). In Section 3.1 we obtain an upper bound in terms of a variational formula on a version of \(\mathcal{T}_{R}\) with a specific boundary condition. The argument comes in four steps, encapsulated in Lemmas 3.1-3.6 below:
1. _Condition on the backbone_ of \(X\) (Section 3.1.1).
2. _Project_\(X\) onto a concatenation of finite subtrees attached to this backbone that are _rooted_ versions of \(\mathcal{T}_{R}\) (Section 3.1.2).
3. _Periodise_ the projected \(X\) to obtain a _Markov renewal process_ on a single finite subtree and show that the periodisation can be chosen such that the local times at the vertices on the _boundary_ of the finite subtree are negligible (Section 3.1.3).
4. Use the _large deviation principle_ for the empirical distribution of _Markov renewal processes_ derived in [15] to obtain a variational formula on a single subtree (Section 3.1.4).
In Section 3.2 we derive the upper bound of the expansion by letting \(R\to\infty\) in the variational formula.
### Backbone, projection, periodisation and upper variational formula
#### 3.1.1 Backbone
For \(r\in\mathbb{N}_{0}\), let \(\tau_{r}\) be the last time when \(X\) visits \(\partial B_{r}(\mathcal{O})\), the boundary of the ball of radius \(r\) around \(\mathcal{O}\). Then the sequence \(\operatorname{BB}=(X_{\tau_{r}})_{r\in\mathbb{N}_{0}}\) forms the backbone of \(X\), running from \(\mathcal{O}\) to infinity.
**Lemma 3.1**.: **[Condition on a backbone]** _For every backbone \(\mathrm{bb}\) and every \(t\geq 0\),_
\[\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V(\mathcal{T})}H(\ell_{t}(x))\bigg{]}\right)=\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V(\mathcal{T})}H(\ell_{t}(x))\bigg{]}\ \bigg{|}\ \mathrm{BB}=\mathrm{bb}\right).\]
Proof.: By symmetry, the conditional expectation in the right-hand side does not depend on the choice of \(\mathrm{bb}\). Indeed, permutations of the edges away from the root do not affect the law of \(\sum_{x\in V(\mathcal{T})}H(\ell_{t}(x))\).
Turn the one-sided backbone into a two-sided backbone by adding a second backbone from \(\mathcal{O}\) to infinity. By symmetry, the choice of this second backbone is arbitrary, say \(\mathrm{bb}^{\prime}\). Redraw \(\mathcal{T}\), and call the result \(\mathcal{T}^{\mathbb{Z}}\), by representing \(\mathrm{bb}^{\prime}\cup\mathrm{bb}\) as \(\mathbb{Z}\) and representing the rest of \(\mathcal{T}\) as a sequence of _rooted trees_ \(\mathcal{T}^{*}=(\mathcal{T}^{*}_{x})_{x\in\mathbb{Z}}\) hanging off \(\mathbb{Z}\) (see Fig. 3). In \(\mathcal{T}^{*}_{x}\), the root sits at \(x\) and has \(d-1\) downward edges, while all lower vertices have \(d\) downward edges.
Let \(X^{\mathbb{Z}}=(X^{\mathbb{Z}}_{t})_{t\geq 0}\) be the random walk on \(\mathcal{T}^{\mathbb{Z}}\) and \((\ell^{\mathbb{Z}}_{t}(x))_{x\in\mathcal{T}^{\mathbb{Z}}}\) the local times of \(X^{\mathbb{Z}}\) at time \(t\).
**Lemma 3.2**.: **[Representation of \(\mathcal{T}\) as a backbone with rooted trees]** _For every \(\mathrm{bb}\) and \(t\geq 0\),_
\[\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V(\mathcal{T})}H(\ell_{t}(x))\bigg{]}\ \bigg{|}\ \mathrm{BB}=\mathrm{bb}\right)=\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V(\mathcal{T}^{\mathbb{Z}})}H(\ell^{\mathbb{Z}}_{t}(x))\bigg{]}\ \bigg{|}\ X^{\mathbb{Z}}_{\infty}=+\infty\right).\]
Proof.: Simply redraw \(\mathcal{T}\) as \(\mathcal{T}^{\mathbb{Z}}\).
Note that \(X^{\mathbb{Z}}\) is a Markov process whose sojourn times have distribution \(\mathrm{EXP}(d+1)\) and whose steps are drawn uniformly at random from the \(d+1\) edges that are incident to each vertex.
#### 3.1.2 Projection
For \(R\in\mathbb{N}\backslash\{1\}\), cut \(\mathbb{Z}\) into slices of length \(R\), i.e.,
\[\mathbb{Z}=\cup_{k\in\mathbb{Z}}(z+(kR+I)),\qquad I=\{0,1,\ldots,R-1\},\]
where \(z\) is to be chosen later. Apply the following two maps to \(\mathcal{T}^{\mathbb{Z}}\) (in the order presented):
Figure 3: Redrawing of \(\mathcal{T}\) as \(\mathcal{T}^{\mathbb{Z}}\): a two-sided backbone \(\mathbb{Z}\) with a sequence \(\mathcal{T}^{*}=(\mathcal{T}^{*}_{x})_{x\in\mathbb{Z}}\) of rooted trees hanging off. The upper index \(*\) is used to indicate that the tree is rooted.
1. For each \(k\in\mathbb{Z}\), fold \(\mathcal{T}^{*}_{z+(kR+(R-1))}\) onto \(\mathcal{T}^{*}_{z+(k+1)R}\) by folding the \(d-1\) edges downwards from the root on top of the edge in \(\mathbb{Z}\) connecting \(z+(kR+(R-1))\) and \(z+(k+1)R\), and putting the \(d\) infinite rooted trees hanging off each of these \(d-1\) edges on top of the rooted tree \(\mathcal{T}^{*}_{z+(k+1)R}\) hanging off \(z+(k+1)R\). Note that each of the \(d\) infinite rooted trees is a copy of \(\mathcal{T}^{*}_{z+(k+1)R}\).
2. For each \(k\in\mathbb{Z}\) and \(m\in\{0,1,\ldots,R-2\}\), cut off all the infinite subtrees in \(\mathcal{T}^{*}_{z+(kR+m)}\) whose roots are at depth \((R-1)-m\). Note that the total number of leaves after the cutting equals \[(d-1)\sum_{m=0}^{R-2}d^{(R-2)-m}=(d-1)d^{R-2}\,\frac{1-d^{-(R-1)}}{1-d^{-1}}=d^ {R-1}-1,\] which is the same as the total number of leaves of the rooted tree \(\mathcal{T}^{*}_{R}\) of depth \(R-1\) (i.e., with \(R\) generations) minus \(1\) (a fact we will need below).
By doing so we obtain a _concatenation_ of finite units
\[\mathcal{U}_{R}=(\mathcal{U}_{R}[k])_{k\in\mathbb{Z}}\]
that are rooted trees of depth \(R-1\) (see Fig. 4). Together with the two maps that turn \(\mathcal{T}^{\mathbb{Z}}\) into \(\mathcal{U}_{R}\), we apply two maps to \(X^{\mathbb{Z}}\):
1. All excursions of \(X^{\mathbb{Z}}\) in the infinite subtrees that are _folded to the right and on top_ are projected accordingly.
2. All excursions of \(X^{\mathbb{Z}}\) in the infinite subtrees that are _cut off_ are replaced by a sojourn of \(X^{\mathcal{U}_{R}}\) in the _tadpoles_ that replace these subtrees (see Fig. 4).
The resulting path, which we call \(X^{\mathcal{U}_{R}}=(X^{\mathcal{U}_{R}}_{t})_{t\geq 0}\), is a _Markov renewal process_ with the following properties:
* The sojourn times in all the vertices that are not tadpoles have distribution \(\mathrm{EXP}(d+1)\).
* The sojourn times in all the tadpoles have distribution \(\psi\), defined as the conditional distribution of the _return time_\(\tau\) of the random walk on the infinite rooted tree \(\mathcal{T}^{*}\)_given_ that \(\tau<\infty\) (see [13] for a proper definition).
* The transitions into the tadpoles have probability \(\frac{d}{d+1}\), the transitions out of the tadpoles have probability \(1\) (because of the condition \(X^{\mathbb{Z}}_{\infty}=+\infty\)).
* The transitions from \(z+(kR+(R-1))\) to \(z+(k+1)R\) have probability \(\frac{d}{d+1}\), while the reverse transitions have probability \(\frac{1}{d+1}\).
Write \((\ell^{\mathcal{U}_{R}}_{t}(x))_{x\in\mathcal{U}_{R}}\) to denote the local times of \(X^{\mathcal{U}_{R}}\) at time \(t\).
**Lemma 3.3**.: **[Projection onto a concatenation of finite subtrees]** _For every \(R\in\mathbb{N}\setminus\{1\}\) and \(t\geq 0\),_
\[\mathbb{E}_{\mathcal{O}} \left(\exp\left[\sum_{x\in V(\mathcal{T}^{\mathbb{Z}})}H(\ell^{ \mathbb{Z}}_{t}(x))\right]\ \bigg{|}\ X^{\mathbb{Z}}_{\infty}=+\infty\right)\] \[\leq\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V( \mathcal{U}_{R})}H(\ell^{\mathcal{U}_{R}}_{t}(x))\bigg{]}\ \bigg{|}\ X^{\mathcal{U}_{R}}_{\infty}=+\infty\right).\]
Proof.: The maps that are applied to turn \(X^{\mathbb{Z}}\) into \(X^{\mathcal{U}_{R}}\) are such that local times are _stacked on top of each other_. Since \(H\) defined in (1.6) is convex with \(H(0)=0\), it is superadditive, i.e., \(H(\ell)+H(\ell^{\prime})\leq H(\ell+\ell^{\prime})\) for all \(\ell,\ell^{\prime}\in[0,\infty)\), which implies the inequality.
#### 3.1.3 Periodisation
Our next observation is that the condition \(\{X^{\mathcal{U}_{R}}_{\infty}=+\infty\}\) is _redundant_.
**Lemma 3.4**.: **[Condition redundant]** _For every \(R\in\mathbb{N}\backslash\{1\}\) and \(t\geq 0\),_
\[\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V(\mathcal{U}_{R})}H(\ell^{\mathcal{U}_{R}}_{t}(x))\bigg{]}\ \bigg{|}\ X^{\mathcal{U}_{R}}_{\infty}=+\infty\right)=\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V(\mathcal{U}_{R})}H(\ell^{\mathcal{U}_{R}}_{t}(x))\bigg{]}\right).\]
Proof.: The event \(\{X^{\mathcal{U}_{R}}_{\infty}=+\infty\}\) has probability \(1\) because on the edges connecting the units of \(\mathcal{U}_{R}\) (see Fig. 4) there is a drift downwards. To see why, note that \(\frac{1}{d+1}<\frac{1}{2}<\frac{d}{d+1}\) because \(d\geq 2\), and use that a one-dimensional random walk with drift is transient to the right [16].
Since \(\mathcal{U}_{R}\) is periodic, we can _fold_\(X^{\mathcal{U}_{R}}\) onto a single unit \(\mathcal{W}_{R}\), to obtain a Markov renewal process \(X^{\mathcal{W}_{R}}\) on \(\mathcal{W}_{R}\) (see Fig. 5) in which the transition from the top vertex to the right-most bottom vertex has probability \(\frac{1}{d+1}\), while the reverse transition has probability \(\frac{d}{d+1}\). Clearly, the sojourn time distributions are not affected by the folding and therefore remain as above. Write \((\ell^{\mathcal{W}_{R}}_{t}(x))_{x\in V(\mathcal{W}_{R})}\) to denote the local times of \(X^{\mathcal{W}_{R}}\) at time \(t\).
**Lemma 3.5**.: **[Periodisation to a single finite subtree]** _For every \(R\in\mathbb{N}\backslash\{1\}\) and \(t\geq 0\),_
\[\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V(\mathcal{U}_{R})}H( \ell^{\mathcal{U}_{R}}_{t}(x))\bigg{]}\right)\leq\mathbb{E}_{\mathcal{O}} \left(\exp\bigg{[}\sum_{x\in V(\mathcal{W}_{R})}H(\ell^{\mathcal{W}_{R}}_{t}( x))\bigg{]}\right).\]
Figure 4: A unit in \(\mathcal{U}_{R}\). Inside is a rooted tree \(\mathcal{T}_{R}^{*}\) of depth \(R-1\), of which only the root and the leaves are drawn. Hanging off the leaves at depth \(R-1\) from the root are tadpoles, except for the right-most bottom vertex, which has a downward edge that connects to the root of the next unit. The vertices marked by a bullet form the boundary of \(\mathcal{U}_{R}\), the vertices marked by a square box form the tadpoles of \(\mathcal{U}_{R}\).
Proof.: The periodisation again stacks local times on top of each other.
Before we proceed we make a _crucial observation_, namely, we may still choose the shift \(z\in\{0,1,\dots,R-1\}\) of the cuts of the two-sided backbone \(\mathbb{Z}\) (recall Fig. 3). We will do so in such a way that the _local time_ up to time \(t\) spent in the set \(\partial_{\mathcal{U}_{R}}\) defined by
\[\begin{split}\partial_{\mathcal{U}_{R}}&=\text{all vertices at the top or at the bottom of a unit in }\mathcal{U}_{R}\\ &=\text{all vertices marked by $\bullet$ in Fig. 4}\end{split} \tag{3.1}\]
is at most \(t/R\). After the periodisation these vertices are mapped to the set \(\partial_{\mathcal{W}_{R}}\) defined by
\[\begin{split}\partial_{\mathcal{W}_{R}}&=\text{all vertices at the top or at the bottom of }\mathcal{W}_{R}\\ &=\text{all vertices marked by $\bullet$ in Fig. 5}.\end{split}\]
**Lemma 3.6**.: **[Control on the time spent at the boundary]** _For every \(R\in\mathbb{N}\backslash\{1\}\) and \(t\geq 0\),_
\[\begin{split}\mathbb{E}_{\mathcal{O}}&\left(\exp \bigg{[}\sum_{x\in V(\mathcal{U}_{R})}H(\ell_{t}^{\mathcal{U}_{R}}(x))\bigg{]} \right)\\ &\leq\mathbb{E}_{\mathcal{O}}\left(\exp\bigg{[}\sum_{x\in V( \mathcal{W}_{R})}H(\ell_{t}^{\mathcal{W}_{R}}(x))\bigg{]}\,1_{\left\{\frac{1}{ t}\sum_{x\in\partial_{\mathcal{W}_{R}}}\ell_{t}^{\mathcal{W}_{R}}(x)\leq 1/R\right\}} \right).\end{split}\]
Proof.: For different \(z\) the sets of vertices making up \(\partial_{\mathcal{U}_{R}}\) correspond to _disjoint_ sets of vertices in \(\mathcal{T}^{\mathbb{Z}}\) (see Fig. 4). Since \(\sum_{x\in V(\mathcal{T}^{\mathbb{Z}})}\ell_{t}^{\mathbb{Z}}(x)=t\) for all \(t\geq 0\), it follows that there exists a \(z\) for which \(\sum_{x\in\partial_{\mathcal{U}_{R}}}\ell_{t}^{\mathbb{Z}}(x)\leq t/R\). Therefore the upper bound in Lemma 3.3 can be strengthened to the one that is claimed.
#### 3.1.4 Upper variational formula
Lemmas 3.1-3.6 provide us with an upper bound for the average total mass (recall (1.9)) on the _infinite_ tree \(\mathcal{T}\) in terms of the same quantity on the _finite_ tree-like unit \(\mathcal{W}_{R}\) with a _specific boundary condition_. Along the way we have paid a price: the sojourn times in the tadpoles are _no longer_ exponentially distributed, and the transition probabilities into and out
Figure 5: A unit \(\mathcal{W}_{R}\) with the top vertex and the right-most bottom vertex connected by an edge.
of the tadpoles and between the top vertex and the right-most bottom vertex are _biased_. We therefore need the large deviation principle for the empirical distribution of Markov renewal processes derived in [15], which we can now apply to the upper bound.
Since \(\mathcal{W}_{R}\) is finite, Lemma 1.3 gives
\[\langle U(t)\rangle\leq\mathrm{e}^{H(t)+o(t)}\,\mathbb{E}_{\mathcal{O}}\left(\mathrm{e}^{-\varrho tJ_{V(\mathcal{W}_{R})}(L_{t}^{\mathcal{W}_{R}})}\,1_{\left\{L_{t}^{\mathcal{W}_{R}}(\partial_{\mathcal{W}_{R}})\leq 1/R\right\}}\right)\]
with \(J_{V}\) the functional defined in (1.10). The following lemma controls the expectation in the right-hand side.
**Lemma 3.7**.: **[Scaling of the key expectation]** _For every \(R\in\mathbb{N}\backslash\{1\}\),_
\[\lim_{t\to\infty}\frac{1}{t}\log\mathbb{E}_{\mathcal{O}}\left(\mathrm{e}^{- \varrho tJ_{V(\mathcal{W}_{R})}(L_{t}^{\mathcal{W}_{R}})}\,1_{\left\{L_{t}^{ \mathcal{W}_{R}}(\partial_{\mathcal{W}_{R}})\leq 1/R\right\}}\right)=-\chi_{R}^{+}( \varrho),\]
_where_
\[\chi_{R}^{+}(\varrho)=\inf_{\begin{subarray}{c}p\in\mathcal{P}(V(\mathcal{W}_{R})):\\ p(\partial_{\mathcal{W}_{R}})\leq 1/R\end{subarray}}\,\left\{I_{E(\mathcal{W}_{R})}^{\dagger}(p)+\varrho J_{V(\mathcal{W}_{R})}(p)\right\}, \tag{3.2}\]
_with_
\[I_{E(\mathcal{W}_{R})}^{\dagger}(p)=\inf_{\beta\in(0,\infty)}\inf_{q\in \mathcal{P}(V(\mathcal{W}_{R}))}\big{[}\widehat{K}(\beta q)+\widetilde{K}(p \mid\beta q)\big{]}, \tag{3.3}\]
_where_
\[\widehat{K}(\beta q) = \sup_{\bar{q}\in\mathcal{P}(V(\mathcal{W}_{R}))}\sum_{x\in V(\mathcal{W}_{R})}\beta q(x)\log\left(\tfrac{\bar{q}(x)}{\sum_{y\in V(\mathcal{W}_{R})}\pi_{x,y}\bar{q}(y)}\right), \tag{3.4}\] \[\widetilde{K}(p\mid\beta q) = \sum_{x\in V(\mathcal{W}_{R})}\beta q(x)\left(\mathcal{L}\lambda_{x}\right)\left(\tfrac{p(x)}{\beta q(x)}\right), \tag{3.5}\]
_with_
\[(\mathcal{L}\lambda_{x})(\alpha) = \sup_{\theta\in\mathbb{R}}[\alpha\theta-\lambda_{x}(\theta)], \qquad\alpha\in[0,\infty), \tag{3.6}\] \[\lambda_{x}(\theta) = \log\int_{0}^{\infty}\mathrm{e}^{\theta\tau}\psi_{x}(\mathrm{d} \tau),\quad\theta\in\mathbb{R}, \tag{3.7}\]
_where \(\psi_{x}=\psi\) when \(x\) is a tadpole, \(\psi_{x}=\mathrm{EXP}(d+1)\) when \(x\) is not a tadpole, and \(\pi_{x,y}\) is the transition kernel of the discrete-time Markov chain on \(V(\mathcal{W}_{R})\) embedded in \(X^{\mathcal{W}_{R}}\)._
Proof.: Apply the large deviation principle derived in [15], which we recall in Proposition A.1 in Appendix A.
The expression in (3.2) is similar to (1.11) with \(G=\mathcal{W}_{R}\), except that the rate function \(I_{E(\mathcal{W}_{R})}^{\dagger}\) in (3.3) is more involved than the rate function \(I_{E}\) in (1.10).
### Limit of the upper variational formula
The prefactor \(\mathrm{e}^{H(t)+o(t)}\) in Lemma 1.3 accounts for the terms \(\varrho\log(\varrho t)-\varrho\) in the right-hand side of (1.13) (recall (1.8)). In view of Lemma 3.7, in order to complete the proof of the upper bound in Theorem 1.4 it suffices to prove the following lemma.
**Lemma 3.8**.: _For any \(d\geq 4\), \(\liminf_{R\to\infty}\chi_{R}^{+}(\varrho)\geq\chi_{\mathcal{T}}(\varrho)\)._
Proof.: The proof is given in Appendix E and relies on two steps:
* Show that, for \(d\geq 4\), \[I_{E(\mathcal{W}_{R})}^{\dagger}(p)\geq I_{E(\mathcal{W}_{R})}^{+}(p)+O(1/R)\] (3.8) with \(I_{E(\mathcal{W}_{R})}^{+}\) a rate function similar to the _standard rate function_\(I_{E(\mathcal{W}_{R})}\) given by (1.10).
* Show that, for \(d\geq 2\), \[\widehat{\chi}_{R}^{+}(\varrho)=\inf_{\begin{subarray}{c}p\in\mathcal{P}(V(\mathcal{W}_{R})):\\ p(\partial_{\mathcal{W}_{R}})\leq 1/R\end{subarray}}\left\{I_{E(\mathcal{W}_{R})}^{+}(p)+\varrho J_{V(\mathcal{W}_{R})}(p)\right\}\] satisfies \[\liminf_{R\to\infty}\widehat{\chi}_{R}^{+}(\varrho)\geq\chi_{\mathcal{T}}(\varrho).\] (3.9)
## Appendix A Large deviation principle for the local times of Markov renewal processes
The following LDP, which was used in the proof of Lemma 3.7, was derived in [15, Proposition 1.2], and generalises the LDP for the empirical distribution of a Markov process on a finite state space derived in [3]. See [10, Chapter III] for the definition of the LDP.
**Proposition A.1**.: _Let \(Y=(Y_{t})_{t\geq 0}\) be the Markov renewal process on the finite graph \(G=(V,E)\) with transition kernel \((\pi_{x,y})_{\{x,y\}\in E}\) and with sojourn times whose distributions \((\psi_{x})_{x\in V}\) have support \((0,\infty)\). For \(t>0\), let \(L_{t}^{Y}\) denote the empirical distribution of \(Y\) at time \(t\) (see (1.12)). Then the family \((\mathbb{P}(L_{t}^{Y}\in\cdot))_{t>0}\) satisfies the LDP on \(\mathcal{P}(V)\) with rate \(t\) and with rate function \(I_{E}^{\dagger}\) given by_
\[I_{E}^{\dagger}(p)=\inf_{\beta\in(0,\infty)}\inf_{q\in\mathcal{P}(V)}\left[ \widehat{K}(\beta q)+\widetilde{K}(p\mid\beta q)\right]\]
_with_
\[\widehat{K}(\beta q) = \sup_{\bar{q}\in\mathcal{P}(V)}\sum_{x\in V}\beta q(x)\log\left(\tfrac{\bar{q}(x)}{\sum_{y\in V}\pi_{x,y}\bar{q}(y)}\right),\] (A.1) \[\widetilde{K}(p\mid\beta q) = \sum_{x\in V}\beta q(x)\left(\mathcal{L}\lambda_{x}\right)\left(\tfrac{p(x)}{\beta q(x)}\right),\] (A.2)
_where_
\[(\mathcal{L}\lambda_{x})(\alpha) = \sup_{\theta\in\mathbb{R}}[\alpha\theta-\lambda_{x}(\theta)],\ \ \alpha\in[0,\infty),\] \[\lambda_{x}(\theta) = \log\int_{0}^{\infty}\mathrm{e}^{\theta\tau}\psi_{x}(\mathrm{d} \tau),\ \ \ \ \theta\in\mathbb{R}.\]
The rate function \(I_{E}^{\dagger}\) consists of two parts: \(\widehat{K}\) in (A.1) is the rate function of the LDP on \(\mathcal{P}(V)\) for the empirical distribution of the discrete-time Markov chain on \(V\) with transition kernel \((\pi_{x,y})_{\{x,y\}\in E}\) (see [10, Theorem IV.7]), while \(\widetilde{K}\) in (A.2) is the rate function of the LDP on \(\mathcal{P}(0,\infty)\) for the empirical mean of the sojourn times, given the empirical distribution of the discrete-time Markov chain. Moreover, \(\lambda_{x}\) is the cumulant generating function associated with \(\psi_{x}\), and \(\mathcal{L}\lambda_{x}\) is the Legendre transform of \(\lambda_{x}\), playing the role of the Cramér rate function for the empirical mean of the i.i.d. sojourn times at \(x\). The parameter \(\beta\) plays the role of the ratio between the continuous time scale and the discrete time scale.
## Appendix B Sojourn times: cumulant generating functions and Legendre transforms
In Appendix B.1 we recall general properties of cumulant generating functions and Legendre transforms, in Appendices B.2 and B.3 we identify both for the two sojourn time distributions arising in Lemma 3.7, respectively.
### General observations
Let \(\lambda\) be the cumulant generating function of a non-degenerate sojourn time distribution \(\phi\), and \(\mathcal{L}\lambda\) be the Legendre transform of \(\lambda\) (recall (3.7)). Both \(\lambda\) and \(\mathcal{L}\lambda\) are strictly convex, are analytic in the interior of their domain, and achieve a unique zero at \(\theta=0\), respectively, \(\alpha=\alpha_{c}\) with \(\alpha_{c}=\int_{0}^{\infty}\tau\phi(\mathrm{d}\tau)\). Furthermore, \(\lambda\) diverges at some \(\theta_{c}\in(0,\infty]\) and has slope \(\alpha_{c}\) at \(\theta=0\). Moreover, if the slope of \(\lambda\) diverges at \(\theta_{c}\), then \(\mathcal{L}\lambda\) is finite on \((0,\infty)\).
The supremum in the Legendre transform defining \((\mathcal{L}\lambda)(\alpha)\) is uniquely taken at \(\theta=\theta(\alpha)\) solving the equation
\[\lambda^{\prime}(\theta(\alpha))=\alpha.\]
The tangent of \(\lambda\) with slope \(\alpha\) at \(\theta(\alpha)\) intersects the vertical axis at \((-\mathcal{L}\lambda)(\alpha)\), i.e., putting
\[\mu(\alpha)=\lambda(\theta(\alpha))\] (B.1)
we have
\[\mu(\alpha)=\alpha(\mathcal{L}\lambda)^{\prime}(\alpha)-(\mathcal{L}\lambda)( \alpha).\] (B.2)
(See Fig. 6.) Note that by differentiating (B.2) we get
\[\mu^{\prime}(\alpha)=\alpha(\mathcal{L}\lambda)^{\prime\prime}(\alpha),\]
which shows that \(\alpha\mapsto\mu(\alpha)\) is strictly increasing and hence invertible, with inverse function \(\mu^{-1}\). Note that by differentiating the relation \((\mathcal{L}\lambda)(\alpha)=\alpha\theta(\alpha)-\lambda(\theta(\alpha))\) we get
\[(\mathcal{L}\lambda)^{\prime}(\alpha)=\theta(\alpha).\] (B.3)
A further relation that is useful reads
\[(\mathcal{L}\lambda)^{\prime}\circ\mu^{-1}=\lambda^{-1},\] (B.4)
which follows because \(\mu=\lambda\circ\theta\) by (B.1) and \((\mathcal{L}\lambda)^{\prime}=\theta\) by (B.3).
### Exponential sojourn time
If \(\phi=\mathrm{EXP}(d+1)\), then the cumulant generating function \(\lambda(\theta)=\log\int_{0}^{\infty}\mathrm{e}^{\theta\tau}\phi(\mathrm{d}\tau)\) is given by
\[\lambda(\theta)=\begin{cases}\log\left(\frac{d+1}{d+1-\theta}\right),&\theta<d+1,\\ \infty,&\theta\geq d+1.\end{cases}\]
To find \((\mathcal{L}\lambda)(\alpha)\), we compute
\[\frac{\partial}{\partial\theta}[\alpha\theta-\log(\tfrac{d+1}{d+1-\theta})]=\alpha-\frac{1}{d+1-\theta},\qquad\frac{\partial^{2}}{\partial\theta^{2}}[\alpha\theta-\log(\tfrac{d+1}{d+1-\theta})]=-\frac{1}{(d+1-\theta)^{2}}<0.\]
Hence the supremum in (3.6) is uniquely taken at
\[\theta(\alpha)=d+1-\tfrac{1}{\alpha},\qquad\alpha>0,\]
so that
\[(\mathcal{L}\lambda)(\alpha)=\alpha(d+1)-1-\log[\alpha(d+1)],\qquad\alpha>0.\] (B.5)
Thus, \(\lambda\) and \(\mathcal{L}\lambda\) have the shape in Fig. 7, with \(\theta_{c}=d+1\) and \(\alpha_{c}=\frac{1}{d+1}\), and with \(\lim_{\theta\uparrow\theta_{c}}\lambda(\theta)=\infty\) and \(\lim_{\theta\uparrow\theta_{c}}\lambda^{\prime}(\theta)=\infty\).
Note that \(\mu\) has domain \((0,\infty)\) and range \(\mathbb{R}\).
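As a quick numerical cross-check of (B.5) (a sketch added here; the values of \(d\) and \(\alpha\) are arbitrary), the supremum in (3.6) can be evaluated on a grid of \(\theta<d+1\) and compared with the closed form:

```python
import numpy as np

d, alpha = 3, 0.2                                        # arbitrary illustrative values
theta = np.linspace(-50.0, d + 1 - 1e-6, 200001)
lam = np.log((d + 1) / (d + 1 - theta))                  # lambda(theta) for EXP(d+1)
numeric = np.max(alpha * theta - lam)                    # sup_theta [alpha*theta - lambda(theta)]
closed = alpha * (d + 1) - 1 - np.log(alpha * (d + 1))   # right-hand side of (B.5)
print(numeric, closed)                                   # the two numbers agree up to grid error
```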
### Non-exponential sojourn time
For \(\phi=\psi\) the computations are more involved. Let \(\mathcal{T}^{*}=(V,E)\) be the infinite rooted regular tree of degree \(d+1\). Write \(\mathcal{O}\) for the root. Let \(X=(X_{n})_{n\in\mathbb{N}_{0}}\) be the discrete-time simple random walk on \(\mathcal{T}^{*}\) starting from \(\mathcal{O}\). Write \(\tau_{\mathcal{O}}\) to denote the time of the _first return_ of \(X\) to \(\mathcal{O}\). Define \(r=\mathbb{P}_{\mathcal{O}}(\tau_{\mathcal{O}}<\infty)\). It is easy to compute \(r\) by projecting \(X\) on \(\mathbb{N}_{0}\): \(r\) is the return probability to the origin of the random walk on \(\mathbb{N}_{0}\) that jumps to the right with
probability \(p=\frac{d}{d+1}\) and to the left with probability \(q=\frac{1}{d+1}\), which equals \(\frac{q}{p}\) (see [16, Section 8]). Thus, \(r=\frac{1}{d}\).
For \(y\in\mathcal{T}^{*}\), define \(h_{y}=\mathbb{P}_{y}(\tau_{\mathcal{O}}<\infty)\). Then \(h_{y}\) can be explicitly calculated, namely,
\[h_{y}=\begin{cases}d^{-|y|},&y\in\mathcal{T}^{*}\setminus\{\mathcal{O}\},\\ 1,&y=\mathcal{O}.\end{cases}\]
Note that \(h\) is a harmonic function on \(\mathcal{T}^{*}\setminus\mathcal{O}\), i.e., \(h_{y}=\sum_{z\in\mathcal{T}^{*}}\widehat{\pi}_{y,z}h_{z}\), \(y\in\mathcal{T}^{*}\setminus\mathcal{O}\). We can therefore consider the Doob-transform of \(X\), which is the random walk with transition probabilities away from the root given by
\[\check{\sigma}_{y,z}=\begin{cases}\frac{d}{d+1},&z=y^{\uparrow},\\ \frac{1}{d}\frac{1}{d+1},&z\neq y^{\uparrow},\{y,z\}\in E,\\ 0,&\text{else},\end{cases}\qquad\qquad y\in\mathcal{T}^{*}\setminus\{ \mathcal{O}\},\]
and transition probabilities from the root are given by
\[\check{\sigma}_{\mathcal{O},z}=\begin{cases}\frac{1}{d},&\{\mathcal{O},z\} \in E,\\ 0,&\text{else}.\end{cases}\]
Thus, the Doob-transform reverses the upward and the downward drift of \(X\).
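For the reader's convenience we add the short check that these probabilities are indeed the Doob transform \(\pi_{y,z}h_{z}/h_{y}\) of the original kernel \(\pi_{y,z}=\frac{1}{d+1}\), and that each row sums to \(1\): for \(y\in\mathcal{T}^{*}\setminus\{\mathcal{O}\}\) with \(|y|=k\),

\[\pi_{y,y^{\uparrow}}\,\frac{h_{y^{\uparrow}}}{h_{y}}=\frac{1}{d+1}\,\frac{d^{-(k-1)}}{d^{-k}}=\frac{d}{d+1},\qquad\pi_{y,z}\,\frac{h_{z}}{h_{y}}=\frac{1}{d+1}\,\frac{d^{-(k+1)}}{d^{-k}}=\frac{1}{d}\,\frac{1}{d+1}\quad\text{for }z\neq y^{\uparrow},\ \{y,z\}\in E,\]

and the one upward and \(d\) downward neighbours together carry probability \(\frac{d}{d+1}+d\cdot\frac{1}{d(d+1)}=1\).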
Recall from Lemma 3.7 that \(\psi\) is the distribution of \(\tau_{\mathcal{O}}\)_conditional_ on \(\{\tau_{\mathcal{O}}<\infty\}\) and on \(X\) leaving \(\mathcal{O}\) at time \(0\).
**Lemma B.1**.: _Let \(\lambda(\theta)=\log\int_{0}^{\infty}\mathrm{e}^{\theta\tau}\psi(\mathrm{d}\tau)\). Then_
\[\mathrm{e}^{\lambda(\theta)}=\begin{cases}\frac{d+1-\theta}{2}\,\left[1-\sqrt {1-\frac{4d}{(d+1-\theta)^{2}}}\,\,\right],&\theta\in(-\infty,\theta_{c}],\\ \infty,&\text{else},\end{cases}\] (B.6)
_with \(\theta_{c}=(\sqrt{d}-1)^{2}\). The range of \(\exp\circ\lambda\) is \((0,\sqrt{d}\,]\), with the maximal value uniquely taken at \(\theta=\theta_{c}\)._
Proof.: To compute the moment-generating function of \(\tau_{\mathcal{O}}\), we consider the Doob-transform of \(X\) and its projection onto \(\mathbb{N}_{0}\). Let \(p_{2k}=P(\tau_{\mathcal{O}}=2k)\). It is well-known that (see [16, Section 8])
\[G^{p,q}(s)=\mathbb{E}(s^{\tau_{\mathcal{O}}}\mid\tau_{\mathcal{O}}<\infty)=\sum _{k\in\mathbb{N}}s^{2k}p_{2k}=\frac{1}{2p}\left[1-\sqrt{1-4pqs^{2}}\right], \qquad|s|\leq 1.\] (B.7)
Therefore we have
\[\begin{split}\mathrm{e}^{\lambda(\theta)}=\mathbb{E}(\mathrm{e} ^{\theta\tau_{\mathcal{O}}})&=\sum_{k\in\mathbb{N}}p_{2k}\,\left[ \mathbb{E}\left(\mathrm{e}^{\theta\,\mathrm{EXP}(d+1)}\right)\right]^{2k-1}\\ &=\sum_{k\in\mathbb{N}}p_{2k}\left(\frac{d+1}{d+1-\theta}\right) ^{2k-1}=\left(\frac{d+1-\theta}{d+1}\right)G^{p,q}(s)\end{split}\] (B.8)
with
\[p=\tfrac{1}{d+1},\qquad q=\tfrac{d}{d+1},\qquad s=\frac{d+1}{d+1-\theta}.\]
Inserting (B.7) into (B.8), we get the formula for \(\lambda(\theta)\). From the term in the square root we see that \(\lambda(\theta)\) is finite if and only if \(\theta\leq\theta_{c}=d+1-2\sqrt{d}=(\sqrt{d}-1)^{2}\).
There is no easy closed form expression for \((\mathcal{L}\lambda)(\alpha)\), but it is easily checked that \(\lambda\) and \(\mathcal{L}\lambda\) have the shape in Fig. 8, with \(\theta_{c}=(\sqrt{d}-1)^{2}\) and \(\alpha_{c}=\int_{0}^{\infty}\tau\psi(\mathrm{d}\tau)<\infty\), and with \(\lambda(\theta_{c})=\log\sqrt{d}<\infty\) and \(\lambda^{\prime}(\theta_{c})=\infty\), i.e., there is a _cusp_ at the threshold \(\theta_{c}\), implying that \(\mathcal{L}\lambda\) is finite on \((0,\infty)\). It follows from (B.3) that
\[\lim_{\alpha\to\infty}\frac{1}{\alpha}(\mathcal{L}\lambda)(\alpha)=\lim_{ \alpha\to\infty}\theta(\alpha)=\theta_{c}.\] (B.9)
**Lemma B.2**.: _The function \(\lambda^{-1}\circ\log=(\exp\circ\lambda)^{-1}\) is given by_
\[(\exp\circ\lambda)^{-1}(\beta)=d+1-\beta-\frac{d}{\beta},\qquad\beta\in(0, \sqrt{d}\,].\] (B.10)
_The range of \((\exp\circ\lambda)^{-1}\) is \((-\infty,\theta_{c}]\), with the maximal value \(\theta_{c}\) uniquely taken at \(\beta=\sqrt{d}\)._
Proof.: We need to invert \(\exp\circ\lambda\) in (B.6). Abbreviate \(\chi=\frac{d+1-\theta}{2}\). Then
\[\beta=\chi\left[1-\sqrt{1-\frac{d}{\chi^{2}}}\,\right]\quad\Longrightarrow\quad \chi=\frac{\beta^{2}+d}{2\beta}\quad\Longrightarrow\quad\theta=d+1-\frac{ \beta^{2}+d}{\beta}.\]
Note that \((\sqrt{d},\infty)\) is not part of the domain of \((\exp\circ\lambda)^{-1}\), even though the right-hand side of (B.10) still makes sense (as a second branch). Note that \(\mu\) has domain \((0,\infty)\) and range \((-\infty,\log\sqrt{d}\,]\) (see Fig. 6).
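A small numerical sanity check of (B.6) and (B.10) (a sketch added here; the degree below is an arbitrary choice): evaluate \(\exp\circ\lambda\) on a grid of \(\theta\leq\theta_{c}\) and verify that (B.10) maps the values back.

```python
import numpy as np

d = 4                                               # arbitrary illustrative degree parameter
theta_c = (np.sqrt(d) - 1) ** 2
theta = np.linspace(-10.0, theta_c, 201)
beta = (d + 1 - theta) / 2 * (1 - np.sqrt(1 - 4 * d / (d + 1 - theta) ** 2))   # (B.6)
theta_back = d + 1 - beta - d / beta                # right-hand side of (B.10)
print(np.max(np.abs(theta_back - theta)))           # ~0: (B.10) inverts (B.6) on (0, sqrt(d)]
print(beta.max(), np.sqrt(d))                       # maximum of exp(lambda) equals sqrt(d)
```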
## Appendix C Analysis of the variational problem on the infinite regular tree
In this appendix we prove Theorem 1.5. Appendix C.1 formulates two theorems that imply Theorem 1.5, Appendix C.2 provides the proof of these theorems. Recall the definition of \(\mathcal{P}(V)\), \(I_{E}(p)\) and \(J_{V}(p)\) from (1.10). Set
\[\chi_{\mathcal{T}}(\varrho)=\inf_{p\in\mathcal{P}_{\mathcal{O}}(V)}[I_{E}(p)+\varrho J_{V}(p)],\qquad\varrho\in(0,\infty),\] (C.1)
where \(\mathcal{P}_{\mathcal{O}}(V)=\{p\in\mathcal{P}(V)\colon\operatorname{argmax}p =\mathcal{O}\}\). Since \(\mathcal{P}(V)\), \(I_{E}\) and \(J_{V}\) are invariant under translations, the centering at \(\mathcal{O}\) is harmless.
### Two properties
**Theorem C.1**.: _For every \(\varrho\in(0,\infty)\) the infimum in (C.1) is attained, and every minimiser \(\bar{p}\) is strictly positive, non-increasing in the distance to the root, and such that_
\[\sum_{R\in\mathbb{N}_{0}}\partial S_{R}\log(R+1)\leq\frac{d+1}{\varrho},\qquad\partial S_{R}=\sum_{x\in\partial B_{R}(\mathcal{O})}\bar{p}(x),\]
_where \(B_{R}(\mathcal{O})\) is the ball of radius \(R\) around \(\mathcal{O}\)._
**Theorem C.2**.: _The function \(\varrho\mapsto\chi_{\mathcal{T}}(\varrho)\) is strictly increasing and globally Lipschitz continuous on \((0,\infty)\), with \(\lim_{\varrho\downarrow 0}\chi_{\mathcal{T}}(\varrho)=d-1\) and \(\lim_{\varrho\to\infty}\chi_{\mathcal{T}}(\varrho)=d+1\)._
Theorems C.1-C.2 settle Theorem 1.5. Their proof uses the following two lemmas.
**Lemma C.3**.: _For every \(\varrho\in(0,\infty)\), the infimum in (C.1) may be restricted to \(p\in\mathcal{P}_{\mathcal{O}}(V)\) such that \(J_{V}(p)\leq\frac{d+1}{\varrho}\)._
Proof.: Let \(\delta_{\mathcal{O}}\in\mathcal{P}_{\mathcal{O}}(V)\) denote the point measure at \(\mathcal{O}\). Then, for all \(\varrho\in(0,\infty)\),
\[\chi_{\mathcal{T}}(\varrho)\leq I_{E}(\delta_{\mathcal{O}})+\varrho J_{V}( \delta_{\mathcal{O}})=(d+1)+\varrho\times 0=d+1.\]
Since \(I_{E}\geq 0\), we may restrict the infimum in (C.1) to \(p\) with \(J_{V}(p)\leq\frac{d+1}{\varrho}\).
**Lemma C.4**.: _For every \(\varrho\in(0,\infty)\), there exists a \(c(\varrho)>0\) such that the infimum in (C.1) may be restricted to \(p\in\mathcal{P}_{\mathcal{O}}(V)\) such that \(J_{V}(p)\geq c(\varrho)\)._
Proof.: Since \(J_{V}(p)=0\) if and only if \(p=\delta_{\mathcal{O}}\) is a point measure, it suffices to show that \(\delta_{\mathcal{O}}\) is not a minimiser of \(\chi_{\mathcal{T}}(\varrho)\). To that end, for \(y\in V\) compute
\[\frac{\partial}{\partial p(y)}[I_{E}(p)+\varrho J_{V}(p)]=(d+1)-\sum_{z\sim y}\sqrt{\frac{p(z)}{p(y)}}-\varrho\log p(y)-\varrho.\] (C.2)
Because \(p(\mathcal{O})>0\), it follows that the right-hand side tends to \(-\infty\) as \(p(y)\downarrow 0\) for every \(y\sim\mathcal{O}\). Hence, no \(p\in\mathcal{P}_{\mathcal{O}}(V)\) with \(p(y)=0\) for some \(y\sim\mathcal{O}\) can be a minimiser of (C.1), or be the weak limit point of a minimising sequence. In particular, \(\delta_{\mathcal{O}}\) cannot.
### Proof of the two properties
Proof of Theorem C.1.: First observe that \(\mathcal{P}(V)\) and \(J_{V}\) are invariant under permutations, i.e., for any \(p\in\mathcal{P}(V)\) and any relabelling \(\pi\) of the vertices in \(V\), we have \(\pi p\in\mathcal{P}(V)\) and \(J_{V}(\pi p)=J_{V}(p)\). The same does not hold for \(I_{E}\), but we can apply permutations such that \(I_{E}(\pi p)\leq I_{E}(p)\).
**1.** Pick any \(p\in\mathcal{P}(V)\). Pick any backbone \(\mathrm{bb}=\{x_{0},x_{1},\cdots\}\) that runs from \(x_{0}=\mathcal{O}\) to infinity. Consider a permutation \(\pi\) that _reorders_ the vertices in \(\mathrm{bb}\) such that \(\{(\pi p)(x)\}_{x\in\mathrm{bb}}\) becomes _non-increasing_. Together with the reordering, transport all the trees that hang off bb as well. Since \(\pi p\) is non-increasing along bb, while all the edges that do not lie on bb have the same neighbouring values in \(p\) and in \(\pi p\), we have
\[I_{E}(\pi p)\leq I_{E}(p).\] (C.3)
Indeed,
\[\tfrac{1}{2}\left[I_{E}(p)-I_{E}(\pi p)\right]=\sum_{k\in\mathbb{N}_{0}}\sqrt {(\pi p)(x_{k})(\pi p)(x_{k+1})}-\sum_{k\in\mathbb{N}_{0}}\sqrt{p(x_{k})p(x_{ k+1})},\] (C.4)
where we use that \(p(x_{0})=(\pi p)(x_{0})\) (because \(p(x_{0})\geq p(x_{k})\) for all \(k\in\mathbb{N}\)) and \(\sum_{k\in\mathbb{N}}p(x_{k})=\sum_{k\in\mathbb{N}}(\pi p)(x_{k})\). The right-hand side of (C.4) is \(\geq 0\) by the rearrangement inequality for sums of products of two sequences [9, Section 10.2, Theorem 368]. In fact, strict inequality in (C.4) holds unless \(p\) is constant along bb. But this is impossible because it would imply that \(p(\mathcal{O})=0\) and hence \(p(x)=0\) for all \(x\in V\). Thus, \(p\) and bb being arbitrary, it follows from (C.3) that any minimiser or minimising sequence must be non-increasing in the distance to \(\mathcal{O}\). Indeed, if it were not, then there would be a bb along which the reordering would lead to a lower value of \(I_{E}+\varrho J_{V}\). Hence we may replace (C.1) by
\[\chi_{\mathcal{T}}(\varrho)=\inf_{p\in\mathcal{P}_{\mathcal{O}}^{\downarrow}( V)}[I_{E}(p)+\varrho J_{V}(p)],\qquad\varrho\in(0,\infty),\] (C.5)
with \(\mathcal{P}_{\mathcal{O}}^{\downarrow}(V)\) defined in (1.15).
**2.** Let \(p\in\mathcal{P}_{\mathcal{O}}^{\downarrow}(V)\). Estimate
\[J_{V}(p)=\sum_{R\in\mathbb{N}_{0}}\sum_{x\in\partial B_{R}(\mathcal{O})}[-p(x )\log p(x)]\geq\sum_{R\in\mathbb{N}_{0}}\sum_{x\in\partial B_{R}(\mathcal{O}) }\Big{[}-p(x)\log\big{(}\tfrac{1}{R+1}\big{)}\Big{]},\]
where we use that \(p(x)\leq\frac{1}{R+1}\) for all \(x\in\partial B_{R}(\mathcal{O})\). Hence
\[J_{V}(p)\geq\sum_{R\in\mathbb{N}_{0}}\partial S_{R}\log(R+1)\]
with \(\partial S_{R}=\sum_{x\in\partial B_{R}(\mathcal{O})}p(x)\). By Lemma C.3, \(J_{V}(p)\leq\frac{d+1}{\varrho}\), and so
\[\sum_{R\in\mathbb{N}_{0}}\partial S_{R}\log(R+1)\leq\frac{d+1}{\varrho}.\] (C.6)
The computation in (C.2) shows that any \(p\) for which there exist neighbours \(z\sim y\) with \(p(z)>0\) and \(p(y)=0\) cannot be a minimiser nor a weak limit point of a minimising sequence. Hence all minimisers or weak limit points of minimising sequences are strictly positive everywhere.
**3.** Take any minimising sequence \((p_{n})_{n\in\mathbb{N}}\) of (C.5). By (C.6), \(\lim_{R\to\infty}\sum_{x\notin B_{R}(\mathcal{O})}p_{n}(x)=0\) uniformly in \(n\in\mathbb{N}\), and so \((p_{n})_{n\in\mathbb{N}}\) is tight. By Prokhorov's theorem, tightness is equivalent to \((p_{n})_{n\in\mathbb{N}}\) being relatively compact, i.e., there is a subsequence \((p_{n_{k}})_{k\in\mathbb{N}}\) that converges weakly to a limit \(\bar{p}\in\mathcal{P}_{\mathcal{O}}^{\downarrow}(V)\). By Fatou's lemma, we have \(\liminf_{k\to\infty}I_{E}(p_{n_{k}})\geq I_{E}(\bar{p})\) and \(\liminf_{k\to\infty}J_{V}(p_{n_{k}})\geq J_{V}(\bar{p})\). Hence
\[\chi_{\mathcal{T}}(\varrho)=\lim_{k\to\infty}[I_{E}(p_{n_{k}})+\varrho J_{V}( p_{n_{k}})]\geq I_{E}(\bar{p})+\varrho J_{V}(\bar{p}).\]
Hence \(\bar{p}\) is a minimiser of (C.5).
Proof of Theorem C.2.: The proof uses approximation arguments.
**1.** We first show that \(\varrho\mapsto\chi_{\mathcal{T}}(\varrho)\) is strictly increasing and globally Lipschitz. Pick \(\varrho_{1}<\varrho_{2}\). Let \(\bar{p}_{\varrho_{1}}\) be any minimiser of (C.1) at \(\varrho_{1}\), i.e.,
\[\chi_{\mathcal{T}}(\varrho_{1})=I_{E}(\bar{p}_{\varrho_{1}})+\varrho_{1}J_{V} (\bar{p}_{\varrho_{1}}).\]
Estimate
\[[I_{E}(\bar{p}_{\varrho_{1}})+\varrho_{1}J_{V}(\bar{p}_{\varrho_{1}})]=[I_{E} (\bar{p}_{\varrho_{1}})+\varrho_{2}J_{V}(\bar{p}_{\varrho_{1}})]-(\varrho_{2} -\varrho_{1})J_{V}(\bar{p}_{\varrho_{1}})\]
\[\geq\chi_{\mathcal{T}}(\varrho_{2})-(\varrho_{2}-\varrho_{1})J_{V}(\bar{p}_{\varrho_{1}})\geq\chi_{\mathcal{T}}(\varrho_{2})-(\varrho_{2}-\varrho_{1})\tfrac{d+1}{\varrho_{1}},\]
where we use Lemma C.3. Therefore
\[\chi_{\mathcal{T}}(\varrho_{2})-\chi_{\mathcal{T}}(\varrho_{1})\leq(\varrho_ {2}-\varrho_{1})\tfrac{d+1}{\varrho_{1}}.\]
Similarly, let \(\bar{p}_{\varrho_{2}}\) be any minimiser of (C.1) at \(\varrho_{2}\), i.e.,
\[\chi_{\mathcal{T}}(\varrho_{2})=I_{E}(\bar{p}_{\varrho_{2}})+\varrho_{2}J_{V} (\bar{p}_{\varrho_{2}}).\]
Estimate
\[[I_{E}(\bar{p}_{\varrho_{2}})+\varrho_{2}J_{V}(\bar{p}_{\varrho_{2}})]=[I_{E} (\bar{p}_{\varrho_{2}})+\varrho_{1}J_{V}(\bar{p}_{\varrho_{2}})]+(\varrho_{2} -\varrho_{1})J_{V}(\bar{p}_{\varrho_{2}})\]
\[\geq\chi_{\mathcal{T}}(\varrho_{1})+(\varrho_{2}-\varrho_{1})J_{V}(\bar{p}_{ \varrho_{2}})\geq\chi_{\mathcal{T}}(\varrho_{1})+(\varrho_{2}-\varrho_{1})c( \varrho_{2}),\]
where we use Lemma C.4. Therefore
\[\chi_{\mathcal{T}}(\varrho_{2})-\chi_{\mathcal{T}}(\varrho_{1})\geq c( \varrho_{2})(\varrho_{2}-\varrho_{1}).\]
**2.** Because \(\chi_{\mathcal{T}}(\varrho)\leq d+1\) for all \(\varrho\in(0,\infty)\), it follows that \(\lim_{\varrho\to\infty}\chi_{\mathcal{T}}(\varrho)\leq d+1\). To obtain the reverse inequality, let \(\bar{p}_{\varrho}\) be any minimiser of (C.5) at \(\varrho\). By Lemma C.3, we may assume that \(J_{V}(\bar{p}_{\varrho})\leq\frac{d+1}{\varrho}\). Hence \(\lim_{\varrho\to\infty}J_{V}(\bar{p}_{\varrho})=0\), and consequently \(\lim_{\varrho\to\infty}\bar{p}_{\varrho}=\delta_{\mathcal{O}}\) weakly. Therefore, by Fatou's lemma, \(\lim_{\varrho\to\infty}\chi_{\mathcal{T}}(\varrho)=\lim_{\varrho\to\infty}[I_{E}(\bar{p}_{\varrho})+\varrho J_{V}(\bar{p}_{\varrho})]\geq\liminf_{\varrho\to\infty}I_{E}(\bar{p}_{\varrho})\geq I_{E}(\delta_{\mathcal{O}})=d+1\).
**3.** To prove that \(\lim_{\varrho\downarrow 0}\chi_{\mathcal{T}}(\varrho)\leq d-1\), estimate
\[\chi_{\mathcal{T}}(\varrho)\leq\inf_{\begin{subarray}{c}p\in\mathcal{P}_{\mathcal{O}}^{\downarrow}(V)\\ \operatorname{supp}(p)\subseteq B_{R}(\mathcal{O})\end{subarray}}[I_{E}(p)+\varrho J_{V}(p)],\qquad R\in\mathbb{N}_{0}.\]
Because
\[\sup_{\begin{subarray}{c}p\in\mathcal{P}_{\mathcal{O}}^{\downarrow}(V)\\ \operatorname{supp}(p)\subseteq B_{R}(\mathcal{O})\end{subarray}}J_{V}(p)=J_ {V}(p_{R})=\log|B_{R}(\mathcal{O})|,\qquad R\in\mathbb{N}_{0},\]
with
\[p_{R}(x)=\begin{cases}|B_{R}(\mathcal{O})|^{-1},&x\in B_{R}(\mathcal{O}),\\ 0,&\text{else},\end{cases}\]
it follows that
\[\lim_{\varrho\downarrow 0}\chi_{\mathcal{T}}(\varrho)\leq\inf_{\begin{subarray}{c}p\in\mathcal{P}_{\mathcal{O}}^{\downarrow}(V)\\ \operatorname{supp}(p)\subseteq B_{R}(\mathcal{O})\end{subarray}}I_{E}(p)\leq I_{E}(p_{R}),\qquad R\in\mathbb{N}_{0}.\]
Compute (recall (1.10)),
\[I_{E}(p_{R})=\frac{|\partial B_{R+1}(\mathcal{O})|}{|B_{R}(\mathcal{O})|}, \qquad R\in\mathbb{N}_{0}.\]
Inserting the relations
\[|\partial B_{R}(\mathcal{O})|=\left\{\begin{array}{ll}1,&R=0,\\ (d+1)d^{R-1},&R\in\mathbb{N},\end{array}\right.\] \[|B_{R}(\mathcal{O})|=\sum_{R^{\prime}=0}^{R}|\partial B_{R^{ \prime}}(\mathcal{O})|=1+\frac{d+1}{d-1}(d^{R}-1),\quad R\in\mathbb{N}_{0},\]
we get
\[I_{E}(p_{R})=(d-1)\,\frac{(d+1)d^{R}}{(d+1)d^{R}-2}.\]
Hence \(\lim_{R\to\infty}I_{E}(p_{R})=d-1\), and so \(\lim_{\varrho\downarrow 0}\chi_{\mathcal{T}}(\varrho)\leq d-1\).
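For instance (a check added for concreteness), for \(d=2\) and \(R=1\) we have \(|B_{1}(\mathcal{O})|=1+3=4\) and \(|\partial B_{2}(\mathcal{O})|=3\cdot 2=6\), so \(I_{E}(p_{1})=6/4=3/2\), in agreement with \((d-1)\,\frac{(d+1)d^{R}}{(d+1)d^{R}-2}=\frac{6}{6-2}=\frac{3}{2}\).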
**4.** To prove that \(\lim_{\varrho\downarrow 0}\chi_{\mathcal{T}}(\varrho)\geq d-1\), note that because \(J_{V}\geq 0\) we can estimate
\[\lim_{\varrho\downarrow 0}\chi_{\mathcal{T}}(\varrho)\geq\inf_{p\in\mathcal{P}_{\mathcal{O}}^{\downarrow}(V)}I_{E}(p).\]
It therefore suffices to show that
\[\inf_{p\in\mathcal{P}_{\mathcal{O}}^{\downarrow}(V)}I_{E}(p)\geq d-1,\]
i.e., \((p_{R})_{R\in\mathbb{N}_{0}}\) is a minimising sequence of the infimum in the left-hand side. The proof goes as follows. Write (recall (1.10))
\[I_{E}(p) =\tfrac{1}{2}\sum_{\begin{subarray}{c}x,y\in V\\ x\sim y\end{subarray}}\left(\sqrt{p(x)}-\sqrt{p(y)}\,\right)^{2}\] \[=\tfrac{1}{2}\sum_{\begin{subarray}{c}x,y\in V\\ x\sim y\end{subarray}}\left[p(x)+p(y)-2\sqrt{p(x)p(y)}\,\right]=(d+1)-\sum_{ \begin{subarray}{c}x,y\in V\\ x\sim y\end{subarray}}\sqrt{p(x)p(y)}.\]
Since \(\mathcal{T}\) is a tree, each edge can be labelled by the end-vertex that is farthest from \(\mathcal{O}\). Hence the sum in the right-hand side can be written as
\[\sum_{x\in V\setminus\mathcal{O}}2\sqrt{p(x)p(x^{\downarrow})},\]
where \(x^{\downarrow}\) is the unique neighbour of \(x\) that is closer to \(\mathcal{O}\) than \(x\). Since \(2\sqrt{p(x)p(x^{\downarrow})}\leq p(x)+p(x^{\downarrow})\), it follows that
\[\sum_{x\in V\setminus\mathcal{O}}2\sqrt{p(x)p(x^{\downarrow})}\leq\sum_{x\in V \setminus\mathcal{O}}p(x)+\sum_{x\in V\setminus\mathcal{O}}p(x^{\downarrow})= [1-p(\mathcal{O})]+1.\]
Therefore
\[I_{E}(p)\geq d-1+p(\mathcal{O}),\]
which settles the claim.
## Appendix D Large deviation estimate for the local time away from the backbone
In this appendix we derive a large deviation principle for the _total local times at successive depths_ of the random walk on \(\mathcal{T}^{\mathbb{Z}}\) (see Fig. 3). This large deviation principle is not actually needed, but serves as a warm up for the more elaborate computations in Appendix E.
For \(k\in\mathbb{N}_{0}\), let \(V_{k}\) be the set of vertices in \(\mathcal{T}^{\mathbb{Z}}\) that are at distance \(k\) from the backbone (see Fig. 3). For \(R\in\mathbb{N}\), define
\[\ell_{t}^{R}(k)=\sum_{x\in V_{k}}\ell_{t}^{\mathbb{Z}}(x),\qquad k=0,1,\ldots,R,\qquad\qquad\ell_{t}^{R}(R+1)=\sum_{k>R}\sum_{x\in V_{k}}\ell_{t}^{\mathbb{Z}}(x),\]
and
\[L_{t}^{R}=\frac{1}{t}\,\big{(}\ell_{t}^{R}(k)\big{)}_{k=0}^{R+1}.\]
Abbreviate \(V_{R}^{*}=\{0,1,\ldots,R,R+1\}\).
**Lemma D.1**.: _For every \(R\in\mathbb{N}\), \((L_{t}^{R})_{t\geq 0}\) satisfies the large deviation principle on \(\mathcal{P}(V_{R}^{*})\) with rate \(t\) and with rate function \(I_{R}^{\dagger}\) given by_
\[\begin{split} I_{R}^{\dagger}(p)&=\big{[}\sqrt{(d- 1)p(0)}-\sqrt{dp(1)}\,\big{]}^{2}+\sum_{k=1}^{R-1}\big{[}\sqrt{p(k)}-\sqrt{dp( k+1)}\,\big{]}^{2}\\ &\qquad+\big{[}\sqrt{p(R)+p(R+1)}-\sqrt{dp(R+1)}\,\big{]}^{2}. \end{split}\] (D.1)
Proof.: By monitoring the random walk on the tree in Fig. 3 and projecting its depth on the vertices \(0,1,\ldots,R\), respectively, \(R+1\), we can apply the LDP in Proposition A.1 (see Fig. 9).
**1.** The sojourn times have distribution \(\text{EXP}(d+1)\) at vertices \(k=0,1,\ldots,R\) and distribution \(\psi\) at vertex \(k=R+1\). The transition probabilities are
\[\pi_{0,0}=\tfrac{2}{d+1},\hskip 14.226378pt\pi_{0,1}=\tfrac{d-1}{d+1},\] \[\pi_{k,k+1}=\tfrac{1}{d+1},\hskip 8.535827pt\pi_{k,k-1}=\tfrac{d}{d +1},\hskip 14.226378ptk=1,\ldots,R,\] \[\pi_{R+1,R}=1.\]
Proposition A.1 therefore yields that \((L^{R}_{t})_{t\geq 0}\) satisfies the LDP on on \(\mathcal{P}(V^{*}_{R})\) with rate \(t\) and with rate function \(I^{\dagger}_{R}\) given by
\[I^{\dagger}_{R}(p)=(d+1)\sum_{k=0}^{R}p(k)+\inf_{v\colon\,V^{*}_{R}\to(0, \infty)}\sup_{u\colon\,V^{*}_{R}\to(0,\infty)}L(u,v)\] (D.2)
with
\[L(u,v)=-A-B-C,\] (D.3)
where
\[A =\sum_{k=1}^{R}v(k)\left\{1+\log\left(\frac{du(k-1)+u(k+1)}{u(k)}\,\frac{p(k)}{v(k)}\right)\right\},\] \[B =v(0)\left\{1+\log\left(\frac{2u(0)+(d-1)u(1)}{u(0)}\,\frac{p(0)}{v(0)}\right)\right\},\] \[C =v(R+1)\left\{\log\left(\frac{u(R)}{u(R+1)}\right)-(\mathcal{L}\lambda)\left(\frac{p(R+1)}{v(R+1)}\right)\right\}.\]
Here we use (B.5) to compute \(A\) and \(B\), and for \(C\) we recall that \(\mathcal{L}\lambda\) is the Legendre transform of the cumulant generating function \(\lambda\) of \(\psi\) computed in Lemma B.1.
**2.** We compute the infimum of \(L(u,v)\) over \(v\) for fixed \(u\).
\(\bullet\) For \(k=1,\ldots,R\),
\[\frac{\partial A}{\partial v(k)}=\log\left(\frac{du(k-1)+u(k+1)}{u(k)}\,\frac{ p(k)}{v(k)}\right),\]
\[\implies\bar{v}_{u}(k)=p(k)\,\frac{du(k-1)+u(k+1)}{u(k)}.\]
The second derivative is \(1/v(k)>0\).
\(\bullet\) For \(k=0\),
\[\frac{\partial B}{\partial v(0)}=\log\left(\frac{2u(0)+(d-1)u(1)}{u(0)}\, \frac{p(0)}{v(0)}\right),\]
\[\implies\bar{v}_{u}(0)=p(0)\,\frac{2u(0)+(d-1)u(1)}{u(0)}.\]
The second derivative is \(1/v(0)>0\).
\(\bullet\) For \(k=R+1\), the computation is more delicate. Define (recall (B.2) in Appendix B)
\[\mu(\alpha)=\alpha(\mathcal{L}\lambda)^{{}^{\prime}}(\alpha)-(\mathcal{L} \lambda)(\alpha).\]
Figure 9: Depths \(k=0,1,\ldots,R\) and \(k>R\).
The function \(\mu\) has range \((-\infty,\log\sqrt{d}\,]\), with the maximal value uniquely taken at \(\alpha=\infty\). Therefore there are two cases.
\(\blacktriangleright\)\(u(R+1)/u(R)\leq\sqrt{d}\). Compute
\[\frac{\partial C}{\partial v(R+1)}=\mu\left(\frac{p(R+1)}{v(R+1)} \right)-\log\left(\frac{u(R+1)}{u(R)}\right),\] \[\Longrightarrow\bar{v}(R+1)=\frac{p(R+1)}{\alpha_{u}(R+1)}\]
with \(\alpha_{u}(R+1)\) solving the equation
\[\log\left(\frac{u(R+1)}{u(R)}\right)=\mu\big{(}\alpha_{u}(R+1)\big{)}.\]
Since \(\mu^{\prime}(\alpha)=\alpha(\mathcal{L}\lambda)^{\prime\prime}(\alpha)\) and \(\mathcal{L}\lambda\) is strictly convex (see Fig. 8 in Appendix B), \(\mu\) is strictly increasing and therefore invertible. Consequently,
\[\alpha_{u}(R+1)=\mu^{-1}\left(\log\left(\frac{u(R+1)}{u(R)}\right)\right).\] (D.4)
Putting (D.3)-(D.4) together, we get
\[L(u)=\inf_{v\colon V_{R}^{*}\to(0,\infty)}L(u,v)=-\sum_{k=1}^{R}A_{u}(k)-B_{u}+C_{u}\] (D.5)
with
\[A_{u}(k) =\frac{du(k-1)+u(k+1)}{u(k)}\,p(k),\qquad k=1,\ldots,R,\] \[B_{u} =\frac{2u(0)+(d-1)u(1)}{u(0)}\,p(0),\]
and
\[C_{u} =\frac{p(R+1)}{\alpha_{u}(R+1)}\left[(\mathcal{L}\lambda)\big{(} \alpha_{u}(R+1)\big{)}-\log\left(\frac{u(R+1)}{u(R)}\right)\right]\] \[=\frac{p(R+1)}{\alpha_{u}(R+1)}\big{[}(\mathcal{L}\lambda)\big{(} \alpha_{u}(R+1)\big{)}-\mu\big{(}\alpha_{u}(R+1)\big{)}\big{]}\] \[=p(R+1)\,(\mathcal{L}\lambda)^{{}^{\prime}}\big{(}\alpha_{u}(R+1) \big{)}\] \[=p(R+1)\,((\mathcal{L}\lambda)^{{}^{\prime}}\circ\mu^{-1})\left( \log\left(\frac{u(R+1)}{u(R)}\right)\right).\]
In (B.4) in Appendix B we showed that \((\mathcal{L}\lambda)^{\prime}\circ\mu^{-1}=\lambda^{-1}\). Moreover, in (B.10) in Appendix B we showed that \((\lambda^{-1}\circ\log)=S\) with
\[S(\beta)=d+1-\beta-\frac{d}{\beta},\qquad\beta\in(0,\sqrt{d}\,].\] (D.6)
Since \(S\) has domain \((0,\sqrt{d}\,]\), \(C_{u}\) is only defined when \(u(R+1)/u(R)\leq\sqrt{d}\), in which case
\[C_{u}=p(R+1)\,S\left(\frac{u(R+1)}{u(R)}\right).\] (D.7)
\(\blacktriangleright\)\(u(R+1)/u(R)>\sqrt{d}\). In this case \(\frac{\partial C}{\partial v(R+1)}>0\), the infimum is taken at \(\bar{v}(R+1)=0\), and hence (recall (B.9))
\[C_{u}=p(R+1)\,(\sqrt{d}-1)^{2}=p(R+1)\,S(\sqrt{d}).\] (D.8)
Note that the right-hand side does not depend on \(u\). The expressions in (D.7)-(D.8) can be summarised as
\[C_{u}=p(R+1)\,S\left(\sqrt{d}\wedge\frac{u(R+1)}{u(R)}\right).\]
**3.** Next we compute the supremum over \(u\) of
\[L(u)=L(u,\bar{v}_{u})=-A_{u}-B_{u}+C_{u}.\] (D.9)
with \(A_{u}=\sum_{k=1}^{R}A_{u}(k)\). We only write down the derivatives that are non-zero.
\(\bullet\) For \(k=2,\ldots,R-1\),
\[-\frac{\partial A_{u}}{\partial u(k)}=-p(k+1)\,\frac{d}{u(k+1)}-p(k-1)\,\frac {1}{u(k-1)}+p(k)\,\frac{du(k-1)+u(k+1)}{u(k)^{2}}.\]
\(\bullet\) For \(k=1\),
\[-\frac{\partial A_{u}}{\partial u(1)} =-p(2)\,\frac{d}{u(2)}+p(1)\,\frac{du(0)+u(2)}{u(1)^{2}},\] \[-\frac{\partial B_{u}}{\partial u(1)} =-p(0)\,\frac{d-1}{u(0)}.\]
\(\bullet\) For \(k=R\),
\[-\frac{\partial A_{u}}{\partial u(R)} =-p(R-1)\,\frac{1}{u(R-1)}+p(R)\,\frac{du(R-1)+u(R+1)}{u(R)^{2}},\] \[\frac{\partial C_{u}}{\partial u(R)} =p(R+1)\,\left[\frac{u(R+1)}{u(R)^{2}}-\frac{d}{u(R+1)}\right]\,1 _{\left\{\frac{u(R+1)}{u(R)}\leq\sqrt{d}\right\}}.\]
\(\bullet\) For \(k=0\),
\[-\frac{\partial A_{u}}{\partial u(0)} =-p(1)\,\frac{d}{u(1)},\] \[-\frac{\partial B_{u}}{\partial u(0)} =p(0)\,\frac{(d-1)u(1)}{u(0)^{2}}.\]
\(\bullet\) For \(k=R+1\),
\[-\frac{\partial A_{u}}{\partial u(R+1)}=-p(R)\,\frac{1}{u(R)},\] \[\frac{\partial C_{u}}{\partial u(R+1)}=p(R+1)\,\left[-\frac{1}{u( R)}+\frac{du(R)}{u(R+1)^{2}}\right]\,1_{\left\{\frac{u(R+1)}{u(R)}\leq\sqrt{d} \right\}}.\]
All the first derivatives of \(-A_{u}-B_{u}+C_{u}\) are zero when we choose
\[\begin{split}&\bar{u}(0)=\sqrt{(d-1)p(0)},\qquad\bar{u}(k)=\sqrt {d^{k}p(k)},\quad k=1,\ldots,R,\\ &\bar{u}(R+1)=\sqrt{d^{R+1}\,\frac{p(R)p(R+1)}{p(R)+p(R+1)}}. \end{split}\] (D.10)
All the second derivatives are strictly negative, and so \(\bar{u}\) is the unique maximiser.
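The stationarity of (D.10) can also be checked numerically. The following minimal Python sketch (ours; the values of \(d\), \(R\) and \(p\) are arbitrary illustrative choices) implements \(L(u)\) from (D.5), with \(S\) from (D.6) capped as in (D.7)-(D.8), and verifies by central finite differences that the gradient vanishes at \(\bar{u}\).

```python
import numpy as np

d, R = 4, 6
rng = np.random.default_rng(0)
p = rng.random(R + 2)
p /= p.sum()                                   # a generic probability vector p(0),...,p(R+1)

def S(beta):                                   # S from (D.6), capped at sqrt(d) as in (D.7)-(D.8)
    beta = min(beta, np.sqrt(d))
    return d + 1 - beta - d / beta

def L(u):                                      # L(u) from (D.5)
    A = sum((d * u[k - 1] + u[k + 1]) / u[k] * p[k] for k in range(1, R + 1))
    B = (2 * u[0] + (d - 1) * u[1]) / u[0] * p[0]
    C = p[R + 1] * S(u[R + 1] / u[R])
    return -A - B + C

ubar = np.empty(R + 2)                         # the candidate maximiser (D.10)
ubar[0] = np.sqrt((d - 1) * p[0])
for k in range(1, R + 1):
    ubar[k] = np.sqrt(d ** k * p[k])
ubar[R + 1] = np.sqrt(d ** (R + 1) * p[R] * p[R + 1] / (p[R] + p[R + 1]))

eps = 1e-6
grad = [(L(ubar + eps * e) - L(ubar - eps * e)) / (2 * eps) for e in np.eye(R + 2)]
print(max(abs(g) for g in grad))               # numerically ~ 0: ubar is a stationary point
```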
**4.** Inserting (D.10) into (D.5), we get
\[L(\bar{u})=L(\bar{u},\bar{v}_{\bar{u}})=-\sum_{k=2}^{R-1}A_{\bar{u }}(k)-\left[A_{\bar{u}}(1)+B_{\bar{u}}\right]-A_{\bar{u}}(R)+C_{\bar{u}}\] \[= -\sum_{k=2}^{R-1}\sqrt{dp(k)}\left[\sqrt{p(k-1)}+\sqrt{p(k+1)}\,\right]\] \[\quad-\left[2\sqrt{d(d-1)p(0)p(1)}+2p(0)+\sqrt{dp(1)p(2)}\,\right]\] \[\quad-\left[\sqrt{dp(R-1)p(R)}+\sqrt{\frac{p(R)}{p(R)+p(R+1)}}\, \sqrt{dp(R)p(R+1)}\,\right]\] \[\quad+p(R+1)\,S\left(\sqrt{\frac{dp(R+1)}{p(R)+p(R+1)}}\,\right).\]
Recalling (D.2), (D.6) and (D.9), and rearranging terms, we find the expression in (D.1).
Note that \(I_{R}^{\dagger}\) has a unique zero at \(p\) given by
\[p(0)=\tfrac{1}{2},\qquad p(k)=\tfrac{1}{2}(d-1)d^{-k},\quad k=1,\dots,R,\qquad p (R+1)=\tfrac{1}{2}d^{-R}.\]
This shows that the fraction of the local time typically spent a distance \(k\) away from the backbone decays exponentially fast in \(k\).
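As a quick consistency check, this \(p\) is indeed a probability vector:

\[\sum_{k=0}^{R+1}p(k)=\tfrac{1}{2}+\tfrac{1}{2}(d-1)\sum_{k=1}^{R}d^{-k}+\tfrac{1}{2}d^{-R}=\tfrac{1}{2}+\tfrac{1}{2}\left(1-d^{-R}\right)+\tfrac{1}{2}d^{-R}=1.\]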
## Appendix E Analysis of the upper variational formula
In this appendix we carry out the proof of the claims in Section 3.2, namely, we settle (3.8) in Appendix E.1 and (3.9) in Appendix E.2. The computations carried out in Appendix D guide us along the way.
### Identification of the rate function for the local times on the truncated tree
To identify the rate function \(I_{E(\mathcal{W}_{R})}^{\dagger}\) in Lemma 3.7, we need to work out the two infima between braces in (3.2). The computation follows the same line of argument as in Appendix D, but is more delicate. We will only end up with a lower bound. However, this is sufficient for the upper variational formula.
To simplify the notation we write (recall Fig. 5):
\[\begin{array}{lll}(V_{R},E_{R})&=&\text{vertex and edge set of $\mathcal{W}_{R}$ without the tadpoles},\\ \mathcal{O}&=&\text{\emph{top} vertex of $V_{R}$},\\ \star&=&\text{\emph{right-most bottom} vertex of $V_{R}$},\\ \partial V_{R}&=&\text{set of vertices at the \emph{bottom} of $V_{R}$},\\ \Box&=&\text{set of \emph{tadpoles}},\\ \Box_{x}&=&\text{tadpole attached to $x\in\partial V_{R}\backslash\star$}.\end{array}\]
Note that \(\partial V_{R}\) consists of \(\star\) and the vertices to which the tadpoles are attached. Note that \(\operatorname{int}(V_{R})=V_{R}\setminus\partial V_{R}\)_includes_\(\mathcal{O}\).
**1.** Inserting (B.5) in Appendix B into (3.4)-(3.5), we get
\[I^{\dagger}_{E(\mathcal{W}_{R})}(p)=(d+1)\sum_{x\in V_{R}}p(x)+\inf_{\beta\in(0,\infty)}\inf_{q\in\mathcal{P}(V_{R})}\sup_{\widehat{q}\in\mathcal{P}(V_{R})} L(\beta,q,\widehat{q}\mid p)\]
with
\[L(\beta,q,\widehat{q}\mid p)=-A-B-C-D,\]
where
\[A =\sum_{x\in\operatorname{int}(V_{R})}\beta q(x)\left\{1+\log \left(\frac{\sum_{y\sim x}\widehat{q}(y)}{\widehat{q}(x)}\frac{p(x)}{\beta q( x)}\right)\right\},\] \[B =\sum_{x\in\partial V_{R}\setminus\star}\beta q(x)\left\{1+\log \left(\frac{\widehat{q}(x^{\uparrow})+d\widehat{q}(\Box_{x})}{\widehat{q}(x) }\frac{p(x)}{\beta q(x)}\right)\right\},\] \[C =\beta q(\star)\left\{1+\log\left(\frac{\widehat{q}(\star^{ \uparrow})+d\widehat{q}(\mathcal{O})}{\widehat{q}(\star)}\frac{p(\star)}{\beta q (\star)}\right)\right\},\] \[D =\sum_{x\in\Box}\beta q(x)\left\{\log\left(\frac{\widehat{q}(x^{ \uparrow})}{\widehat{q}(x)}\right)-(\mathcal{L}\lambda)\left(\frac{p(x)}{\beta q (x)}\right)\right\},\]
with \(\mathcal{L}\lambda\) the Legendre transform of the cumulant generating function of \(\psi\) (recall (3.7)) and \(x^{\uparrow}\) the unique vertex to which \(x\) is attached upwards. (Recall that \(y\sim x\) means that \(x\) and \(y\) are connected by an edge in \(E_{R}\).) Note that \(A,B,C\) each combine two terms, and that \(A,B,C,D\) depend on \(p\). We suppress this dependence because \(p\) is fixed.
**2.** Inserting the parametrisation \(\widehat{q}=u/\|u\|_{1}\) and \(q=v/\|v\|_{1}\) with \(u,v\colon V_{R}\to(0,\infty)\) and putting \(\beta q=v\), we may write
\[I^{\dagger}_{E(\mathcal{W}^{R})}(p)=(d+1)\sum_{x\in V_{R}}p(x)+\inf_{v\colon V _{R}\to(0,\infty)}\sup_{u\colon V_{R}\to(0,\infty)}L(u,v)\] (E.1)
with
\[L(u,v)=-A-B-C-D,\]
where
\[A =\sum_{x\in\operatorname{int}(V_{R})}v(x)\left\{1+\log\left( \frac{\sum_{y\sim x}u(y)}{u(x)}\frac{p(x)}{v(x)}\right)\right\},\] (E.2) \[B =\sum_{x\in\partial V_{R}\setminus\star}v(x)\left\{1+\log\left( \frac{u(x^{\uparrow})+du(\Box_{x})}{u(x)}\frac{p(x)}{v(x)}\right)\right\},\] \[C =v(\star)\left\{1+\log\left(\frac{u(\star^{\uparrow})+du( \mathcal{O})}{u(\star)}\frac{p(\star)}{v(\star)}\right)\right\},\] \[D =\sum_{x\in\Box}v(x)\left\{\log\left(\frac{u(x^{\uparrow})}{u(x)} \right)-(\mathcal{L}\lambda)\left(\frac{p(x)}{v(x)}\right)\right\}.\]
Our task is to carry out the supremum over \(u\) and the infimum over \(v\) in (E.1).
**3.** First, we compute the infimum over \(v\) for fixed \(u\). (Later we will make a judicious choice for \(u\) to obtain a lower bound.) Abbreviate
\[A_{u}(x) =\frac{\sum_{y\sim x}u(y)}{u(x)}\,p(x),\qquad\quad x\in\text{int}(V_{R}),\] \[B_{u}(x) =\frac{u(x^{\uparrow})+du(\Box_{x})}{u(x)}\,p(x),\quad x\in\partial V_{R}\backslash\star,\] (E.3) \[C_{u}(\star) =\frac{u(\star^{\uparrow})+du(\mathcal{O})}{u(\star)}\,p(\star).\]
\(\bullet\) For \(z\in V_{R}\), the first derivatives of \(L\) are
\[z\in\text{int}(V_{R})\colon \quad\frac{\partial L(u,v)}{\partial v(z)}=-\log\left(\frac{A_{u }(z)}{v(z)}\right),\] \[z\in\partial V_{R}\backslash\star\colon \quad\frac{\partial L(u,v)}{\partial v(z)}=-\log\left(\frac{B_{u }(z)}{v(z)}\right),\] \[z=\star\colon \quad\frac{\partial L(u,v)}{\partial v(z)}=-\log\left(\frac{C_{ u}(z)}{v(z)}\right),\]
while the second derivatives of \(L\) equal \(1/v(z)>0\). Hence the infimum is uniquely taken at
\[x\in\text{int}(V_{R})\colon \quad\bar{v}(x)=A_{u}(x),\] \[x\in\partial V_{R}\backslash\star\colon \quad\bar{v}(x)=B_{u}(x),\] \[x=\star\colon \quad\bar{v}(x)=C_{u}(x).\]
\(\bullet\) For \(z\in\Box\), the computation is more delicate. Define (see (B.2) in Appendix B)
\[\mu(\alpha)=\alpha(\mathcal{L}\lambda)^{{}^{\prime}}(\alpha)-(\mathcal{L} \lambda)(\alpha).\]
The function \(\mu\) has range \((-\infty,\log\sqrt{d}\,]\), with the maximal value uniquely taken at \(\alpha=\infty\). Therefore there are two cases.
\(\blacktriangleright\)\(u(x)/u(x^{\uparrow})\leq\sqrt{d}\): Abbreviate \(\alpha_{u}(z)=p(z)/v(z)\). For \(z\in\Box\),
\[\frac{\partial L(u,v)}{\partial v(z)} =\log\left(\frac{u(z)}{u(z^{\uparrow})}\right)+(\mathcal{L}\lambda)\left(\frac{p(z)}{v(z)}\right)-\frac{p(z)}{v(z)}(\mathcal{L}\lambda)^{{}^{\prime}}\left(\frac{p(z)}{v(z)}\right)\] \[=\log\left(\frac{u(z)}{u(z^{\uparrow})}\right)-\mu(\alpha_{u}(z)),\] \[\frac{\partial^{2}L(u,v)}{\partial v(z)^{2}} =\frac{p^{2}(z)}{v^{3}(z)}(\mathcal{L}\lambda)^{{}^{\prime\prime}}\left(\frac{p(z)}{v(z)}\right)>0,\]
where we use that \(\mathcal{L}\lambda\), being a Legendre transform, is strictly convex. Hence the infimum is uniquely taken at
\[\bar{v}(x)=\frac{p(x)}{\alpha_{u}(x)},\qquad x\in\Box,\]
with \(\alpha_{u}(x)\) solving the equation
\[\log\left(\frac{u(x)}{u(x^{\uparrow})}\right)=\mu(\alpha_{u}(x)),\qquad x\in\Box.\]
Since \(\mu^{\prime}(\alpha)=\alpha({\cal L}\lambda)^{\prime\prime}(\alpha)\) and \({\cal L}\lambda\) is strictly convex (see Fig. 8 in Appendix B), \(\mu\) is strictly increasing and therefore invertible. Consequently,
\[\alpha_{u}(x)=\mu^{-1}\left(\log\left(\frac{u(x)}{u(x^{\uparrow})}\right) \right),\qquad x\in\Box.\]
Putting the above formulas together, we arrive at (recall (E.3))
\[\begin{split} L(u)&=\inf_{v:\ V_{R}\to(0,\infty)}L(u,v)\\ &=-\sum_{x\in{\rm int}(V_{R})}A_{u}(x)\quad-\sum_{x\in\partial V_{ R}\backslash\star}B_{u}(x)\quad-C_{u}(\star)\quad+\sum_{x\in\Box}D_{u}(x)\end{split}\] (E.4)
with (recall (E.2))
\[\begin{split} D_{u}(x)&=-\frac{p(x)}{\alpha_{u}(x)} \left[\log\left(\frac{u(x^{\uparrow})}{u(x)}\right)-({\cal L}\lambda)(\alpha_{u }(x))\right]\\ &=\frac{p(x)}{\alpha_{u}(x)}\big{[}({\cal L}\lambda)(\alpha_{u}( x))-\mu(\alpha_{u}(x))\big{]}\\ &=p(x)\,({\cal L}\lambda)^{{}^{\prime}}(\alpha_{u}(x))=p(x)\, \big{(}({\cal L}\lambda)^{{}^{\prime}}\circ\mu^{-1}\big{)}\left(\log\left( \frac{u(x)}{u(x^{\uparrow})}\right)\right).\end{split}\]
In (B.4) in Appendix B we show that \(({\cal L}\lambda)^{\prime}\circ\mu^{-1}=\lambda^{-1}\). Moreover, in (B.10) in Appendix B we show that \((\lambda^{-1}\circ\log)=S\) with
\[S(\beta)=d+1-\beta-\frac{d}{\beta},\qquad\beta\in(0,\sqrt{d}\,].\]
Since \(S\) has domain \((0,\sqrt{d}\,]\), \(D_{u}(x)\) is only defined when \(u(x)/u(x^{\uparrow})\leq\sqrt{d}\), in which case
\[D_{u}(x)=p(x)\,S\left(\frac{u(x)}{u(x^{\uparrow})}\right),\qquad x\in\Box.\] (E.5)
\(\blacktriangleright\)\(u(x)/u(x^{\uparrow})>\sqrt{d}\): In this case \(\frac{\partial L(u,v)}{\partial v(z)}>0\), the infimum is uniquely taken at \(\bar{v}(x)=0\), and
\[D_{u}(x)=p(x)\,(\sqrt{d}-1)^{2}=p(x)\,S(\sqrt{d}),\qquad x\in\Box,\]
where we use (B.9). Note that the right-hand side does not depend on \(u\).
**4.** Next, we compute the supremum over \(u\). The first derivatives of \(L\) are
\[\begin{split} z&\in{\rm int}(V_{R})\backslash{\cal O }\colon\quad\frac{\partial L(u)}{\partial u(z)}=\frac{\sum_{y\sim z}u(y)}{u^{2} (z)}\,p(z)-\sum_{y\sim z}\frac{1}{u(y)}\,p(y),\\ z&={\cal O}\colon\qquad\qquad\frac{\partial L(u)}{ \partial u({\cal O})}=\frac{\sum_{y\sim{\cal O}}u(y)}{u({\cal O})^{2}}\,p({ \cal O})-\sum_{y:y^{\uparrow}={\cal O}}\frac{1}{u(y)}p(y)-\frac{d}{u(\star)} \,p(\star),\\ z&=\star\colon\qquad\qquad\frac{\partial L(u)}{ \partial u(\star)}=-\frac{1}{u({\cal O})}\,p({\cal O})+\frac{u(\star^{ \uparrow})+du({\cal O})}{u(\star)^{2}}\,p(\star),\\ z&\in\partial V_{R}\backslash\star\colon\qquad\frac{ \partial L(u)}{\partial u(z)}=-\frac{1}{u(z^{\uparrow})}\,p(z^{\uparrow})+ \frac{u(z^{\uparrow})+du(\Box_{z})}{u(z)^{2}}\,p(z)\\ &\qquad\qquad\qquad+\left[\frac{u(\Box_{z})}{u(z)^{2}}-\frac{d}{ u(\Box_{z})}\right]p(\Box_{z})1_{\big{\{}\frac{u(z)}{u(z^{\uparrow})}\leq \sqrt{d}\big{\}}},\\ z\in\Box\colon\qquad\qquad\frac{\partial L(u)}{\partial u(z)}=- \frac{d}{u(z^{\uparrow})}\,p(z^{\uparrow})+\left[-\frac{1}{u(z^{\uparrow})}+ \frac{du(z^{\uparrow})}{u(z)^{2}}\right]\,p(z)\,1_{\big{\{}\frac{u(z)}{u(z^{ \uparrow})}\leq\sqrt{d}\big{\}}}.\end{split}\] (E.6)
The second derivatives of \(L\) are all \(<0\). The first line in (E.6) can be rewritten as
\[\sum_{y\sim z}u(y)\left[\frac{p(z)}{u^{2}(z)}-\frac{p(y)}{u^{2}(y)}\right],\]
which is zero when
\[\bar{u}(x)=\sqrt{p(x)},\qquad x\in V_{R}.\] (E.7)
Given the choice in (E.7), the fifth line in (E.6) is zero when
\[\bar{u}(x)=\sqrt{\frac{dp(x^{\uparrow})p(x)}{dp(x^{\uparrow})+p(x)}},\qquad x \in\Box.\] (E.8)
Indeed, the derivative is strictly negative when the indicator is \(0\) and therefore the indicator must be \(1\). But the latter is guaranteed by (E.7)-(E.8), which imply that
\[\frac{\bar{u}(x)}{\bar{u}(x^{\uparrow})}=\sqrt{\frac{dp(x)}{dp(x^{\uparrow})+p (x)}}\leq\sqrt{d},\qquad x\in\Box.\]
Given the choice in (E.7)-(E.8), also the fourth line in (E.6) is zero. Thus, only the second and third lines in (E.6) are non-zero, but this is harmless: \(\mathcal{O},\star\) carry a negligible weight in the limit as \(R\to\infty\) because of the constraint \(p(\partial V_{R}\cup\mathcal{O})\leq 1/R\) in Lemma 3.7 (recall (3.1)).
Inserting (E.7)-(E.8) into (E.4) and using (E.3), (E.5), we get the following lower bound:
\[\sup_{u\colon V_{R}\to(0,\infty)}L(u)\] \[\geq-\sum_{x\in\mathrm{int}(V_{R})}A_{\bar{u}}(x)\quad-\sum_{x\in \partial V_{R}\setminus\star}B_{\bar{u}}(x)\quad-C_{\bar{u}}(\star)+\sum_{x \in\Box}D_{\bar{u}}(x)\] \[=-\sum_{x\in\mathrm{int}(V_{R})}\sum_{y\sim x}\sqrt{p(y)p(x)}- \sum_{x\in\partial V_{R}\setminus\star}\sqrt{p(x)}\left(\sqrt{p(x^{\uparrow} )}+d\sqrt{\frac{dp(x)p(\Box_{x})}{dp(x)+p(\Box_{x})}}\right)\] \[\quad-\sqrt{p(\star)}\left(\sqrt{p(\star^{\uparrow})}+d\sqrt{p( \mathcal{O})}\right)\] \[\quad+\sum_{x\in\Box}p(x)\,\left(d+1-\sqrt{d}\left[\sqrt{\frac{p( x)}{dp(x^{\uparrow})+p(x)}}+\sqrt{\frac{dp(x^{\uparrow})+p(x)}{p(x)}}\,\right] \right).\]
**5.** Using the relation \((d+1)p(x)=\sum_{y\sim x}p(x)\), \(x\in\mathrm{int}(V_{R})\), we get from (E.1) that
\[I^{\dagger}_{E(\mathcal{W}^{R})}(p)\geq K^{1}_{R}(p)+K^{2}_{R}(p)\]
with
\[K^{1}_{R}(p) =\sum_{x\in\mathrm{int}(V_{R})}\sum_{y\sim x}\left[p(x)-\sqrt{p(x )p(y)}\,\right]\] \[=\sum_{\{x,y\}\in\hat{E}_{R}}\left(\sqrt{p(x)}-\sqrt{p(y)}\, \right)^{2}+\left[p(\mathcal{O})-\sqrt{p(\mathcal{O})p(\star)}\,\right]-\sum_ {x\in\partial V_{R}}\left[p(x)-\sqrt{p(x)p(x^{\uparrow})}\,\right]\]
\[K_{R}^{2}(p)= \sum_{x\in\partial V_{R}\setminus\star}\left[(d+1)p(x)-\sqrt{p(x)} \left(\sqrt{p(x^{\uparrow})}+d\sqrt{\frac{dp(x)p(\Box_{x})}{dp(x)+p(\Box_{x})}} \right)\right]\] \[\quad+(d+1)p(\star)-\sqrt{p(\star)}\left(\sqrt{p(\star^{\uparrow}) }+d\sqrt{p(\mathcal{O})}\right)\] \[\quad+\sum_{x\in\Box}p(x)\,\left[d+1-\sqrt{d}\,\left(\sqrt{\frac{ p(x)}{dp(x^{\uparrow})+p(x)}}+\sqrt{\frac{dp(x^{\uparrow})+p(x)}{p(x)}}\, \right)\right].\]
The first sum in the right-hand side of \(K_{R}^{1}(p)\) equals the _standard rate function_\(I_{\widehat{E}_{R}}(p)\) given by (1.10), with
\[\widehat{E}_{R}=E_{R}\setminus\{\mathcal{O},\star\}\]
the set of edges in the unit \(\mathcal{W}_{R}\)_without the tadpoles_ and _without the edge_\(\{\mathcal{O},\star\}\) (i.e., \(\widehat{E}_{R}=E(\mathcal{T}_{R}^{*})\); recall Fig. 4). Rearranging and simplifying terms, we arrive at
\[I_{E(\mathcal{W}^{R})}^{\dagger}(p)\geq I_{\widehat{E}_{R}}(p)+K_{R}^{3}(p)\] (E.9)
with
\[K_{R}^{3}(p)=S_{\partial V_{R}\setminus\star}(p)+S_{\mathcal{O},\star}(p)+S_ {(\partial V_{R}\setminus\star)\cup\Box}(p),\]
where
\[S_{\partial V_{R}\setminus\star}(p) =d\sum_{x\in\partial V_{R}\setminus\star}p(x),\] \[S_{\mathcal{O},\star}(p) =\left(\sqrt{p(\mathcal{O})}-\sqrt{p(\star)}\right)^{2}+(d-1) \big{[}p(\star)-\sqrt{p(\mathcal{O})p(\star)}\,\big{]},\] \[S_{(\partial V_{R}\setminus\star)\cup\Box}(p) =-\sum_{x\in\partial V_{R}\setminus\star}p(x)\,d\sqrt{\frac{dp( \Box_{x})}{dp(x)+p(\Box_{x})}}\] \[\quad+\sum_{x\in\partial V_{R}\setminus\star}p(\Box_{x})\,\left( d+1-\sqrt{d}\,\left[\sqrt{\frac{p(\Box_{x})}{dp(x)+p(\Box_{x})}}+\sqrt{\frac{dp(x)+p( \Box_{x})}{p(\Box_{x})}}\,\right]\right).\] (E.10)
**6.** Since \(\sqrt{p(\mathcal{O})p(\star)}\leq\frac{1}{2}[p(\mathcal{O})+p(\star)]\), the boundary constraint \(\sum_{x\in\partial V_{R}\cup\mathcal{O}}p(x)\leq 1/R\) implies that \(S_{\partial V_{R}\setminus\star}(p)+S_{\mathcal{O},\star}(p)=O(1/R)\). The same constraint implies that the first sum in \(S_{(\partial V_{R}\setminus\star)\cup\Box}(p)\) is \(O(1/R)\). Hence
\[K_{R}^{3}(p)=O(1/R)+\sum_{x\in\partial V_{R}\setminus\star}p(x)\,F\left(\tfrac {p(\Box_{x})}{p(x)}\right)\]
with
\[F(w)=w\left(d+1-\sqrt{d}\,\left[\sqrt{\frac{w}{d+w}}+\sqrt{\frac{d+w}{w}}\, \right]\right).\]
The map \(w\mapsto F(w)\) is continuous on \((0,\infty)\) with
\[F(w)=\left\{\begin{array}{rl}&-d\sqrt{w}+(d+1)w+O(w^{3/2}),\quad\quad w\downarrow 0,\\ &[(d+1)-2\sqrt{d}\,]\,w+O(w^{-1}),\,\,\,\,w\rightarrow\infty.\end{array}\right.\]
From this we see that if \(d\geq 4\), then there exists a \(C\in(1,\infty)\) such that
\[F(w)+C\geq\left(1-\sqrt{w}\,\right)^{2},\qquad w\in[0,\infty).\] (E.11)
Hence we have the lower bound
\[K_{R}^{3}(p) \geq O(1/R)+\sum_{x\in\partial V_{R}\setminus\star}p(x)\left[-C+ \left(1-\sqrt{\frac{p(\Box_{x})}{p(x)}}\,\right)^{2}\right]\] \[=O(1/R)+\sum_{x\in\partial V_{R}\setminus\star}\left(\sqrt{p(x)} -\sqrt{p(\Box_{x})}\,\right)^{2}.\]
Via (E.9)-(E.10), it follows that
\[I_{E(\mathcal{W}^{R})}^{\dagger}(p)\geq O(1/R)+I_{\widetilde{E}_{R}}(p), \qquad R\in\mathbb{N},\] (E.12)
with \(I_{\widetilde{E}_{R}}(p)\) the _standard rate function_ given by (1.10), with
\[\widetilde{E}_{R}=\widehat{E}_{R}\cup\big{[}\cup_{x\in\partial V_{R}\setminus \star}\{x,\Box_{x}\}\big{]}\]
the set of edges in the unit \(\widetilde{\mathcal{W}}_{R}\) that is obtained from the unit \(\mathcal{W}_{R}\) by _removing the edge_\(\{\mathcal{O},\star\}\) (i.e., \(\widetilde{E}_{R}=E(\widetilde{\mathcal{W}}_{R})\); recall Fig. 5). This completes the proof of (3.8).
**Remark E.1**.: The condition \(d\geq 4\) is needed only in (E.11). For \(d=2,3\) we have \(F(w)+C\geq\theta_{c}(1-\sqrt{w}\,)^{2}\) with \(\theta_{c}=d+1-2\sqrt{d}\in(0,1)\). Consequently, the edges \(\{x,\Box_{x}\}\), \(x\in\partial V_{R}\setminus\star\), carry a weight that is smaller than that of the edges in \(\mathcal{T}\), which may cause the optimal \(p\) to stick to the boundary as \(R\to\infty\), in which case we do not have (E.12). \(\spadesuit\)
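The role of the threshold \(d\geq 4\) in (E.11) can be illustrated numerically. The sketch below (the constant \(C=2\) and the grid are illustrative choices of ours, not taken from the text) confirms (E.11) for \(d=4\) on a wide grid, shows that the same inequality fails for \(d=2\), and checks the weaker bound with the factor \(\theta_{c}\) of Remark E.1.

```python
import numpy as np

def F(w, d):
    # F from the display preceding (E.11)
    return w * (d + 1 - np.sqrt(d) * (np.sqrt(w / (d + w)) + np.sqrt((d + w) / w)))

w = np.logspace(-6, 6, 200000)             # grid approximating (0, infinity)
C = 2.0                                    # an illustrative constant in (1, infinity)

assert np.all(F(w, 4) + C >= (1 - np.sqrt(w)) ** 2)            # (E.11) for d = 4
assert not np.all(F(w, 2) + C >= (1 - np.sqrt(w)) ** 2)        # the same bound fails for d = 2
theta_c = 2 + 1 - 2 * np.sqrt(2)                               # theta_c = d + 1 - 2 sqrt(d), d = 2
assert np.all(F(w, 2) + C >= theta_c * (1 - np.sqrt(w)) ** 2)  # Remark E.1 version for d = 2
print("checks passed")
```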
### Limit of the upper variational formula
Note that
\[\widetilde{\mathcal{W}}_{R}\subseteq\mathcal{T},\]
with \(\mathcal{T}\) the infinite tree. Consequently,
\[I_{\widetilde{E}_{R}}(p)=I_{E(\mathcal{T})}(p)-(d-1)\sum_{x\in\partial V_{R} \setminus\star}p(x),\qquad\forall\,p\in\mathcal{P}(V(\mathcal{T}))\colon \operatorname{supp}(p)=V(\widetilde{\mathcal{W}}_{R}),\]
where the sum compensates for the contribution coming from the edges in \(\mathcal{T}\) that link the vertices in \(\partial V_{R}\setminus\star\) to the vertices one layer deeper in \(\mathcal{T}\) that are not tadpoles. Since this sum is \(O(1/R)\), we obtain (recall (3.2))
\[\chi_{R}^{+}(\varrho) =\inf_{p\in\mathcal{P}(V(\mathcal{W}_{R}))\colon\atop p(\partial V_{R})\leq 1/R}\,\left\{I_{E(\mathcal{W}_{R})}^{\dagger}(p)+\varrho J_{V(\mathcal{W}_{R})}(p)\right\}\] \[\geq O(1/R)+\inf_{p\in\mathcal{P}(V(\mathcal{T}))\colon\atop \operatorname{supp}(p)=V(\widetilde{\mathcal{W}}_{R}),\,p(\partial_{\widetilde{\mathcal{W}}_{R}})\leq 1/R}\left\{I_{E(\mathcal{T})}(p)+\varrho J_{V(\mathcal{T})}(p)\right\}\] \[\geq O(1/R)+\chi_{\mathcal{T}}(\varrho),\]
where the last inequality follows after dropping the constraint under the infimum and recalling (1.14). This completes the proof of (3.9). |
2305.04698 | Direct Constructions of Multiple Shift Complementary Sets of Flexible
Lengths | Golay complementary set (GCS) plays a vital role in reducing peak-to-mean
envelope power ratio (PMEPR) in orthogonal frequency division multiplexing
(OFDM). A more general version of GCS is a multiple shift complementary set
(MSCS), where by relaxing the condition of zero auto-correlation sum throughout
all the non-zero time shifts to the integer multiples of some fixed time shift,
more sequence sets can be made available. In this paper, we propose direct
constructions of MSCSs with flexible and arbitrary lengths and flexible set
sizes, by using multivariable functions, which have not been reported before. | Abhishek Roy, Sudhan Majhi | 2023-05-08T13:30:21Z | http://arxiv.org/abs/2305.04698v2 | # Direct Constructions of Multiple Shift Complementary Sets of Flexible Lengths
###### Abstract
Golay complementary set (GCS) plays a vital role in reducing peak-to-mean envelope power ratio (PMEPR) in orthogonal frequency division multiplexing (OFDM). A more general version of GCS is a multiple shift complementary set (MSCS), where by relaxing the condition of zero auto-correlation sum throughout all the non-zero time shifts to the integer multiples of some fixed time shift, more sequence sets can be made available. In this paper, we propose direct constructions of MSCSs with flexible and arbitrary lengths and flexible set sizes, by using multivariable functions, which have not been reported before.
Golay complementary set (GCS), orthogonal frequency division multiplexing (OFDM), multiple shift complementary set (MSCS), multivariable function, peak-to-mean envelope power ratio (PMEPR).
## I Introduction
Golay complementary pair (GCP) was first conceptualized in 1951 by Marcel J. E. Golay [1]. It is a pair of sequences which has zero aperiodic auto-correlation function (AACF) sum for all non-zero time shifts. Because of its ideal AACF property, it has been widely used in orthogonal frequency division multiplexing (OFDM) [2, 3], radar [4], channel estimation [5] etc. In OFDM system, GCP carries out the role of reducing peak-to-mean envelope power ratio (PMEPR) [6]. The first direct construction of GCP appears in [6], where \(2^{h}\)-ary (\(h\geq 1\) is an integer) GCPs were constructed by generalized Boolean functions (GBFs). The idea of GCP was extended to Golay complementary set (GCS) which is a set of more than two sequences having zero AACF sum for all non-zero time shifts [7]. Although there are many constructions of GCS in the literature [8, 9, 10, 11, 12, 13], Paterson _et al._ first proposed a direct construction of GCS using GBFs [14]. Like GCPs, GCSs are also used in OFDM system to reduce PMEPR, where it is upper bounded by the number of sequences in the GCS.
The multiple shift complementary set (MSCS) is a more general version of GCS, where the AACF sum is zero for multiples of some fixed time shift. It was first introduced by Xin and Fair as an alternative to GCS [15]. Later, Chen _et al._ provided a direct construction of MSCSs using GBFs [16] with sequence lengths of the form of a power of two. In [17], the authors studied even-shift complementary pairs, which are a special case of MSCSs, where the AACF sum equals zero when the time shift is even; in this case the number of constituent sequences in the set, i.e., the set size, is \(2\). Recently, Chen _et al._ proposed a direct construction of MSCSs of non-power-of-two length by using GBFs [18], but the length is of the form \(2^{m-1}+2^{t}\), where \(m\geq 2\) and \(1\leq t\leq m-1\). To the best of the authors' knowledge, there is no direct construction of MSCSs of arbitrary lengths and set sizes.
Motivated by this, in this paper, we propose constructions of MSCSs with flexible and arbitrary lengths by using multivariable functions. The lengths of the proposed MSCSs are of the form \(p_{1}^{m_{1}}p_{2}^{m_{2}}\ldots p_{k}^{m_{k}}\) and \(p_{1}^{m_{1}}p_{2}^{m_{2}}\ldots p_{k}^{m_{k}}p_{k+1}\), where \(p_{i}\)'s are prime numbers and \(m_{i}\geq 1\) are integers \(\forall i=1,2,\ldots,k\). The set sizes of the MSCSs are of the form \(p_{1}p_{2}\ldots p_{k}\) and \(p_{k+1}\).
The rest of the paper is organized as follows. In Section II, some definitions are provided. Later in Section III, the main construction of the MSCSs are proposed. The PMEPRs of the proposed constructions are also investigated in this section. Finally, in Section IV, concluding remarks have been given.
## II Preliminaries
**Definition 1**: _Let \(\mathbf{a}=(a_{0},a_{1},\ldots,a_{L-1})\) and \(\mathbf{b}=(b_{0},b_{1},\ldots,b_{L-1})\) be two complex-valued sequence of length \(L\). Then the aperiodic cross-correlation function (ACCF) at time shift \(\tau\) is defined by_
\[\rho(\mathbf{a},\mathbf{b})(\tau)=\begin{cases}\sum_{i=0}^{L-1-\tau}a_{i}b_{ i+\tau}^{*},&0\leq\tau<L;\\ \sum_{i=0}^{L-1+\tau}a_{i-\tau}b_{i}^{*},&-L<\tau<0,\end{cases} \tag{1}\]
_where \((\cdot)^{*}\) denotes the complex conjugate. When \(\mathbf{a}=\mathbf{b}\), then it is called AACF and denoted by \(\rho(\mathbf{a})(\tau)\)._
**Definition 2** (GCS): _A set of sequences \(\{\mathbf{a}_{0},\mathbf{a}_{1},\ldots,\mathbf{a}_{M-1}\}\) with length \(L\) is called a GCS if they satisfy_
\[\sum_{i=0}^{M-1}\rho(\mathbf{a}_{i})(\tau)=0,\forall\tau\neq 0. \tag{2}\]
If \(M=2\), then it is called a GCP.
**Definition 3** (MSCS): _A set of sequences \(\{\mathbf{a}_{0},\mathbf{a}_{1},\ldots,\mathbf{a}_{M-1}\}\) with length \(L\) is called an MSCS if for some positive number \(S\) they satisfy_
\[\sum_{i=0}^{M-1}\rho(\mathbf{a}_{i})(\tau)=0,\tau\neq 0\ \&\ \tau\ \text{mod}\ S=0. \tag{3}\]
It is denoted by \((M,L,S)\)-MSCS, where \(M\) is called the set size. It should be noted that when \(S=1\), then it becomes a GCS.
**Definition 4** (Type-II ZCS): _A set of sequences \(\{\mathbf{a}_{0},\mathbf{a}_{1},\ldots,\mathbf{a}_{M-1}\}\) with length \(L\) is called a type-II Z-complementary set (ZCS) if for some positive number \(Z\) they satisfy_
\[\sum_{i=0}^{M-1}\rho(\mathbf{a}_{i})(\tau)=0,L-Z<|\tau|<L. \tag{4}\]
_Here \(Z\) is called the zero correlation zone (ZCZ) width. The set may be denoted as type-II \((M,L,Z)\)-ZCS._
### _Multivariable Function_
Let \(\mathbb{Z}_{p}=\{0,1,\ldots,p-1\}\) be the set of integers modulo \(p\). A multivariable function can be defined as
\[f:\mathbb{Z}_{p_{1}}^{m_{1}}\times\mathbb{Z}_{p_{2}}^{m_{2}}\times\cdots\times \mathbb{Z}_{p_{k}}^{m_{k}}\rightarrow\mathbb{Z}_{\lambda}\]
where \(p_{1},p_{2},\ldots,p_{k}\) are prime numbers, \(m_{i}\geq 1\), \(\forall i=1,2,\ldots,k\) and \(\lambda\) is a positive integer. \(v_{p_{\alpha},1},v_{p_{\alpha},2},\ldots,v_{p_{\alpha},m_{\alpha}}\) are the \(m_{\alpha}\) variables which take values from \(\mathbb{Z}_{p_{\alpha}}\) for \(\alpha=1,2,\ldots,k\). The set of monomials of degree at most \(r\) is given by \(A^{r}=\left\{\prod_{\alpha=1}^{k}\prod_{\beta=1}^{m_{\alpha}}v_{p_{\alpha},\beta}^{j_{\beta}^{\alpha}}:0\leq\sum_{\alpha=1}^{k}\sum_{\beta=1}^{m_{\alpha}}j_{\beta}^{\alpha}\leq r\right\}\). A multivariable function of order \(r\) is a \(\mathbb{Z}_{\lambda}\)-linear combination of the monomials from \(A^{r}\). Let \(\mathbf{u}_{p_{\alpha},i_{\alpha}}\) be the vector representation of \(i_{\alpha}\) with base \(p_{\alpha}\), i.e., \(\mathbf{u}_{p_{\alpha},i_{\alpha}}=(i_{\alpha,1},i_{\alpha,2},\ldots,i_{\alpha,m_{\alpha}})\) where \(i_{\alpha}=\sum_{\gamma=1}^{m_{\alpha}}p_{\alpha}^{\gamma-1}i_{\alpha,\gamma}\). The function rule for a multivariable function is defined as \((\mathbf{u}_{p_{1},i_{1}},\mathbf{u}_{p_{2},i_{2}},\ldots,\mathbf{u}_{p_{k},i_{k}})\longrightarrow f(\mathbf{u}_{p_{1},i_{1}},\mathbf{u}_{p_{2},i_{2}},\ldots,\mathbf{u}_{p_{k},i_{k}})\) mod \(\lambda\). One can associate a \(\mathbb{Z}_{\lambda}\)-valued sequence of length \(p_{1}^{m_{1}}p_{2}^{m_{2}}\ldots p_{k}^{m_{k}}\) corresponding to a multivariable function \(f\) as \(\mathbf{f}=\left(f(\mathbf{u}_{p_{1},i_{1}},\mathbf{u}_{p_{2},i_{2}},\ldots,\mathbf{u}_{p_{k},i_{k}}):i_{1}=0,1,\ldots,p_{1}^{m_{1}}-1;\ldots;i_{k}=0,1,\ldots,p_{k}^{m_{k}}-1\right)\). Also, one can associate a complex-valued sequence of length \(p_{1}^{m_{1}}p_{2}^{m_{2}}\ldots p_{k}^{m_{k}}\) in a similar manner as \(\psi(\mathbf{f})=\left(\omega_{\lambda}^{f(\mathbf{u}_{p_{1},i_{1}},\mathbf{u}_{p_{2},i_{2}},\ldots,\mathbf{u}_{p_{k},i_{k}})}:i_{1}=0,1,\ldots,p_{1}^{m_{1}}-1;\ldots;i_{k}=0,1,\ldots,p_{k}^{m_{k}}-1\right)\), where \(\omega_{\lambda}=\exp(2\pi\sqrt{-1}/\lambda)\).
## III Construction
In this section, we propose constructions of MSCSs by using multivariable functions.
**Theorem 1**: _Let \(\pi\) be a permutation on the set \(\{s,s+1,\ldots,m\}\) for integers \(m\geq 1\) and \(s\geq 1\). Let \(p\) be a prime and \(\lambda\) be a positive integer such that \(p\mid\lambda\). Let \(f:\mathbb{Z}_{p}^{m}\rightarrow\mathbb{Z}_{\lambda}\) be defined such that_
\[\begin{split} f(v_{1},v_{2},\ldots,v_{m})=&\frac{ \lambda}{p}\sum_{i=s}^{m-1}v_{\pi(i)}v_{\pi(i+1)}+\sum_{i=1}^{m}g_{i}v_{i}+g \\ &+h(v_{1},v_{2},\ldots,v_{s-1}),\end{split} \tag{5}\]
_where \(g_{i},g\in\mathbb{Z}_{\lambda}\) and \(h(v_{1},v_{2},\ldots,v_{s-1})\) is any function \(h:\mathbb{Z}_{p}^{s-1}\rightarrow\mathbb{Z}_{\lambda}\). For \(s=1\), we define \(h=0\). Define \(a^{\gamma}:\mathbb{Z}_{p}^{m}\rightarrow\mathbb{Z}_{\lambda}\) such that_
\[a^{\gamma}=f(v_{1},v_{2},\ldots,v_{m})+\frac{\lambda}{p}v_{\pi(s)}\gamma.\]
_Then \(\{\omega_{\lambda}^{a^{\gamma}}:\gamma\in\mathbb{Z}_{p}\}\) is a \((p,p^{m},p^{s-1})\)-MSCS._
Proof: As \(\rho(\mathbf{a})(-\tau)=\rho^{*}(\mathbf{a})(\tau)\), we shall only prove the result for \(\tau\geq 0\). We have to show that
\[\sum_{\gamma=0}^{p-1}\sum_{i=0}^{p^{m}-1-\tau}\omega_{\lambda}^{(a^{\gamma})_{i }-(a^{\gamma})_{i+\tau}}=0, \tag{6}\]
whenever \(\tau\mod p^{s-1}=0\), where \((a^{\gamma})_{i}=a^{\gamma}(i_{1},i_{2},\ldots,i_{m})\), and \((i_{1},i_{2},\ldots,i_{m})\) is the \(p\)-ary vector representation of \(i\). Let \(j=i+\tau\), where \(\tau\mod p^{s-1}=0\), i.e., \(\tau\) is a multiple of \(p^{s-1}\). We have
\[(a^{\gamma})_{i}-(a^{\gamma})_{j}=(f_{i}-f_{j})+\frac{\lambda}{p}\big{(}i_{\pi( s)}-j_{\pi(s)}\big{)}\gamma. \tag{7}\]
Now, we have two cases.
**Case I**: _Let \(i_{\pi(s)}\neq j_{\pi(s)}\). In this case, we have_
\[\sum_{\gamma=0}^{p-1}\omega_{\lambda}^{\frac{\lambda}{p}(i_{\pi(s)}-j_{\pi(s)}) \gamma}=\sum_{\gamma=0}^{p-1}\omega_{p}^{(i_{\pi(s)}-j_{\pi(s)})\gamma}=0, \tag{8}\]
_as \(\omega_{p}^{(i_{\pi(s)}-j_{\pi(s)})\gamma}\) are the \(p\)-th roots of \(1\) for \(\gamma=0,1,\ldots,p-1\). So, we have_
\[\sum_{\gamma=0}^{p-1}\omega_{\lambda}^{(a^{\gamma})_{i}-(a^{\gamma})_{j}}=0. \tag{9}\]
**Case II**: _Let, \(i_{\pi(s)}=j_{\pi(s)}\). But \(i\neq j\) and \(j=i+\tau\), where \(\tau\) is a multiple of \(p^{s-1}\), i.e., \(\tau=kp^{s-1}\) for some \(1\leq k\leq p^{m-s+1}-1\). Any \(k\) in this range can be written as a \((m-s+1)\)-tuple vector representation form \((k_{1},k_{2},\ldots,k_{m-s+1})\) with base \(p\), where \(k=\sum_{i=1}^{m-s+1}k_{i}p^{i-1}\). So, \(\tau=\sum_{i=1}^{m-s+1}k_{i}p^{i+s-2}\), which implies \(i_{l}=j_{l}\) for \(l=1,2,\ldots,s-1\). Now, \(i\neq j\) implies that \(\exists\) some \(l\in\{s,s+1,\ldots,m\}\) such that \(i_{l}\neq j_{l}\). Let \(\phi\) be the smallest number such that \(i_{\pi(\phi)}\neq j_{\pi(\phi)}\). Let \(i^{\delta}\) be the integer whose \(p\)-ary vector representation is
\((i_{1},i_{2},\ldots,i_{\pi(\phi-1)}-\delta,\ldots,i_{m})\), i.e., the vector that agrees with that of \(i\) in every position except \(\pi(\phi-1)\), where the entry is \(i_{\pi(\phi-1)}-\delta\) (reduced modulo \(p\)), for \(\delta=1,2,\ldots,p-1\). Similarly,
we take \(j^{\delta}\). Now, for \(i_{\pi(\phi-1)}-\delta\geq 0\) and \(j_{\pi(\phi-1)}-\delta\geq 0\), we have
\[(a^{\gamma})_{i^{\delta}}-(a^{\gamma})_{i}=-\delta\bigg{(}\frac{ \lambda}{p}i_{\pi(\phi-2)}+\frac{\lambda}{p}i_{\pi(\phi)}+g_{\pi(\phi-1)}\bigg{)} \tag{10}\]
and
\[(a^{\gamma})_{j^{\delta}}-(a^{\gamma})_{j}=-\delta\bigg{(}\frac{ \lambda}{p}j_{\pi(\phi-2)}+\frac{\lambda}{p}j_{\pi(\phi)}+g_{\pi(\phi-1)} \bigg{)}. \tag{11}\]
But, we have
\[\begin{split}&\big{(}(a^{\gamma})_{i^{\delta}}-(a^{\gamma})_{j^{ \delta}}\big{)}-((a^{\gamma})_{i}-(a^{\gamma})_{j})\\ &=((a^{\gamma})_{i^{\delta}}-(a^{\gamma})_{i})-\big{(}(a^{\gamma })_{j^{\delta}}-(a^{\gamma})_{j}\big{)}\\ &=-\delta\frac{\lambda}{p}\big{(}i_{\pi(\phi)}-j_{\pi(\phi)} \big{)}.\end{split} \tag{12}\]
Also, for \(i_{\pi(\phi-1)}-\delta<0\) and \(j_{\pi(\phi-1)}-\delta<0\), we have
\[\begin{split}&\big{(}(a^{\gamma})_{i^{\delta}}-(a^{\gamma})_{j^{ \delta}}\big{)}-((a^{\gamma})_{i}-(a^{\gamma})_{j})\\ &=(p-\delta)\frac{\lambda}{p}\big{(}i_{\pi(\phi)}-j_{\pi(\phi)} \big{)}.\end{split} \tag{13}\]
But \(\omega_{\lambda}^{(p-\delta)\frac{\lambda}{p}\big{(}i_{\pi(\phi)}-j_{\pi( \phi)}\big{)}}=\omega_{\lambda}^{-\delta\frac{\lambda}{p}\big{(}i_{\pi(\phi)} -j_{\pi(\phi)}\big{)}}\). So, considering all the possibilities, we have
\[\begin{split}&\sum_{\delta=1}^{p-1}\omega_{\lambda}^{((a^{\gamma})_{i^{\delta}}-(a^{\gamma})_{j^{\delta}})-((a^{\gamma})_{i}-(a^{\gamma})_{j})}=\sum_{\delta=1}^{p-1}\omega_{p}^{\delta(j_{\pi(\phi)}-i_{\pi(\phi)})}\\ &\implies\sum_{\delta=1}^{p-1}\omega_{\lambda}^{((a^{\gamma})_{i^{\delta}}-(a^{\gamma})_{j^{\delta}})-((a^{\gamma})_{i}-(a^{\gamma})_{j})}=-1\\ &\implies\sum_{\delta=1}^{p-1}\omega_{\lambda}^{((a^{\gamma})_{i^{\delta}}-(a^{\gamma})_{j^{\delta}})}+\omega_{\lambda}^{((a^{\gamma})_{i}-(a^{\gamma})_{j})}=0.\end{split} \tag{14}\]
Hence, we have the result.
In the following example, we illustrate _Theorem 1_.
**Example 1**: _Let \(p=3\), \(m=3\), \(s=2\), \(\lambda=6\) and \(\pi\) be the permutation on \(\{2,3\}\) such that \(\pi(2)=2\), \(\pi(3)=3\). From Theorem 1, we can construct the function \(f:\mathbb{Z}_{3}^{3}\to\mathbb{Z}_{6}\), where \(f(v_{1},v_{2},v_{3})=2v_{2}v_{3}+5\). Then \(\{\omega_{6}^{a^{\gamma}}:\gamma\in\mathbb{Z}_{3}\}\) is a \((3,27,3)\)-MSCS, where \(a^{\gamma}=f+2v_{2}\gamma\)._
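The MSCS property claimed in Example 1 can be verified numerically. The following Python sketch (ours; function and variable names are illustrative) builds the three length-27 sequences and checks that the AACF sums vanish at every non-zero shift that is a multiple of \(3\).

```python
import numpy as np

p, m, s, lam = 3, 3, 2, 6                    # parameters of Example 1
L = p ** m                                   # sequence length 27
omega = np.exp(2j * np.pi / lam)

def seq(gamma):
    """Complex sequence omega^{a^gamma}, with a^gamma = 2 v2 v3 + 5 + 2 v2 gamma (mod 6)."""
    vals = []
    for i in range(L):
        v1, v2, v3 = i % p, (i // p) % p, (i // p ** 2) % p   # base-3 digits, v1 least significant
        vals.append((2 * v2 * v3 + 5 + 2 * v2 * gamma) % lam)
    return omega ** np.array(vals)

def aacf(x, tau):
    """Aperiodic auto-correlation of x at non-negative shift tau."""
    return np.sum(x[: len(x) - tau] * np.conj(x[tau:]))

seqs = [seq(g) for g in range(p)]
for tau in range(p ** (s - 1), L, p ** (s - 1)):              # shifts 3, 6, ..., 24
    assert abs(sum(aacf(x, tau) for x in seqs)) < 1e-9
print("(3, 27, 3)-MSCS property verified")
```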
The set \(\{\omega_{\lambda}^{a^{\gamma}}:\gamma=0,1,\ldots,p-1\}\) constructed in _Theorem 1_ has a further interesting property, which is straightforward to verify; we record it in the next corollary.
**Corollary 1**: _Let \(a^{\gamma}\) be the function defined in Theorem 1. Then we have_
\[\sum_{\gamma=0}^{p-1}\rho(\omega_{\lambda}^{a^{\gamma}})(\tau)=0, \tag{15}\]
_when \(|\tau|>p^{s-1}\)._
We shall prove the claim for \(\tau>0\) as \(\rho(\mathbf{a})(-\tau)=\rho^{*}(\mathbf{a})(\tau)\). For \(\tau>p^{s-1}\), let \(j=i+\tau\). As \(\tau>p^{s-1}\), it is not possible that \(i_{k}=j_{k},\forall k\in\{s,s+1,\ldots,m\}\). Now, \((a^{\gamma})_{i}-(a^{\gamma})_{j}=(f_{i}-f_{j})+\frac{\lambda}{p}\big{(}i_{\pi(s)}-j_{\pi(s)}\big{)}\gamma\). So, if \(i_{\pi(s)}\neq j_{\pi(s)}\), then we have
\[\sum_{\gamma=0}^{p-1}\omega_{\lambda}^{(a^{\gamma})_{i}-(a^{\gamma})_{j}}= \omega_{\lambda}^{f_{i}-f_{j}}\sum_{\gamma=0}^{p-1}\omega_{\lambda}^{\frac{ \lambda}{p}\big{(}i_{\pi(s)}-j_{\pi(s)}\big{)}\gamma}=0. \tag{16}\]
If \(i_{\pi(s)}=j_{\pi(s)}\), then in a similar manner described in _Theorem 1_, we find \(i^{\delta}\) and \(j^{\delta}\) for \(\delta=1,2,\ldots,p-1\). Now, arguing similar to _Theorem 1_, we have
\[\sum_{\gamma=0}^{p-1}\Bigg{(}\!\sum_{\delta=1}^{p-1}\!\omega_{\lambda}^{((a^{ \gamma})_{i^{\delta}}-(a^{\gamma})_{j^{\delta}})}+\omega_{\lambda}^{((a^{\gamma}) _{i}-(a^{\gamma})_{j})}\Bigg{)}=0. \tag{17}\]
Hence, the result.
**Remark 1**: _Corollary 1_ shows that the set \(\{\omega_{\lambda}^{a^{\gamma}}:\gamma=0,1,\ldots,p-1\}\) is in fact a type-II \((p,p^{m},p^{m}-p^{s-1})\)-ZCS. Type-II ZCSs have applications in wireless communication. For example, they can be applied in wideband wireless communication systems having a large minimum interfering signal delay (ISD) for removing asynchronous interference [19, 20]. Type-II ZCSs are preferable to type-I ZCSs in these scenarios to mitigate inter-symbol interference [21]. Although there are some indirect [22] and direct [20] constructions of type-II ZCSs in the literature, the proposed construction is direct and also provides a flexible ZCZ width when the length is a power of a prime.
Next, we generalize _Theorem 1_ for MSCSs.
**Theorem 2**: _Let \(p_{1},p_{2},\ldots,p_{k}\) be \(k\) primes and \(\lambda\) be a positive integer such that \(p_{\alpha}\mid\lambda,\forall\alpha=1,2,\ldots,k\); and \(\pi_{\alpha}\) be permutations on the sets \(I_{\alpha}=\{s_{\alpha},s_{\alpha}+1,\ldots,m_{\alpha}\}\) for \(s_{\alpha}\geq 1\) and \(m_{\alpha}\geq 1\), where \(\alpha=1,2,\ldots,k\). Let \(f_{\alpha}:\mathbb{Z}_{p_{\alpha}}^{m_{\alpha}}\to\mathbb{Z}_{\lambda}\) be defined by_
\[\begin{split}& f_{\alpha}(v_{p_{\alpha},1},v_{p_{\alpha},2},\ldots,v_{p_ {\alpha},m_{\alpha}})\\ &=\frac{\lambda}{p_{\alpha}}\sum_{i=s_{\alpha}}^{m_{\alpha}-1}v_{p _{\alpha},\pi_{\alpha}(i)}v_{p_{\alpha},\pi_{\alpha}(i+1)}+\sum_{i=s_{\alpha}}^{m_ {\alpha}}g_{p_{\alpha},i}v_{p_{\alpha},i}+g_{p_{\alpha}}\\ &\quad+h_{\alpha}(v_{p_{\alpha},1}v_{p_{\alpha},2},\ldots,v_{p_ {\alpha},s_{\alpha}-1}),\end{split} \tag{18}\]
_where \(g_{p_{\alpha},i},g_{p_{\alpha}}\in\mathbb{Z}_{\lambda}\), \(h_{\alpha}(v_{p_{\alpha},1}v_{p_{\alpha},2},\ldots,v_{p_{\alpha},s_{\alpha}-1})\) is any function \(h_{\alpha}:\mathbb{Z}_{p_{\alpha}}^{s_{\alpha}-1}\to\mathbb{Z}_{\lambda}\) and \(h_{\alpha}=0\) when \(s_{\alpha}=1\). We define_
\[a^{\gamma}:\mathbb{Z}_{p_{1}}^{m_{1}}\times\mathbb{Z}_{p_{2}}^{m_{2}}\times \cdots\times\mathbb{Z}_{p_{k}}^{m_{k}}\to\mathbb{Z}_{\lambda},\]
_for \(\boldsymbol{\gamma}=(\gamma_{1},\gamma_{2},\ldots,\gamma_{k})\in\mathbb{Z}_{p_{1}} \times\mathbb{Z}_{p_{2}}\times\cdots\times\mathbb{Z}_{p_{k}}\) such that_
\[a^{\gamma}=\sum_{\alpha=1}^{k}f_{\alpha}+\sum_{\alpha=1}^{k}\frac{\lambda}{p_{\alpha}}v_{p_{\alpha},\pi_{\alpha}(s_{\alpha})}\gamma_{\alpha}.\]

_Then \(\{\omega_{\lambda}^{a^{\gamma}}:\boldsymbol{\gamma}\in\mathbb{Z}_{p_{1}}\times\mathbb{Z}_{p_{2}}\times\cdots\times\mathbb{Z}_{p_{k}}\}\) is a \(\left(\prod_{\alpha=1}^{k}p_{\alpha},\prod_{\alpha=1}^{k}p_{\alpha}^{m_{\alpha}},\prod_{\alpha=1}^{k}p_{\alpha}^{s_{\alpha}-1}\right)\)-MSCS._

_Proof:_ We proceed by induction on \(k\). For \(k=1\) the statement is Theorem 1. Assume that it holds for \(k=n\) and consider \(k=n+1\). We can write \(a^{\gamma}=R+S\),
where
\[R=\sum_{\alpha=1}^{n}f_{\alpha}+\sum_{\alpha=1}^{n}\frac{\lambda}{p_{\alpha}}v_{p_ {\alpha},\pi_{\alpha}(s_{\alpha})}\gamma_{\alpha}, \tag{21}\]
and
\[S=f_{n+1}+\frac{\lambda}{p_{n+1}}v_{p_{n+1},\pi_{n+1}(s_{n+1})}\gamma_{n+1}. \tag{22}\]
Now, \(\omega_{\lambda}^{a^{\gamma}}=\omega_{\lambda}^{S}\otimes\omega_{\lambda}^{R}\), where \(\otimes\) denotes the Kronecker product. It is easy to observe that the length of each sequence in \(\{\omega_{\lambda}^{a^{\gamma}}:\mathbf{\gamma}\in\mathbb{Z}_{p_{1}}\times\mathbb{Z}_{p_{2}}\times\cdots\times\mathbb{Z}_{p_{k}}\}\) is \(L=\prod_{\alpha=1}^{n+1}p_{\alpha}^{m_{\alpha}}\) and the number of sequences is \(M=\prod_{\alpha=1}^{n+1}p_{\alpha}\). We need to show that
\[\sum_{\mathbf{\gamma}\in\mathbb{Z}_{p_{1}}\times\mathbb{Z}_{p_{2}}\times\cdots \times\mathbb{Z}_{p_{n+1}}}\rho(\omega_{\lambda}^{a^{\gamma}})(\tau)=0 \tag{23}\]
when \(\tau\mod\prod_{\alpha=1}^{n+1}p_{\alpha}^{s_{\alpha}-1}=0\). We note that for two sequences \(\mathbf{a}\) and \(\mathbf{b}\) having lengths \(L_{1}\) and \(L_{2}\), respectively, we have
\[\rho(\mathbf{a}\otimes\mathbf{b})(\tau)= \rho(\mathbf{a})\bigg{(}\bigg{\lfloor}\frac{\tau}{L_{2}}\bigg{\rfloor} \bigg{)}\rho(\mathbf{b})(\tau\mod L_{2})\] \[+ \Delta_{L_{2}}\rho(\mathbf{a})\bigg{(}\bigg{\lfloor}\frac{\tau}{ L_{2}}\bigg{\rfloor}+1\bigg{)}\rho(\mathbf{b})(\tau\mod L_{2}-L_{2}), \tag{24}\]
where
\[\Delta_{L_{2}}=\begin{cases}0,&\tau\mod L_{2}=0;\\ 1,&\text{otherwise},\end{cases} \tag{25}\]
and \(\lfloor\cdot\rfloor\) is the floor function. So, we can write
\[\sum_{\mathbf{\gamma}\in\mathbb{Z}_{p_{1}}\times\mathbb{Z}_{p_{2}} \times\cdots\times\mathbb{Z}_{p_{n+1}}}\rho(\omega_{\lambda}^{a^{\gamma}})(\tau)\] \[=\bigg{[}\sum_{\gamma_{n+1}\in\mathbb{Z}_{p_{n+1}}}\rho(\omega_{ \lambda}^{S})\bigg{(}\bigg{\lfloor}\frac{\tau}{L_{2}}\bigg{\rfloor}\bigg{)}\] \[\quad\times\sum_{\mathbf{\gamma}^{\prime}\in\mathbb{Z}_{p_{1}}\times \cdots\times\mathbb{Z}_{p_{n}}}\rho(\omega_{\lambda}^{R})(\tau\mod L_{2})\bigg{]} \tag{26}\] \[+\bigg{[}\Delta_{L_{2}}\times\sum_{\gamma_{n+1}\in\mathbb{Z}_{p_ {n+1}}}\rho(\omega_{\lambda}^{S})\bigg{(}\bigg{\lfloor}\frac{\tau}{L_{2}} \bigg{\rfloor}+1\bigg{)}\] \[\quad\times\sum_{\mathbf{\gamma}^{\prime}\in\mathbb{Z}_{p_{1}}\times \cdots\times\mathbb{Z}_{p_{n}}}\rho(\omega_{\lambda}^{R})(\tau\mod L_{2}-L_{2 })\bigg{]}\]
where \(\mathbf{\gamma}=(\mathbf{\gamma}^{\prime},\gamma_{n+1})\),\(\mathbf{\gamma}^{\prime}=(\gamma_{1},\gamma_{2},\ldots,\gamma_{n})\) and \(L_{2}=\prod_{\alpha=1}^{n}p_{\alpha}^{m_{\alpha}}\). Now, let \(\tau\mod\prod_{\alpha=1}^{n+1}p_{\alpha}^{s_{\alpha}-1}=0\), i.e., \(\tau=\beta\prod_{\alpha=1}^{n+1}p_{\alpha}^{s_{\alpha}-1}\) for some \(\beta\in\{1,2,\ldots,\prod_{\alpha=1}^{n+1}p_{\alpha}^{m_{\alpha}-s_{\alpha}+1 }-1\}\). But either \(\beta=\beta_{0}\prod_{\alpha=1}^{n}p_{\alpha}^{m_{\alpha}-s_{\alpha}+1}\) for some integer \(\beta_{0}\), or \(\prod_{\alpha=1}^{n}p_{\alpha}^{m_{\alpha}-s_{\alpha}+1}\nmid\beta\). Now, we have two cases.
_Case I:_ Let \(\beta=\beta_{0}\prod_{\alpha=1}^{n}p_{\alpha}^{m_{\alpha}-s_{\alpha}+1}\) for some integer \(\beta_{0}\). Then \(\tau\mod L_{2}=0\) and \(\Delta_{L_{2}}=0\). Also, in this case, \(\Big{\lfloor}\frac{\tau}{L_{2}}\Big{\rfloor}=\beta_{0}p_{n+1}^{s_{n+1}-1}\), i.e., \(\Big{\lfloor}\frac{\tau}{L_{2}}\Big{\rfloor}\mod p_{n+1}^{s_{n+1}-1}=0\). So, from our assumption, we have
\[\sum_{\gamma_{n+1}\in\mathbb{Z}_{p_{n+1}}}\rho(\omega_{\lambda}^{S})\bigg{(} \bigg{\lfloor}\frac{\tau}{L_{2}}\bigg{\rfloor}\bigg{)}=0. \tag{27}\]
_Case II:_ Suppose that \(\prod_{\alpha=1}^{n}p_{\alpha}^{m_{\alpha}-s_{\alpha}+1}\) does not divide \(\beta\). Then \(\tau\mod L_{2}\neq 0\) and \(\Delta_{L_{2}}=1\). Let \(\tau\mod L_{2}=\tau_{1}\), where \(\tau_{1}=\tau-\beta_{1}L_{2}\) for some integer \(\beta_{1}\) and \(0<\tau_{1}<L_{2}\). But \(\prod_{\alpha=1}^{n}p_{\alpha}^{s_{\alpha}-1}\) divides both \(\tau\) and \(L_{2}\). Hence \(\tau_{1}\mod\prod_{\alpha=1}^{n}p_{\alpha}^{s_{\alpha}-1}=0\), i.e., \((\tau\mod L_{2})\mod\prod_{\alpha=1}^{n}p_{\alpha}^{s_{\alpha}-1}=0\). But from our assumption, we have
\[\sum_{\mathbf{\gamma}^{\prime}\in\mathbb{Z}_{p_{1}}\times\cdots\times\mathbb{Z}_{p_ {n}}}\rho(\omega_{\lambda}^{R})(\tau\mod L_{2})=0. \tag{28}\]
Arguing in a similar manner, we can say
\[\sum_{\mathbf{\gamma}^{\prime}\in\mathbb{Z}_{p_{1}}\times\cdots\times\mathbb{Z}_{p_ {n}}}\rho(\omega_{\lambda}^{R})(\tau\mod L_{2}-L_{2})=0. \tag{29}\]
Hence, the case for \(k=n+1\) is proved and we have the result.
**Remark 2**: _If \(s_{\alpha}=1,\forall\alpha=1,2,\ldots,k\), then the set \(\{\omega_{\lambda}^{a^{\gamma}}:\mathbf{\gamma}\in\mathbb{Z}_{p_{1}}\times\mathbb{ Z}_{p_{2}}\times\cdots\times\mathbb{Z}_{p_{k}}\}\) becomes a GCS of length \(L=p_{1}^{m_{1}}p_{2}^{m_{2}}\ldots p_{k}^{m_{k}}\) and set size \(M=p_{1}p_{2}\ldots p_{k}\), which is given in [13]. So, the construction of GCS in [13] is a special case of the proposed construction._
Now, we propose another construction of MSCSs.
**Theorem 3**: _Let \(p_{1},p_{2},\ldots,p_{k},p_{k+1}\) be \((k+1)\) distinct primes and \(\lambda\) be a positive integer such that \(p_{\alpha}|\lambda,\forall\alpha\). Let \(a^{\gamma}:\mathbb{Z}_{p_{1}}^{m_{1}}\times\mathbb{Z}_{p_{2}}^{m_{2}}\times\cdots\times\mathbb{Z}_{p_{k}}^{m_{k}}\rightarrow\mathbb{Z}_{\lambda}\) be the same function given in Theorem 2 for \(s_{\alpha}=1,\forall\alpha\in\{1,2,\ldots,k\}\). Then the set \(\{\omega_{\lambda}^{b^{\gamma}}:\mathbf{\gamma}\in\mathbb{Z}_{p_{1}}\times\mathbb{Z}_{p_{2}}\times\cdots\times\mathbb{Z}_{p_{k}}\}\) is a \((\prod_{\alpha=1}^{k}p_{\alpha},p_{k+1}\prod_{\alpha=1}^{k}p_{\alpha}^{m_{\alpha}},p_{k+1})\)-MSCS, where \(b^{\gamma}:\mathbb{Z}_{p_{1}}^{m_{1}}\times\mathbb{Z}_{p_{2}}^{m_{2}}\times\cdots\times\mathbb{Z}_{p_{k}}^{m_{k}}\times\mathbb{Z}_{p_{k+1}}\rightarrow\mathbb{Z}_{\lambda}\) is defined by \(b^{\gamma}=a^{\gamma}+g_{p_{k+1},1}v_{p_{k+1},1}+g_{p_{k+1}}\), where \(g_{p_{k+1},1},g_{p_{k+1}}\in\mathbb{Z}_{\lambda}\)._
_Proof:_ The proof follows from the fact that \(b^{\gamma}=f_{p_{k+1}}\otimes a^{\gamma}\), where \(f_{p_{k+1}}=g_{p_{k+1},1}v_{p_{k+1},1}+g_{p_{k+1}}\), and using (24) and _Remark 2_, we can get
\[\sum_{\boldsymbol{\gamma}\in\mathbb{Z}_{p_{1}}\times\mathbb{Z}_ {p_{2}}\times\ldots\times\mathbb{Z}_{p_{k}}}\rho(\omega_{\lambda}^{a^{\gamma}})( \tau\mod L_{2})= 0, \tag{30}\] \[\sum_{\boldsymbol{\gamma}\in\mathbb{Z}_{p_{1}}\times\mathbb{Z}_ {p_{2}}\times\ldots\times\mathbb{Z}_{p_{k}}}\rho(\omega_{\lambda}^{a^{\gamma}})( \tau\mod L_{2}-L_{2})= 0,\]
when \(\tau\mod p_{k+1}=0\) and \(L_{2}=\prod_{\alpha=1}^{k}p_{\alpha}^{m_{\alpha}}\). \(\blacksquare\)
### _Peak-to-Mean Envelope Power Ratio (PMEPR)_
Let \(\mathbf{x}=(x_{0},x_{1},\ldots,x_{L-1})\) be a \(\mathbb{Z}_{\lambda}\)-valued sequence. Then the OFDM signal is the real part of the complex envelope
\[P_{\mathbf{x}}(t)=\sum_{i=0}^{L-1}\omega_{\lambda}^{x_{i}+\lambda f_{i}t} \tag{31}\]
where \(f_{i}=f+i\Delta f\), \(f\) is a constant frequency, and \(\Delta f\) is an integer multiple of the OFDM symbol rate. The term \(\frac{|P_{\mathbf{x}}(t)|^{2}}{L}\) is called the instantaneous-to-average power ratio (IAPR).
\[\text{PMEPR}(\mathbf{x})=\sup_{0\leq\Delta ft\leq 1}\frac{|P_{\mathbf{x}}(t)|^{2}}{L}. \tag{32}\]
We shall use \(\text{PMEPR}(\mathbf{x})\) and \(\text{PMEPR}(\psi(\mathbf{x}))\) interchangeably whenever the context is clear. Similarly, for a set \(A=\{\mathbf{x}_{0},\mathbf{x}_{1},\ldots,\mathbf{x}_{M-1}\}\) of sequences the PMEPR can be defined as
\[\text{PMEPR}(A)=\max\{\text{PMEPR}(\mathbf{x}_{i}):i=0,1,\ldots,M-1\}. \tag{33}\]
### _Bound for PMEPR_
In this subsection, we calculate the PMEPR of the constructed MSCS. First we state a result regarding the PMEPR which is available in the literature.
**Lemma 1** ([15]): _If \(\mathbf{x}\) is a sequence of a \((2,L,S)\)-MSCS, then \(\text{PMEPR}(\mathbf{x})\) is upper bounded by \(2S\)._
A similar statement can be made for a \((M,L,S)\)-MSCS using methods similar to [16], which we provide in the following lemma.
**Lemma 2**: _If \(\mathbf{x}\) is a sequence of a \((M,L,S)\)-MSCS, then \(\text{PMEPR}(\mathbf{x})\) is upper bounded by \(MS\)._
_Proof:_ We briefly sketch the proof here. Let \(\{\mathbf{a}_{0},\mathbf{a}_{1},\ldots,\mathbf{a}_{M-1}\}\) be an \((M,L,S)\)-MSCS, where \(\mathbf{a}_{i}=(a_{i,0},a_{i,1},\ldots,a_{i,L-1})\) for all \(i\). We let \(\zeta=\exp(\frac{2\pi\sqrt{-1}}{S})\) be a primitive \(S\)-th root of unity. We define the set of sequences \(\mathbf{a}_{i}^{u}=\big{(}a_{i,0}\zeta^{0u},a_{i,1}\zeta^{1u},\ldots,a_{i,(L-1)}\zeta^{(L-1)u}\big{)}\) for \(u=0,1,\ldots,S-1\) and \(i=0,1,\ldots,M-1\). Then we have
\[\sum_{u=0}^{S-1}\bigl{|}P_{\mathbf{a}_{i}^{u}}(t)\bigr{|}^{2}= \sum_{u=0}^{S-1}\biggl{|}\sum_{k=0}^{L-1}a_{i,k}\zeta^{ku}z^{f_{k}}\biggr{|}^{2} \tag{34}\] \[= LS+2\mathcal{R}\biggl{(}\sum_{k=1}^{L-1}\rho(\mathbf{a}_{i})(k)z^{f_{k}}\sum_{u=0}^{S-1}\zeta^{ku}\biggr{)},\]
where \(z=\exp(2\pi t\sqrt{-1})\) and \(\mathcal{R}(\cdot)\) is the real part of a complex number. As \(\sum_{u=0}^{S-1}\zeta^{ku}\) equals \(0\) when \(k\bmod S\neq 0\) and equals \(S\) otherwise, we have
\[\sum_{i=0}^{M-1}\sum_{u=0}^{S-1}\bigl{|}P_{\mathbf{a}_{i}^{u}}(t)\bigr{|}^{2}= MLS. \tag{35}\]
Hence, \(\text{PMEPR}(\mathbf{a}_{i}^{u})\leq MS\). For \(u=0\), we have \(\mathbf{a}_{i}^{u}=\mathbf{a}_{i}\) and it implies \(\text{PMEPR}(\mathbf{a}_{i})\) is upper bounded by \(MS\). \(\blacksquare\)
**Example 2**: _Let \(k=1\), \(p_{1}=3\), \(p_{2}=2\), \(m_{1}=3\), \(m_{2}=1\), \(s_{1}=1\). Then, using Theorem 3, we can directly construct a \((3,54,2)\)-MSCS which has not been reported before. We take \(f:\mathbb{Z}_{3}^{3}\times\mathbb{Z}_{2}\rightarrow\mathbb{Z}_{6}\), where \(f_{1}=2(v_{3,2}v_{3,3}+v_{3,3}v_{3,1})+2v_{3,1}+5v_{3,2}+v_{3,3}\) and \(f_{2}=3v_{2,1}\). In Fig. 1, we have shown the plot of the IAPR with respect to \(\Delta ft\) for all sequences of the MSCS. The PMEPR value is 5.9465, which agrees with the theoretical upper bound \(6\) from _Lemma 2_._
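Example 2 can be checked in the same way. The sketch below constructs one admissible realisation of the \((3,54,2)\)-MSCS (the block ordering and the choice of \(v_{3,2}\) for the \(\gamma\)-term are our assumptions, since the example leaves them implicit), verifies the zero auto-correlation sums at non-zero even shifts, and estimates the PMEPR of each sequence by oversampling; Lemma 2 guarantees a value of at most \(MS=6\).

```python
import numpy as np

lam, L = 6, 54
om = np.exp(2j * np.pi / lam)

def b_seq(gamma):
    """omega^{b^gamma} with b^gamma = f1 + f2 + 2 v_{3,2} gamma (mod 6), inner Z_3^3 block fastest."""
    vals = []
    for w in range(2):                       # outer Z_2 block (variable v_{2,1})
        for i in range(27):                  # inner Z_3^3 block, v_{3,1} least significant digit
            v1, v2, v3 = i % 3, (i // 3) % 3, (i // 9) % 3
            f1 = 2 * (v2 * v3 + v3 * v1) + 2 * v1 + 5 * v2 + v3
            vals.append((f1 + 3 * w + 2 * v2 * gamma) % lam)
    return om ** np.array(vals)

def aacf(x, tau):
    return np.sum(x[: L - tau] * np.conj(x[tau:]))

seqs = [b_seq(g) for g in range(3)]
for tau in range(2, L, 2):                   # non-zero even shifts
    assert abs(sum(aacf(x, tau) for x in seqs)) < 1e-9

for x in seqs:                               # oversampled PMEPR estimate, at most MS = 6 by Lemma 2
    print(np.max(np.abs(np.fft.fft(x, 64 * L)) ** 2) / L)
```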
### _Comparison with existing works_
In TABLE I, we have compared the proposed constructions with the existing ones [16, 17, 18] with respect to their parameters. It can be seen from TABLE I that the proposed constructions provide flexible lengths and set sizes compared to the existing constructions.
## IV Conclusion
In this paper, we have proposed new constructions of MSCSs, which can be used as alternatives to the conventional GCSs in OFDM. Although there are several direct constructions of MSCSs in the literature, the sequence lengths and set sizes are limited in those. The proposed constructions provide flexible as well as arbitrary sequence lengths and flexible set sizes. For \(k=1\), one of the constructed MSCS reduces to type-II ZCS, which has applications in wideband wireless communication systems.
Fig. 1: IAPRs of constituent sequences of MSCS having length \(54\) |
2310.18007 | LHAASO J2108+5157 as a Molecular Cloud Illuminated by a Supernova
Remnant | The search for Galactic PeVatrons - astrophysical accelerators of cosmic rays
to PeV energies - has entered a new phase in recent years with the discovery of
the first Ultra-High-Energy (UHE, $E>100$ TeV) gamma-ray sources by the HAWC
and LHAASO experiments. Establishing whether the emission is leptonic or
hadronic in nature, however, requires multiwavelength data and modelling
studies. Among the currently known UHE sources, LHAASO J2108+5157 is an
enigmatic source without clear association to a plausible accelerator, yet
spatially coincident with molecular clouds. We investigate the scenario of a
molecular cloud illuminated by cosmic rays accelerated in a nearby supernova
remnant (SNR) as an explanation for LHAASO J2108+5157. We aim to constrain the
required properties of the SNR as well as which of the clouds identified in the
vicinity is the most likely association. We use a model for cosmic ray
acceleration in SNRs, their transport through the interstellar medium and
subsequent interaction with molecular material, to predict the corresponding
gamma-ray emission. The parameter space of SNR properties is explored to find
the most plausible parameter combination that can account for the gamma-ray
spectrum of LHAASO J2108+5157. In the case that a SNR is illuminating the
cloud, we find that it must be young ($<10$ kyr) and located within $40-60$ pc
of the cloud. A SN scenario with a low Sedov time is preferred, with a maximum
proton energy of 3 PeV assumed. No SNRs matching these properties are currently
known, although an as yet undetected SNR remains feasible. The galactic CR sea
is insufficient to solely account for the observed flux, such that a PeVatron
accelerator must be present in the vicinity. | A. M. W. Mitchell | 2023-10-27T09:33:21Z | http://arxiv.org/abs/2310.18007v1 | # LHAASO J2108+5157 as a Molecular Cloud Illuminated by a Supernova Remnant
###### Abstract
Context:The search for Galactic PeVatrons - astrophysical accelerators of cosmic rays to PeV energies - has entered a new phase in recent years with the discovery of the first Ultra-High-Energy (UHE, \(E>100\) TeV) gamma-ray sources by the HAWC and LHAASO experiments. Establishing whether the emission is leptonic or hadronic in nature, however, requires multiwavelength data and modelling studies. Among the currently known UHE sources, LHAASO J2108+5157 is an enigmatic source without clear association to a plausible accelerator, yet spatially coincident with molecular clouds.
Aims:We investigate the scenario of a molecular cloud illuminated by cosmic rays accelerated in a nearby supernova remnant (SNR) as an explanation for LHAASO J2108+5157. We aim to constrain the required properties of the SNR as well as which of the clouds identified in the vicinity is the most likely association.
Methods:We use a model for cosmic ray acceleration in SNRs, their transport through the interstellar medium and subsequent interaction with molecular material, to predict the corresponding gamma-ray emission. The parameter space of SNR properties is explored to find the most plausible parameter combination that can account for the gamma-ray spectrum of LHAASO J2108+5157.
Results:In the case that a SNR is illuminating the cloud, we find that it must be young (\(<10\) kyr) and located within \(40-60\) pc of the cloud. A SN scenario with a low Sedov time is preferred, with a maximum proton energy of 3 PeV assumed. No SNRs matching these properties are currently known, although an as yet undetected SNR remains feasible. The galactic CR sea is insufficient to solely account for the observed flux, such that a PeVatron accelerator must be present in the vicinity.
Conclusions:
## 1 Introduction
Cosmic Rays (CRs) are energetic particles originating from astrophysical accelerators and continuously arriving at Earth. The all-particle CR spectrum exhibits a spectral softening at \(\sim 1\)-3 PeV, known as the 'knee', generally understood to indicate the start of the transition from galactic to extragalactic accelerators being responsible for the bulk of CRs (Ginzburg & Syrovatskii, 1964; Hillas, 2006; Apel et al., 2013; Parizot, 2014). Astrophysical sources capable of accelerating particles to PeV energies are known colloquially as 'PeVatrons'. Gamma rays produced as a consequence of particle interactions at the source typically have energies a factor \(\sim 1/10\) that of the parent particle population (Blumenthal & Gould, 1970). Hence, the detection of gamma rays with \(E>100\) TeV indicates the presence of particles with PeV energies, corresponding to the CR 'knee'.
Definitive evidence for the presence of PeVatrons in our galaxy has, however, proven elusive. Although diffusive shock acceleration of CRs at supernova remnants (SNRs) can account for the energy budget of CRs in our galaxy, their gamma-ray spectra cut off at energies below \(100\) TeV (Bell, 1978; Lagage & Cesarsky, 1983; Bell, 2013; H.E.S.S. Collaboration et al., 2018). This indicates that active acceleration of particles to PeV energies is not occurring at these SNRs, although the detection of the characteristic 'pion-bump' signature of neutral pion decay in several SNRs indicates that the emission is hadronic in origin (Bell et al., 2013; Ackermann et al., 2013; Jogler & Funk, 2016).
Indications for PeVatron activity from the galactic centre were found (HESS Collaboration et al., 2016), yet only in recent years have experimental facilities been capable of measuring gamma-rays with energies \(>100\) TeV. Water Cherenkov and particle detector based facilities in particular, such as HAWC (Abeysekara et al., 2017), LHAASO (Aharonian et al., 2021) and Tibet-AS\(\gamma\) (Amenomori et al., 2009), have contributed significantly to this advance.
Until 2023, the ultra-high-energy (UHE, \(E>100\) TeV) gamma-ray sky comprised a mere \(\sim 15\) sources, with the Crab nebula one of the first identified (Amenomori et al., 2019; LHAASO Collaboration et al., 2021). A further 31 UHE sources were announced in the first LHAASO catalogue (Cao et al., 2023). Twelve UHE sources were reported by LHAASO in 2021 (Cao et al., 2021c), the majority of which are spatially coincident with known Very-High-Energy (VHE, \(\gtrsim 0.1\) TeV) gamma-ray sources. In particular, there are several associations with energetic pulsar wind nebulae, from which the emission is understood to be dominantly leptonic in origin. Despite Klein-Nishina suppression of inverse Compton scattering at the highest energies, this suppression is relaxed in the case of high radiation field environments, and a leptonic scenario remains a viable interpretation for the UHE sources associated with known energetic pulsars (Vannoni et al., 2009; Breuhaus et al., 2021, 2022).
There is, however, one source reported in Cao et al. (2021c), for which the gamma-ray emission was first discovered at UHE and without any known counterpart accelerators, such as pulsars or supernova remnants (SNRs). LHAASO J2108+5157 is an enigmatic source, spatially coincident with molecular clouds yet with the accelerator mechanism remaining unidentified (Cao et al., 2021b). In the wake of the LHAASO discovery, follow-up observations were conducted by several facilities, including in the radio and X-ray bands as well as by gamma-ray experiments. The Fermi-LAT source 4FGL J2108.0+5155 is spatially coincident with the UHE emission, but due to the differing spectral properties a physical association remains unclear (Abdollahi et al., 2020). A re-analysis of the Fermi-LAT data found a potential spatial extension of 0.48\({}^{\circ}\) angular size of the source, designated 4FGL J2108.0+5155e (Cao et al., 2021b).
A 3.7\(\sigma\) signal of gamma-ray emission was measured at \(E>3\) TeV by the Large Sized Telescope (LST-1), a prototype telescope for the forthcoming Cherenkov Telescope Array (CTA) (Acharya et al., 2013). They derive upper limits in the energy range 0.32 TeV to 100 TeV that considerably constrain model scenarios for the origin of the emission (Abe et al., 2023).
The HAWC observatory recently reported a \(\sim 7\,\sigma\) detection in \(\sim 2400\) days of data (Kumar et al., 2023). However, observations and analysis by the VERITAS IACT array did not result in a detection, with constraining upper limits being reported, consistent with those from the LST-1 (Kumar et al., 2023; Abe et al., 2023).
Although there is little observational evidence for SNRs currently acting as PeVatrons, it remains feasible that SNRs act as PeVatrons only for a comparatively short period during their lifetimes, such that the rate of currently detectable SNR PeVatrons is low (Cristofari et al., 2020). Particle escape from the shock region occurs in an energy-dependent manner, such that the most energetic particles will also be the first to leave the shock region (Celli et al., 2019a). Evidence for PeV particles may therefore be found not at the location of the accelerator, but rather from subsequent interactions of these particles with target material in the ambient medium, such as nearby molecular clouds (Gabici & Aharonian, 2007; Morlino et al., 2009; Inoue et al., 2012; Celli et al., 2019b). This scenario has been proposed as a possible explanation for the UHE emission from LHAASO J2108+5157 (Cao et al., 2021b; Kar & Gupta, 2022; De Sarkar, 2023; Abe et al., 2023).
In contrast to previous models for LHAASO J2108+5157, in this study we scan the parameter space in two free variables, namely SNR age and the distance between the cloud and the SNR, to determine the range of plausible values for the required properties of the responsible SNR. We investigate the influence of uncertainties in the cloud properties on the resulting gamma-ray flux for the best-matched models. The corresponding expected neutrino flux is estimated, and the plausibility of the best-matched models is discussed.
## 2 Method
We adopt the model of Mitchell et al. (2021, 2023), based on Aharonian & Atoyan (1996) and Kelner et al. (2006), to investigate the scenario of a SNR illuminating molecular clouds as a possible explanation for LHAASO J2108+5157. Whilst there are several clouds identified in the vicinity, we focus on clouds that are spatially coincident with the \(\gamma\)-ray emission and located closest to the best-fit centroid of LHAASO J2108+5157 at (\(l=92.2148^{\circ}\),\(b=2.9359^{\circ}\)). Cloud 4607 from the Miville-Deschenes et al. (2017) catalogue based on data from the \({}^{12}\)CO survey of Dame et al. (2001) has been considered in previous models of the region (Kar & Gupta, 2022; De Sarkar, 2023), whilst recently a newly identified cloud has been detected in the region (de la Fuente et al., 2023). Adopting the convention of prior works, we henceforth refer to these two clouds as MML[2017]4607 and FKT[2022] respectively. Table 1 summarises the key physical properties of the clouds relevant for this study.
For convenience, we summarise here the key features of the model from Mitchell et al. (2021, 2023) adopted for this work.
* Protons are accelerated impulsively with a power-law spectrum of slope \(\alpha\).
* The particle probability density function \(f(E,r^{\prime},t^{\prime})\) is taken from equation (3) of Aharonian & Atoyan (1996), and is a function of the particle energy \(E\), distance travelled from the SNR \(r^{\prime}\) and time since escape from the SNR \(t^{\prime}\).
* The SNR radius, \(R_{\rm SNR}\), expands with time (t) adiabatically during the Sedov-Taylor phase as \(R_{\rm SNR}\propto t^{2/5}\)(Truelove & McKee, 1999).
* Particles escape from the SNR at a time \(t_{\rm esc}\) in a momentum-dependent manner, following \(t_{\rm esc}\propto(p/p_{M})^{-1/\beta}\), where \(p_{M}\) is the maximum particle momentum reached, assumed to be 3 PeV/c at the Sedov time, \(t_{\rm sed}\) (Celli et al., 2019a); a short numerical sketch of this relation is given after this list.
* Particles are either transported diffusively through the ISM to reach the cloud or are injected directly into the cloud if the SNR is sufficiently expanded.
* Diffusion within the intervening ISM is assumed to be slow with respect to the Galactic average value due to the local accelerator activity (Gabici et al., 2007).
* Within the cloud, diffusion is suppressed with respect to the ISM by a factor \(\chi\) that relates to local turbulence.
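As a concrete illustration of the momentum-dependent escape listed above, the following minimal sketch evaluates the escape time and the corresponding maximum momentum still confined at a given remnant age, assuming the inverted form \(t_{\rm esc}=t_{\rm sed}\,(p/p_{M})^{-1/\beta}\) with the normalisations of table 2; the function and variable names are ours and the snippet is illustrative rather than the implementation used for the figures.

```python
# Escape-time parametrisation sketch (assumed form; parameters from table 2, type II case)
T_SED_KYR = 1.6   # Sedov time [kyr]
P_M_PEVC  = 3.0   # maximum particle momentum at the Sedov time [PeV/c]
BETA      = 2.5   # momentum dependence of particle escape

def t_escape_kyr(p_pevc, t_sed=T_SED_KYR, p_m=P_M_PEVC, beta=BETA):
    """Escape time of particles with momentum p: t_esc = t_sed * (p/p_M)**(-1/beta)."""
    return t_sed * (p_pevc / p_m) ** (-1.0 / beta)

def p_max_confined_pevc(t_kyr, t_sed=T_SED_KYR, p_m=P_M_PEVC, beta=BETA):
    """Maximum momentum still confined at age t (inverse of t_escape, valid for t >= t_sed)."""
    return p_m * (t_kyr / t_sed) ** (-beta)

if __name__ == "__main__":
    for p in (3.0, 1.0, 0.1, 0.01):  # PeV/c
        print(f"p = {p:5.2f} PeV/c escapes at t ~ {t_escape_kyr(p):6.1f} kyr")
```

The highest-momentum particles escape first, consistent with the energy-dependent escape picture described above.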
For our default scenario, the Sedov time is assumed to commence at 1.6\(\,\)kyr corresponding to the case of type II supernovae. In the case of type Ia supernovae, the Sedov time commences at a mere 234 yr, which is considered as an alternative scenario (Celli et al., 2019a).
Details of the SNR forward shock interaction with the cloud, in the case that the SNR is in close proximity and sufficiently evolved, are neglected. The diffusion coefficient \(D(E)\) is considered to have a power-law dependence on the energy \(E\) as:
\[D(E)=\chi D_{0}\left(\frac{E/\rm GeV}{B(n)/3\mu\rm G}\right)^{\delta}\,, \tag{1}\]
where \(\delta\) and \(\chi\) relate to the properties of the magnetic field turbulence in the region, and \(B(n)\) describes the dependence of the magnetic field strength \(B\) on the cloud density \(n\) (see Mitchell et al. (2021)). Values adopted for \(\delta\), \(\chi\) and the diffusion coefficient normalisation \(D_{0}\) at 1 GeV are listed in table 2. Within the ISM, \(\chi\) is taken to be 1, whilst a value of 0.1 is adopted to account for suppressed diffusion within the clouds. From the above ingredients, the particle spectrum \(f(E,r^{\prime},t^{\prime})\) is obtained as a function of energy \(E\), distance travelled from the SNR \(r^{\prime}\) and time since escape \(t^{\prime}\).
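A minimal sketch of equation (1) is given below. The \(B(n)\) relation is referenced to Mitchell et al. (2021) rather than reproduced in the text, so the Crutcher et al. (2010)-style scaling used here is an assumption made purely for illustration, as are the function names.

```python
# Diffusion coefficient of equation (1); the B(n) scaling below is an assumed placeholder
D0    = 3e26   # cm^2/s, normalisation at 1 GeV (table 2)
DELTA = 0.5    # energy dependence of diffusion (table 2)
CHI_ISM, CHI_CLOUD = 1.0, 0.1  # suppression factors in the ISM and within the clouds

def B_field_uG(n_cm3):
    """Magnetic field strength vs gas density; assumed Crutcher et al. (2010)-like scaling."""
    return 10.0 if n_cm3 < 300.0 else 10.0 * (n_cm3 / 300.0) ** 0.65

def diffusion_coefficient(E_GeV, n_cm3, chi):
    """Equation (1): D(E) = chi * D0 * (E[GeV] / (B(n)/3 microGauss))**delta, in cm^2/s."""
    return chi * D0 * (E_GeV / (B_field_uG(n_cm3) / 3.0)) ** DELTA

# Example: 1 TeV protons inside FKT[2022] (n ~ 37 cm^-3), with suppressed diffusion
D_cloud = diffusion_coefficient(1e3, 37.0, CHI_CLOUD)
```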
\begin{table}
\begin{tabular}{l|c c}
Cloud & MML[2017]4607 & FKT[2022] \\
\hline
(\(l\) (deg), \(b\) (deg)) & (92.272, 2.775) & (92.4, 3.2) \\
\(d\) (kpc) & 3.28 & 1.7 \(\pm\) 0.6 \\
\(n\) (cm\({}^{-3}\)) & 30 & 37 \(\pm\) 14 \\
diameter (deg) & 0.5 & 1.1 \(\pm\) 0.2 \\
\hline
\end{tabular}
\end{table}
Table 1: Properties of the molecular clouds considered in this study. \(d\) is the distance to the cloud and \(n\) is the total number density of hydrogen gas.
Experimental measurements are, however, restricted to neutral messengers such as \(\gamma\)-rays and neutrinos as signatures of the presence of energetic hadronic particles. For comparison to data, the proton spectrum can then be converted into a gamma-ray emissivity \(\Phi_{\gamma}(E_{\gamma},r^{\prime},t^{\prime})\) (in ph cm\({}^{-3}\) s\({}^{-1}\) TeV\({}^{-1}\)) by using the expressions from Kelner et al. (2006):
\[\Phi_{\gamma}(E_{\gamma},r^{\prime},t^{\prime})=cn\int_{E_{\gamma}}^{\infty} \sigma_{\rm inel}(E)f(E,r^{\prime},t^{\prime})F_{\gamma}\left(\frac{E_{\gamma} }{E},E\right)\frac{dE}{E}\, \tag{2}\]
for which we adopt the parameterisation of the inelastic cross-section for proton-proton interactions \(\sigma_{\rm inel}(E)\) from Kafexhiu et al. (2014), noting that due to high uncertainties below \(\sim 100\) GeV, we take this as an energy threshold and restrict our model predictions to energies \(>100\) GeV only.
Lastly, we obtain the \(\gamma\)-ray flux \(F(E_{\gamma},t)\) at a distance \(d\) away from the cloud (i.e. at Earth) taking into account the volume of the molecular cloud traversed by particles \(V_{c}\) via:
\[F(E_{\gamma},t)=\Phi_{\gamma}(E_{\gamma},t)V_{c}/(4\pi d^{2}). \tag{3}\]
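The following sketch indicates how equations (2) and (3) can be evaluated numerically by quadrature. The detailed parameterisations of \(\sigma_{\rm inel}\) (Kafexhiu et al. 2014) and \(F_{\gamma}\) (Kelner et al. 2006), as well as the particle spectrum \(f\), are not reproduced here and are passed in as user-supplied callables, so this is an illustrative skeleton rather than the exact implementation used for the figures.

```python
import numpy as np
from scipy.integrate import quad

C_CM_S = 2.998e10  # speed of light [cm/s]

def gamma_emissivity(E_gamma, n_cm3, f_p, sigma_inel, F_gamma, E_max=3e3):
    """Equation (2): photon emissivity at E_gamma [TeV], given
    f_p(E)        -- proton density per unit energy [cm^-3 TeV^-1]
    sigma_inel(E) -- inelastic p-p cross-section [cm^2]
    F_gamma(x, E) -- photon yield function of Kelner et al. (2006), x = E_gamma/E
    """
    integrand = lambda E: sigma_inel(E) * f_p(E) * F_gamma(E_gamma / E, E) / E
    value, _ = quad(integrand, E_gamma, E_max, limit=200)
    return C_CM_S * n_cm3 * value

def gamma_flux(E_gamma, n_cm3, V_c_cm3, d_cm, **spectra):
    """Equation (3): flux at Earth from the emitting cloud volume V_c at distance d."""
    return gamma_emissivity(E_gamma, n_cm3, **spectra) * V_c_cm3 / (4.0 * np.pi * d_cm ** 2)
```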
The diffusive galactic CR flux permeates the entire Galaxy, and as such will also contribute to the total particle flux interacting with the molecular clouds. To take this contribution into account, we include the proton flux as measured by the Alpha Magnetic Spectrometer on the International Space Station (Aguilar et al. 2015). This flux is added to the particle flux arriving at the cloud, \(f\) in equation (2), enabling the relative contributions of accelerator and the diffuse CR sea to be evaluated.
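The AMS-02 spectrum itself is not reproduced in this paper. As a rough stand-in, the sketch below uses a commonly quoted single power-law approximation to the local CR proton intensity, \(J(E)\approx 1.8\,(E/{\rm GeV})^{-2.7}\) protons cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\) GeV\({}^{-1}\) above \(\sim 10\) GeV, converted to a differential number density; this specific parameterisation is an assumption made here for illustration only and is not the one used for the figures.

```python
import numpy as np

C_CM_S = 2.998e10  # speed of light [cm/s]

def cr_sea_density(E_GeV):
    """Approximate local galactic CR proton density per unit energy [cm^-3 GeV^-1].

    Assumed stand-in for the AMS-02 measurement: J(E) ~ 1.8 (E/GeV)^-2.7 protons
    cm^-2 s^-1 sr^-1 GeV^-1 (valid above ~10 GeV), with n(E) = 4*pi*J(E)/c.
    """
    J = 1.8 * np.asarray(E_GeV, dtype=float) ** (-2.7)
    return 4.0 * np.pi * J / C_CM_S
```

This density is simply added to the accelerator contribution \(f\) before evaluating equation (2), which is how the relative importance of the two components is assessed below.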
Kelner et al. (2006) also provide analogous expressions for the production of neutrinos via charged pion and muon decay, giving the total yield of electron and muon neutrinos from the same proton interactions. By analogy with equations (2) and (3), the corresponding total neutrino flux can be obtained.
In the next section, we use this model to generate predictions for the gamma-ray flux arising from a hypothetical SNR illuminating the molecular clouds identified in the vicinity of LHAASO J2108+5157. Additionally, we consider the contribution from the galactic CR sea, to establish whether it is sufficient to account for the observed gamma-ray emission without requiring a nearby accelerator. The model is compared to measurements from LHAASO and HAWC, and to upper limits from the LST-1 and VERITAS (Cao et al. 2021b; Abe et al. 2023; Kumar et al. 2023).
## 3 Results
### Scan over SNR parameter space
As the properties of the molecular clouds are known (table 1), we vary the properties of a hypothetical SNR to investigate the required values to account for the \(\gamma\)-ray flux of LHAASO J2108+5157. We assume that the SNR is located at the same distance from Earth as the cloud. The SNR age is varied in ten logarithmically spaced steps between 1 kyr and 500 kyr, for a fixed separation distance between the SNR and cloud of 24 pc. Similarly, the separation distance is independently varied in ten logarithmically spaced steps between 10 pc and 500 pc for a fixed SNR age of 4 kyr. For type Ia supernovae, the fixed reference values were reduced to 10 pc and 1 kyr. These curves are shown in figures 1 and 2 for type II and type Ia supernovae respectively.
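The scan grids described above can be generated as in the following minimal sketch (variable names are ours).

```python
import numpy as np

# Ten logarithmically spaced SNR ages between 1 kyr and 500 kyr
ages_kyr = np.logspace(np.log10(1.0), np.log10(500.0), 10)

# Ten logarithmically spaced SNR-cloud separations between 10 pc and 500 pc
separations_pc = np.logspace(np.log10(10.0), np.log10(500.0), 10)

# Reference values held fixed while the other parameter is varied (type II / type Ia)
REF_SEP_PC  = {"II": 24.0, "Ia": 10.0}
REF_AGE_KYR = {"II": 4.0,  "Ia": 1.0}
```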
In the case of type II supernova remnants shown in figure 1, the predicted flux is comparable to the data for cloud FKT[2022], yet the flux predicted for MML[2017]4607 consistently falls below the measured flux. Younger ages are preferred, with the flux at energies \(\lesssim 1\) TeV becoming overpredicted for ages between \(\sim\)7 kyr and 30 kyr for FKT[2022]. The key features of the model are that the highest energy particles escape the SNR at earlier times and are first to arrive at the cloud. The spectral energy distribution hence rises at the highest energies at earlier times (and for shorter distances). The particle distribution then cools as a function of age, with the peak shifting towards lower energies.
In the case of type Ia supernova remnants shown in figure 2, a separation distance larger than 24 pc is required for FKT[2022] to avoid overpredicting the flux in the \(<10\) TeV range. MML[2017]4607 is better able to account for the gamma-ray flux under the type Ia scenario, yet only for an optimum combination of small distance and young age. As \(t_{\rm sed}\) is lower for the type Ia scenario, the spectral energy distribution is more highly populated at an earlier stage.
### Contribution from the galactic CR sea
As described above, the contribution from diffusive galactic CRs is included in the model, assuming that the particle flux is comparable to that measured at Earth (Aguilar et al. 2015). From the parameter scan, we find that the contribution from the nearby SNR dominates over that from the galactic CR sea in most cases. Indeed, the contribution from diffusive galactic CRs only exceeds that from the SNR if either the cloud-SNR distance is \(\gtrsim 200\) pc (for young \(\lesssim 10\) kyr SNRs), or if the SNR is old, \(\gtrsim 400\) kyr (for nearby \(\lesssim 50\) pc SNRs).
In order to test whether the diffuse galactic CR sea could be solely responsible for the measured gamma-ray flux, the normalisation of the galactic flux contribution was varied, in the absence of considering any hypothetical SNR. To match the observed emission at TeV energies using the molecular clouds considered, the normalisation must be of order \(\sim 10^{3}\) higher than that measured at Earth. This enhancement is unlikely to be achieved without the presence of an accelerator nearby.
Next, we consider all possible combinations of SNR age and separation distance within the aforementioned ranges. A chi-square evaluation of the model curves against the LHAASO data points only is used to establish which model curves provide the closest match to the data. Due to the large number of free parameters entering into the model, we do not perform a minimisation, as there will be multiple local minima in the parameter space able to account for the data. Rather, we aim to provide a plausible
\begin{table}
\begin{tabular}{l c c}
Description & Parameter & Value \\
\hline
Proton power-law spectral index & \(\alpha\) & 2.0 \\
Index characterising energy-dependence of diffusion & \(\delta\) & 0.5 \\
Index characterising momentum-dependence of particle escape & \(\beta\) & 2.5 \\
Diffusion suppression factor due to turbulence & \(\chi\) & 0.1 or 1 \\
Diffusion coefficient normalisation at 1 GeV & \(D_{0}\) & \(3\times 10^{26}\,{\rm cm^{2}\,s^{-1}}\) \\
ISM particle density & \(n_{\rm ISM}\) & 1 cm\({}^{-3}\) \\
Maximum particle momentum & \(p_{M}\) & 3 PeV/c \\
Sedov time (type II SNR) & \(t_{\rm sed}\) & 1.6 kyr \\
Sedov time (type Ia SNR) & \(t_{\rm sed}\) & 234 yr \\
\end{tabular}
\end{table}
Table 2: Assumed parameters of the model. The value adopted for \(D_{0}\) is taken from Gabici et al. (2007).
range of allowed values for the specific case of this model, with assumed fixed parameters as in table 2.
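A minimal sketch of the chi-square ranking over all (age, distance) combinations is given below; the function predict(age, sep, E) stands in for the full flux model of section 2 and, like all names here, is an assumption of this illustration.

```python
import numpy as np
from itertools import product

def chi_square(model_flux, data_flux, data_err):
    """Chi-square of a model curve evaluated at the LHAASO data energies."""
    return float(np.sum(((model_flux - data_flux) / data_err) ** 2))

def rank_models(ages_kyr, separations_pc, predict, data_E, data_flux, data_err):
    """Evaluate every (age, separation) combination and return them sorted by chi-square.

    predict(age_kyr, sep_pc, E) -- callable returning the model flux at energies E
    """
    results = []
    for age, sep in product(ages_kyr, separations_pc):
        chi2 = chi_square(predict(age, sep, data_E), data_flux, data_err)
        results.append((chi2, age, sep))
    return sorted(results)
```

No minimisation is performed: the sorted list simply exposes the best-matching grid points, in the spirit of the ranking reported in table 3.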
### Best-matched models
#### 3.3.1 Clouds MML[2017]4607 and FKT[2022]
For each cloud, model curves corresponding to the two best matching combinations of SNR age and separation distance are shown in figure 3. Model curves for MML[2017]4607 were consistently below the data points for \(\chi=0.1\) within the cloud, as seen in figures 1 and 2. To obtain parameter values in comparable agreement with the data as for FKT[2022], in this section only we neglected the suppressed diffusion within the cloud for MML[2017]4607 by setting \(\chi=1\). This corresponds to the most optimistic case in which CRs can freely penetrate the cloud, although we note that \(\delta\) was kept fixed to 0.5 and we did not investigate the effect of altering the energy dependence of the diffusion coefficient in equation (1).
The best matching combinations are summarised in table 3. In general, the SNR age was found to have a stronger influence on the curve shape, and hence on the quality of the match to the LHAASO data, than the separation distance. FKT[2022] yielded more parameter combinations with a lower \(\chi^{2}\) than MML[2017]4607, for which model curves for the same age yet smaller distances were essentially indistinguishable. This is supported by figures 1 and 2: for a fixed age, provided the distance is sufficiently small that CRs have had time to traverse the cloud, the gamma-ray flux remains constant with decreasing distance. (Equivalently, the gamma-ray flux drops with increasing distance.) Overall, the type Ia scenario (i.e. a lower \(t_{\rm sed}\)) is preferred.
One might ask whether a finer-resolution scan of the reasonable parameter space would lead to a model that better matches the data. Whilst this may be the case, we first consider the effect of propagating the uncertainties in the measured properties of the clouds (table 1) through the model. An upper bound to the flux is obtained by adopting the 1\(\sigma\) deviations \(d-\sigma_{d}\) and \(n+\sigma_{n}\), whilst a lower bound is similarly obtained from the model evaluated with \(d+\sigma_{d}\) and \(n-\sigma_{n}\), where we intrinsically assume that the uncertainties are Gaussian distributed. Increasing \(n\) will increase the target material and hence flux as per equation (2), whilst increasing \(d\) will decrease the flux as per equation (3). For FKT[2022] uncertainties are reported in de la Fuente et al. (2023), whilst for MML[2017]4607 uncertainties
Figure 1: Dependence of gamma-ray flux on properties of a type II supernova. Left: Variation in supernova age for a fixed distance of 24 pc. Right: Variation with distance between the SNR and the cloud for a fixed age of 4 kyr. Solid lines correspond to MML[2017]4607 and dashed lines correspond to FKT[2022].
Figure 2: Dependence of gamma-ray flux on properties of a type Ia supernova. Left: Variation in supernova age for a fixed distance of 10 pc. Right: Variation with distance between the SNR and the cloud for a fixed age of 1 kyr. Solid lines correspond to MML[2017]4607 and dashed lines correspond to FKT[2022].
are not provided in the case that near and far estimates agree (Miville-Deschenes et al. 2017). We therefore adopt a 20% uncertainty in \(d\) and \(n\) for MML[2017]4607 as a rough estimate, given that the true uncertainty and subsequent variation in the model is unknown. Resulting uncertainty bands corresponding to the parameter combinations reported in table 3 are shown in figure 3.
Figure 3 clearly illustrates that the uncertainty introduced to the model from experimental measurements (or the adopted 20% uncertainty) on the cloud properties leads to variation in predicted flux comparable to that seen by varying the input age and distance of the parameter scan. Therefore, a more finely-grained exploration of the SNR parameter space is not well-motivated.
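Propagating the cloud-property uncertainties into flux bounds amounts to two extra model evaluations, as in the following sketch; predict_flux is a placeholder for the full model of section 2, and the numbers in the comments follow table 1 and the 20% assumption described above.

```python
def flux_bounds(predict_flux, d_kpc, sigma_d, n_cm3, sigma_n):
    """Upper/lower flux bounds from 1-sigma variations of cloud distance d and density n.

    predict_flux(d_kpc, n_cm3) -- callable returning the model gamma-ray flux.
    Increasing n raises the flux (more target material, eq. 2);
    increasing d lowers it (inverse-square dilution, eq. 3).
    """
    upper = predict_flux(d_kpc - sigma_d, n_cm3 + sigma_n)
    lower = predict_flux(d_kpc + sigma_d, n_cm3 - sigma_n)
    return lower, upper

# Example calls (with a suitable predict_flux in hand):
#   FKT[2022]:      flux_bounds(predict_flux, 1.7, 0.6, 37.0, 14.0)
#   MML[2017]4607:  flux_bounds(predict_flux, 3.28, 0.2 * 3.28, 30.0, 0.2 * 30.0)
```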
#### 3.3.2 Corresponding neutrino flux
For two of the best matching models from table 3, we show the corresponding total neutrino flux in figure 4. For MML[2017]4607 this is for \(t=1\) kyr and \(\Delta d=37\) pc, whilst for FKT[2022] we show \(t=4\) kyr and \(\Delta d=57\) pc, both for the SN Ia case. Although \(\Delta d=37\) pc yielded a lower \(\chi^{2}\) for FKT[2022] with respect to the LHAASO data, this curve is disfavoured as it exceeds the upper limits provided by LST-1 (upper dashed curve in figure 3). These curves essentially scale with the \(\gamma\)-ray flux, yet still lie at least an order of magnitude in flux below the sensitivity of current neutrino experiments suited for the detection of astrophysical neutrinos, such as IceCube (Aartsen et al. 2019).
## 4 Discussion
LHAASO J2108+5157 is an intriguing UHE gamma-ray source with no known counterparts yet spatially coincident with molecular clouds. In this study, we investigate a scenario whereby the molecular cloud is illuminated by energetic protons accelerated at a SNR in the vicinity. By scanning the parameter space of SNR age and separation distance between the hypothetical SNR and the cloud, we obtain model predictions that can be compared to data, thereby constraining the most likely SNR properties. Consistently, we find that a comparatively young (\(<10\) kyr) and nearby (\(d\lesssim 40-60\) pc) SNR is required.
There are currently no known SNRs matching this description. From the SNR catalogue (Ferrand and Safi-Harb 2012), the two closest SNRs are G094.0+01.0 and G093.7-00.2, at angular
Figure 4: Gamma-ray and neutrino fluxes for two of the best-matching scenarios for a type Ia SN from table 3. Solid lines correspond to the expected gamma-ray flux and dashed lines to the neutrino flux.
Figure 3: Model curves corresponding to the parameter combinations that best match the data as listed in table 3. Left: type II and Right: type Ia supernova remnant. Solid lines and shaded uncertainty band correspond to MML[2017]4607. Dashed lines and hatched region correspond to FKT[2022].
\begin{table}
\begin{tabular}{l c c c c}
Cloud & \(t\) (kyr) & \(\Delta d\) (pc) & SN type & \(\chi^{2}\) \\
\hline
MML[2017]4607 & 1 & 37 & Ia & 5.1 \\
FKT[2022] & 4 & 37 \({}^{*}\) & Ia & 6.7 \\
FKT[2022] & 4 & 57 & Ia & 9.2 \\
FKT[2022] & 4 & 57 & II & 15.5 \\
FKT[2022] & 8 & 24 \({}^{**}\) & II & 17.0 \\
MML[2017]4607 & 4 & 24 \({}^{**}\) & II & 24.4 \\
MML[2017]4607 & 2 & 37 & II & 25.0 \\
MML[2017]4607 & 1 & 24 & Ia & 28.2 \\
\hline
\end{tabular}
\({}^{*}\) Model curves for the same SNR age yet with smaller distances provided a comparable fit to the LHAASO data, but significantly exceeded the LST-1 upper limits and are hence not shown.
\({}^{**}\) Model curves for the same SNR age yet with a distance of 10 pc or 15 pc were comparable to the 24 pc distance quoted.
\end{table}
Table 3: Combinations of SNR age, \(t\), and separation distance, \(\Delta d\), for the model curves that best match the LHAASO data, listed in ranked order. These curves are shown in figure 3.
distances of more than 2.4\({}^{\circ}\) from LHAASO J2108+5157. At the 3.28 kpc distance of MML[2017]4607 this corresponds to 140 pc and 190 pc separation from the cloud respectively, whilst at the 1.7 kpc distance of FKT[2022] the SNRs are situated 80 pc and 110 pc away from the cloud. Additionally, G094.0+01.0 has an estimated age of 25 kyr, far older than the SNR ages preferred by our model. We conclude that neither SNR is associated with LHAASO J2108+5157.
Nevertheless, it remains plausible that there are further, as yet undiscovered SNRs located in the region. A recent EMU/POSSUM survey, performed using the Australian Square Kilometre Array Pathfinder (ASKAP), observed a region of the galactic plane containing 7 known SNRs and found 21 candidates, of which 13 were new discoveries (Ball et al. 2023). This supports the notion that radio surveys to date may not be sufficiently sensitive to detect all SNRs within a given region.
Several molecular clouds have been identified in the region, two from Miville-Deschenes et al. (2017) based on Dame et al. (2001) (MML[2017]4607 and MML[2017]2870) and most recently a new cloud FKT[2022] reported by de la Fuente et al. (2023). Model parameters were explored for the two clouds spatially coincident with LHAASO J2108+5157, namely MML[2017]4607 and FKT[2022].
Both type II and type Ia supernova explosion scenarios were considered, where the main difference is in the assumed time for transition to the Sedov-Taylor phase (\(t_{\rm sed}\)). Although a better match could be achieved under the type Ia scenario, we consider this unlikely. Type Ia supernovae occur in older systems where at least one member of a binary system has sufficiently evolved to become a white dwarf, generally corresponding to environments not rich in molecular material. Type II supernovae, however, occur in younger environments where an abundance of molecular material can be expected, similar to that observed in the vicinity of LHAASO J2108+5157. Hence, we rather interpret these results as indicating that an earlier transition into the Sedov-Taylor phase is preferred, which may reflect (e.g.) properties of the ambient medium rather than the nature of the progenitor (Truelove & McKee 1999).
In all model curves, the highest energy data point at \(\sim\)500 TeV could not be well matched with a maximum energy of the proton spectrum of 1 PeV. Therefore, throughout this study we assumed a maximum energy at the Sedov time of 3 PeV.
For MML[2017]4607 to account for the data, we neglected an additional suppression within the cloud due to turbulence compared to the ISM (i.e. \(\chi=1\)). With \(\chi=0.1\) within the cloud, MML[2017]4607 consistently under predicted the data in our model (figures 1 and 2). Our model assumed locally suppressed diffusion compared to the Galactic average also in the intervening medium between the SNR and the cloud, a reasonable assumption for regions of active particle acceleration (Gabici et al. 2007; D'Angelo et al. 2018; Inoue 2019).
Suppressed diffusion and young SNR age as preferred model parameters is in agreement with the 4.5 kyr age obtained by Kar & Gupta (2022), although De Sarkar (2023) suggests an older SNR age of 44 kyr, obtained with a different spectral index for the particle population. A young SNR may still be a comparatively weak producer of synchrotron emission, or could be of small size and remain embedded within (or obscured by) molecular clouds in the region. Given the angular size of the molecular clouds, a young SNR could be completely hidden behind the clouds along the line of sight. Using the relation \(R_{\rm SNR}\propto t^{2/5}\) for evolution in the Sedov-Taylor phase, an SNR younger than 12 kyr for MML[2017]2870 and 19 kyr for FKT[2022] would be small enough to be obscured by the cloud. This is consistent with the preferred \(<10\) kyr SNR age.
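Explicitly, writing the Sedov-Taylor expansion as \(R_{\rm SNR}(t)=R_{\rm SNR}(t_{\rm sed})\,(t/t_{\rm sed})^{2/5}\), where the normalisation \(R_{\rm SNR}(t_{\rm sed})\) follows from the Truelove & McKee (1999) solution and is not restated here, the requirement that the remnant remain smaller than the projected extent of the cloud, \(R_{\rm cloud}\), inverts to an age limit
\[t\;<\;t_{\rm sed}\left(\frac{R_{\rm cloud}}{R_{\rm SNR}(t_{\rm sed})}\right)^{5/2},\]
which is the inversion underlying the age estimates quoted above.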
Nonetheless, other scenarios for the origin of LHAASO J2108+5157 remain plausible. Young stellar clusters have been hypothesised as suitable Galactic PeVatrons, with particle acceleration occurring at the termination shock of the collective wind (Aharonian et al. 2019; Morlino et al. 2021). There are two known young stellar clusters near LHAASO J2108+5157: the distance to Kronberger 80 is known to be at least 4.8 kpc (Kharchenko et al. 2016; Cantat-Gaudin & Anders 2020), disfavouring an association with the molecular clouds in the region, while the distance to Kronberger 82 remains unknown (Kronberger et al. 2006). As such, a stellar cluster is a potential alternative accelerator, also capable of illuminating molecular clouds with CRs, but it is not well motivated in this region.
Given the spatial correlation of LHAASO J2108+5157 with molecular clouds, a leptonic scenario for the emission seems unlikely. Nevertheless, it has been demonstrated that powerful pulsar wind nebulae are capable of accelerating leptons to beyond 1 PeV and can account for UHE gamma rays, especially in high radiation field environments (Vannoni et al. 2009; Breuhaus et al. 2022). The lack of a pulsar counterpart, or of X-ray synchrotron emission that would indicate the presence of a pulsar wind nebula even in cases where the pulsed emission is misaligned, further disfavours such a scenario.
With the advent of current generation detectors such as LHAASO sensitive to UHE gamma rays, we may expect other enigmatic sources to emerge, corresponding to clouds illuminated by unknown accelerators. Other unidentified gamma-ray sources for which no known counterpart has been identified to date, such as LHAASO J0341+5258, may have a similar origin (Cao et al. 2021a). The first LHAASO catalogue reported no fewer than seven further new sources that seem to be "dark" in nature, without any known counterparts (Cao et al. 2023). Undoubtedly, further follow-up studies, both in terms of observation and interpretation, are necessary to determine the origin of these enigmatic gamma-ray sources.
## 5 Conclusion
LHAASO J2108+5157 is a dark UHE gamma-ray source spatially coincident with two molecular clouds. We find that the gamma-ray emission can be accounted for in terms of molecular cloud illumination by CRs from a nearby (\(\lesssim 40-60\) pc) young (\(<10\) kyr) SNR. Although no SNR matching these criteria is currently known, such an SNR could be obscured by other material along the line of sight, or simply lie below the detection threshold of previous surveys (Ball et al. 2023). Interactions of the diffuse galactic CR sea with the molecular clouds are found to be insufficient to explain the observed gamma-ray flux.
As the exposure of current survey instruments increases, and with the advent of future facilities such as CTA and SWGO, we can anticipate further such discoveries, potentially unveiling a population of UHE sources tracing the presence of PeV particles (Acharya et al. 2013; Albert et al. 2019; Casanova 2022; Cao et al. 2023). The key to identifying PeVatrons may lie not in emission from the accelerators themselves, but rather from evidence of energetic particles that have escaped the source region.
###### Acknowledgements.
The author is grateful to G. Rowell & C. van Eldik for fruitful discussions and especially to A. Specovius for reading the manuscript. This work is supported by the _Deutsche Forschungsgemeinschaft, DFG_ project number 452934793. |
2308.02215 | The mean wind and potential temperature flux profiles in convective
boundary layers | We develop innovative analytical expressions for the mean wind and potential
temperature flux profiles in convective boundary layers (CBLs). CBLs are
frequently observed during daytime as the Earth's surface is warmed by solar
radiation. Therefore, their modeling is relevant for weather forecasting,
climate modeling, and wind energy applications. For CBLs in the convective-roll
dominated regime, the mean velocity and potential temperature in the bulk
region of the mixed layer are approximately uniform. We propose an analytical
expression for the normalized potential temperature flux profile as a function
of height, using a perturbation method approach in which we employ the
horizontally homogeneous and quasi-stationary characteristics of the surface
and inversion layers. The velocity profile in the mixed layer and the
entrainment zone is constructed based on insights obtained from the proposed
potential temperature flux profile and the convective logarithmic friction law.
Combining this with the well-known Monin-Obukhov similarity theory allows us to
capture the velocity profile over the entire boundary layer height. The
proposed profiles agree excellently with large-eddy simulation results over the
range of $-L/z_0 \in [3.6\times10^2, 0.7 \times 10^5]$, where $L$ is the
Obukhov length and $z_0$ is the roughness length. | Luoqin Liu, Srinidhi N. Gadde, Richard J. A. M. Stevens | 2023-08-04T09:16:49Z | http://arxiv.org/abs/2308.02215v1 | # The mean wind and potential temperature flux profiles in convective boundary layers
###### Abstract
We develop innovative analytical expressions for the mean wind and potential temperature flux profiles in convective boundary layers (CBLs). CBLs are frequently observed during daytime as the Earth's surface is warmed by solar radiation. Therefore, their modeling is relevant for weather forecasting, climate modeling, and wind energy applications. For CBLs in the convective-roll dominated regime, the mean velocity and potential temperature in the bulk region of the mixed layer are approximately uniform. We propose an analytical expression for the normalized potential temperature flux profile as a function of height, using a perturbation method approach in which we employ the horizontally homogeneous and quasi-stationary characteristics of the surface and inversion layers. The velocity profile in the mixed layer and the entrainment zone is constructed based on insights obtained from the proposed potential temperature flux profile and the convective logarithmic friction law. Combining this with the well-known Monin-Obukhov similarity theory allows us to capture the velocity profile over the entire boundary layer height. The proposed profiles agree excellently with large-eddy simulation results over the range of \(-L/z_{0}\in[3.6\times 10^{2},0.7\times 10^{5}]\), where \(L\) is the Obukhov length and \(z_{0}\) is the roughness length.
Footnote †: Published by Journal of the Atmospheric Sciences
## 1 Introduction
Convective boundary layers (CBLs) are frequently observed during daytime as the Earth's surface is warmed by solar radiation (Stull, 1988). Due to their frequent occurrence, the fundamental understanding of CBLs is highly relevant to agriculture, architectural design, aviation, climate modeling, weather prediction, and wind energy applications, to name a few. The modern scientific literature on CBLs goes back over 100 years. Initially, the focus was on low-altitude measurements, and with the introduction of more advanced measurement techniques, the focus gradually shifted upwards. However, it was only after the introduction of large-eddy simulations (LES) in the early seventies that it became widely accepted that thermodynamic indicators are most suitable for identifying the different CBL regions (LeMone et al., 2019). Nevertheless, obtaining analytical profiles that describe the wind and potential temperature flux in the entire CBL has remained challenging due to the different flow physics in the various CBL regions.
The CBL can be subdivided into three layers (excluding the roughness sublayer), i.e. the surface layer, the mixed layer, and the entrainment zone (see figure 1). The surface layer is characterized by a superadiabatic potential temperature gradient and a strong wind shear, which is usually described by the Monin-Obukhov similarity theory (MOST, Monin and Obukhov, 1954). According to the MOST the non-dimensional wind speed and potential temperature gradient profiles are universal functions of the dimensionless height \(z/L\), where \(z\) is the height above the surface and \(L\) is the surface Obukhov length (Obukhov, 1946; Monin and Obukhov, 1954). Many studies have pointed out that the MOST does not explain all important surface-layer statistics under convective conditions (Panofsky et al., 1977; Khanna and Brasseur, 1997; Johansson et al., 2001; McNaughton et al., 2007; Salesky and Anderson, 2020; Cheng et al., 2021) or very stable conditions (Mahrt et al., 1998; Cheng et al., 2005). In particular, the normalized wind gradient \(\phi_{m}=(\kappa z/u_{*})(\partial U/\partial z)\) depends both on \(z/L\) and \(z_{i}/L\)(Khanna and Brasseur, 1997; Johansson et al., 2001), where \(z_{i}\) is the height of the inversion layer (see figure 1). Nevertheless, MOST is still widely used in numerical weather prediction and climate models (Salesky and Anderson, 2020), and thus will be used in the theoretical analysis and numerical simulations of this study. MOST applies only to the surface layer, and for it to be applicable, the absolute value of the Obukhov length \(L\) must be smaller than the height of the surface layer. Therefore we only consider the CBL with \(-z_{i}/L\gg 1\). In particular, we focus on the convective-roll dominant regime with \(-z_{i}/L\gtrsim 10\)(Salesky et al., 2017). Furthermore, we focus on dry and cloud-free CBLs to avoid complications due to physical processes like evaporation, precipitation, and cloud formation.
The mixed layer is characterized by intense vertical mixing caused by warm air thermals rising from the ground. Within the mixed layer, the magnitude of the mean velocity is much larger than the variations in the mean velocity. Thus, for applications where the mean velocity gradient is unimportant, the wind speed and potential temperature can be regarded as uniform (Kaimal et al., 1976; Salesky et al., 2017). This insight is incorporated in various CBL models (Lilly, 1968; Deardorff, 1973; Stull, 1976; Deardorff, 1979; Tennekes and Driedonks, 1981; Garratt et al., 1982). The entrainment zone is characterized by entrainment of
air from the free atmosphere. Deardorff et al. (1980) found in laboratory experiments that the ratio of the entrainment zone thickness to the depth of the mixing layer decreases asymptotically with increasing Richardson number \(Ri\) as follows \((h_{2}-h_{1})/h_{1}=0.21+1.31/Ri\) (see figure 1 for the definitions of \(h_{1}\) and \(h_{2}\)). This ratio is essential for developing entrainment models and has been studied extensively (Lilly, 1968; Sullivan et al., 1998; Zilitinkevich et al., 2012; Haghshenas and Mellado, 2019). The potential temperature flux profile decreases linearly with height and becomes negative in the entrainment zone. The entrainment flux ratio \(\Pi_{m}\), which is defined as the ratio of the potential temperature flux at the inversion layer height to its value at the ground, turns out to be nearly constant, i.e. \(\Pi_{m}\approx-0.2\)(Stull, 1976; Sorbjan, 1996; Conzemius and Fedorovich, 2006; Sun and Wang, 2008; LeMone et al., 2019). Note that the inversion layer is the upper region of the entrainment zone in which the potential temperature flux increases steeply from its minimum value at \(z=z_{i}\) to zero at \(z=h_{2}\) (see figure 1).
The geostrophic wind \((U_{g},V_{g})\) and the friction velocity \(u_{*}\) are usually connected through the well-known geostrophic drag law, which was initially derived for neutral boundary layers (Rossby and Montgomery, 1935; Blackadar and Tennekes, 1968; Tennekes and Lumley, 1972) and later extended to include buoyancy effects (Zilitinkevich, 1969). To include the effect of unsteadiness, Zilitinkevich and Deardorff (1974) and Arya (1975) proposed to replace the Ekman depth \(u_{*}/|f|\) in the geostrophic drag law, where \(f\) is the Coriolis parameter, with the time-dependent inversion layer height \(z_{i}\). However, significant disparities were observed between the geostrophic drag law for CBLs and measurement data (Zilitinkevich, 1975). Garratt et al. (1982) derived a relationship for the velocity defects in the mixed layer using a three-layer CBL model, which accounts for the effects of entrainment, baroclinity, advection, and local acceleration. In their formulation, the velocity defects are defined as the differences between the mixed-layer averaged winds and the geostrophic winds. Based on these velocity defects, they formulated a corresponding geostrophic drag law for the CBL.
Recently, Tong and Ding (2020) derived the convective logarithmic friction law by matching the law of the wall in the inner-inner layer with the velocity-defect law in the surface layer. This leading-order result relates the friction velocity \(u_{*}\) to the mixed-layer velocity scale \(U_{m}\). The difference between \(U_{g}\) and \(U_{m}\) scales as \((u_{*}^{2}w_{e})/(fz_{i})^{2}\), where \(w_{e}\) is the entrainment velocity, and \(V_{g}\) scales as \(-u_{*}^{2}/(fz_{i})\). Thus, up to non-dimensional coefficients, the geostrophic velocities \((U_{g},V_{g})\) can be related to \(u_{*}\). However, as Tong and Ding (2020) do not consider the effects of the entrainment zone, their velocity profiles are only valid for \(z/z_{i}<0.4\).
Various time-dependent models have been developed to explicitly account for entrainment processes at the top of CBLs (Troen and Mahrt, 1986; Noh et al., 2003; Hong et al., 2006). For example, the counter-gradient transport method (Holtslag and Moeng, 1991) and the eddy-diffusivity mass-flux approach (Siebesma et al., 2007; Li et al., 2021) are
Figure 1: Profiles of the potential temperature \(\Theta\), wind speed \(U_{\rm mag}\), and potential temperature flux \(q\) in the CBL. The vertical lines with double arrows indicate different length scales in the CBL, namely, from left to right, the Obukhov length \(L\), the lowest height where the potential temperature flux first reaches zero, \(h_{1}\), the inversion layer height at which the potential temperature flux reaches its minimum value, \(z_{i}\), and the height where the potential temperature flux first recovers zero, \(h_{2}\). The background color indicates the magnitude of the potential temperature flux \(q\,(z)\) for case 2, see Table 1.
widely used in coarse-resolution climate models. In general, the potential temperature is time-dependent (Lilly, 1968) and the entrainment velocity can affect the mean wind speed in the mixed layer (Tong and Ding, 2020). However, the velocity and potential temperature flux profiles are quasi-stationary, and therefore similarity theory can be employed to obtain analytical expressions for these profile shapes (Zilitinkevich and Deardorff, 1974; Arya, 1975; Zilitinkevich et al., 1992).
In this study, we focus on the derivation of analytical expressions for the mean velocity and potential temperature flux profiles in cloud-free CBLs. We use a perturbation method approach to construct an analytical expression for the normalized potential temperature flux profile as a function of height, taking into account the characteristics of both the surface layer and the capping inversion layer. The depth of the entrainment zone is connected to the convective logarithmic friction law to obtain analytical expressions for the velocity profile in the mixed layer and the entrainment zone. As remarked previously, the surface layer is still described by the MOST.
The organization of the paper is as follows. In Section 2 we obtain analytical expressions for the potential temperature flux and wind profiles. In Section 3 we validate the proposed profiles against LES. The conclusions are given in Section 4.
## 2 Theory
### Potential temperature flux profile
The potential temperature flux profile provides a precise and convenient demarcation between the mixed layer and the entrainment zone of the CBL (Deardorff, 1979; Deardorff et al., 1980). Figure 1 shows a definition of the various length scales in the CBL. Previous studies (Kaimal et al., 1976; Deardorff et al., 1980; Moeng and Sullivan, 1994; Noh et al., 2003; Garcia and Mellado, 2014; Haghshenas and Mellado, 2019) showed that the potential temperature flux (including both the turbulent part and the diffusive part) in CBLs decreases linearly from its maximum value at the surface to a minimum value at \(z=z_{i}\), and then increases steeply to zero in a narrow region \(z_{i}\leq z\leq h_{2}\) at the top of the boundary layer (figure 1). For typical CBLs the condition \(|\mathrm{d}z_{i}/\mathrm{d}t|\ll w_{*}\) holds, which implies that the boundary layer is quasi-stationary (Nieuwstadt et al., 2016, Section 7.6). Besides, the potential temperature flux \(q\) is fixed at the surface, and its value at the inversion layer height is nearly a constant fraction of the value at the ground \(q_{w}\). Therefore, the normalized potential temperature flux \(q(z,t)/q_{w}\) only depends on the similarity variable \(\xi=z/h_{2}(t)\), i.e.
\[q(z,t)/q_{w}=\Pi(\xi), \tag{1}\]
where the form of \(\Pi\) remains to be determined. Using the potential temperature equation, we derive below an ordinary differential equation for \(\Pi(\xi)\).
The solution of Eq. (6) reads
\[\Pi=1-c_{\Pi}\xi+(c_{\Pi}-1)\frac{e^{\xi/\epsilon}-1}{e^{1/\epsilon}-1},\quad 0 \leq\xi\leq 1. \tag{7}\]
Since \(\epsilon\ll 1\), the value of \(\Pi\) in the bulk of the mixed layer can be approximated as
\[\Pi\approx 1-c_{\Pi}\xi. \tag{8}\]
Similarly, since \(\Pi=0\) at \(\xi=h_{1}/h_{2}\), the slope \(c_{\Pi}\) reduces to
\[c_{\Pi}=h_{2}/h_{1}>1. \tag{9}\]
Therefore, the ratio of the entrainment zone thickness to the mixing layer depth is
\[R\equiv(h_{2}-h_{1})/h_{1}=c_{\Pi}-1>0. \tag{10}\]
Deardorff et al. (1980) found in laboratory experiments that the value of \(R\) is between 0.2 and 0.4. Furthermore, the entrainment flux ratio \(\Pi_{m}\), i.e. the minimum value of \(\Pi\), can be approximated as
\[\Pi_{m}\approx 1-c_{\Pi}(1-2\epsilon)\approx-(R-2\epsilon). \tag{11}\]
Stull (1976) and Sorbjan (1996) found that \(-0.3\leq\Pi_{m}\leq-0.1\). This result is consistent with the LES results of Sullivan and Patton (2011) with \(\Pi_{m}\approx-0.2\), the empirical results of Lenschow (1974) with \(\Pi_{m}=-0.1\), and the direct numerical simulation results of Garcia and Mellado (2014) with \(\Pi_{m}\approx-0.12\).
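As a quick numerical check of equations (7)-(11), the short sketch below evaluates \(\Pi(\xi)\) for illustrative values \(R=0.25\) (within the range of Deardorff et al. 1980) and \(\epsilon=0.05\); both values are assumed here purely for demonstration and are not fitted to the LES of Section 3.

```python
import numpy as np

def Pi(xi, c_pi, eps):
    """Normalised potential temperature flux profile, equation (7)."""
    xi = np.asarray(xi, dtype=float)
    return 1.0 - c_pi * xi + (c_pi - 1.0) * np.expm1(xi / eps) / np.expm1(1.0 / eps)

R, eps = 0.25, 0.05             # illustrative values (assumed)
c_pi = 1.0 + R                  # equation (10)
xi = np.linspace(0.0, 1.0, 1001)
profile = Pi(xi, c_pi, eps)

print(profile[0], profile[-1])            # boundary values: 1 at xi = 0, 0 at xi = 1
print(profile.min(), -(R - 2.0 * eps))    # exact minimum vs the estimate of equation (11)
```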
We note that the perturbation method approach to model the potential temperature flux profile was recently introduced by Liu et al. (2021) for conventionally neutral atmospheric boundary layers where the surface potential temperature flux is always zero. However, it should be noted that in the CBLs under consideration, the surface is heated and thermal plumes are generated at the ground, resulting in significantly different turbulence generation mechanisms. The applicability of the perturbation method approach to model the strong inversion layer relies on its ability to capture the strong gradients in the inversion layer. Here we used the second-order ODE defined by Eq. (6) to model the potential temperature flux profile as our _a posteriori_ tests confirm that this is sufficient to capture the inversion layer accurately. Higher-order terms could be incorporated, but this is not considered here to keep the obtained profiles relatively simple. An important observation is that the perturbation method approach is consistent with the finding of Garcia and Mellado (2014). They showed that the vertical structure of the entrainment zone is best described by two overlapping sublayers characterized by different length scales, namely the mean penetration depth of an overshooting thermal for the upper sublayer and a turbulent integral length scale for the lower sublayer.

### Mean velocity profile

Tong and Ding (2020) derived the
convective logarithmic friction law from first principles, which connects the mixed-layer mean velocity scale \(U_{m}\) and the friction velocity \(u_{*}\) in the convective-roll dominant regime (\(-z_{\ell}/L\gg 1\)) as follows,
\[\frac{U_{m}}{u_{*}}=\frac{1}{\kappa}\ln\left(-\frac{L}{z_{0}}\right)-C, \tag{16}\]
where \(C=1\) is an empirical constant determined from our LES database (see below).
From the potential temperature flux profile modeling we learned that the ODE should have a second-order derivative term \(\epsilon U^{\prime\prime}\) to model the entrainment zone near the top of the boundary layer. The top boundary condition is given by the geostrophic wind component \(U_{g}\). The lower boundary condition is given by equating Eqs. (13) and (16), namely \(U(\xi_{0})=U_{m}\), since Tong and Ding (2020) showed that \(U_{m}\) is very close to \(U(z)\) in the mixed layer. Here \(\xi_{0}\) represents the height of the top of the surface layer, which can be determined using Eqs. (13) and (16),
\[\ln\left(-\frac{h_{2}}{L}\xi_{0}\right)-\psi_{m}\left(\frac{h_{2}}{L}\xi_{0} \right)=-\kappa C\quad\Rightarrow\quad\xi_{0}=\xi_{0}\left(\frac{h_{2}}{L} \right). \tag{17}\]
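The implicit relation in Eq. (17) can be solved numerically. The sketch below assumes the standard Businger-Dyer form for the stability correction \(\psi_{m}\) (the specific form used in this work may differ), a von Karman constant \(\kappa=0.4\), and \(C=1\); the function name and the illustrative value of \(h_{2}/L\) are ours.

```python
import numpy as np
from scipy.optimize import brentq

KAPPA, C = 0.4, 1.0  # von Karman constant and the empirical constant of Eq. (16)

def psi_m(zeta):
    """Businger-Dyer stability correction for momentum (unstable side, zeta < 0)."""
    x = (1.0 - 16.0 * zeta) ** 0.25
    return (2.0 * np.log((1.0 + x) / 2.0) + np.log((1.0 + x * x) / 2.0)
            - 2.0 * np.arctan(x) + np.pi / 2.0)

def xi0_from_h2_over_L(h2_over_L):
    """Solve Eq. (17) for the top of the surface layer xi_0 (requires h2_over_L < 0)."""
    residual = lambda xi: np.log(-h2_over_L * xi) - psi_m(h2_over_L * xi) + KAPPA * C
    return brentq(residual, 1e-8, 1.0)  # the root is bracketed between the surface and xi = 1

print(xi0_from_h2_over_L(-20.0))  # illustrative h2/L for a strongly convective case
```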
Because \(\epsilon\ll 1\), the solution obtained from \(U(\xi_{0})=U_{m}\) is almost the same as from \(U(0)=U_{m}\), while the expression of the latter is much simpler. Therefore, we model the profile of the streamwise velocity \(U\) in the mixed layer and the entrainment zone as:
\[\epsilon U^{\prime\prime}-U^{\prime}=0,\quad U(0)=U_{m},\quad U(1)=U_{g}. \tag{18}\]
The solution of Eq. (18) is
\[U=U_{m}+(U_{g}-U_{m})\frac{e^{\xi/\epsilon}-1}{e^{1/\epsilon}-1}. \tag{19}\]
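For completeness, Eq. (19) follows directly from Eq. (18): the general solution of \(\epsilon U^{\prime\prime}-U^{\prime}=0\) is \(U(\xi)=A+Be^{\xi/\epsilon}\), and the boundary conditions give

\[A+B=U_{m},\qquad A+Be^{1/\epsilon}=U_{g}\quad\Rightarrow\quad B=\frac{U_{g}-U_{m}}{e^{1/\epsilon}-1},\qquad A=U_{m}-B,\]

which recovers Eq. (19).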
We note that Eq. (19) is only valid in the mixed layer and entrainment zone as the wind speed in the surface layer is still modeled using the MOST. By combining Eqs. (13) and (19) and recalling that Eq. (13) increases monotonically as \(z\) increases, we obtain the following analytic description of the streamwise velocity profile \(U(z)\) for the entire CBL,
\[U=\begin{cases}\frac{u_{*}}{\kappa}\left[\ln\left(\frac{z}{z_{0}}\right)-\psi _{m}\left(\frac{z}{L}\right)\right],&\xi\leq\xi_{0},\\ U_{m}+(U_{g}-U_{m})\frac{e^{\xi/\epsilon}-1}{e^{1/\epsilon}-1},&\xi_{0}<\xi \leq 1,\end{cases} \tag{20}\]
where \(\xi_{0}\) is given by Eq. (17). As remarked in Section 1, the surface layer profile contains two length scales, i.e. \(z_{0}\) for the inner-inner layer and the Obukhov length \(L\) for the inner-outer layer. Similarly, the velocity profile for the mixed layer and entrainment zone contains two length scales, i.e. \(z_{\ell}\) to describe the mixed layer and \(h_{2}-z_{\ell}=2\epsilon h_{2}\) to describe the upper sublayer of the entrainment zone. This confirms the view presented by Tong and Ding (2020) that the entrainment zone has a different scaling than the surface and mixed layers, and can therefore be considered as another inner layer in the overall CBL problem. We note that the proposed analytical profile is empirical, similar to the MOST, and that the parameter \(\epsilon\) parameterizes the effect of various physical processes. We further note that \(U_{m}\) and \(u_{*}\) are related as given by Eq. (16), and that the difference \(U_{g}-U_{m}\) scales as \((u_{*}^{2}w_{e})/(fz_{\ell})^{2}\) (Tong and Ding, 2020). Thus, Eq. (20) is predictive if the entrainment velocity \(w_{e}\) is given as an input parameter. To determine the value of \(w_{e}\), one may need to revisit the entrainment processes at the top of CBLs (e.g. Garcia and Mellado, 2014). In addition, the velocity predicted by Eq. (20) is continuous throughout the boundary layer and applicable for the considered ranges (see figure 5 below). However, its first derivative is discontinuous at the patching location \(\xi=\xi_{0}\). This is a typical characteristic of low-order models (Garrett et al., 1982). To capture the smooth transition, a high-order model is needed (Tong and Ding, 2020). We leave these for future work.
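As an illustration of how Eq. (20) can be evaluated, the following sketch patches the MOST branch onto the mixed-layer/entrainment-zone solution. It reuses `psi_m` and `xi0_from_h2_over_L` from the sketch above, and all numerical values in the example call are illustrative rather than taken from a specific simulation.

```python
import numpy as np

def wind_speed_profile(z, u_star, L, z0, Ug, h2, eps):
    """Composite streamwise velocity of Eq. (20); L < 0 for a convective layer."""
    Um = u_star * (np.log(-L / z0) / KAPPA - C)                  # Eq. (16)
    xi = z / h2
    xi0 = xi0_from_h2_over_L(h2 / L)                             # Eq. (17)
    surface = u_star / KAPPA * (np.log(z / z0) - psi_m(z / L))   # MOST branch, Eq. (13)
    outer = Um + (Ug - Um) * (np.exp(xi / eps) - 1.0) / (np.exp(1.0 / eps) - 1.0)
    return np.where(xi <= xi0, surface, outer)

z = np.linspace(1.0, 1000.0, 500)  # heights in metres
U = wind_speed_profile(z, u_star=0.56, L=-58.0, z0=0.16, Ug=10.0, h2=1000.0, eps=0.044)
```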
To model the wind profile \(V\) in the the mixed layer and entrainment zone, we use a similar ODE as Eq. (18). The top boundary condition is given by the geostrophic wind component \(V_{g}\). As the spanwise velocity \(V\) is small compared to the streamwise velocity \(U\) in the mixed layer (Tong and Ding, 2020), the lower boundary condition is given by \(V(\xi_{0})=V(0)=0\). Therefore, we model the profile of the spanwise velocity \(V\) in the entire boundary layer using
\[\epsilon V^{\prime\prime}-V^{\prime}=0,\quad V(0)=0,\quad V(1)=V_{g}. \tag{21}\]
The solution of Eq. (21) is
\[V=V_{g}\frac{e^{\xi/\epsilon}-1}{e^{1/\epsilon}-1}. \tag{22}\]
Since \(V_{g}\) scales as \(-u_{*}^{2}/(fz_{i})\)(e.g. Wyngaard, 2010; Tong and Ding, 2020), the geostrophic wind component \(V_{g}\) can be connected to \(u_{*}\), up to a non-dimensional coefficient \(-V_{g}fz_{i}/u_{*}^{2}=0.66\), which is determined from our LES database (see Table 1).
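A corresponding sketch for the spanwise component combines Eq. (22) with the fitted coefficient \(-V_{g}fz_{i}/u_{*}^{2}=0.66\); the function name is ours, and \(h_{2}\) is treated as a separate input.

```python
import numpy as np

def spanwise_profile(z, u_star, f, zi, h2, eps, coeff=0.66):
    """Spanwise velocity of Eq. (22) with V_g inferred from the fitted coefficient 0.66."""
    Vg = -coeff * u_star**2 / (f * zi)
    xi = z / h2
    return Vg * (np.exp(xi / eps) - 1.0) / (np.exp(1.0 / eps) - 1.0)

# With u_* ~ 0.56 m/s, f = 1e-4 rad/s and z_i ~ 1000 m this gives |V_g| of roughly 2 m/s,
# comparable to case 1 in Table 1.
```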
## 3 Numerical validation
### Numerical method and computational setup
We use LES to simulate the CBL flow over an infinite flat surface with homogeneous roughness. We integrate the spatially-filtered Navier-Stokes equations and the filtered transport equation for the potential temperature (Albertson, 1996; Albertson and Parlange, 1999; Gadde et al., 2021; Liu et al., 2021, 2021; Liu and Stevens, 2021). Molecular viscosity is neglected as the Reynolds number in the atmospheric boundary layer flow is very high, and we
use the advanced Lagrangian-averaging scale-dependent model to parameterize the sub-grid scale shear stress and potential temperature flux (Bou-Zeid et al., 2005; Stoll and Porte-Agel, 2008). We note that the Lagrangian-averaging scale-dependent model has been extensively validated and widely used in the literature (Bou-Zeid et al., 2005; Stoll and Porte-Agel, 2008; Calaf et al., 2010; Wu and Porte-Agel, 2011; Zhang et al., 2019; Gadde et al., 2021).
Our code is an updated version of the one used by Albertson and Parlange (1999). The grid points are uniformly distributed, and the computational planes for horizontal and vertical velocities are staggered in the vertical direction. The first vertical velocity grid plane is located at the ground. The first gridpoint for the horizontal velocity components and the potential temperature is located at half a grid distance above the ground. We use a second-order finite difference method in the vertical direction and a pseudo-spectral discretization in the horizontal directions. Time integration is performed using the second-order Adams-Bashforth method. The projection method is used to enforce the divergence-free condition. At the top boundary, we impose a constant potential temperature lapse rate, zero vertical velocity, and zero shear stress boundary condition. At the bottom boundary, we employ the classical wall stress and potential temperature flux formulations based on the MOST (Moeng, 1984; Bou-Zeid et al., 2005; Stoll and Porte-Agel, 2008; Gadde et al., 2021).
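For reference, the second-order Adams-Bashforth update mentioned above advances a quantity with the current and previous right-hand-side evaluations; the sketch below is generic and not taken from the LES code.

```python
def adams_bashforth2(u, rhs_now, rhs_prev, dt):
    """Second-order Adams-Bashforth step: u^{n+1} = u^n + dt*(1.5*f^n - 0.5*f^{n-1})."""
    return u + dt * (1.5 * rhs_now - 0.5 * rhs_prev)
```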
We perform eleven LES to verify the validity of the derived wind speed and potential temperature flux profiles for CBLs. The computational domain is \(5\,\mathrm{km}\times 5\,\mathrm{km}\times 2\,\mathrm{km}\) and the grid resolution is \(256\times 256\times 256\). Due to the large computational expense, only several external parameters are varied in the simulations. The flow is driven by the geostrophic wind of \(G=\sqrt{U_{g}^{2}+V_{g}^{2}}=10\) m/s, the buoyancy parameter is \(\beta=0.0325\,\mathrm{m/(s^{2}\cdot K)}\), and the Coriolis parameter is \(f=1\times 10^{-4}\,\mathrm{rad/s}\) (Moeng and Sullivan, 1994; Abkar and Moin, 2017; Gadde et al., 2021). To ensure the CBLs are in the convective-roll dominant regime with \(-z_{i}/L\gtrsim 10\), the surface potential temperature flux is set to \(q_{w}=0.12\sim 0.24\,\mathrm{K\cdot m/s}\). Note that the convective logarithmic friction law (Eq. (16)) was derived only recently by Tong and Ding (2020) and tested only in a relatively narrow range of \(-L/z_{0}\), namely \(-L/z_{0}\in[2.5\times 10^{2},1.5\times 10^{3}]\). To evaluate the performance of this law in a much wider range, i.e. \(-L/z_{0}\in[3.6\times 10^{2},0.7\times 10^{5}]\), the roughness length is varied between \(z_{0}=0.0002\) m and \(z_{0}=0.16\) m, where the lower bound of \(z_{0}\) is set to a representative value of the sea surface (Wieringa et al., 2001). The vertical potential temperature gradient is varied between \(\Gamma=1\) K/km and \(\Gamma=9\) K/km to capture the relevant range observed in atmospheric measurements (Sorbjan, 1996). The velocity field is initialized with the geostrophic wind \(G=10\) m/s.
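The nondimensional groups reported in Table 1 follow from the prescribed surface flux and friction velocity. The sketch below assumes the conventional definition of the Obukhov length, \(L=-u_{*}^{3}/(\kappa\beta q_{w})\), and uses an illustrative inversion height since \(z_{i}\) is not tabulated; with the case-1 inputs it reproduces \(-L/z_{0}\approx 3.6\times 10^{2}\) and \(U_{m}\approx 7.7\) m/s (cf. 7.60 m/s in the table).

```python
import numpy as np

KAPPA, BETA = 0.4, 0.0325   # von Karman constant; buoyancy parameter [m s^-2 K^-1]

def derived_parameters(u_star, q_w, z0, zi, C=1.0):
    """Nondimensional groups of Table 1 from the governing parameters."""
    L = -u_star**3 / (KAPPA * BETA * q_w)         # Obukhov length (assumed definition)
    Um = u_star * (np.log(-L / z0) / KAPPA - C)   # convective log friction law, Eq. (16)
    w_star = (BETA * q_w * zi) ** (1.0 / 3.0)     # convective velocity
    return {"-zi/L": -zi / L, "-L/z0": -L / z0, "U_m": Um, "w_star": w_star}

print(derived_parameters(u_star=0.562, q_w=0.24, z0=0.16, zi=1000.0))  # case-1 inputs, zi illustrative
```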
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline Case & \(\Gamma\) (K/km) & \(q_{w}\) (K\(\cdot\)m/s) & \(z_{0}\) (m) & \(u_{*}\) (m/s) & \(U_{m}\) (m/s) & \(|V_{g}|\) (m/s) & \(\epsilon\) & \(c_{\Pi}\) & \(Ri\) & \(-z_{i}/L\) & \(-L/z_{0}\) \\ \hline
1 & 9 & 0.24 & 0.16 & 0.562 & 7.60 & 2.00 & 0.044 & 1.32 & 56.1 & 19.2 & \(3.6\times 10^{2}\) \\
2 & 3 & 0.24 & 0.16 & 0.563 & 7.59 & 1.87 & 0.052 & 1.34 & 51.0 & 19.1 & \(3.6\times 10^{2}\) \\
3 & 1 & 0.24 & 0.16 & 0.562 & 7.59 & 1.84 & 0.055 & 1.34 & 47.9 & 19.2 & \(3.6\times 10^{2}\) \\
4 & 3 & 0.12 & 0.16 & 0.533 & 7.70 & 2.20 & 0.046 & 1.34 & 94.2 & 11.0 & \(0.6\times 10^{3}\) \\
5 & 3 & 0.24 & 0.016 & 0.463 & 8.44 & 1.17 & 0.050 & 1.33 & 51.0 & 34.5 & \(2.0\times 10^{3}\) \\
6 & 3 & 0.20 & 0.02 & 0.468 & 8.36 & 1.32 & 0.046 & 1.31 & 59.6 & 27.5 & \(2.0\times 10^{3}\) \\
7 & 3 & 0.12 & 0.016 & 0.444 & 8.45 & 1.43 & 0.038 & 1.31 & 91.5 & 19.0 & \(3.5\times 10^{3}\) \\
8 & 3 & 0.20 & 0.002 & 0.392 & 8.93 & 0.86 & 0.033 & 1.28 & 55.5 & 47.3 & \(1.2\times 10^{4}\) \\
9 & 3 & 0.24 & 0.0016 & 0.389 & 8.94 & 0.75 & 0.044 & 1.30 & 50.4 & 58.3 & \(1.2\times 10^{4}\) \\
10 & 3 & 0.12 & 0.0016 & 0.375 & 8.94 & 0.88 & 0.036 & 1.30 & 93.9 & 31.6 & \(2.1\times 10^{4}\) \\
11 & 3 & 0.20 & 0.0002 & 0.334 & 9.24 & 0.57 & 0.041 & 1.30 & 58.6 & 75.8 & \(0.7\times 10^{5}\) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of all simulated cases. Here \(\Gamma=\partial\Theta/\partial z\) is the vertical derivative of the mean potential temperature in the free atmosphere, \(q_{w}\) is the surface potential temperature flux, \(z_{0}\) is the roughness length, \(u_{*}\) is the friction velocity, \(U_{m}\) is the wind speed in the mixed layer, \(L\) is the Obukhov length, \(z_{i}\) is the inversion layer height, \(|V_{g}|\) is the magnitude of the spanwise geostrophic wind, \(\epsilon\) and \(c_{\Pi}\) are dimensionless parameters calculated by Eqs. (5) and (9), respectively, and \(Ri=\beta\Delta\Theta z_{i}/w_{*}^{2}\) is the Richardson number, where \(\Delta\Theta=\Theta(h_{2})-\Theta(h_{1})\) is the potential temperature difference across the entrainment zone and \(w_{*}=(\beta q_{w}z_{i})^{1/3}\) is the convective velocity.
Figure 3: Vertical profile of normalized potential temperature flux \(q/q_{w}\). Circles: LES data (Table 1); up-triangle: LES data of Mason (1989); down-triangle: LES data of Sorbjan (1996); square: direct numerical simulations data of Garcia and Mellado (2014); diamond: LES data of Abkar and Moin (2017); red line: prediction given by Lenschow (1974); blue line: prediction given by Noh et al. (2003); black line: prediction given by Eq. (7) with \(\epsilon=0.044\) and \(c_{\Pi}=1.32\). The figure shows that the proposed model captures the simulation trends very well.
Figure 2: The comparison of simulated (a) normalized wind speed \(\sqrt{U^{2}+V^{2}}/G\) and (b) normalized potential temperature flux \(q/q_{w}\) profiles for case 2 (see Table 1) with different computational domain sizes.
In figure 4(a) we compare the wind speed in the bulk of the mixed layer (\(0.4\leq z/h_{2}\leq 0.6\)) against the normalized Obukhov length \(-L/z_{0}\) with results from Tong and Ding (2020). The figure shows that our simulations convincingly confirm the validity of the convective logarithmic friction law for the wind speed (Eq. (16) with \(C=1\)) over a much wider range of \(-L/z_{0}\in[3.6\times 10^{2},0.7\times 10^{5}]\) than previously considered (\(-L/z_{0}\in[2.5\times 10^{2},1.5\times 10^{3}]\)).
Figure 5 shows the vertical profiles of the mean streamwise velocity \(U\). The filled circles are the
present LES data, the dashed line is the theoretical prediction given by the MOST, and the solid line is the prediction given by Eq. (20) with \(\epsilon=0.044\). The figure shows that the MOST accurately captures the surface layer's wind profile (lowest 20% of the boundary layer). However, in the mixed layer, the prediction of the MOST deviates significantly from the LES data. In particular, the discrepancy from the MOST increases as the surface potential temperature flux \(q_{w}\) decreases (figure 5b,c) or the roughness length \(z_{0}\) increases (figure 5a,d). Therefore, MOST is seldom used to specify wind profiles outside of the surface layer. Figure 5 shows that the proposed wind profile given by Eq. (20) accurately captures the velocity profile throughout the entire boundary layer. This excellent agreement confirms the validity of our proposed wind profile of Eq. (20) for atmospheric boundary layers in the range of studied parameters.
Figure 6 shows the corresponding profiles of the mean spanwise velocity \(V\). The filled symbols are the present LES data and the solid line is the prediction given by Eq. (22) with \(\epsilon=0.044\). Overall, the agreement between the proposed wind profile given by Eq. (22) and the LES data is reasonably good in the entire boundary layer. This agreement confirms the validity of our proposed wind profile of Eq. (22) for CBLs in the range of studied parameters (i.e. \(-L/z_{0}\in[3.6\times 10^{2},0.7\times 10^{5}]\)). We note that the figure confirms that the spanwise velocity \(V\) is much smaller than the streamwise velocity \(U\). The figure also indicates that the magnitude of the geostrophic wind component \(|V_{g}|\) increases as the surface potential temperature flux \(q_{w}\) decreases (figure 6b,c) or the roughness length \(z_{0}\) increases (figure 6a,d).
## 4 Conclusions
This work uses a perturbation method approach in conjunction with the convective logarithmic friction law and the Monin-Obukhov similarity theory to develop analytical expressions of the wind and potential temperature flux profiles in convective atmospheric boundary layers. The validity of the proposed wind (given by Eqs. (20) and (22)) and potential temperature flux profiles (given by Eq. (7)) has been confirmed by their excellent agreement with large-eddy simulation results for atmospheric boundary layers in the convective-roll dominant regime with \(-z_{i}/L\gtrsim 10\), where \(L\) is the Obukhov length and \(z_{i}\) the inversion layer height. Furthermore, our simulations confirm that the convective logarithmic friction law of Eq. (16), which was originally proposed by Tong and Ding (2020) for the mixed-layer mean velocity scale, is valid for an extensive range of \(-L/z_{0}\), namely \(-L/z_{0}\in[3.6\times 10^{2},0.7\times 10^{5}]\), where \(z_{0}\) is the surface roughness length. Since accurately capturing the coupling between meso and microscale processes is a long-standing challenge in numerical weather predictions (Wyngaard, 2004; Larsen et al., 2018; Veers
Figure 6: Vertical profile of the mean spanwise velocity \(V\). Filled circles: LES data (Table 1); solid line: prediction given by Eq. (22) with \(\epsilon=0.044\).
et al., 2019), the proposed analytical profiles may be relevant for climate modeling and weather forecasting to better understand the effect of convective atmospheric boundary layers on, for example, wind farms. Possible future work will involve investigating models to predict the entrainment velocity at the top of CBLs and developing a high-order model that can capture the transition between the entrainment zone and free atmosphere. The latter may require a formal asymptotic series expansion of the governing equations, allowing for the separation into a time-dependent and steady-state problem at different orders.
AcknowledgmentsThis work was supported by the Hundred Talents Program of the Chinese Academy of Sciences, the National Natural Science Fund for Excellent Young Scientists Fund Program (Overseas), the National Natural Science Foundation of China Grant (No. 11621202), the Shell-NWO/FOM-initiative Computational sciences for energy research of Shell and Chemical Sciences, Earth and Live Sciences, Physical Sciences, Stichting voor Fundamenteel Onderzoek der Materie (FOM) and STW, and an STW VIDI Grant (No. 14868). This work was sponsored by NWO Domain Science for the use of the national computer facilities. We acknowledge PRACE for awarding us access to Irene at Tres Grand Centre de Calcul du CEA (TGCC) under PRACE project 2019215098, and the advanced computing resources provided by the Supercomputing Center of the USTC.
Data availability statementThe data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2302.08005 | Slapo: A Schedule Language for Progressive Optimization of Large Deep
Learning Model Training | Recent years have seen an increase in the development of large deep learning
(DL) models, which makes training efficiency crucial. Common practice is
struggling with the trade-off between usability and performance. On one hand,
DL frameworks such as PyTorch use dynamic graphs to facilitate model developers
at a price of sub-optimal model training performance. On the other hand,
practitioners propose various approaches to improving the training efficiency
by sacrificing some of the flexibility, ranging from making the graph static
for more thorough optimization (e.g., XLA) to customizing optimization towards
large-scale distributed training (e.g., DeepSpeed and Megatron-LM). In this
paper, we aim to address the tension between usability and training efficiency
through separation of concerns. Inspired by DL compilers that decouple the
platform-specific optimizations of a tensor-level operator from its arithmetic
definition, this paper proposes a schedule language, Slapo, to decouple model
execution from definition. Specifically, Slapo works on a PyTorch model and
uses a set of schedule primitives to convert the model for common model
training optimizations such as high-performance kernels, effective 3D
parallelism, and efficient activation checkpointing. Compared to existing
optimization solutions, Slapo progressively optimizes the model "as-needed"
through high-level primitives, and thus preserving programmability and
debuggability for users to a large extent. Our evaluation results show that by
scheduling the existing hand-crafted optimizations in a systematic way using
Slapo, we are able to improve training throughput by up to 2.92x on a single
machine with 8 NVIDIA V100 GPUs, and by up to 1.41x on multiple machines with
up to 64 GPUs, when compared to the out-of-the-box performance of DeepSpeed and
Megatron-LM. | Hongzheng Chen, Cody Hao Yu, Shuai Zheng, Zhen Zhang, Zhiru Zhang, Yida Wang | 2023-02-16T00:34:53Z | http://arxiv.org/abs/2302.08005v2 | # Decoupled Model Schedule for Deep Learning Training
###### Abstract
Recent years have seen an increase in the development of large deep learning (DL) models, which makes training efficiency crucial. Common practice is struggling with the trade-off between usability and performance. On one hand, DL frameworks such as PyTorch use dynamic graphs to facilitate model developers at a price of sub-optimal model training performance. On the other hand, practitioners propose various approaches to improving the training efficiency by sacrificing some of the flexibility, ranging from making the graph static for more thorough optimization (e.g., XLA) to customizing optimization towards large-scale distributed training (e.g., DeepSpeed and Megatron-LM).
In this paper, we aim to address the tension between usability and training efficiency through separation of concerns. Inspired by DL compilers that decouple the platform-specific optimizations of a tensor-level operator from its arithmetic definition, this paper proposes a schedule language, Slapo, to decouple model execution from definition. Specifically, Slapo works on a PyTorch model and uses a set of schedule primitives to convert the model for common model training optimizations such as high-performance kernels, effective 3D parallelism, and efficient activation checkpointing. Compared to existing optimization solutions, Slapo _progressively_ optimizes the model "as-needed" through high-level primitives, and thus preserving programmability and debuggability for users to a large extent. Our evaluation results show that by scheduling the existing hand-crafted optimizations in a systematic way using Slapo, we are able to improve training throughput by up to 3.35\(\times\) on a single machine with 8 NVIDIA V100 GPUs, and by up to 1.32\(\times\) on multiple machines with up to 64 GPUs, when compared to the out-of-the-box performance of DeepSpeed and Megatron-LM.
## 1 Introduction
The demand of large deep learning (DL) models is surging in recent years as they demonstrate dominating model accuracy on a range of tasks in natural language processing (NLP) [3, 5, 9, 11] and computer vision [12, 28, 54]. These models are normally invented in user-friendly DL frameworks like PyTorch [35] with dynamic model graphs1, which by design lacks sufficient optimization for high-performance execution. This issue becomes more and more critical as the size of models grows exponentially and so does the time of training.
Footnote 1: Dynamic graph DL frameworks construct the model graph on the fly when executing its forward computation instead of constructing the graph ahead of time.
In order to reduce the model training time, developers propose various kinds of optimization. The first type of optimization is implemented manually in different layers of model training, such as inserting high-performance kernels [10, 24, 34, 47] for computationally intensive operators on specific devices (e.g., NVIDIA GPUs), employing data, tensor, and pipeline parallelism [32, 47, 43], as well as activation checkpointing [7, 18, 20], to efficiently distribute the training across multiple devices. However, manual optimization introduces the following two challenges.
**Challenge 1: Generality -** Incorporating the above optimizations requires making direct changes to the model implementation, which means that the optimization is not easy to generalize to other models. A new model, even with minimal change from the old one, may not be able to directly reuse the old optimization. In addition, the optimized model becomes platform-specific, requiring developers to maintain multiple implementations to serve all requirements (e.g., training on different platforms and deploying on non-GPU devices).
**Challenge 2: Ease of Tuning -** In practice, an optimization scheme has a number of configurations to tune (e.g., pipeline stages, number of activation checkpoints) to get a combination that results in the best performance. Developers need to identify tunable configurations in the implementation and modify the model to expose them for effective tuning. This process can be tedious and error-prone especially when the model definition is closely tied to optimizations.
In addition to manual optimization, the other set of optimization approaches converts the DL model into a number of
static graphs_ and leverages DL compilers to automatically apply optimizations. For example, JAX [4] is a compiler-based DL framework powered by a DL compiler XLA [16]. JAX traces the entire model to obtain a whole graph statically, on top of which the compiler can perform aggressive optimizations such as operator fusion, expression simplification, and even 3D parallelism [58]. Similarly, the recent release PyTorch 2.0 [37] provides a compiler interface to trace PyTorch dynamic graph executions and construct static graphs in torch.fx [45] for optimizations. While automatic optimization requires minimal engineering effort from model developers and addresses some of the challenges mentioned above, it also introduces two new challenges.
**Challenge 3: Programmability -** Working on static model graphs is limited by the requirement that everything must be statically analyzable and deterministic. Frameworks may impose constraints on the users to facilitate the conversion to static graphs. For example, JAX programming model requires pure Python functions, no in-place updates, etc., so developers may need to rewrite the model to meet these constraints in order to make it runnable. Moreover, it is usually non-trivial for developers to control or configure the optimizations in fine granularity, such as disabling certain rules, or excluding certain operators from a compiler pass.
**Challenge 4: Debuggability -** To make model implementation easy to understand and maintain, model developers usually implement layer modules (e.g., convolutional, fully connected, and attention layers) as building blocks, and use them to compose a model _hierarchically_. However, to expand the scope of optimization and improve performance, DL compilers operating on a static model graph often flatten the hierarchy to create a single-level dataflow graph, and rewrite certain operators (e.g., decomposing the batch_norm op into a number of smaller ones). This prevents developers from understanding and troubleshooting performance or convergence issues, as the optimized model may bear little resemblance to the original model implementation.
To address the challenges mentioned above, we propose _Slapo2_, a Schedule **LA**nguage for **P**rogressive **O**ptimization, designed for DL frameworks with dynamic model graphs. Slapo has the following major features.
Footnote 2: Open source: [https://github.com/awslabs/slapo](https://github.com/awslabs/slapo).
**Decouple model execution from its definition.** To address Challenge 1, we decouple model execution (named "schedule") from its definition. As a result, the model implementation remains the same, and developers can optimize a model-specific and platform-specific schedule in a separate place. This idea is inspired by well-known domain-specific compilers - Halide [42] and Apache TVM [6] - which propose widely adopted schedule languages that decouple tensor operator scheduling from its arithmetic definition.
**Auto-tuning.** A separate schedule also enables massive auto-tuning opportunities. Similar to AutoTVM [8], Slapo provides a programming interface that allows developers to specify a set of tuneable knobs to form an efficient tuning space. The tuning space can then be explored by Slapo auto-tuner to realize the optimal configuration, which addresses Challenge 2. Along this direction, Slapo can also enable auto-scheduling as Ansor [57], and this is our planned future work.
**Progressive optimization.** Slapo incorporates a "trace by need" approach that only traces a desired module to be a static graph for compiler-based aggressive optimizations. The traced part can be expanded or shrunk progressively as needed. Developers explicitly call the scheduling primitives to realize this, addressing Challenge 3.
**Structure-preserving scheduling.** Model developers usually define building blocks (e.g., convolutional or attention layers), and then compose them together to form a complete model. Consequently, developers often leverage this structure to analyze and debug the model. Slapo preserves this hierarchy when constructing the schedule (see SS 3 for details), so that developers can easily locate the module and apply scheduling. Also, as the model structure is preserved and optimization can be progressively applied, it facilitates the users to debug any performance and convergence issue, addressing Challenge 4.
In summary, we make the following contributions:
* We propose Slapo, a schedule language that decouples model execution from definition, and preserves model structure hierarchy to enable progressive optimization.
* We define a set of schedule primitives for Slapo to cover widely used optimizations in efficient training.
* We design and implement a lightweight auto-tuner for further reducing the efforts required to identify the optimal schedule configurations for training efficiency.
We evaluate Slapo by training popular deep learning models with billions of parameters and compare Slapo with SOTA distributed training frameworks such as DeepSpeed [44] and Megatron-LM [47]. With minimal program effort, Slapo is capable of scheduling the existing hand-crafted optimizations to achieve up to 3.35\(\times\) speedup on a single machine with 8 NVIDIA V100 GPUs, and up to 1.32\(\times\) speedup on multiple machines with up to 64 NVIDIA V100 GPUs, when compared to the out-of-the-box best baseline.
## 2 Background and Motivation
In this section, we first introduce common practices of improving a DL model training efficiency for dynamic graphs (e.g., PyTorch [35]), followed by an end-to-end motivating example to illustrate the challenges of this process.
### Efficient Model Training
**High-performance kernel libraries.** To achieve high efficiency in deep learning model training, it is straightforward to leverage efficient kernels specifically optimized for particular hardware platforms (e.g., NVIDIA GPUs). For example, there
are a number of libraries [10, 24, 34, 47] that provide efficient CUDA kernel implementations. These libraries encapsulate kernels as DL framework-compatible modules for developers to replace the native implementation in their models. In the case where there are no existing CUDA implementations, developers may leverage compiler-based solutions, such as TorchScript [39], to generate a kernel.
**Activation checkpointing.** Apart from compute optimization techniques, memory footprint optimization is also essential for training large models. A large portion of memory consumption in the training process is contributed by forward activation tensors that are stored for the later gradient calculation. By checkpointing some activation tensors and re-computing the rest of them in backward propagation, we are able to significantly reduce memory footprint, and thus support a larger batch size and higher training throughput. This approach is called activation checkpointing and is originally proposed by [7]. Furthermore, existing works [18, 20, 21] also demonstrate that by carefully selecting which activations to checkpoint, we are capable of better utilizing device memory and achieving an even better throughput.
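For illustration, PyTorch exposes this technique through `torch.utils.checkpoint`; the wrapper below is a minimal sketch (the module name is ours) showing how a block's activations can be discarded in the forward pass and recomputed during backpropagation.

```python
import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedBlock(torch.nn.Module):
    """Recomputes the wrapped block's intermediate activations in the backward pass."""
    def __init__(self, block: torch.nn.Module):
        super().__init__()
        self.block = block

    def forward(self, x):
        # Only the block's input is kept; the intermediates are recomputed when gradients
        # flow back, trading extra forward compute for a smaller activation memory footprint.
        return checkpoint(self.block, x, use_reentrant=False)
```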
**Parallelism in distributed training.** When the model is too large to fit in a single device, training it in parallel in a distributed environment is inevitable. The parallelism techniques are usually classified into three types: data parallelism, tensor parallelism, and pipeline parallelism. Both tensor and pipeline parallelism belong to a larger class called model parallelism. _Data parallelism_ partitions training data, so each device trains the replicated model with different data [1, 26, 43], and aggregates their partial gradients for parameter updating. Since data parallelism replicates an entire model on each device, it is insufficient when the model size (i.e., total memory consumption of its parameters) is too large to fit on a single GPU. In this case, _tensor parallelism_, takes another approach by partitioning the model parameter tensor onto multiple devices [47]. However, it requires developers to explicitly use collective communication operators to scatter tensors and aggregate partial results. For example, Megatron-LM [47] is a widely used PyTorch-based framework that provides manual parallelized Transformer models [50] and is adopted to train extremely large models [55]. Finally, _pipeline parallelism_[17, 32] partitions models by layers and groups them into a series of stages. By putting each stage on a different device, we can overlap the computation of multiple data batches. To ensure correctness and performance, pipeline parallelism needs a specialized runtime to schedule and synchronize data. These techniques are not mutually exclusive and can be combined. Combining all of them is known as _3D parallelism_[33].
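As a minimal example of the simplest of these forms, data parallelism, PyTorch's `DistributedDataParallel` replicates a model per device and all-reduces gradients; `MyModel` below is a placeholder and the launcher is assumed to start one process per GPU.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")              # one process per GPU, set up by the launcher
local_rank = dist.get_rank() % torch.cuda.device_count()
torch.cuda.set_device(local_rank)

model = MyModel().cuda(local_rank)                   # MyModel is a placeholder module
model = DDP(model, device_ids=[local_rank])          # gradients are all-reduced across replicas
```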
### A Motivating Example
In this subsection, we use the BERT [11] model implementation from HuggingFace Hub (v4.25.1) [51] to showcase how the above techniques are applied to a DL model for efficient training. Fig. 1(a) depicts the architecture of the attention layer [50], the core and most time-consuming module in BERT. An attention layer is composed of two submodules - SelfAttention and Projection. We conduct a few steps to progressively improve the training efficiency of this attention layer and show the resulting implementation in Fig. 1(b).
**1** **Fuse QKV.** By default, the Query, Key, Value in SelfAttention are three standalone nn.Linear modules, which may incur extra kernel launch overheads. We replace them with a single nn.Linear module with their parameters concatenated, as shown in the following code snippet.
```
def __init__(self, ...):
    # before: three standalone linear modules
    # self.query = nn.Linear(n_hidden, n_head)
    # self.key   = nn.Linear(n_hidden, n_head)
    # self.value = nn.Linear(n_hidden, n_head)
    # after: a single fused linear module
    self.qkv = nn.Linear(n_hidden, n_head * 3)

def forward(self, hidden_states, ...):
    # before: three separate projections
    # query = transpose(self.query(hidden_states))
    # key   = transpose(self.key(hidden_states))
    # value = transpose(self.value(hidden_states))
    # after: one projection followed by a split
    qkv = transpose(self.qkv(hidden_states))
    query, key, value = split(qkv, 3, dim=1)
```
**2** **Use efficient kernels.** The pink blocks in Fig. 1 are the core attention computation, which is also the bottleneck of performance and memory footprint. A recent work, FlashAttention [10], proposes computing the attention in a block-by-block fashion, so only a block of the intermediate attention tensor is generated at a time. This significantly reduces the peak memory consumption and thus improves training efficiency by allowing a larger batch size. The following code snippet shows how we replace the existing attention with the flash attention implementation provided by xFormers [24]. The transpose and reshape operations are simplified.
```
def forward(self, hidden_states, ...):
    # before: native attention computation
    # query = query / query.shape[-1] ** 0.5
    # attn = baddbmm(bias, query, key)
    # attn = dropout(softmax(attn), p)
    # attn = attn @ value
    # after: replaced with the memory-efficient kernel from xFormers
    attn = xformers.ops.memory_efficient_attention(...)
```
Note that this flash attention implementation only supports
Figure 1: An attention layer in BERT. The Query, Key, Value, and Output are nn.Linear modules containing learnable weight and bias.
the latest NVIDIA GPUs with Volta, Ampere, and Hopper architectures, so once this kernel is used, the model is no longer compatible with other platforms.
Another optimization opportunity is the pattern in Projection. By default, the bias addition operation is contained in the Output module. A more aggressive and more efficient approach is to use a DL compiler (e.g., TorchScript [39]) to fuse the pattern BiasAdd-Dropout-ResidualAdd-LayerNorm into a single kernel, as suggested by [46].
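As a rough sketch of this idea, the pattern can be expressed as a single scripted function and handed to TorchScript; whether the JIT fuser actually emits one kernel depends on the PyTorch version and hardware, so this is illustrative rather than the exact kernel suggested by [46].

```
import torch
import torch.nn.functional as F

@torch.jit.script
def bias_dropout_add_layernorm(x: torch.Tensor, bias: torch.Tensor,
                               residual: torch.Tensor, ln_weight: torch.Tensor,
                               ln_bias: torch.Tensor, p: float, training: bool):
    # BiasAdd -> Dropout -> ResidualAdd -> LayerNorm expressed as one scripted
    # function, giving the JIT fuser the chance to generate a fused kernel.
    out = F.dropout(x + bias, p=p, training=training)
    out = out + residual
    return F.layer_norm(out, [out.shape[-1]], ln_weight, ln_bias)
```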
**3 Tensor parallelism.** We then partition the FusedQKV and Output parameters onto multiple devices. Given the input of the attention module \(X\), the weights of FusedQKV (\(A\)) and Output (\(B\)), we have self-attention function \(f\):
\[f(XA)B=f\left(X\begin{bmatrix}A_{1}&A_{2}\end{bmatrix}\right)\begin{bmatrix}B_{1}\\ B_{2}\end{bmatrix}=f(XA_{1})B_{1}+f(XA_{2})B_{2}\,.\]
Accordingly, we follow the convention of Megatron-LM [47] to shard the weights of FusedQKV in columns and shard the weights of Output in rows. We illustrate the latter in the following code snippet.
```
def __init__(self, ...):
    # before: full-size output projection
    # self.output = nn.Linear(n_hidden, n_hidden)
    # after: shard the weight along the input-feature (row) dimension
    new_size = n_hidden // world_size
    self.output = nn.Linear(new_size, n_hidden)

def forward(self, hidden_states):
    out = self.output(hidden_states)
    # aggregate the partial results from all devices
    dist.all_reduce(out)
```
Since the output tensor only holds partial results after sharding, we need to conduct all_reduce to aggregate outputs from different devices.
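The validity of this sharding scheme can be checked numerically. The snippet below verifies the identity above using an elementwise activation as a stand-in for \(f\) (in the actual attention module, \(f\) acts independently on each head, which is what makes column-sharding FusedQKV legal); all shapes are arbitrary.

```
import torch

torch.manual_seed(0)
X = torch.randn(4, 8)        # input activations
A = torch.randn(8, 6)        # FusedQKV weight, to be sharded by columns
B = torch.randn(6, 8)        # Output weight, to be sharded by rows
f = torch.nn.GELU()          # elementwise stand-in for the attention function

A1, A2 = A[:, :3], A[:, 3:]  # column shards held by device 0 and device 1
B1, B2 = B[:3, :], B[3:, :]  # matching row shards

full = f(X @ A) @ B                        # unsharded computation
summed = f(X @ A1) @ B1 + f(X @ A2) @ B2   # what all_reduce sums across devices
print(torch.allclose(full, summed, atol=1e-5))  # True
```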
**4 Pipeline parallelism.** To pipeline a BERT model and execute it on a SOTA pipeline runtime, we have to further manually partition the model into a sequence of sub-models (each of which includes a series of attention layers) by rewriting the top module3.
Footnote 3: The code example is omitted due to page limit.
The above process poses a generality issue. Although developers may have spent considerable effort identifying and optimizing the performance bottlenecks of a model while preserving its semantics, this effort is hard to reuse for another model. Furthermore, the improved model above is no longer compatible with a single device, unless we add control logic to enable model parallelism only in multi-GPU environments or maintain a separate single-device version. It also creates a barrier for model deployment after training, because a model implementation with custom efficient kernels and parallelism may not be recognized by inference compilers.
In essence, the above pain points are the result of tightly coupling model definition and training/platform-specific optimizations. This motivates us to propose a schedule language that decouples model execution (i.e., schedule) from definition and provides easy-to-use primitives for optimizing large model training. In fact, the idea of decoupling optimization has been widely accepted in DL compilers [2, 6, 42], and opens opportunities for auto-tuning [6] and auto-scheduling [57].
## 3Slapo Design
This section presents the design of Slapo, our schedule language to progressively optimize DL model training using the proposed primitives. Slapo decouples model definition from its training execution strategy for better portability. Slapo also abstracts out common optimization techniques as a set of primitives that can be applied (or un-applied) one by one, lowering the bar for model developers to try out different performance optimization ideas. Furthermore, Slapo makes it possible to automate the performance tuning via hyperparameter search.
Fig. 2 illustrates the overview of Slapo. Slapo accepts a deep learning model implementation in a DL framework with dynamic graphs (e.g., PyTorch [35]) and parses the original model execution as its default schedule. Then, developers make use of the schedule primitives for optimizations on top of the default schedule. We define the schedule language in § 3.1, and present the scheduling in various scenarios in § 3.2 and § 3.3. The scheduling strategy can be auto-tuned to search for a configuration that achieves the best performance (§ 3.4). Meanwhile, Slapo adopts a verifier (§ 3.5) to ensure the functional correctness of all schedules. After the schedule is determined and applied, the scheduled model can be trained on the runtime of the original DL framework (e.g., PyTorch), or if needed, on the dedicated runtime of existing distributed systems such as DeepSpeed [44] pipeline.
### Schedule Language
Motivated by § 2.2, our goal is to let developers optimize models in a few lines of code without changing the model definition itself. Fig. 3 presents the Slapo schedule language with the BERT model from HuggingFace Hub [51] as an example. As shown in Fig. 3(a), most DL model developers define models with a hierarchical structure for better readability and easy maintenance. The __init__ constructor defines the configurations, submodules, and learnable parameters, and
Figure 2: Overview of Slapo.
the forward method defines the forward computation (the backward computation is generated by the framework with automatic differentiation [35]). Developers can then pass the created model into Slapo and create a default _schedule_ that specifies _how_ to execute the model in the framework. The schedule preserves the hierarchical structure, and create_schedule is applied recursively to all the submodules, so that developers can easily apply schedule primitives at arbitrary levels.
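A minimal sketch of this workflow is given below. The create_schedule entry point and the primitives come from this paper, but the exact import path, the qualified submodule names inside the HuggingFace BERT implementation, and the dictionary-style indexing are assumptions made for illustration.

```
import slapo
from transformers import AutoConfig, BertLMHeadModel

# Define the model as usual; the definition itself is never modified.
config = AutoConfig.from_pretrained("bert-large-uncased")
model = BertLMHeadModel(config)

# Create the default schedule.  It mirrors the module hierarchy, so a
# submodule's schedule can be reached by its (assumed) qualified name.
sch = slapo.create_schedule(model)
attn_sch = sch["bert.encoder.layer.0.attention"]   # hypothetical path

# Schedule primitives are then applied to individual sub-schedules,
# e.g. attn_sch.checkpoint() or attn_sch.shard("weight", 0).
```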
Table 1 defines the schedule primitives. It is worth noting that the primitive set can be easily extended in the future to incorporate other optimizations. We categorize the schedule primitives into two sets based on whether or not they require a static graph to be generated. On one hand, when scheduling at the module level, such as enabling activation checkpointing, replacing a module with an efficient alternative, or sharding learnable parameters, we do not change the computation specified in the forward method. As a result, the schedule primitives in the left column of Table 1 do not require a static graph. We present the details of this scheduling in § 3.2. On the other hand, scheduling the computation, such as operator fusion and pipeline parallelism, has to manipulate the forward method. Thus, the schedule primitives in the right column of Table 1 require the computation to be in a static graph, so we have to use .trace() prior to applying these primitives, as presented in § 3.3.
### Schedule Modules and Parameters
We first present scheduling a module and its parameters, which typically does not change the computation and thus does not require static graphs.
#### 3.2.1 Schedule Modules
For important workloads such as the attention in Fig. 3(a), researchers or hardware vendors may manually implement efficient kernels. These highly customized, hand-crafted kernels can sometimes outperform the ones generated by DL compilers. With Slapo, we can use the .replace(new_module) primitive to replace a native implementation with an efficient one, where new_module is the custom module that takes the place of the native implementation, as shown in Fig. 3(b).
In addition to replacing a module with an efficient implementation, activation checkpointing is another important feature for large model training, as mentioned in § 2.1. Slapo offers the .checkpoint() primitive, which wraps a module with a checkpointed module. This primitive allows developers to add activation checkpointing without changing the model implementation. Consequently, compared to existing activation checkpointing implemented in DL models with fixed strategies, Slapo enables developers to flexibly adjust the number of checkpoints via our schedule primitive or to leverage the auto-tuner for better memory-throughput trade-offs.
#### 3.2.2 Parameter Sharding
In step 3 of § 2.2, we introduced the steps to enable tensor parallelism - shard parameters and aggregate the outputs. With Slapo, we can shard a parameter with the .shard(name, axis) primitive, and aggregate the results using the .sync(type) primitive. The type can be "forward" (aggregate the forward activations), "backward" (aggregate the gradients), or "both".
For flexibility and to keep the schedule primitives comprehensive, instead of implicitly aggregating the outputs, Slapo exposes the .sync() primitive so that developers can determine a proper aggregation point, optionally with a customized aggregation function. Meanwhile, we leverage the verifier (§ 3.5) to check correctness after scheduling. In the future, we plan to develop an auto-scheduler that automatically generates these primitives.
We again use Fig. 3 to illustrate the advantage of flexible output aggregation points. In Fig. 3(c), we shard the parameters of a QKV module along the axis of output features (.shard(["weight", "bias"], 0)). In this case, each device computes a part of the output. Since we also shard the weights of its subsequent linear module, OUT, along input features, each device only takes a part of inputs for computation.
\begin{table}
\begin{tabular}{l|l} \hline
**Primitives with Dynamic Graphs** & **Primitives with Static Graphs** \\ \hline
.replace(new\_mod) & .replace(new\_mod, subgraph) \\
.shard(param\_name, axis) & .fuse(compiler, subgraph) \\
.sync(type) & .pipeline\_split() \\
.checkpoint() & .checkpoint(subgraph) \\ \hline
\end{tabular}
\end{table}
Table 1: A summary of Slapo schedule primitives.
Figure 3: An example of scheduling modules and parameters of a BERT model. ffn is the Feed-Forward Network. eff\_attn refers to the replaced attention module. The weight matrix has a shape of (output_dim, input_dim). Thus, sharding the weight matrix at axis=0 is equivalent to partitioning the output dimension.
As a result, we can defer the output aggregation until after OUT to reduce the memory footprint.
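Expressed with the schedule primitives, the scheme of Fig. 3(c) could look roughly like the sketch below, continuing the sch object from the earlier sketch; the submodule names (self.qkv, output.dense) and the exact indexing syntax are assumptions about the underlying implementation.

```
# Column-shard the fused QKV projection and row-shard the output projection,
# then synchronize only once, after OUT, so the intermediate activations of
# the attention block stay partitioned across devices.
attn = sch["bert.encoder.layer.0.attention"]      # hypothetical path
attn["self.qkv"].shard(["weight", "bias"], 0)     # partition output features
attn["output.dense"].shard("weight", 1)           # partition input features
attn["output.dense"].sync("both")                 # all-reduce activations and gradients
```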
### Schedule Computations
The prerequisite of scheduling computations is tracing the forward method of the target module and constructing a static graph in a certain intermediate representation (IR). There are several approaches to obtaining the static graph IR. First, _run-with-dummy-data_[39] directly executes the method with dummy inputs and captures all executed operators in order. Second, _AST-analysis_[39] directly analyzes the Python abstract syntax tree (AST) to obtain the static graph. Third, _just-in-time (JIT)_ compilation [4, 48] captures the execution graph in every training iteration, compiles it once, and reuses it for the rest of the training process. Finally, _bytecode-analysis_[38] hooks into the Python frame evaluation to construct the graph from Python bytecode.
In Slapo, we define the .trace(leaves, flatten) primitive on top of all these approaches. This primitive lets developers configure the granularity and the form of a traced graph. Specifically, leaves indicates the submodules we will not trace into, and flatten indicates whether to flatten a traced static graph. By default, the predefined modules (e.g., nn.Linear) in DL frameworks are all leaves, and we create the static graph in a hierarchical way. The specification is then passed to the underlying tracing engine, and the traced module and submodules become static graphs so that compiler-related primitives can be enabled. We show a traced BERT attention module in Fig. 4. In the rest of this subsection, we discuss scheduling with static graphs.
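For instance, the attention module of Fig. 4 could be traced as follows, continuing the sch object from the earlier sketch; the module path and the choice of leaves are purely illustrative.

```
# Trace the attention module into a static graph so that the static-graph
# primitives (.fuse, subgraph .replace, .pipeline_split) become available.
# Predefined modules such as nn.Linear remain leaf nodes by default, and
# flatten=False keeps the hierarchical structure of the traced graph.
attn = sch["bert.encoder.layer.0.attention"]    # hypothetical path
attn.trace(leaves=["self.dropout"], flatten=False)
```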
#### 3.3.1 Partial Computation Scheduling
The computation latency of a module is usually dominated by a part of its computational logic, such as attn and output in Fig. 4 (highlighted in yellow and green). As a result, it is effective if developers can offload the performance bottleneck logic to efficient kernels or DL compilers. Since most DL models have repetitive layers [54, 11], Slapo offers a .find(regex_or_pattern_fn) primitive that performs a pattern matching algorithm based on subgraph isomorphism [13], given a user-specified regular expression or a function expressing the target subgraph. This helps find all target subgraphs at once. These subgraphs can then be scheduled in the same fashion with operator fusion, partial computation replacement, activation checkpointing, etc.
**Operator Fusion.** Operator fusion is an important optimization technique, as it can substantially reduce data transfer and kernel invocation overheads to improve latency, throughput, and memory footprint. Existing DL compilers [6, 39, 2, 16] usually incorporate pattern- and rule-based fusion strategies. The former leverages a predefined pattern set to fuse matched subgraphs, while the latter traverses the dataflow graph and fuses together all valid operators satisfying predefined rules. As shown in Fig. 4(a), Slapo takes advantage of this feature of DL compilers by defining the .fuse(compiler, subgraph) primitive, where compiler indicates the DL compiler that will be used to generate a fused kernel for the subgraph. We currently support a pattern-based fusion strategy and use TorchScript [39] as the DL compiler to improve kernel performance.
**Partial Computation Replacement.** In addition to DL compilers, when a subgraph is the performance bottleneck in widely used models, developers may manually implement an efficient kernel and encapsulate it in a module. If this custom kernel achieves better performance than the one generated by a DL compiler, developers can use the .replace(new_mod, subgraph) primitive to directly replace the corresponding computation logic with the custom one, as in Fig. 4(b). The decision between leveraging a DL compiler or a custom kernel can be made by developers with a one-line change. Developers can also rely on the Slapo auto-tuner (§ 3.4) to identify the better option.
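A sketch of both options is shown below, again continuing the sch object from earlier; the pattern function, the way matched subgraphs are iterated, and the FusedBiasGELU module are all hypothetical.

```
import torch.nn.functional as F

def bias_gelu_pattern(x, bias):
    # Hypothetical pattern describing the subgraph to locate in every layer.
    return F.gelu(x + bias)

# .find locates all matching subgraphs at once; each match can then either be
# fused by a DL compiler or swapped for a hand-written kernel.
for subgraph in sch["bert.encoder"].find(bias_gelu_pattern):
    sch["bert.encoder"].fuse(compiler="TorchScript", subgraph=subgraph)
    # Alternatively, replace the match with a custom kernel module:
    # sch["bert.encoder"].replace(FusedBiasGELU(), subgraph=subgraph)
```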
**Partial Activation Checkpointing.** While we have introduced .checkpoint() in § 3.2, which enables activation checkpointing for an entire module, Slapo also allows developers to use the .checkpoint(subgraph) primitive to checkpoint only a subgraph of the computation, providing fine-grained control over the performance-memory trade-off that has been studied by many existing works [7, 20, 18].
#### 3.3.2 Pipeline Partitioning
In § 3.2, we demonstrated that Slapo can achieve tensor parallelism using parameter sharding at the module level.
Figure 4: An example of scheduling computation of a BERT model. For illustration purposes, we do not show the entire IR of the traced eff_attn module, but depict it in an identical forward function. The bias of the new OUT module is set as None, as it has been fused into the LN module.
However, it is not possible to achieve pipeline parallelism with the same approach, as it requires rewriting the top module, including its forward method, to be a sequence of submodules.
SOTA dynamic graph-based DL frameworks support pipeline parallelism in two steps. First, unlike the native runtimes of DL frameworks that use a single process to execute the model graph, pipeline parallelism requires launching one process per pipeline stage. Therefore, DL frameworks with pipeline parallelism support must provide their own runtime. Second, the model must be rewritten to follow a specific API convention. The rewritten implementation consists of a sequence of submodules, with outputs connecting to inputs between two consecutive submodules. This allows the DL framework to assign each module to a stage for execution. However, this process of preparing a model for pipeline parallelism can be tedious for developers.
A recent work, PiPPy (Pipeline Parallelism for PyTorch) [19], attempts to address this challenge by tracing the entire model to a static graph and partitioning the graph into a series of modules based on user annotations. However, this approach has limitations, as tracing the entire model can run into the limitations of the graph tracer, as discussed in § 1. If a part of the model cannot be transformed into a static graph, the entire model cannot be partitioned. In contrast, Slapo allows developers to configure the granularity and the form of the traced graph, meaning that developers can choose to transform only a few top-level modules into a static graph and use the .pipeline_split() primitive to annotate the pipeline stage boundaries.
We use the example in Fig. 5 for illustration. To evenly split a BERT model with 24 attention layers in its encoder into two partitions, we can use the .pipeline_split() primitive4 to annotate a stage boundary between layers 11 and 12 (0-based) in Fig. 5(a). In this case, only the encoder module has to be traced, but not its submodules (e.g., attention) or siblings (e.g., embeddings and pooler). We note that the untraceable, complex computation logic usually lies in core building block modules (e.g., attention), so limiting the tracing granularity makes our pipeline partitioning more applicable.
Footnote 4: Where to insert pipeline splits to achieve optimal throughput is out of scope for this paper, but developers can use the auto-tuner in § 3.4 for exploration.
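Concretely, the two-stage split of Fig. 5(a) could be written roughly as follows; the point at which .pipeline_split() is attached (the schedule of layer 11) is our assumption, since the text only specifies that the boundary lies between layers 11 and 12.

```
# Trace only the encoder, at a coarse granularity, so its submodules
# (e.g., attention) never need to be traced; then mark a stage boundary
# after layer 11 to split the 24 layers into two pipeline stages.
sch["bert.encoder"].trace(flatten=False)
sch["bert.encoder.layer.11"].pipeline_split()   # hypothetical attachment point
```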
However, since Slapo preserves the model structure hierarchy, it is non-trivial to partition a model into a sequence of modules with user-specified pipeline annotations. Specifically, if we simply partition the model based on the annotations as in Fig. 5(a), we fail to include the embeddings and pooler modules of the BERT architecture. To generate the correct partitions, we propose an algorithm that propagates the pipeline annotations from the annotated submodule to the top module, so that all ancestor and descendant modules at all levels can be included. The algorithm is shown as follows.
```
def partition(sch, common_parent_sch):
    seq_modules = partition_by_annotation(sch.module)
    parent_mod = sch.parent.module
    ...
```
(the white region). By avoiding the overlapping region of gray and orange regions plus the white region, the tuning efficiency can be improved.
With the search space constructed, the auto-tuner algorithm iteratively determines the values of all tunable parameters, schedules the model, and launches an evaluation script provided by developers to benchmark the performance and memory footprint. Our auto-tuner leverages exhaustive search by default. Meanwhile, we also provide randomized coordinate descent for users to accelerate the tuning process. We evaluate the efficiency of the search space and the tuning algorithm in § 5.4 to show the effectiveness of our auto-tuner.
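Conceptually, the default tuning loop behaves like the simplified exhaustive search sketched below; the tunable values and the bench.py evaluation script are hypothetical, and Slapo's actual tuner interface is richer than this.

```
import itertools
import subprocess

batch_sizes = [8, 12, 16, 20]          # hypothetical tunable values
ckpt_ratios = [0.0, 0.25, 0.5, 1.0]    # fraction of layers to checkpoint

best_cfg, best_tp = None, 0.0
for bs, ratio in itertools.product(batch_sizes, ckpt_ratios):
    # Each candidate schedules the model and runs a user-provided benchmark
    # script that prints the measured throughput (0 indicates OOM).
    result = subprocess.run(
        ["python", "bench.py", f"--batch-size={bs}", f"--ckpt-ratio={ratio}"],
        capture_output=True, text=True)
    throughput = float(result.stdout.strip() or 0.0)
    if throughput > best_tp:
        best_cfg, best_tp = (bs, ratio), throughput
print("best config:", best_cfg, "throughput:", best_tp)
```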
### Verification
To achieve high usability, Slapo primitives are flexible in scheduling modules and computations. However, it is possible for developers to schedule a model incorrectly. For example, a replaced module may require a different data layout while developers do not provide the corresponding layout transformation logic, or developers may insert the output aggregation at an invalid point when sharding a parameter tensor.
To guarantee the scheduled model is executable and maintains its numerical correctness, Slapo includes the following verification stages. First, before applying the schedule, we validate the sequence of schedule primitives with a set of predefined rules in each primitive. For example, a .sync() primitive must have a corresponding .shard() primitive beforehand; primitives for distributed training can only be specified in a distributed environment with multiple devices; primitives that require a static graph must have a corresponding .trace() primitive in advance. If any of the rules are violated, the verifier raises an error and stops the rest of the scheduling process.
Second, we need to ensure the numerical correctness of the replaced or fused operators. To achieve this, the verifier generates random inputs and compares the results of the replaced module against the original one. Developers can also configure the number of random inputs and provide a function to generate inputs when there are constraints on them.
Lastly, we verify the end-to-end correctness, also using random inputs. This verification serves two purposes: checking (1) whether the shapes of the sharded parameters and of the tensors in the replaced modules are correct, and (2) whether the scheduled model produces the same output as the vanilla one.
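The numerical portion of these checks can be sketched as follows, assuming modules that take a single tensor input; the tolerance and the number of trials are arbitrary choices rather than Slapo's actual defaults.

```
import torch

def verify_equivalence(original: torch.nn.Module, scheduled: torch.nn.Module,
                       input_shape, n_trials: int = 5, atol: float = 1e-4):
    """Compare the scheduled module against the original on random inputs."""
    original.eval()
    scheduled.eval()
    with torch.no_grad():
        for _ in range(n_trials):
            x = torch.randn(*input_shape)
            if not torch.allclose(original(x), scheduled(x), atol=atol):
                return False
    return True
```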
## 4 Implementations
We implemented Slapo with \(\sim\)3K LoC in Python on top of PyTorch [35] to benefit from its dynamic graph and usability. In this section, we highlight some implementation details.
**Static Graph Tracing.** Our tracer and static graph are based on torch.fx [45], which captures PyTorch models via symbolic tracing and constructs a static graph with a 6-instruction Python-based intermediate representation (IR). Like other DL compilers, the torch.fx tracer cannot capture certain coding styles, such as certain types of control flow and data structures. It also flattens the IR and discards all of the model's structural hierarchy. Thus, instead of simply invoking the torch.fx tracer from the top module, we invoke the tracer module by module and carefully maintain the hierarchy. When a particular module cannot be traced by torch.fx, developers can simply disable the corresponding schedule primitives that require a static graph while the rest of the primitives can still be applied.
**Framework Dialects.** A Slapo-scheduled model is by design compatible with the native PyTorch runtime and can be executed on PyTorch directly. To integrate with the dedicated runtime of SOTA distributed systems for certain forms of parallelism (e.g., pipeline), we also implemented two framework dialects for Megatron-LM [47] and DeepSpeed [44]. These systems require the model to be in a certain form or wrapped by their custom module. For example, the DeepSpeed [44] pipeline runtime requires a single tuple of inputs and outputs in each pipeline stage, so our DeepSpeed pipeline stage module includes the logic to (1) unpack the inputs and pack the outputs, and (2) perform liveness analysis to bypass the tensors required by subsequent stages.
## 5 Evaluation
In this section, we evaluate Slapo on different training configurations, in terms of the number of GPUs, number of instances, and parallelism strategies, to demonstrate that Slapo is able to match or even outperform existing solutions while preserving usability. Note that Slapo does not change the semantics of models and optimizers, so model convergence is not affected. We also provide ablation studies to show the effectiveness of the schedule primitives and the auto-tuner.
**Setups.** All experiments are conducted on Amazon EC2 p3 instances. More specifically, we use p3.16xlarge instances with 8 NVIDIA V100 16GB GPUs for single-device and single-node evaluations, and use p3dn.24xlarge instances with 8 NVIDIA V100 32GB GPUs for multi-node evaluations. GPUs in these instances are connected via NVLink, which provides 300 GB/s theoretical aggregated GPU interconnect bandwidth, and the inter-node bandwidth is 100 Gbps. The software environment includes CUDA 11.7, PyTorch v1.13, Megatron-LM (git-hash 0bb597b), DeepSpeed (customization
Figure 6: An example of a user-defined search space.
from v0.6.2 with pipeline parallelism fix to match Megatron-LM pipeline performance), and NCCL v2.14.3.
**Models and Metrics.** We apply schedules to a set of popular PyTorch models from HuggingFace Hub [51] and torchvision [30], covering encoder-/decoder-based Transformer models and image classification models to demonstrate the generality of Slapo. Detailed model settings are shown in Table 2. All models are trained with the AdamW optimizer [29] with mixed precision, and the micro-batch size (i.e., the number of samples per data-parallel rank) is selected, subject to the memory footprint, to maximize system performance. We use the training throughput (the total number of processed samples per second) as our evaluation metric. For each setting, we train the models for tens of steps and take the average throughput after discarding the first few warm-up steps.
### Evaluation on a Single GPU
We first conduct end-to-end training efficiency evaluations on a single GPU to demonstrate that Slapo is generally applicable to various model implementations.
**Systems.** We use PyTorch Eager mode [35] as the baseline, which provides out-of-the-box performance. If activation checkpointing is implemented in a model, we evaluate the performance with and without activation checkpointing, and report the better one. When applicable, we also include the performance result with TorchScript [39], a commonly-used compiler framework in PyTorch, as another baseline. Since PyTorch 1.12, TorchScript adopts nvFuser [36] as the operator fusion mechanism, and supports training workloads. For Slapo, we schedule models with efficient kernels, operator fusion, and activation checkpointing, and leverage the auto-tuner proposed in § 3.4 to search for the best configuration.
**Results.** Fig. 7 shows the throughput of each configuration. In most cases, Slapo improves the training throughput. Compared to Eager mode, Slapo achieves a 1.05\(\times\)-2.11\(\times\) speedup. Even compared to TorchScript-optimized models, Slapo still achieves a 1.45\(\times\) speedup on average. This is mainly attributed to Slapo employing similar operator fusion while adding more optimizations such as activation checkpointing and efficient custom kernels. Note that although Slapo's fusion capability is limited by module boundaries for structure-preserving scheduling, most performance-bottleneck subgraphs do not cross modules in training workloads.
Furthermore, Slapo enables more optimization opportunities than TorchScript by progressive tracing. For example, the GPT model implementation we adopted is GPT-Neo [15] from the HuggingFace hub. This implementation, unfortunately, cannot be optimized by TorchScript due to its coding style, thus showing no results in Fig. 7. However, Slapo allows us to selectively trace the FFN (Feed-Forward Network) module but not others, resulting in effective optimization.
In addition, Slapo tunes the ratio of activation checkpointing to maximize GPU memory utilization. For example, after tuning the ratio of activation checkpointing in BERT, we only checkpoint 25% of attention layers. This brings another 1.06\(\times\) speedup compared to checkpointing all layers.
### Evaluation on Multiple GPUs
This subsection evaluates the end-to-end training efficiency on 2, 4, and 8 NVIDIA V100 16GB GPUs in a single p3.16xlarge instance to showcase the effectiveness of Slapo.
**Systems.** We select Megatron-LM v2 [33] as the baseline, which is a SOTA system built on top of PyTorch for training large Transformer-based language models on GPUs. Megatron-LM implements its own data loader, tokenizer, and optimizer for better training efficiency. In addition, it implements popular Transformer models with tensor parallelism as well as efficient customized CUDA kernels. We also choose DeepSpeed [44] as another baseline. DeepSpeed is a SOTA framework that incorporates ZeRO-powered data parallelism (ZeRO-DP) [43]. Since ZeRO-DP is applicable to arbitrary PyTorch models, DeepSpeed has been widely used to train large DL models. As Slapo is agnostic to parallelism strategies, we evaluate two configurations for every model to show that Slapo is compatible with the existing distributed training frameworks. Specifically, "Slapo-ZeRO3" schedules models with ZeRO-3 [43] that automatically partitions optimizer states, gradients, and parameters to enable memory-efficient data parallelism; while "Slapo-TP" schedules them to enable tensor parallelism. For each configuration, we auto-tune the activation checkpointing along with the batch size.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Model & Task & \begin{tabular}{c} \# of params \\ (Million) \\ \end{tabular} &
\begin{tabular}{c} Seq Length / \\ Image Size \\ \end{tabular} & Precision \\ \hline BERT [11] & MLM & 335, 335 & 512 & FP16 \\ \hline RoBERTa [27] & MLM & 335, 355 & 512 & FP16 \\ \hline ALBERT [23] & MLM & 177, 177 & 512 & FP16 \\ \hline GPT [40] & CLM & 125, 1300 & 1024 & FP16 \\ \hline OPT [55] & CLM & 350, 350 & 1024 & FP16 \\ \hline T5 [41] & Seq2Seq & 223, 770 & 1024, 512 & FP16 \\ \hline WideResNet [54] & IC & 250, 250 & 3\(\times\)224\(\times\)224 & FP32 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Models used in the experiments. # of params shows the model size used in single-device and single-node experiments. MLM \(=\) Mask language modeling. CLM \(=\) Causal language modeling. Seq2Seq \(=\) Sequence-to-Sequence modeling. IC \(=\) Image Classification.
Figure 7: End-to-end throughput on a single NVIDIA V100 16GB GPU. \(\times\) denotes unsupported implementations.
**Results.** We first compare two baselines, Megatron-LM and DeepSpeed ZeRO-3, in Fig. 8. It is worth noting that Megatron-LM officially only supports three (i.e., BERT, GPT, and T5) out of the seven models listed in Table 2. Comparing these three models, we find that there is no one solution that is always superior to the other, highlighting the importance of Slapo which enables developers to easily implement the best parallelism strategies using schedules for different models.
We now compare the results of our approach, Slapo, to the baselines. In Fig. 8, for the models supported by Megatron-LM, Slapo-TP achieves 87% to 103% of Megatron-LM's throughput on 8 GPUs. We investigated the performance gap of up to 13% on those three models and concluded that it is mainly due to different model implementations between Megatron-LM and HuggingFace. For instance, Megatron-LM T5 adopts a fixed embedding instead of a relative position embedding in self-attention layers. This avoids the overhead of computing the relative position embedding, resulting in higher throughput.
The model difference is eliminated when comparing Slapo-ZeRO3 against DeepSpeed since both run the same HuggingFace models. With this premise, we can see that Slapo-ZeRO3 consistently outperforms DeepSpeed by a margin of 1.08\(\times\)-3.35\(\times\). Similar to the single GPU evaluation, the speedup mainly comes from operator fusion, efficient custom kernels, and tuned activation checkpointing ratio.
### Evaluation on Multiple Machines
This subsection presents the results of distributed training performance in multi-machine setups.
**Systems and Setups.** The testbed in this evaluation consists of p3dn.24xlarge instances with 8 NVIDIA V100 32 GB GPUs each. We again use DeepSpeed and Megatron-LM v2 as baselines. Following common practice [33, 43], we use ZeRO-3 for DeepSpeed and set the model-parallel and pipeline-parallel sizes to 8 and 2, respectively, for Megatron-LM. We consider strong scaling efficiency, in which the global batch size is fixed. Typically, distributed training of large models runs on hundreds to thousands of GPUs and uses global batch sizes of up to a few thousand out of consideration for model quality [33, 5, 9]. We fix the global batch size at 256 for our clusters with up to 64 GPUs, and tune the micro-batch size of each system for the best performance. The evaluation uses a GPT model [40] with 10 billion parameters. The input sequence length for the experiments is 1024.
**Results.** First of all, Fig. 9 again shows that there is no one parallelism strategy that always performs best. On the other hand, Slapo consistently outperforms the best available baseline. This is because Slapo is not tied to a particular parallelism strategy or framework, meaning that we can pick whichever performs better and schedule additional effective optimizations, as we will show in the ablation study.
### Ablation Study
We design an ablation study for applied optimizations to investigate the performance gain of the schedule.
**Schedule Primitives.** We start from a vanilla HuggingFace BERT model that is only capable of running on a single device. As shown in Fig. 10, by progressively applying the
Figure 8: Training throughput on Amazon EC2 p3.16xlarge instance with 8 NVIDIA V100 16GB GPUs. “Slapo-TP” uses tensor parallelism to align Megatron-LM. “Slapo-ZeRO3” uses DeepSpeed ZeRO-3 as the parallelism technique.
Figure 9: Training throughputs of a GPT model with 10 billion parameters on up to 64 GPUs.
schedule primitives to the model, we see a consistent increase in performance. Kernel optimizations, such as the use of Flash Attention [10] and fusing the Bias-GeLU kernel, show a 1.09\(\times\) speedup on a single device. Enabling selective checkpointing allows for a larger batch size, resulting in an additional 7% speedup. With the attention and ffn modules sharded as discussed in Fig. 3, we can scale out to 8 V100 GPUs and achieve a 3.25\(\times\) speedup. Furthermore, the word embedding layer contains parameters whose size scales with the vocabulary size (>3K). Sharding this parameter allows larger batch sizes, leading to a final 4.02\(\times\) speedup and similar scalability to Megatron-LM BERT in Fig. 8.
**Auto-Tuning.** We then investigate the efficiency of Slapo auto-tuner. We exhaustively evaluate a predefined search space (including white and yellow regions in Fig. 6) with 91 configurations composed of various batch sizes and activation checkpointing ratios for OPT on 8 V100 GPUs. As shown in Fig. 11, the optimal configuration only checkpoints 50% of the layers with the maximum batch size below the memory threshold. With the search space that has already pruned many inefficient configurations, as well as the coordinate descent algorithm deployed by the Slapo auto-tuner, only 17 configurations (19% of the total candidates) are explored to obtain the optimal one with the best throughput.
## 6 Related Work
**Schedule Languages and Compilers.** Many domain-specific languages (DSLs) leverage the idea of decoupling optimizations from algorithms [2, 6, 22, 42, 52], allowing users to focus on customization and enabling compilers to perform complex optimizations. TVM [6, 14] inherits the decoupling idea from Halide [42] and builds an end-to-end compiler for deep learning inference. Tiramisu [2] further extends the scope of schedule to polyhedral compilation. Slapo borrows a similar decoupling idea and applies it to the model execution level.
**Dynamic Graph Optimizations.** Due to the dynamic nature and usability of PyTorch, many frameworks and libraries directly explore optimizations on top of it. ZeRO [43] is a three-stage data parallelism strategy that partitions optimizer states, gradients, and parameters to reduce memory usage, implemented first in DeepSpeed [44] and then adopted by other frameworks [31]. MiCS [56] further improves ZeRO by minimizing the communication scale. Megatron-LM [33, 47] takes a different approach to implement model parallelism for Transformer models, becoming one of the mainstream SOTA parallelism libraries. Slapo applies those optimization techniques using primitives in a more user-friendly way. We also use framework dialects described in SS 4 to train the scheduled model on these frameworks.
**Static Graph Optimizations.** Some other DL frameworks adopt static graphs so that compiler optimizations can be easily applied. JAX [4] is a popular framework that offers a programming model similar to NumPy and is powered by XLA [16] as the backend compiler. Accordingly, it is able to achieve 3D parallelism with the corresponding sharding mechanisms, GSPMD [53] and GShard [25]. On top of that, Alpa [58] is the first compiler based on JAX and XLA to achieve automatic 3D parallelism. In addition, Unity [49] is another compiler-based distributed training framework that automatically and jointly optimizes model parallelism and operator fusion. Their automation mechanisms are orthogonal to Slapo and could inspire Slapo's auto-scheduler in the future.
Moreover, the PyTorch [35] ecosystem has also put effort into static graph optimization in recent years. torch.fx [45] is an IR that captures the PyTorch dynamic graph so that it can be compiled by DL compilers such as PyTorch 2.0 [37]. Since Slapo adopts torch.fx as the IR for static graphs, we plan to integrate the PyTorch compiler in the future.
## 7 Conclusion
In this paper, we propose a schedule language, Slapo, for progressive optimization of large deep learning model training. Slapo decouples model execution from definition and provides a set of schedule primitives for users to efficiently optimize model execution. Apart from scheduling modules and parameters, Slapo is capable of partially scheduling the computation of a model, making deep learning compilers more practical for training workloads. Experimental results show that Slapo can combine SOTA efforts and match or even outperform their performance with minimal programming effort.
Figure 11: Auto-tuning an OPT model. The contour line shows the training throughput of different combinations of batch size and checkpoint ratio. Throughput 0 means OOM. The purple \(\star\) shows the explored configurations by coordinate descent, and red \(\times\) depicts the optimal one.
Figure 10: Ablation study with HuggingFace BERT model. |
2302.07336 | Survey of physics reasoning on uncertainty concepts in experiments: an
assessment of measurement uncertainty for introductory physics labs | Measurement uncertainty is a critical feature of experimental research in the
physical sciences, and the concepts and practices surrounding measurement
uncertainty are important components of physics lab courses. However, there has
not been a broadly applicable, research-based assessment tool that allows
physics instructors to easily measure students' knowledge of measurement
uncertainty concepts and practices. To address this need, we employed
Evidence-Centered Design to create the Survey of Physics Reasoning on
Uncertainty Concepts in Experiments (SPRUCE). SPRUCE is a pre-post assessment
instrument intended for use in introductory (first- and second-year) physics
lab courses to help instructors and researchers identify student strengths and
challenges with measurement uncertainty. In this paper, we discuss the
development of SPRUCE's assessment items guided by Evidence-Centered Design,
focusing on how instructors' and researchers' assessment priorities were
incorporated into the assessment items and how students' reasoning from pilot
testing informed decisions around item answer options. We also present an
example of some of the feedback an instructor would receive after implementing
SPRUCE in a pre-post fashion, along with a brief discussion of how that
feedback could be interpreted and acted upon. | Michael Vignal, Gayle Geschwind, Benjamin Pollard, Rachel Henderson, Marcos D. Caballero, H. J. Lewandowski | 2023-02-14T20:46:11Z | http://arxiv.org/abs/2302.07336v3 | Survey of physics reasoning on uncertainty concepts in experiments: an assessment of measurement uncertainty for introductory physics labs
###### Abstract
Measurement uncertainty is a critical feature of experimental research in the physical sciences, and the concepts and practices surrounding measurement uncertainty are important components of physics lab courses. However, there has not been a broadly applicable, research-based assessment tool that allows physics instructors to easily measure students' knowledge of measurement uncertainty concepts and practices. To address this need, we employed Evidence-Centered Design to create the Survey of Physics Reasoning on Uncertainty Concepts in Experiments (SPRUCE). SPRUCE is a pre-post assessment instrument intended for use in introductory (first- and second-year) physics lab courses to help instructors and researchers identify student strengths and challenges with measurement uncertainty. In this paper, we discuss the development of SPRUCE's assessment items guided by Evidence-Centered Design, focusing on how instructors' and researchers' assessment priorities were incorporated into the assessment items and how students' reasoning from pilot testing informed decisions around item answer options.
## I Introduction
Measurement is a central component of experimental scientific research, and all experimental measurements have some uncertainty. Proper consideration and handling of measurement uncertainty (MU) is critical for appropriately interpreting measurements and making claims based on experimental data. While some techniques for determining and using MU can be quite sophisticated, it is still possible (and desirable [1]) to teach basic MU techniques in introductory science labs. In experimental physics, MU informs comparisons of multiple measurements [2] or between measurements and values predicted by models [3], and so instruction around MU can help students better understand the nature of experimentation. This and other important features of MU led to policies and recommendations to include MU in introductory science courses [4; 5]. As developing proficiency with MU practices becomes an even more important goal in undergraduate physics labs, it is critical to be able to assess the level to which students are reaching this goal.
Educators often wish to evaluate student learning around important concepts and practices--often articulated as learning goals or learning objectives [6]--in order to inform and improve their instruction. To help instructors determine if learning goals or objectives are being achieved, the physics education research community has often developed and employed research-based assessment instruments (RBAIs), which Madsen, McKagan, and Sayre define as "an assessment that is developed based on research into student thinking for use by the wider...education community to provide a standardized assessment of teaching and learning" [7]. It is important to note that RBAIs provide researchers and instructors with opportunities to assess student learning across time, institutions, and curricular and pedagogical changes in order to inform and improve instruction; they are not intended to evaluate individual students for the purpose of assigning grades.
Developers of RBAIs often employ a theoretical framework during assessment development, such as Evidence Centered Design (ECD) [8], the Three-Dimensional Learning Assessment Protocol [9], or the framework described by Adams and Wieman [10]. Such frameworks "facilitate communication, coherence, and efficiency in assessment design and task creation" [8], typically by outlining steps or stages of assessment development, including exploratory research, data collection, and item development (discussed in this paper) through to assessment delivery, scoring, and validation (discussed in future papers). ECD, the framework used in this work, also provides a structure for establishing evidence-supported claims about student reasoning based on student responses on the assessment: these claims are grounded in evidentiary arguments, a major focus of this paper.
Of the RBAIs employed in physics labs, several focus on measurement and MU, albeit to varying extents. The Physics Measurement Questionnaire (PMQ) has been fundamental in articulating the point-like and set-like reasoning paradigms [11] and in measuring the success of course transformations aimed at helping students shift towards more set-like reasoning [2; 12; 13; 14]; the
Physics Lab Inventory of Critical Thinking (PLIC) [15] has been used to assess the effectiveness of a scaffold and fade approach to teaching critical thinking in a physics course [16]; and the Concise Data Processing Assessment (CDPA) [17] has been used to identify changes in student performance around MU [18] and to look at student performance across genders [19].
While each of these assessments deals with MU in some way, there is not currently a widely-administable RBAI that focuses explicitly on MU in introductory (first- and second-year) physics laboratory courses. To address this gap in assessments, we have developed the Survey of Physics Reasoning on Uncertainty Concepts in Experiments (SPRUCE) using the assessment development framework of ECD [8]. It is our hope that SPRUCE will help instructors and researchers identify and improve instruction around measurement uncertainty concepts and practices that are challenging for introductory physics lab students.
In this paper, we present SPRUCE's assessment questions (hereafter referred to as "assessment items" or simply "items") and discuss their development. The goals of this paper are to demonstrate:
1. A need for a widely-administable assessment of measurement uncertainty and how SPRUCE will satisfy that need,
2. The assessment item development and refinement process, as guided by ECD,
3. Examples of evidentiary arguments, formed from student reasoning, that support our ability to make claims about student knowledge based on student responses to the assessment items.
We begin by discussing, in Sec. II, the need for a new MU RBAI and how the framework of ECD can facilitate the development of such an assessment. In Sec. III, we describe the first three layers of ECD (_domain analysis_, _domain model_, and _conceptual assessment framework_) and how we gathered information and made decisions to support the development of assessment items and evidentiary arguments. The development and refinement of these items and arguments is discussed in Sec. IV, followed by a summary, a discussion of future work, and implications for instruction in Sec. V: this future work, including the development of a scoring scheme and statistical validations of the assessment, are not the focus of the present paper.
## II Background
Over the last 30 years, research-based assessment instruments (RBAIs) have been used in physics classrooms to probe areas of interest and import for physics education researchers and physics instructors. Particularly notable examples of RBAI use include Mazur's use of assessment questions from the Force Concept Inventory [20] to probe student understanding of Newton's third law in his introductory physics lecture at Harvard [21], which led him (and others) to rethink what instruction in a lecture setting should look like [22] and Eblen-Zayas' use of the Colorado Learning Attitudes about Science Survey for Experimental Physics (E-CLASS) [23] in her advanced lab courses, where she found that introducing metacognitive activities in an open-ended lab course had a positive impact on student enthusiasm and confidence [24].
More broadly, RBAIs have been developed and deployed in the areas of mechanics [25; 20], electrostatics [26; 27], quantum mechanics [28], and thermodynamics [29]. In addition, assessments have been used to probe student quantitative reasoning [30], beliefs about physics and physics courses [31], modeling [32; 33], experimental research and lab courses [23], and concepts and practices used in laboratory courses [11; 15; 17]. These and other assessments can be found on the PhysPort website [34].
In this section, we provide more detail about RBAIs that probe student proficiency in working with measurement uncertainty (MU). We highlight the strengths of these existing assessments, while we also, as stated in our first research goal, argue that there is still a need for a new assessment specifically probing MU in introductory physics labs. We then present initial work on the Survey of Physics Reasoning on Uncertainty Concepts in Experiments (SPRUCE), which is our response to the need for a new MU assessment instrument, and discuss how the framework of Evidence-Centered Design (ECD) [35] informed this work.
### Research-Based Assessment Instruments in Physics Labs
The following sections discuss three existing RBAIs that include some assessment of MU topics. While each of these RBAIs has contributed to our collective understanding of student reasoning around measurement uncertainty, they each have limitations that point to a need for a widely-administable assessment of measurement uncertainty for introductory physics labs.
#### ii.1.1 Physics Measurement Questionnaire
The Physics Measurement Questionnaire (PMQ) consists of multiple-choice and open-response items adapted from the Procedural and Conceptual Knowledge in Science (PACKS) Project [36] for use with students at the University of Cape Town, South Africa [11]. These items present options that students might face in a lab course and ask students which option they agree with (in a multiple-choice format), and then ask them to explain their reasoning (in an open-response format).
One of the most important findings to come out of the PMQ was the articulation of the point and set paradigms for student reasoning. These paradigms classify many
types of student reasoning as being either point-like, indicating students believe that quantities measured have a true value that can be obtained with a single, perfect measurement; set-like, a typically more expert-like view that measurements will always have uncertainty and that a true value (if it exists) can never be perfectly known; or something else, usually with elements of both point-like and set-like perspectives.
Despite the successes of the PMQ in articulating this paradigm and helping to inform course transformations, the assessment has two large limitations: the PMQ covers only a narrow range of ideas related to MU, and the assessment is open response and therefore laborious to score. This second limitation is compounded by variance in student responses observed at different institutions, sometimes requiring instructors and researchers to first modify the scoring scheme provided by the developers of the PMQ [2].
#### ii.1.2 Physics Lab Inventory of Critical Thinking
The Physics Lab Inventory of Critical Thinking was developed by physics education researchers at Cornell University and Stanford University to "assess how students critically evaluate experimental methods, data, and models" [34]. The developers of the PLIC conducted multiple rounds of interviews and full-course piloting and worked to establish various forms of validity of the assessment [15]. The PLIC is contextualized in a small number of experiments, about which students are asked multiple questions, and the assessment is administered in an online format.
The PLIC has been used to evaluate a "scaffold and fade approach" to instruction around making comparisons between measurements, or between measurements and models, for students in an introductory physics lab course [16]. This approach involves structured, explicit focus on a concept or practice initially (the "scaffold"), which then "fades" over the course of instruction as student proficiency develops. Students who received this scaffold and fade instruction around making comparisons were much more likely to think critically about their results and propose possible improvements to their experimental setup than were students who had taken the course the previous year and not received this instruction [16].
The PLIC was explicitly designed to assess critical thinking, which the authors define as "the ways in which one uses data and evidence to make decisions about what to trust and what to do." The authors aim to assess critical thinking in a lab setting, and while this includes components of MU, MU is not the primary focus of the assessment [15].
#### ii.1.3 Concise Data Processing Assessment
The Concise Data Processing Assessment (CDPA) was developed by researchers at the University of British Columbia (UBC) to assess student proficiency around MU and data handling [17]. It consists of multiple-choice questions and can be presented in a pre-post format so as to probe student learning in a course. The CDPA was developed to complement the learning goals of a "rigorous" introductory physics lab, and the researchers used full-class piloting and student interviews to refine the assessment items.
The CDPA has been employed to explore if improvements in student proficiency with MU had an impact on their scores on E-CLASS [18]. While there were not enough matched pre-and post-instruction data to make comparisons of improvement on these two assessments, no correlation was found between CDPA scores and E-CLASS pre-instruction scores. However, the CDPA was found to be able to measure shifts in student proficiency, specifically positive shifts around content that was emphasized in the courses and negative shifts in content that was not emphasized in instruction. This study was conducted with participants in their second- or third-year laboratory course at the University of Helsinki.
As stated above, the CDPA was developed to complement an intensive introductory physics lab, but even still, it is a challenging assessment: as part of assessment development, graduate students at UBC were administered the assessment and scored, on average, just over 50%, with post-test scores for first-year students averaging less than 40%. In the second study discussed above, second- and third-year physics majors showed no improvement in CDPA scores from the pre- to post- assessment (with an overall score of around 40%). As such, the CDPA may not be appropriate for many introductory physics labs, as its difficulty may limit its ability to identify trends and provide usable feedback for instructors.
### Assessment Development Framework
To guide the development of SPRUCE, we employed the assessment development framework of Evidence Centered Design (ECD) [8]. In particular, ECD was selected to help us effectively incorporate instructor priorities around MU into the assessment instrument and to support the gathering of evidence of student reasoning during assessment development that would then inform the interpretation and evaluation of student responses to the assessment items. Throughout this paper, we refer to explanations that link student reasoning to student item responses as evidentiary arguments.
ECD consists of five layers to facilitate "communication, coherence, and efficiency in assessment design and task creation" [8], which we list and briefly summarize below:
* _Domain Analysis_: gather information on the topic to be assessed, including from current instructors.
* _Domain Model_: organize _domain analysis_ data by writing narrative assessment arguments that describe proficiencies to be measured (which we do via assessment objectives [29; 37]), acceptable evidence of such proficiencies, and the methods for gathering this evidence.
* _Conceptual Assessment Framework_: operationalize assessment arguments to determine appropriate assessment features and item formats.
* _Assessment Implementation_: write then iteratively pilot and revise assessment items while establishing evidentiary arguments that link observable data (student responses) to targeted claims about student reasoning, which will eventually be quantified via a scoring scheme.
* _Assessment Delivery_: finalized implementation of assessment, scoring scheme, and instructor reports.
The first layer of ECD, _domain analysis_, is the topic of a previous paper [1] and briefly summarized below. _Domain model_, _conceptual assessment framework_, and especially _assessment implementation_ constitute the bulk of the work presented here, with a strong emphasis on piloting and evidentiary arguments. Development of a quantitative scoring scheme (part of _assessment implementation_) and the fifth layer (_assessment delivery_) will be discussed in future papers.
## III Spruce Development
### _Domain Analysis_
The first steps towards developing an RBAI on MU were presented in a previous paper [1]. In that work, we conducted and analyzed interviews with 22 physics lab instructors at institutions that spanned a range of sizes, highest degrees offered, selectivity, and student body demographics. In these interviews, we sought to identify instructor priorities when it came to the teaching and learning of MU. These interviews were semi-structured in nature and typically lasted around one hour.
Preliminary coding of these interviews was done to identify which concepts and practices instructors described as priorities or aspirational priorities for their courses. Instructors also talked about challenges for students and for instruction, including dealing with ideas taught in high school that students need to unlearn or refine, which informed our decisions of what content to include (or not include) in SPRUCE.
### _Domain Modeling_
After the _domain analysis_, _domain modeling_ involves "articulat[ing] the argument[s] that connects observations of students' actions in various situations to inferences about what they know or can do" [8]. These assessment arguments take a narrative form that describes the concepts and practices to be assessed, how evidence of student proficiency with respect to those concepts and tasks might be gathered, and how the items will allow students to demonstrate such proficiencies.
To more explicitly express the assessment priorities of instructors, we expressed our assessment arguments in terms of assessment objectives [29]. Assessment objectives (AOs) are "_concise, specific articulations of measurable desired student performances regarding concepts and/or practices targeted by the assessment_" [37]. AOs are similar in concept and grain size to learning objectives [6], but they are designed to "span the space of feasible, testable outcomes" of an assessment [29]. As discussed in [37], AOs also provide a number of additional benefits for assessment development beyond organization of ideas collected in the domain analysis.1
Footnote 1: In [37], we described the inclusion of assessment arguments as being part of the _conceptual assessment framework_: we now believe that it more appropriately belongs in the _domain model_.
For SPRUCE, our AOs emerged from the qualitative codes developed during the _domain analysis_, from the list of concepts and practices noted as being important to experts that was developed in Ref. [1], and from a survey of instructor priorities around MU. These AOs were iteratively refined during item development, piloting, and development of our evaluation scheme.
Ultimately, we identified four main areas of concepts and practices into which all of our AOs can be organized:
* Sources of uncertainty: estimating the size of uncertainty and identifying ways to reduce it
* Handling of uncertainty: uncertainty propagation and significant digits
* Distributions and repeated measurements: mean, standard deviation, standard error, and the importance of taking multiple measurements.
* Modeling: comparisons between explicit externalized models and the data
Because the modeling category pertained primarily to explicit comparisons between externalized models and data, we determined that AOs in this category fell outside of the scope of this assessment instrument. However, there are still elements of modeling, as defined by the Experimental Modeling Framework [38], that remain as integral parts of the other categories. Additionally, as described in [37] and discussed further in Sec. IV, some individual AOs in the other categories were also removed because of difficulties in establishing clear evidence of student reasoning. The finalized AOs are presented in Table 1.
In practice, rather than a strictly narrative structure, our assessment arguments included: a narrative description of the task that would be presented to students; the AOs the item would assess and which student responses would constitute evidence of proficiency; and a paragraph describing the rationale for why the item is appropriate. In a sense, these assessment arguments represent a hypothesis regarding a claim that the assessment will be able to make: if we present task X to students and they provide response Y, then we can conclude Z about their knowledge and reasoning around a particular AO. The connection between student responses and student reasoning comes from evidentiary arguments, which are developed during _assessment implementation_ and described in Sec. IV.1.
While the literature on ECD portrays a fairly linear progression from one layer to the next, we take a more iterative approach in which we revisited and revised our work in previous layers (including _domain modeling_) as we worked on subsequent layers.
### Conceptual Assessment Framework
The third layer of the ECD framework involves operationalizing the assessment arguments developed in the second layer to inform the development of assessment items. This process included deciding on the format of the assessment and the individual items.
In order to ensure a compact survey and reduce the cognitive load on students, we contextualized all of the assessment items within four experiments (as opposed to each item being a unique experimental context). Initial experimental contexts aligned with contexts discussed by instructors in the _domain analysis_ and were refined as needed to support the establishment of evidentiary arguments. The four experiments are summarized in Table 2 and presented in their entirety in Appendix A.
To develop an assessment that is easy to administer to a large number of students--twice, as SPRUCE is intended to be used pre-instruction and post-instruction--we opted for an online format for the assessment [39, 40] using the survey platform Qualtrics. Additionally, we selected six potential item formats that will allow for automated scoring.
The first two item formats are multiple choice (MC) and multiple response (MR). These formats contain a single prompt to which students respond by selecting a single answer (for MC items) or multiple answers (for MR items) from a list of answer options. These are the most common types of items on assessments as they are straightforward to develop and evaluate.
The next two item types are coupled multiple choice (CMC) and coupled multiple response (CMR). These items contain two (or more) questions with separate prompts, with the first question being multiple choice format and the second being either multiple choice or multiple response.
| Experiment | Description |
| --- | --- |
| Cart Acceleration (Experiment 1) | A cart is released from rest to roll down a ramp as part of an experiment to determine the acceleration of the cart. Students are asked about taking multiple measurements and to identify the source of greatest uncertainty in their calculation of the acceleration. |
| Mug Density (Experiment 2) | The density of a mug is to be computed by measuring its mass and volume. Students are asked to identify uncertainties in each measurement and then propagate those uncertainties. |
| Spring Constant (Experiment 3) | A mass hangs from a spring and the period of (vertical) oscillation is used to determine a spring constant. Students are asked to estimate and propagate uncertainties and make comparisons between results. |
| Breaking Mass (Experiment 4) | Successive masses are added to a mass hanger until the string holding the mass hanger breaks. Students are asked to estimate uncertainty of a single measurement, make comparisons between results, and answer questions about taking many more measurements. |

Table 2: SPRUCE Experiment Descriptions
| Code | Assessment Objective |
| --- | --- |
| | **Sources of Uncertainty** |
| S1 | Estimate size of random/statistical uncertainty by considering instrument precision |
| S2 | Identify actions that might improve precision |
| S3 | Identify actions that might improve accuracy |
| | **Handling of Uncertainty** |
| H1 | Identify when to use fractional versus absolute uncertainty |
| H2 | Propagate uncertainties using formulas |
| H3 | Report results with uncertainties and correct significant digits |
| H4 | Use concepts of uncertainty propagation to identify the largest contributor to uncertainty in a calculated value |
| | **Distributions and Repeated Measurements** |
| D1 | Articulate why it is important to take several measurements during experimentation |
| D2 | Articulate that repeated measurements will give a distribution of results and not a single number |
| D3 | Calculate and report the mean of a distribution for the best estimate of the measurement |
| D4 | Articulate that the standard deviation is related to the width of the distribution of measurements |
| D5 | Report the standard error (standard deviation of the mean) for the uncertainty in the mean of a distribution |
| D6 | Calculate the standard error from the standard deviation |
| D7 | Determine if two measurements (with uncertainty) agree with each other |

Table 1: Final SPRUCE assessment objectives, organized by assessment objective category.
These item types are both examples of two-tier questions [41] in which, rather than the answer options selected in either question independently, it is the combination of selections from the coupled questions that is important and evaluated. For SPRUCE's CMR items, the multiple response answer options are _reasoning elements_ that allow students to compose a justification for their response to the multiple choice question. This implementation of CMR items has been used in other physics assessments [42, 27, 43] and allows for evaluating complex student reasoning in a format that can be evaluated by a computer (as opposed to, for example, a free response justification that must be evaluated by a person or well-trained machine learning algorithm [44]). However, as CMC and especially CMR items are more laborious to develop than are MC and MR items, we included only one CMC and two CMR items in SPRUCE.
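To make concrete how these coupled formats lend themselves to automated evaluation, the sketch below shows one way a CMR response could be represented and queried programmatically. All item identifiers, answer labels, and reasoning-element labels here are hypothetical placeholders rather than actual SPRUCE content, and this is not the SPRUCE evaluation code.

```python
# Minimal sketch (hypothetical labels, not SPRUCE content): representing a CMR
# response as one MC choice plus a set of selected reasoning elements, and
# checking for a combination of interest.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class CMRResponse:
    item_id: str
    mc_choice: str
    reasoning: frozenset = field(default_factory=frozenset)

def matches(resp: CMRResponse, choice: str, required: set, excluded: set = frozenset()) -> bool:
    """True if the MC choice matches and the selected reasoning elements include
    all of `required` and none of `excluded`."""
    selected = set(resp.reasoning)
    return resp.mc_choice == choice and required <= selected and not (set(excluded) & selected)

# Hypothetical usage: the combination "take more measurements" justified by
# appealing to averaging over random variation.
resp = CMRResponse("1.1", "take more measurements",
                   frozenset({"average out random variation", "estimate the spread"}))
print(matches(resp, "take more measurements", required={"average out random variation"}))  # True
```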
The final two item types we employed are numeric open response (NOR) and coupled numeric open response (CNOR). NOR items allow students to type a number into a text box, and the student response can be evaluated in terms of the value entered. CNOR items allow for two related NOR items to be compared: in SPRUCE, we used CNOR items to compare student values and uncertainties to see if students reported these quantities using appropriate significant digits. As these items ask students to respond in a text box, student browsers may store student responses from the pre-instruction assessment and suggest or auto-fill them during the post-instruction assessment: for this reason, the values of quantities that factor into student responses on NOR and CNOR items are slightly different between pre- and post-instruction versions of the assessment.
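For the CNOR comparisons, one natural automated check is whether the reported value is written to the same decimal place as the leading digit of the reported uncertainty. The sketch below, with hypothetical inputs, illustrates that kind of check; it is not the finalized SPRUCE evaluation code.

```python
# Minimal sketch (hypothetical, not the SPRUCE evaluation code): check whether a
# value/uncertainty pair typed into two numeric-open-response boxes is written
# to a consistent decimal place. Works on the raw text so trailing zeros count.

from math import floor, log10

def last_digit_place(text: str) -> int:
    """Exponent of the last written digit, e.g. '2.30' -> -2, '150' -> 0."""
    text = text.strip()
    return -len(text.split(".")[1]) if "." in text else 0

def leading_digit_place(x: float) -> int:
    """Exponent of the first significant digit, e.g. 0.03 -> -2, 12.0 -> 1."""
    return floor(log10(abs(x)))

def sig_digits_consistent(value_text: str, uncertainty_text: str) -> bool:
    """True if the value is written to the decimal place of the uncertainty's
    leading digit (one common convention for one-significant-figure uncertainties)."""
    unc = float(uncertainty_text)
    return unc > 0 and last_digit_place(value_text) == leading_digit_place(unc)

print(sig_digits_consistent("9.81", "0.02"))    # True: both end at the hundredths place
print(sig_digits_consistent("9.8132", "0.02"))  # False: the value carries extra digits
```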
## IV Assessment implementation
In the fourth layer of ECD, _assessment implementation_, assessment items are written and, iteratively, pilot tested and refined as the developers construct the evidentiary arguments that facilitate meaningful interpretations of student responses. Items were constructed by expressing the assessment arguments developed in the _domain model_ in terms of the item formats identified in the _conceptual assessment framework_.
### Evidentiary Arguments
As stated above, the key focus of this paper, and a key component of ECD, is the establishment of evidentiary arguments, which allow researchers to map student reasoning to student responses. The primary source of evidence for evidentiary arguments in this work is student responses to the assessment items, though previous work with the PMQ [2] and researcher expertise and experience also informed these arguments.
Ideally, researchers are able to make a one-to-one mapping of student responses to student reasoning to ensure that evaluation is based on a meaningful interpretation of student responses, and many of our item revisions altered item prompts and/or answer options to obtain such a mapping by addressing instances in which different students either provided the same response with different justifications or provided different responses with the same justification. If no such one-to-one mapping could be established, an item was discarded.
To clearly illustrate what we mean by evidentiary arguments and how they were constructed and employed, we provide an example of our evidentiary arguments for item 4.1 (shown in Fig. 1) in Table 3. This item is in a CMC (coupled multiple choice) format and only has one AO: "S1 Estimate size of random/statistical uncertainty by considering instrument precision."
As our planned evaluation scheme evaluates each item along potentially multiple assessment objectives, we established these mappings for each AO relevant to each item. In a few instances, when a mapping could be made for one AO but not another, the item was retained and simply not evaluated along the AO for which we could not establish sufficient evidentiary arguments.
The following sections discuss the different stages of piloting and many of the specific changes made to items as we worked to establish evidentiary arguments.
### Piloting
We implemented six pilot versions of the assessment between January and November 2022. These pilots consisted of multiple rounds of interviews and classroom implementation (which we refer to as "beta piloting" or simply "betas"). The primary goals of piloting were to ensure that our items are appropriately interpreted by students and to collect sufficient evidence of student reasoning such that we could form comprehensive evidentiary arguments.
While each assessment item was intended to be presented to students in a particular format (e.g., MC, CMR, etc.), during piloting we often temporarily changed the response format to gather additional information about student reasoning and student responses. These formatting decisions, as well as other priorities of the various pilots, are described in Table 4. Information about the piloting institutions and numbers of student participants is shown in Table 9 in Appendix B.
Even with fairly robust evidentiary arguments (as exemplified in Table 3) resulting from 39 interviews and beta testing with around 2000 students, it is possible there are examples of student reasoning that we did not observe. However, we worked to minimize the chance of this by recruiting as many students as possible from different types of institutions and introductory physics courses: solicitations to students were sent via course instructors, and instructors were solicited using a database
of instructors previously constructed [1] and since expanded upon. Additionally, as this assessment is intended to inform instruction at the classroom level, not assign grades at an individual level, the impact of this limitation is reduced by reporting averages and aggregated data to instructors and researchers.
#### ii.2.1 Pilot Interviews
Pilot interviews took place at three distinct stages of SPRUCE's development. The primary goal of these interviews was to gather evidence of student reasoning in order to establish evidentiary arguments linking student reasoning to student item responses.
**Interviews: Round 1**
The first round of interviews was conducted to ensure item clarity, identify potential item refinements, and to begin developing evidentiary arguments.
Through course instructors, we solicited interview participants who had completed an introductory physics lab with a MU component in the previous 12 months. Nine students from four institutions were interviewed between January and February of 2022. Interviews were conducted with students completing the assessment on their computer while screen-sharing with the interviewer via Zoom. Interviews lasted between 30 minutes and 1 hour and were video and audio recorded. Students were compensated for their time with an electronic gift card.
In the interviews, students worked through the assessment items while the interviewer observed their responses and prompted students to provide reasoning supporting their final responses as well as other answers they considered.
| Pilot | Purpose(s) | N |
| --- | --- | --- |
| Interviews 1 | (Primarily Open-Response Items) Check Item Clarity; Establish Evidentiary Arguments; Identify Potential Refinements | 9 |
| Beta 1 | (Primarily Closed-Response Items) Preliminary Validation; Identify Potential Refinements; Pilot Scoring Scheme | 911 |
| Interviews 2 | (Primarily Closed-Response Items) Check Item Clarity; Expand Evidentiary Arguments | 3 |
| Beta 2 | (Primarily Open-Response Items) Confirm MC Answer Options | 74 |
| Beta 3 | (Primarily Final Item Formats) Pilot Pre-instruction Implementation; Expand Evidentiary Arguments; Refine Scoring Scheme | 1048 |
| Interviews 3 | (Primarily Final Item Formats) Finalize Evidentiary Arguments | 27 |

Table 4: The item formats, primary goals, and number of student participants (N) for each of the six pilots (presented in chronological order).
| Answer Option | Evidence-supported Reasoning | Example of Evidence |
| --- | --- | --- |
| **Reported mass** | | |
| 520 g | Maximum Confirmed Supported Mass | “520 is the last value reported that this string is able to support before it breaks.... So that’s the closest value [to the breaking value] that we get before it breaks” |
| 521 g | ‘Just Over’ Maximum Confirmed Supported Mass | “I guess it would be 521, since that wouldn’t be too far [off from 520].” |
| 530 g | Midpoint of 520 g and 540 g | “We do know it’s within the range of 520 to 540, and so what this does, if we have it at 530 with an uncertainty of 10, means our minimum value is just over 520, and maximum value is just under 540.” |
| 539 g | ‘Just Under’ Minimum Confirmed Unsupported Mass | “Maybe it’s 539, because it breaks when you hit 540- maybe that was just slightly too big.” |
| 540 g | Minimum Confirmed Unsupported Mass | “That’s the value that the string broke on” |
| **Reported uncertainty** | | |
| 0 g | There is no uncertainty | Common Beta Response (not seen in interviews) |
| 1 g | Small but non-zero uncertainty | “It’s better to include some uncertainty than to just make assumptions. So it wouldn’t be zero, but it shouldn’t be too far off.” |
| 5 g | Half of Measurement Increment ‘Place’ | “Like I said earlier, if it gives me one decimal place, my uncertainty would be the next one, like 05, so it can go up or down.” |
| 10 g | Half of Measurement Increment | “I picked 10 because the smallest increments that we can go in this measurement tool is 20, so I took the 20, divided by 2, and got 10” |
| 19 g | Non-Inclusively Spans Range | “I would say 19 g...since it wouldn’t include 520 but it could be anywhere else in that range [of 520] to 540.” |
| 20 g | Measurement Increment | “So I said 20 because we don’t know what the- say like 521, 535, or 539, if that would also break. So there’s uncertainty there, which I found because 540 - 520 is equal to 20.” |

Table 3: Example Evidentiary Arguments for item 4.1. The top half of the table (mass values) and the bottom half (uncertainty values) include the evidentiary arguments for the MC questions asking students, respectively, what mass and uncertainty they would report.
The majority of items were presented to students in an open response format.
**Interviews: Round 2**
A second set of interviews was conducted between June and August of 2022 to verify that our item distractors were sufficiently tempting and to again ensure that items and answer options were clear and understandable to students. We also further expanded our body of evidence of student reasoning by explicitly prompting students to explain their reasoning for not only their response but also, on many items, for a "second-best" response as well.
Interviews were solicited, conducted, and compensated in the same way as the first round of interviews. Despite the low number of participants, these interviews provided valuable data about student reasoning, especially for items that we had changed or were considering changing.
**Interviews: Round 3**
A final set of interviews to finalize our evidentiary arguments was conducted in October and November of 2022. We solicited interviewees (through instructors) from courses that participated in beta 3 (discussed below), so the majority of these students had already taken a prior version of the assessment. Twenty-seven interviews took place with students from eight courses across four institutions. These data provided substantial evidence of student reasoning and also identified a few items where our assessment was not capturing student reasoning as intended, prompting us to make a few minor modifications. These interviews were conducted and compensated in the same way as the previous interviews.
#### Classroom Beta Piloting

Beta piloting consisted of full-class administrations of draft versions of the assessment. A particular focus of these betas was our planned evaluation scheme, which is done by evaluating student responses to each item according to relevant AOs, and these betas allowed us to test and refine this scheme using large data sets. Specific examples of these changes are given in Sec. IV.3.
**Beta 1**
The first beta ran in the Spring of 2022, between interviews 1 and 2. This beta collected responses from students from eight courses at eight different institutions. In beta 1, almost all of the items were presented to students in a closed format (e.g., MC, MR as opposed to NOR), so that we could begin analyzing the distribution of students' responses across expected common response options, though for many items we did include a "not listed" option that allowed students to enter a response in a text box.
However, one item, item 4.4, was presented in a NOR format despite being designed to be a MC item. Student responses to this item are shown in Fig. 2. The distribution of student responses had peaks at values that corresponded to our planned answer options and, critically, there were no unexpected peaks indicating an attractive distractor that we had not anticipated. This finding informed the development of our second beta, discussed below.
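The kind of check this distribution enabled can be sketched as follows: tally the numeric responses and flag any commonly given value that does not sit near a planned answer option. The planned values below are taken from the Fig. 2 caption, but the student responses, tolerance, and threshold are hypothetical, and this is not the analysis code actually used.

```python
# Minimal sketch (hypothetical responses and thresholds): flag common numeric
# responses to a NOR item that are not close to any planned answer option,
# i.e., potential distractors we had not anticipated.

from collections import Counter

def unexpected_peaks(responses, planned, rel_tol=0.02, min_count=5):
    """Return commonly given values (at least `min_count` occurrences after
    rounding) that are not within `rel_tol` (relative) of any planned option."""
    counts = Counter(round(r, 4) for r in responses)
    def near_planned(v):
        return any(abs(v - p) <= rel_tol * abs(p) for p in planned)
    return sorted(v for v, n in counts.items() if n >= min_count and not near_planned(v))

# Planned options for item 4.4 (correct value plus distractors, from Fig. 2);
# the response counts below are invented for illustration.
planned = [0.8278, 11.7197, 5.8599, 3.4234, 0.0586]
responses = [0.8278] * 40 + [11.7197] * 10 + [5.86] * 8 + [2.5] * 7
print(unexpected_peaks(responses, planned))  # -> [2.5]
```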
**Beta 2**
The primary purpose of beta 2 was to verify the reasonableness of our distractors for MC items, and so items were presented to students primarily in an open-response format (e.g., NOR). While this beta was administered only to students in one course, the responses gathered strongly indicated that our chosen distractors covered the most frequent incorrect answers provided by students, with only a few new distractors being identified through this beta.
**Beta 3**
The final round of piloting occurred during the beginning of the Fall 2022 term, where the assessment was administered prior to instruction (as SPRUCE is intended to be used in a pre-post modality). Items were primarily presented to students in their final format.
### Evidentiary Arguments and Item Refinement Informed by Piloting
As discussed above, evidentiary arguments ideally allow for a one-to-one mapping between student reasoning and student item responses. When this is not the case, items should be modified or discarded. In the following sections, we provide examples of how items were modified or removed based on our ability to develop sufficient evidentiary arguments. We do not discuss every evidentiary argument or even every assessment item, but the examples given represent the breadth of these arguments while highlighting the items for which establishing evidentiary arguments proved to be the most challenging. These sections are organized according to the four experiments that students work through on the assessment: brief descriptions of the four experiments are given in Table 2, further detail is given in the following sections, and the full experiments with all items are included in Appendix A. In addition to changes informed by evidentiary arguments, many small formatting and wording changes informed by student interviews were made to ensure the items and answer options are clear and easily understood.
#### Experiment 0: Arrows on a Target
The first item on SPRUCE is actually independent of the four experimental contexts and was added because of observed student difficulties during interviews in which students would consistently conflate accuracy and precision. The item presents four targets with different groupings of arrows and asks students to identify which grouping has high precision and low accuracy. This is a canonical scenario for discussing accuracy and precision in physics, and was added to help us 'calibrate' our interpretation of student responses in items (specifically items 1.1, 3.2, and 4.8) that require students to distinguish between concepts of accuracy and precision in more complex scenarios.
#### Experiment 1: Cart Acceleration
Experiment 1 presents students with an experiment to determine the acceleration of a cart rolling down a ramp. Specific assessment tasks are summarized in Table 5.
Item 1.1 (shown in Fig. 3) is largely modeled after item RD from the PMQ [11]. Early iterations of this item consisted of MC and CMC questions; however, the research team was unable to clearly establish evidentiary arguments because multiple explanations, some correct and some incorrect, would lead different students to select the same answer options.
Figure 2: Histogram showing students’ reported uncertainty values for item 4.4 on beta 1. These responses show peaks at the correct value of 0.8278 and our planned distractors of 11.7197, 5.8599, 3.4234, and 0.0586.
Eventually the team decided to present the item as a single CMR item (as shown in Fig. 3), in which students select an answer and also the reasoning that supports their answer.
The reasoning elements in the MR part of the CMR item were initially derived from the codes used to score item RD on the PMQ [2; 11] and refined based on interviews 2 and 3 and beta 3. Care was taken to ensure that answer options were generally mechanistic in nature: for example, one of the early answer options, "to improve accuracy," was removed because evidentiary arguments for this answer option were somewhat tautological, as the answer option was redundant with one of the item's AOs (S3 - Identify actions that might improve accuracy). Instead, answer options that explained how accuracy could be improved were added to the item.
#### Experiment 2: Mug Density
In experiment 2, students are asked to measure the mass and volume of a coffee mug in order to determine the mug's density (with uncertainty), with specific item tasks summarized in Table 6.
For item 2.2, interviews revealed that many students were guessing the correct answer of "standard error (also known as standard deviation of the mean)" because it had the word "mean" in it (and most students had correctly calculated the mean in the previous item). However, when the parenthetical was removed for later interviews and betas, we observed that many students who were unfamiliar with the term "standard error" nonetheless knew the correct answer to be "the standard deviation of the mean." This presented the research team with a dilemma, as both of these findings threatened our ability to confidently make evidentiary arguments for this item. Ultimately, we decided to keep the parenthetical to avoid arbitrarily large discrepancies between classes based on the particular language used in the course. This decision also impacted item 4.7, where we use the same language.
Item 2.3 originally asked for water level and uncertainty without asking for significant digits, and a follow-up item asked students to report the same values but now with appropriate significant digits. However, the research team was unable to establish clear evidentiary arguments for an appropriate number of significant digits due to the specifics of how the water levels could be measured using the graduated cylinders, thus the follow-up item was removed.
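The remaining Experiment 2 items (2.4 and 2.5 in Table 6) ask students to propagate these uncertainties. For reference, the standard introductory-lab rules for combining independent uncertainties are given below; they are included here only as a reader's reference and are not necessarily presented verbatim on the assessment. For the volume found by subtracting two water-level readings, \(V = V_{2} - V_{1}\), the absolute uncertainties add in quadrature,

\[\delta V = \sqrt{(\delta V_{1})^{2} + (\delta V_{2})^{2}},\]

and for the density found by division, \(\rho = m/V\), the fractional uncertainties add in quadrature,

\[\frac{\delta\rho}{\rho} = \sqrt{\left(\frac{\delta m}{m}\right)^{2} + \left(\frac{\delta V}{V}\right)^{2}}.\]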
#### Experiment 3: Spring Constant
In experiment 3, students are asked to determine the spring constant of a spring by first measuring the value of a mass and then the period of oscillation of that mass when it oscillates up and down while hanging from the spring. Specific task summaries are presented in Table 7.
| Item | Type | Description |
| --- | --- | --- |
| 1.1 | CMR | Students are given a formula for \(a\) and asked, after taking one measurement of \(d\) and \(t\), what they would do next, and why. |
| 1.2 | MC | Students are then presented with values of \(d\), \(t\), and their uncertainties, and asked to reason about contributions to the uncertainty in the calculated value of \(a\). |

Table 5: Experiment 1 item types and descriptions.

Figure 3: Item 1.1 asks students what they would do after taking a single measurement for time, then asks students to support that choice. (The item prompt begins: “You want to measure the magnitude of acceleration, \(a\), of a cart rolling down a ramp (pictured below). Your setup includes two photogates that serve as high-precision timers.”)
Item 3.2 (shown in its final form in Fig. 4) asks students to select, and then justify, the number of trials and the number of oscillations per trial that they would use to obtain a measurement of the period of oscillation. Interviews revealed that different students were employing the same reasoning (e.g., wanting to minimize how much the period changed throughout the measurement) to justify different answers, and conversely other students were using different, often conflicting, reasoning to justify the same answer. The research team ultimately elected to present this item in a double CMR format, with one MR follow-up asking students to justify the number of trials and the other to justify the number of oscillations per trial.
Student interview responses to this item and to item 1.1 (which targets the same AOs), as well as a qualitative coding of student responses to a "justify your answer" free-response follow-up question on beta 3, informed the development of CMR reasoning element answer options.
| Item | Type | Description |
| --- | --- | --- |
| 2.1 | MC | Students are asked to report a value for the mass of the mug based on five measurements. |
| 2.2 | MC | Students are asked to report an uncertainty for their value of the mass of the mug. |
| 2.3 | CNOR | Students are shown before and after images of a graduated cylinder filled with water, where submerging the mug in the water has changed the level of the water line in the cylinders (and students are asked to report the values and uncertainties for the water levels before and after the mug is submerged). |
| 2.4 | MC | Students are asked to propagate uncertainty in the water levels before and after submerging the mug through subtraction in order to determine the uncertainty in the measurement of the mug. |
| 2.5 | MC | Students are asked to propagate uncertainty in the mass and volume of the mug through division to determine the uncertainty in the calculated density of the mug. |

Table 6: Experiment 2 item types and descriptions.
Figure 4: Item 3.2 went through many iterations, ultimately becoming a CMR item with two multiple response follow-ups to the multiple choice question.
| Item | Type | Description |
| --- | --- | --- |
| 3.1 | CNOR | Students are asked to identify the mass uncertainty in a single digital scale measurement. |
| 3.2 | CMR | Students are asked how many measurements or trials, and then how many oscillations per trial, they would use to measure the period of oscillation for a mass hanging vertically from a spring. Follow-up questions ask for justifications. |
| 3.3 | MC | Students are asked how they would report a value and uncertainty for a single oscillation based on a measurement of 10 oscillations and a given uncertainty estimate. |
| 3.4 | MR | Students are asked to identify means and uncertainties from other groups (represented numerically) that agree with their mean and uncertainty. |

Table 7: Experiment 3 item types and descriptions.
This item, in its CMR format, was then piloted in the third round of interviews, in which interviewers asked targeted follow-up questions to understand why students did or did not select specific answer options. As with item 1.1, the use of CMR items allows students to provide the reasoning that supports evidentiary arguments in instances in which we are unable to create a one-to-one mapping between answer options and reasoning.
Item 3.4 asks students to determine whether their measured value with uncertainty (reported numerically) agrees with the measurements of other groups. This item is isomorphic to item 4.3, which presents the exact same relative relationships between measurements using graphs. There is an abundance of research in the physics education literature regarding the use of various or multiple representations in physics (e.g., [46, 47, 48, 49]), and having isomorphic items that use different representations provides researchers and instructors an opportunity to observe the impact of representation on student reasoning around comparing data.
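As background for what "agree" means in items 3.4 and 4.3 (AO D7), one common introductory-lab criterion, though not the only one, is that two results are consistent if their uncertainty ranges overlap. The sketch below implements that interval-overlap criterion with hypothetical numbers; it is not necessarily the criterion used to evaluate these items.

```python
# Minimal sketch (hypothetical criterion and numbers): two measurements, each
# reported as value +/- uncertainty, "agree" if their uncertainty intervals
# [value - unc, value + unc] overlap.

def agree(value1, unc1, value2, unc2):
    """True if the two uncertainty intervals overlap."""
    return abs(value1 - value2) <= unc1 + unc2

# Hypothetical oscillation periods (in seconds) from two lab groups:
print(agree(1.32, 0.02, 1.35, 0.02))  # True: intervals overlap between 1.33 s and 1.34 s
print(agree(1.32, 0.01, 1.35, 0.01))  # False: 0.03 s separation exceeds the combined 0.02 s
```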
#### Experiment 4: Breaking Mass
Experiment 4 asks students to consider a somewhat novel situation: determining how much mass one must hang from a string before the string breaks. This situation is presented in item 4.1 as shown in Fig. 1, and the specific experiment tasks are summarized in Table 8.
Item 4.1 is a somewhat unusual question for an experimental setting in that the resolution of the measurement is quite large (20 g), even for introductory physics labs. During interviews, this feature revealed interesting insights into student reasoning, and led to the refinement of the prompts and the inclusion of 521 g and 539 g (as the string was able to hold 520 g but broke when an additional 20 g was added) for the mass estimate and 1 g and 19 g for the uncertainty estimate. The evidentiary arguments for this item are presented in detail in Table 3.
Item 4.2 was initially developed, in part, to address an assessment objective of identifying and removing outliers, as one of the measurements given was substantially different from the rest. However, fewer than 10% of students removed the outlier in beta piloting. In interviews, students described not removing the outlier for many different reasons, including that they did not notice the outlier, did not think it was enough of an outlier to justify removal, or thought it was a substantial outlier but did not feel comfortable removing it without being able to explain why it was an outlier. For example, one student stated that "I would prefer to keep [the outlier] because...unless I'm writing it up, I don't think values should be ignored". Attempts to develop this item into a CMR item were unsuccessful, as professional practices around removing outliers greatly depend on context and the specific sub-discipline of physics. For these reasons, we removed this AO for this item (and from the assessment as a whole), but, as this was not the only AO addressed by this item, we kept the item in the assessment.
During pilot interviews, one of our first interviewees asked if they could look up the formula for standard error in order to answer item 4.4. As there is no easy way to prevent students from doing this when they are taking the assessment, the interviewer permitted the student to look up the formula. This interaction informed the evidentiary arguments for this item: we are unable to make claims about students' knowledge of the formula (i.e., do they have it memorized), but we could make claims about student use of the formula regardless of if they recalled it or looked it up (or both, as was seen in multiple interviews).
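The formula at issue is the standard relation between the standard error (standard deviation of the mean) and the standard deviation,

\[\mathrm{SE} = \frac{s}{\sqrt{N}}.\]

As a purely hypothetical illustration of the relationships probed by items 4.4, 4.6, and 4.7: with \(s = 12\) g and \(N = 200\) measurements, \(\mathrm{SE} \approx 0.85\) g; increasing to \(N = 1000\) leaves the standard deviation essentially unchanged but shrinks the standard error by a factor of \(\sqrt{5} \approx 2.2\), to roughly \(0.38\) g.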
## V Summary and Future Work
In this paper, we discussed the need for a widely-administrable assessment of measurement uncertainty for introductory physics labs, and then discussed how we are using the assessment development framework of Evidence Centered Design (ECD) [8] to create the Survey of Physics Reasoning on Uncertainty Concepts in Experiments (SPRUCE) to meet this need. We described the development and application of assessment objectives (AOs), which are not part of the ECD framework, to inform item design and evaluation. We also detailed how the establishment of evidentiary arguments, which map student reasoning to student responses, progressed across multiple rounds of interviews and full-class piloting to inform item revision and evaluation.
| Item | Type | Description |
| --- | --- | --- |
| 4.1 | CMC | Students are asked to identify \(m_{breaking}\) and the uncertainty for a single measurement. |
| 4.2 | NOR | Students are asked to report a value for the breaking mass based on 10 measurements. |
| 4.3 | MR | Students are asked to compare their value (and an uncertainty we provide for them, both represented graphically) with the value and uncertainties of other groups. |
| 4.4 | MC | Students are asked to calculate the standard error given the mean, number of measurements, and standard deviation. |
| 4.5 | CNOR | Given the mean, number of measurements, standard error, and standard deviation, students are asked to report their value and uncertainty with appropriate significant digits. |
| 4.6 | MC | Students are asked what the impact on the standard deviation would be when going from 200 to 1000 measurements. |
| 4.7 | MC | Students are asked what the impact on the standard error would be when going from 200 to 1000 measurements. |
| 4.8 | MC | Students are asked what the impact on accuracy and precision would be when going from 200 to 1000 measurements. |

Table 8: Experiment 4 item types and descriptions.
While the ECD documentation depicts a fairly linear progression through the ECD layers, we found iteration across layers to be extremely valuable. Specifically, the multiple rounds of piloting allowed us to present the same items to students using different formats (e.g., open response formats where students could input any answer, and closed response formats where students selected from a list of possible answer options), which allowed us to gather different types of data on student responses. Interviews, especially, allowed us to probe student reasoning around not only students' final responses but also their 'second best' responses and other responses they considered. Altogether, these data informed our refinement of AOs, item formats, item prompts and answer options, and evidentiary arguments.
This work constitutes layers 2, 3, and the majority of layer 4 of the ECD framework (_domain model_, _conceptual assessment framework_, and _assessment implementation_). Future papers will focus on scoring (the last portion of _assessment implementation_) and the final layer of ECD, _assessment delivery_, which includes assessment validation (including reliability testing). Preliminary work in these areas is discussed below.
The evaluation scheme for SPRUCE has involved evaluating student responses to items along each of the AOs that the item targets. Some items target only one AO, while others target up to four. Through interviews and full-class beta pilot testing, we worked to establish evidentiary arguments that would map specific reasoning to specific responses or answer options on each item, and for items targeting more than one AO, a set of evidentiary arguments for each AO was developed. Through iterative piloting, we were able to refine items and expand the body of evidence that supports our evidentiary arguments, allowing us to confidently draw conclusions about student reasoning based on student responses.
During item development and refinement, our evaluation of student responses was often qualitative and primarily concerned with establishing if a particular response represented reasoning that indicated proficiency or a lack of proficiency with a particular AO or if the response was irrelevant to that particular AO. The next step in this process is to establish a quantitative scoring scheme by which student reasoning, represented by student responses, is scored along each of the AOs for each item in the assessment. This process is ongoing, and a discussion of the process and how exactly the scoring scheme will function is the topic of a future paper. This scoring scheme will have important implications for how instructors and researchers use SPRUCE in their courses and in research, and also for statistical validation of the assessment using classical test theory (CTT) [50] and item response theory (IRT) [51].
Our use of AOs and multiple rounds of piloting with students at many different institutions ensure that SPRUCE is appropriate for introductory level students. Additionally, our selection of item types and an online format, paired with our novel evaluation scheme, will allow for automated and meaningful evaluation of student responses.
Instructors and researchers who are interested in using SPRUCE in their teaching and/or research can find the finalized items in appendix A and can also visit the SPRUCE website at [52] for more information and to fill out the "Course Implementation Survey" to get started.
## VI Acknowledgements
This work is supported by NSF DUE 1914840, DUE 1913698, and PHY 1734006. We would also like to thank Robert Hobbs for his work contributing to the _domain analysis_, as well as the instructors and students who contributed to the body of data upon which this assessment was built and refined.
|
2307.02490 | Advances in Engine Efficiency: Nanomaterials, Surface Engineering, and
Quantum-based Propulsion | This study explores strategies to improve engine efficiency through
innovative materials, design concepts, and alternative energy sources. It
highlights the use of nanomaterials and surface engineering to create
hydrophobic or other types of surfaces for harnessing entropy-gradient forces.
Additionally, it discusses the potential of information-burning engines and
quantum-based propulsion systems. The manuscript emphasizes the
multidisciplinary nature of engine research and its potential to contribute to
a sustainable and efficient future. | Mario J. Pinheiro | 2023-07-03T18:25:35Z | http://arxiv.org/abs/2307.02490v1 | # Advances in Engine Efficiency: Nanomaterials, Surface Engineering, and Quantum-based Propulsion
###### Abstract
This study explores strategies to improve engine efficiency through innovative materials, design concepts, and alternative energy sources. It highlights the use of nanomaterials and surface engineering to create hydrophobic or other types of surfaces for harnessing entropy-gradient forces. Additionally, it discusses the potential of information-burning engines and quantum-based propulsion systems. The manuscript emphasizes the multidisciplinary nature of engine research and its potential to contribute to a sustainable and efficient future.
Thermodynamics; Energy conversion; Nonlinear dynamics and chaos; Renewable energy; Information-burning engines; Propulsion +
Footnote †: preprint: IST/DF 2023-MJPinheiro
## I Introduction
Engines are crucial devices in modern society, providing power to everything from vehicles to generators. However, the efficiency of engines is limited by the laws of thermodynamics, which dictate that only a portion of the available energy can be converted into useful work. To improve engine efficiency, researchers are exploring new materials and design concepts that can better harness available energy sources.
One promising area of research is the use of nanomaterials and surface engineering to create hydrophobic surfaces that can harness entropy-gradient forces to generate useful work. These surfaces are capable of converting the random thermal motion of water molecules into directed motion, which can be used to perform work. Other researchers are exploring information-burning engines, which operate by extracting energy from information processing, rather than from thermal or chemical energy sources.
Two kinds of engines can be thought of: i) engines that are temperature dependent [1]; ii) engines driven by entropy-gradient forces, usually exploited on surfaces engineered for hydrophobic wettability [2].
We may write the fundamental equation of dynamics in the form [3]:
\[m\vec{a}=\vec{F}^{ext}+\frac{\partial}{\partial\vec{r}}(TS) \tag{1}\]
for thermal engine analysis, or
\[m\vec{a}=\vec{F}^{ext}+\frac{\partial}{\partial\vec{r}}\left(T\sum_{i}\rho_{ i}\ln\rho_{i}\right) \tag{2}\]
with
\[\rho_{i}=\frac{n_{i}}{N} \tag{3}\]
for information-burning engines, where \(n_{i}\) is the occupation number of microstate \(i\), \(N\) is the total number, and \(\rho_{i}\) represents the energy-dependent probability of being in each of the various alternative microstates of the information-burning engine.
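As a small numerical illustration of Eqs. (2)-(3), with purely hypothetical occupation counts, the microstate probabilities and the entropic term \(T\sum_{i}\rho_{i}\ln\rho_{i}\) can be evaluated as follows (a minimal Python sketch, not part of the original formulation):

```python
import numpy as np

T = 300.0                    # assumed temperature, K
n = np.array([5, 3, 2])      # hypothetical occupation counts of three microstates
N = n.sum()
rho = n / N                  # Eq. (3): microstate probabilities

# the potential-like term appearing in Eq. (2), in the units implied by the text
entropic_term = T * np.sum(rho * np.log(rho))
print(rho, entropic_term)
```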
The development of more efficient engines is a critical goal for both environmental and economic reasons, as reducing energy waste can help mitigate climate change and improve energy security. Advances in materials science, engineering, and thermodynamics are all contributing to this effort, and there is reason to be optimistic about the potential for new and innovative engine designs in the future.
The search for more effective engines is, in general, a multidisciplinary endeavor that necessitates cooperation between scientists and engineers from numerous domains. Engine technology has the ability to drastically minimize energy waste and open the door to a more sustainable future with sustained study and innovation.
As we move towards a more data-driven society, the ability to harness energy from information processing could become increasingly important. Information-burning engines represent a promising avenue for achieving more efficient and sustainable energy production, while also potentially enabling new applications in fields such as artificial intelligence and quantum computing. And this is the motivation of the present work, which aims to investigate the potential of information-burning engines and advance our understanding of their underlying principles and design concepts. By exploring new approaches for converting information into useful work, we hope to contribute to the development of more efficient and sustainable engines, as well as pave the way towards innovative applications in emerging technologies.
### Thermal engines
Eq. 1 can be applied to a cyclic process, where \(\vec{F}^{ext}=0\), yielding
\[\Delta W_{irr}=\int_{i}^{f}m\vec{a}\cdot d\vec{r}=T_{0}\sum\Delta S \tag{4}\]
where \(\Delta W_{irr}\) represents the real decreasing of work \(W\) due to irreversibility, \(T_{0}\) is the temperature of the medium, and \(\sum\Delta S\) denotes the increasing of the total entropy of the fluid and the heat source.
To use the quantum version of the dynamical equation of motion given by Eq. 1 to calculate the thrust of the Otto quantum engine, we need to express the external force \(\vec{F}^{ext}\) and the entropy \(S\) as quantum mechanical operators.
Assuming that the external force is constant and acting along the x-axis, we can express it as \(\vec{F}^{ext}=F^{ext}\hat{x}\), where \(F^{ext}\) is a constant and \(\hat{x}\) is the position operator along the x-axis. The entropy operator can be written as \(\hat{S}=-k_{B}\ln(\hat{\rho})\), where \(k_{B}\) is the Boltzmann constant and \(\hat{\rho}\) is the density operator of the system.
Substituting these expressions into Equation 1, we get:
\[m\frac{d^{2}\hat{x}}{dt^{2}}=F^{ext}+\frac{\partial}{\partial\hat{x}}(T\hat{S}) \tag{5}\]
To solve this equation for the thrust of the Otto quantum engine, we need to determine the time evolution of the density operator \(\hat{\rho}\). This can be done using the von Neumann equation:
\[i\hbar\frac{\partial\hat{\rho}}{\partial t}=[\hat{H},\hat{\rho}] \tag{6}\]
where \(\hat{H}\) is the Hamiltonian of the system. The Hamiltonian of the Otto quantum engine can be written as:
\[\hat{H}=\frac{\hat{p}^{2}}{2m}+V(\hat{x}) \tag{7}\]
where \(\hat{p}\) is the momentum operator, \(m\) is the mass of the engine, and \(V(\hat{x})\) is the potential energy of the engine as a function of position.
Using Eqs. 5, 6, and 7, we can calculate the time evolution of the position operator \(\hat{x}\), which can then be used to determine the thrust of the engine. However, the calculation of the thrust of the Otto quantum engine using this approach can be quite challenging, as it involves the solution of coupled partial differential equations and requires advanced knowledge of quantum mechanics.
### Information-burned engine
The information-driven engine has its roots in the Maxwell-demon Gedankenexperiment. To analyze the working mechanism of this engine, we choose to write the equation in the form:
\[m\vec{a}=\vec{F}^{ext}+\frac{\partial}{\partial\vec{r}}(T\sum_{i}\rho_{i}\ln \rho_{i}) \tag{8}\]
Considering that \(F=-kT\ln e^{-\beta E_{0}}\), it can be inferred that the maximal power output of this engine is given by
\[P_{max}=kT\frac{E_{0}}{\tau}, \tag{9}\]
where \(E_{0}\) is the minimal energy (eigenvalue of the Hamiltonian) at the disposal of the system in a given time \(\tau\), assuming the medium temperature is low enough, \(T_{0}\to 0\). For example, in Ref. [4], a heavy colloidal particle is held by an optical trap and immersed in water; via a ratchet mechanism the bead is lifted against gravity with a maximal power output estimated to be \(10^{3}\frac{kT}{s}\) and a maximum velocity of \(190\,\mu\)m/s.
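To put the figures quoted from Ref. [4] into SI units, a short back-of-the-envelope conversion can be used (a minimal Python sketch, assuming room temperature \(T\approx 300\) K):

```python
kB = 1.380649e-23         # Boltzmann constant, J/K
T = 300.0                 # assumed room temperature, K

power_in_kT_per_s = 1e3   # maximal power output quoted in Ref. [4], in units of kT/s
v_max = 190e-6            # maximum velocity quoted in Ref. [4], m/s

P_watts = power_in_kT_per_s * kB * T   # roughly 4e-18 W
print(P_watts, v_max)
```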
A system of sensors and actuators that can recognize and react to the microstates of the environment around the automobile might be used in the information-based mechanism for moving the car suggested by Eq. 8. The probability \(\rho_{i}\) would describe the likelihood that the automobile is in a specific microstate \(i\), depending on factors like the location and speed of other vehicles on the road, the slope of the road, and so on. These data would allow the system to predict the car's most likely future microstate and modify the vehicle's speed and direction accordingly. For instance, the system could slow down the automobile in advance of a stoppage if it considers that there is a greater likelihood of traffic up ahead. A sophisticated algorithm would be needed to examine the microstate probabilities and choose the optimum course of action, but the procedure would improve driving efficiency and safety if correctly applied.
### Engine fueled by entanglement
Eq. 8 was initially intended to represent a classical system. It would be necessary to make significant changes to the original equation in order to adapt it to describe a quantum Otto engine. However, there is a great potential for engines fueled by entanglement, a quantum mechanical phenomenon in which two particles become linked in such a way that the state of one particle is dependent on the state of the other, regardless of the distance between them. In these engines, entangled particles are used to extract work from heat baths, and the process is driven by a violation of local realism, a fundamental assumption in classical physics. Several theoretical proposals have been made for these engines, including a quantum Otto engine [5; 6] and a quantum refrigerator [7; 8].
In a nutshell, the engine is made up of two subsystems \(A\) and \(B\) that are entangled, meaning that they
are in a quantum state where the properties of one subsystem are correlated with the properties of the other subsystem. The first subsystem, which we will call the "information engine", is responsible for processing information and converting it into work. The second subsystem, which we will call the "thermal bath", is a reservoir of heat that is in contact with the information engine and provides the energy needed to perform work. The entangled state of two particles can be written as:
\[|\psi\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle_{A}|1\rangle_{B}-|1\rangle_{A} |0\rangle_{B}\right). \tag{10}\]
Here, \(|0\rangle\) and \(|1\rangle\) represent the two possible states of a qubit, and \(A\) and \(B\) represent the two subsystems. The work performed by the information engine in the interval of time from instant \(t_{1}\) to \(t_{2}\) can be written in terms of the Hamiltonian operator \(\hat{H}\) and the wavefunction \(\psi\):
\[W=\int_{t_{1}}^{t_{2}}\langle\psi|\frac{d\hat{H}}{dt}|\psi\rangle dt \tag{11}\]
The thermal bath is modeled as a collection of harmonic oscillators, and its state is described by the density operator \(\hat{\rho}\):
\[\hat{\rho}=\frac{e^{-\beta\hat{H}}}{\text{Tr}e^{-\beta\hat{H}}} \tag{12}\]
where \(\beta\) is the inverse temperature of the bath.
The quantum Otto engine is based on the cyclic operation of four steps (\(i=1,...4\)), described by the following unitary operators:
\[\hat{U}_{i}=e^{-i\hat{H}_{i}\tau_{i}/\hbar}, \tag{13}\]
where \(\hat{H}_{i}\) is the Hamiltonian of the engine during the \(i\)-th step, \(\tau_{i}\) is the duration of the \(i\)-th step. The quantum Otto engine involves the following four steps:
1. Isothermal expansion: during this step, the engine is coupled to a hot thermal reservoir at temperature \(T_{H}\) and expands isothermally while doing work. The unitary transformation associated with this step is \(\hat{U}_{1}=e^{-i\hat{H}_{1}\tau_{1}/\hbar}\), where \(\hat{H}_{1}\) is the Hamiltonian of the engine during this step.
2. Adiabatic expansion: during this step, the engine is thermally isolated and expands adiabatically while doing work. The unitary transformation associated with this step is \(\hat{U}_{2}=e^{-i\hat{H}_{2}\tau_{2}/\hbar}\), where \(\hat{H}_{2}\) is the Hamiltonian of the engine during this step.
3. Isothermal compression: during this step, the engine is coupled to a cold thermal reservoir at temperature \(T_{C}\) and compresses isothermally while work is done on it. The unitary transformation associated with this step is \(\hat{U}_{3}=e^{-i\hat{H}_{3}\tau_{3}/\hbar}\), where \(\hat{H}_{3}\) is the Hamiltonian of the engine during this step.
4. Adiabatic compression: during this step, the engine is thermally isolated and compresses adiabatically while work is done on it. The unitary transformation associated with this step is \(\hat{U}_{4}=e^{-i\hat{H}_{4}\tau_{4}/\hbar}\), where \(\hat{H}_{4}\) is the Hamiltonian of the engine during this step.
The engine's architecture and the thermodynamic cycle being used determine the precise shape of the Hamiltonians and the lengths of the steps. Therefore, in order to determine the work and thrust, we must first define the Hamiltonians \(\hat{H}_{1},\hat{H}_{2},\hat{H}_{3}\), and \(\hat{H}_{4}\). Let us suppose the system is a straightforward two-level system (a qubit). The Hamiltonians at each stage are expressed as follows:
\[\hat{H}_{i}=\omega_{i}|1\rangle\langle 1| \tag{14}\]
where \(\omega_{i}\) denotes the energy difference between the qubit's two states at step \(i\) (\(i=1,..4\)). The derivative of each Hamiltonian with respect to time can be determined as (\(i=1,...4\)):
\[\frac{d\hat{H}_{i}}{dt}=\frac{d\omega_{i}}{dt}|1\rangle\langle 1|. \tag{15}\]
Now that the integral of each derivative has been evaluated, the work that was done during each step can be computed, assuming that each step takes \(\tau_{1}\), \(\tau_{2}\), \(\tau_{3}\), and \(\tau_{4}\) amount of time. At each stage (\(i=1,...4\)), they are as follows:
\[W_{i}=\int_{0}^{\tau_{i}}\langle\psi|\frac{d\hat{H}_{i}}{dt}|\psi\rangle dt \tag{16}\]
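For the two-level Hamiltonian of Eq. (14), the integrand of Eq. (16) takes a particularly simple form. As a sketch, assuming for illustration that the excited-state population \(p_{1}=|\langle 1|\psi\rangle|^{2}\) remains approximately constant during stroke \(i\),

\[W_{i}=\int_{0}^{\tau_{i}}\frac{d\omega_{i}}{dt}\,\langle\psi|1\rangle\langle 1|\psi\rangle\,dt\approx p_{1}\,[\omega_{i}(\tau_{i})-\omega_{i}(0)],\]

so each stroke contributes work proportional to its frequency sweep, weighted by the population of the excited level.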
For a complete cycle, the total work performed is the sum of the work performed during each step:
\[W_{\text{total}}=\sum_{i=1}^{4}W_{i}. \tag{17}\]
Next, we can calculate the thrust \(T\) using the relationship:
\[T=\frac{\sum_{i=1}^{4}F_{i}\tau_{i}}{T_{\text{cycle}}} \tag{18}\]
where \(F_{i}\) is the force exerted during step \(i\), \(\tau_{i}\) is the duration of step \(i\), and \(T_{\text{cycle}}\) is the total time taken for the engine to complete a cycle.
With more qubits present in the system, the quantum Otto engine creates more thrust. We are still a long way from realizing the notion in practice. To boost the efficiency of a quantum Otto engine, a material or element should have the following qualities: i) Stable energy levels (in order to properly regulate and control the quantum processes required for the engine cycle, the material must contain consistently stable, clearly defined energy levels); ii) Coherent interactions (to maintain quantum coherence throughout the engine cycle, even when connected
to thermal reservoirs, the substance must exhibit coherent interactions between its component particles, such as qubits); iii) Efficient heat dissipation (to maintain the optimum operating conditions during the isothermal phases of the engine cycle, the material has to allow efficient heat dissipation); iv) Scalability (the material must allow the installation of a high number of qubits in order to boost the thrust and overall performance of the quantum Otto engine.)
The key to making the device move on an information budget is making use of the intimate linkages between quantum information and thermodynamics. Particularly, it has been shown that the amount of work that can be extracted from a system is constrained by the amount of information that can be acquired about it without unsettling it. This concept may be used, for instance, by building the information engine such that it extracts work by "measuring" the degree of entanglement between the two subsystems. By carefully adjusting the measurements, it is possible to "squeeze" energy out of the correlations between the two subsystems and get energy from the entanglement without disrupting the quantum state. The mathematical equations that describe this process vary in complexity depending on the specifics of the system under study. The core idea is that the entanglement between the two subsystems allows for the flow of energy and information that may be used to do work even in the absence of a direct source of energy like fuel or electricity.
One possible set of equations that can be used to describe this process is based on the concept of quantum mutual information. The mutual information between two quantum subsystems, denoted as \(A\) and \(B\), is defined as: \(H(A:B)=H(A)+H(B)-H(AB)\) or equivalently, \(I(A:B)=S_{A}+S_{B}-S_{AB}\) where \(S_{A}\), \(S_{B}\), and \(S_{AB}\) are the von Neumann entropies of subsystem \(A\), subsystem \(B\), and the joint subsystem \(AB\), respectively. Using these concepts, it is possible to derive the maximum amount of work that can be extracted from the entangled subsystems \(A\) and \(B\) as:
\[\Delta S=\frac{\Delta Q}{T}, \tag{19}\]
\[I(A:B)=S_{A}+S_{B}-S_{AB}, \tag{20}\]
\[S=-\text{tr}(\rho\log\rho), \tag{21}\]
and
\[W_{\text{max}}=k_{\text{B}}T\Delta I(A:B), \tag{22}\]
with \(T\) denoting the temperature of the thermal bath, and \(\Delta I(A:B)\) is the change in mutual information between subsystems \(A\) and \(B\) during the work extraction process.
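As an illustrative numerical check of Eqs. (20)-(22) for the entangled state of Eq. (10), the following minimal Python sketch computes the reduced entropies, the mutual information, and the corresponding bound; the entropies are taken in nats (natural logarithm), which is an assumed convention, and the bath temperature is illustrative:

```python
import numpy as np

kB, T = 1.380649e-23, 300.0           # J/K and an assumed bath temperature, K

def entropy(rho):
    # von Neumann entropy in nats, S = -tr(rho ln rho), cf. Eq. (21)
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def reduced(rho, keep):
    # partial trace of a two-qubit state; keep = 0 for subsystem A, 1 for B
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

# entangled state of Eq. (10), basis ordering |00>, |01>, |10>, |11>
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho_AB = np.outer(psi, psi)

S_A, S_B, S_AB = entropy(reduced(rho_AB, 0)), entropy(reduced(rho_AB, 1)), entropy(rho_AB)
I_AB = S_A + S_B - S_AB               # Eq. (20); equals 2 ln 2 for a Bell pair

W_bound = kB * T * I_AB               # Eq. (22) with Delta I taken as the full I(A:B)
print(I_AB, W_bound)                  # about 1.386 nats and 5.7e-21 J at 300 K
```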
One example of an information engine is the Brownian ratchet, which operates using the random motion of particles in a fluid. The ratchet consists of a series of asymmetric barriers, which allow particles to move in one direction but not the other. The basic idea of the Brownian ratchet can be described by the following equation:
\[W=k_{B}T\ln\left(\frac{p_{f}}{p_{i}}\right) \tag{23}\]
where \(W\) is the work done by the ratchet, \(T\) is the temperature, \(p_{f}\) is the probability of the ratchet moving forward, and \(p_{i}\) is the probability of it moving backward. The probability of the ratchet moving forward can be increased by "information ratcheting", which involves using information about the particles' positions to manipulate the barriers. One way to do this is to use a series of sensors to measure the positions of the particles, and then use this information to control the barriers. The amount of work done by the ratchet can be used to calculate the thrust generated by the device. For a Brownian ratchet, the maximum efficiency is given by the Carnot limit, which is \(\eta_{max}=1-\frac{T_{L}}{T_{H}}\), where \(T_{L}\) is the temperature of the environment and \(T_{H}\) is the temperature of the heat source. This equation allows us to calculate the ratchet's maximum thrust. The precise ratchet design and the effectiveness of the information ratcheting mechanism will determine the thrust.
One proposal for achieving this involves using a process called "quantum squeezing," in which the fluctuations of certain quantum observables are reduced below their usual quantum limits [10].
Hence, we first calculate the von Neumann entropy for the quantum Otto engine at each step of the cycle; to do so, we need the density matrix at the different stages of the Otto cycle, obtained by applying the corresponding unitary transformations to the initial density matrix \(\rho_{0}\) (\(i=1,...4\)):
\[\rho_{i}=\hat{U}_{i}\rho_{0}\hat{U}_{i}^{\dagger} \tag{24}\]
After obtaining the density matrices for each step, we can calculate the von Neumann entropy at each step (\(i=1,...4\)):
\[S_{i}=-\text{Tr}(\rho_{i}\log_{2}\rho_{i}) \tag{25}\]
To calculate the work done during each step of the quantum Otto cycle, we can use the following relation:
\[W_{i}=\Delta E_{i}=E_{i}-E_{i-1} \tag{26}\]
Where \(W_{i}\) is the work done during step \(i\), and \(\Delta E_{i}\) is the change in energy of the engine during step \(i\). The energy of the engine can be found using the expectation value of the Hamiltonian:
\[E_{i}=\text{Tr}(\rho_{i}\hat{H}_{i}) \tag{27}\]
Finally, to calculate the total work done during the quantum Otto cycle, sum the work done during each step:
\[W_{\text{total}}=W_{1}+W_{2}+W_{3}+W_{4} \tag{28}\]
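The bookkeeping of Eqs. (24)-(28) can be made concrete with a small toy computation. The Python sketch below uses hypothetical frequencies, durations, and an initial state; it mechanically applies the unitary strokes, evaluates the von Neumann entropy and energy at each step, and accumulates the work. It does not include the re-thermalization with hot and cold baths of a full Otto cycle, so the net work of this particular toy sequence is zero:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0  # natural units for this toy model

def H(omega):
    return np.diag([0.0, omega])              # Eq. (14): H_i = omega_i |1><1|

def stroke(rho, omega, tau):
    U = expm(-1j * H(omega) * tau / hbar)     # Eq. (13)
    return U @ rho @ U.conj().T               # Eq. (24)

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log2(w)))     # Eq. (25)

def energy(rho, omega):
    return float(np.real(np.trace(rho @ H(omega))))   # Eq. (27)

omegas = [1.0, 2.0, 2.0, 1.0]                 # hypothetical stroke frequencies
taus = [1.0, 2.0, 1.0, 2.0]                   # hypothetical stroke durations
rho = np.diag([0.7, 0.3]).astype(complex)     # hypothetical initial mixed state

E_prev, W_total = energy(rho, omegas[0]), 0.0
for omega, tau in zip(omegas, taus):
    rho = stroke(rho, omega, tau)             # diagonal H and rho: populations unchanged
    E = energy(rho, omega)
    W_total += E - E_prev                     # Eq. (26)
    E_prev = E
    print(f"S = {entropy(rho):.3f} bits, E = {E:.3f}")
print("W_total =", W_total)                   # Eq. (28)
```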
The thrust generated by the Otto quantum engine can be estimated by dividing the total work done by the cycle duration:
\[T=\frac{W_{\rm total}}{\tau_{\rm total}}, \tag{29}\]
where \(\tau_{\rm total}\) is the total duration of the quantum Otto cycle, given by:
\[\tau_{\rm total}=\tau_{1}+\tau_{2}+\tau_{3}+\tau_{4}. \tag{30}\]
To estimate the thrust in newtons, we need more realistic values for the work done during each step and the durations of each step. Additionally, we need to convert the total work done during the quantum Otto cycle to a force and relate it to the thrust. Here, we will provide an example calculation based on some assumptions. Let us assume the work done during each step is the following: \(W_{1}=1\times 10^{-23}\) J, \(W_{2}=0.8\times 10^{-23}\) J, \(W_{3}=-0.6\times 10^{-23}\) J, \(W_{4}=-0.5\times 10^{-23}\) J, and let the corresponding durations of each step be \(\tau_{1}=10^{-6}\) s, \(\tau_{2}=2\times 10^{-6}\) s, \(\tau_{3}=10^{-6}\) s, \(\tau_{4}=2\times 10^{-6}\) s.
The sum of the work done during each step gives the total work done during the quantum Otto cycle:
\[W_{\rm total}=W_{1}+W_{2}+W_{3}+W_{4}=0.7\times 10^{-23}\ \rm{J} \tag{31}\]
The total duration of the cycle is:
\[\tau_{\rm total}=\tau_{1}+\tau_{2}+\tau_{3}+\tau_{4}=6\times 10^{-6}\ \rm{s}. \tag{32}\]
Next, let's assume the quantum engine moves over a distance \(d\) during the Otto cycle. The average force \(F\) acting on the engine can be calculated as:
\[F=\frac{W_{\rm total}}{d} \tag{33}\]
Assuming the engine moves over a very small distance, for instance, \(d=10^{-9}\) meters (1 nanometer), we can estimate the average force:
\[F=\frac{0.7\times 10^{-23}\ \rm{J}}{10^{-9}\ \rm{m}}=7\times 10^{-15}\ \rm{N} \tag{34}\]
This estimate assumes a 1-qubit Otto quantum engine with specific values for the work done during each step and the step durations. However, we will now scale the system with the number of qubits N in the ion trap. The complexity of the system grows exponentially with the number of qubits N; however, since the qubits are assumed to be non-interacting and independent, the total work done during the cycle should scale linearly with the number of qubits. For a system with N qubits (see Ref. [9]), the total work done during the quantum Otto cycle can be approximated as:
\[W_{\rm total}(N)=N\times W_{\rm total}(1) \tag{35}\]
Here, \(W_{\rm total}(1)\) is the total work done for the 1-qubit Otto quantum engine calculated earlier, which was \(0.7\times 10^{-23}\) J. To estimate the average thrust for an N-qubit system, we can assume that the total duration of the cycle and the distance over which the engine moves remain unchanged. Thus, the average force and thrust can be calculated as:
\[F(N)=\frac{W_{\rm total}(N)}{d}=N\times\frac{W_{\rm total}(1)}{d} \tag{36}\]
or,
\[F(N)=N\times 7\times 10^{-15}\ \rm{N} \tag{37}\]
Here, \(F(1)\) is the average force (or thrust) for the 1-qubit Otto quantum engine calculated earlier, which is \(7\times 10^{-15}\) N.
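The arithmetic of Eqs. (31)-(37) can be reproduced in a few lines (a minimal Python sketch using the illustrative numbers assumed above):

```python
W = [1e-23, 0.8e-23, -0.6e-23, -0.5e-23]   # assumed work per stroke, J
tau = [1e-6, 2e-6, 1e-6, 2e-6]             # assumed stroke durations, s

W_total = sum(W)                           # Eq. (31): 0.7e-23 J
tau_total = sum(tau)                       # Eq. (32): 6e-6 s

d = 1e-9                                   # assumed displacement per cycle, m
F_1 = W_total / d                          # Eq. (34): 7e-15 N for one qubit

for N in (1, 10**3, 10**6):                # hypothetical qubit counts
    print(N, N * F_1)                      # Eq. (37): linear scaling with N
```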
Although this might seem tiny at the present time, it is anticipated that as technology develops and researchers seek to scale up ion trap quantum computers, the number of qubits will increase. However, given the fast advancement of quantum computing technology and the appearance of competing strategies like superconducting qubits or topological qubits, forecasting the precise value of N in the future is challenging. Assuming linear scaling with the number of qubits and no major interactions or errors between qubits, this calculation provides an average thrust. Since there are various difficulties in increasing the number of qubits while keeping high fidelity, and since the linear scaling assumption may not hold for larger systems, the estimate may fail to accurately reflect the performance of a quantum Otto engine in real-world applications. Additionally, the scaling behavior and effectiveness of the quantum Otto engine may be impacted by elements like decoherence, defective gates, and interactions between qubits. Though this will depend on quantum technology, such as quantum computing, evolving and validating itself in propulsion applications, we may anticipate using quantum Otto engines as CubeSat thrusters in the future.
### Thrust based on the gradient in the refractive index of a material
The use of quantum entanglement to propel a device is still a theoretical concept, and there are no widely accepted equations describing such a process. However, one approach could involve using entangled photon pairs to create a gradient in the refractive index of a material, leading to a net force on the device. The refractive index gradient can be created by manipulating the entangled photons in a way that causes a phase shift between them.
The force on the device can be expressed as \(\mathbf{F}=-\nabla U\), where \(U\) is the potential energy of the system, which is proportional to the refractive index gradient. The refractive index gradient can be expressed as:
\[\nabla n(\mathbf{r})=\frac{2\pi}{\lambda}\mathbf{Re}\{\mathbf{\chi}\cdot\mathbf{\rho}( \mathbf{r})\} \tag{38}\]
where \(\mathbf{\chi}\) is the susceptibility tensor of the material and \(\mathbf{\rho}(\mathbf{r})\) is the density matrix of the entangled photons. The
density matrix can be written in terms of the individual photon states, \(\mathbf{\rho}(\mathbf{r})=\sum_{ij}\rho_{ij}\ket{\psi_{i}(\mathbf{r})}\bra{\psi_{j}( \mathbf{r})}\), where \(\ket{\psi_{i}(\mathbf{r})}\) and \(\ket{\psi_{j}(\mathbf{r})}\) are the spatial wave functions of the entangled photon states, and \(\rho_{ij}\) is the density matrix element corresponding to the probability amplitude of finding the system in state \(i\) or \(j\). As the potential energy is related to the refractive index gradient, \(\nabla U\propto\nabla n(\mathbf{r})\), the force on the device can then be expressed as:
\[\mathbf{F}=-\frac{2\pi}{\lambda}\mathbf{Re}\{\mathbf{\chi}\cdot\sum_{ij}\rho_{ij} \nabla\psi_{i}(\mathbf{r})\cdot\nabla\psi_{j}(\mathbf{r})\}. \tag{39}\]
The specific experimental setup and application in question will determine the exact form of the equations and the technique employed to control the entangled photons. There are various researchers working on the use of quantum entanglement for propulsion and other novel applications, including theoretical physicists such as Avi Loeb [11], Igor Pikovski [12], and Mark Kasevich [13]. There are currently no experimental data available for the theory of applying quantum entanglement to generate motion, which is still a fairly novel and theoretical topic. As a result, it is not yet possible to determine the thrust that could be produced utilizing this concept.
### Self-propelled EM device with metamaterials
To date, it has been challenging to produce macroscopic forces that can effectively propel spacecraft due to the limitations of material technology. However, recent advancements in the field of metamaterials provide new possibilities for creating enormous gradients of free energy [14; 15; 16; 17; 18; 19] that might be utilized for propulsion purposes. But before they can be used successfully for this purpose, further research and development are required, as the practical application of such materials is still in its infancy. While the breadth and size of such propulsion systems are currently constrained, present technology enables the production of CubeSats that can be propelled utilizing already-in-use systems and technologies. Therefore, additional research and development in the field of innovative materials and propulsion technology are critical.
The force on the dipole moment is given by (see, e.g., Ref. [20]):
\[\mathbf{F}=\frac{1}{c^{2}}\left[\mathbf{p}\cdot\nabla\mathbf{E}+\frac{\partial \mathbf{p}}{\partial t}\times\mathbf{B}\right] \tag{40}\]
where \(\mathbf{p}\) is the dipole moment. This equation represents the Lorentz force acting on the dipole moment due to the gradient of the electric field and the time derivative of the magnetic field. It takes into account the interaction between the dipole moment and the electromagnetic field, resulting in a net force on the dipole.
Eq. 40 for the force exerted on a dipole moment due to an electromagnetic wave can be modified. We can write the dipole moment as \(\mathbf{p}=\mathbf{\alpha}\cdot\mathbf{E}\), where \(\mathbf{\alpha}\) is the polarizability tensor. Next, we can expand the polarizability tensor as:
\[\mathbf{\alpha}=\epsilon_{0}(\mathbf{\chi}^{(1)}+\mathbf{\chi}^{(2)}\cdot\mathbf{E}+\mathbf{ \chi}^{(3)}\cdot\mathbf{E}^{2}+\cdots), \tag{41}\]
where \(\mathbf{\chi}^{(1)}\) is the linear susceptibility tensor, \(\mathbf{\chi}^{(2)}\) is the second-order susceptibility tensor, and \(\mathbf{\chi}^{(3)}\) is the third-order susceptibility tensor. We can neglect higher-order terms in the expansion for small electric fields. Hence, we have:
\[\mathbf{p}=\epsilon_{0}\left(\mathbf{\chi}^{(1)}\mathbf{E}+\mathbf{\chi}^{(3)}| \mathbf{E}|^{2}\mathbf{E}\right). \tag{42}\]
Note that, while the second-order susceptibility tensor is neglected in Eq. 42 under the assumption of a small electric field limit, the third-order susceptibility tensor is included because it remains significant even for weak electric fields. Now, we can substitute Eq. 42 into Eq. 40:
\[\begin{split}\mathbf{F}=\frac{1}{c^{2}}\bigg{[}& \epsilon_{0}\left(\mathbf{\chi}^{(1)}\mathbf{E}+\mathbf{\chi}^{(3)}| \mathbf{E}|^{2}\mathbf{E}\right)\cdot\nabla\mathbf{E}\\ &+\frac{\partial\epsilon_{0}\left(\mathbf{\chi}^{(1)}\mathbf{E}+\bm {\chi}^{(3)}|\mathbf{E}|^{2}\mathbf{E}\right)}{\partial t}\times\mathbf{B} \bigg{]}.\end{split} \tag{43}\]
The term \((\epsilon_{0}\left(\mathbf{\chi}^{(1)}\mathbf{E}+\mathbf{\chi}^{(3)}|\mathbf{E}|^{2} \mathbf{E}\right)\cdot\nabla\mathbf{E})\) represents the gradient force or gradient pressure. It arises from the interaction between the spatial variation of the electric field (as captured by \(\nabla\mathbf{E}\)) and the polarization of the material (described by \(\mathbf{\chi}^{(1)}\mathbf{E}\) and \(\mathbf{\chi}^{(3)}|\mathbf{E}|^{2}\mathbf{E}\)). The gradient force tends to push or pull the material particles or dipoles in the direction of the electric field gradient; the term \((\frac{\partial\epsilon_{0}\left(\mathbf{\chi}^{(1)}\mathbf{E}+\mathbf{\chi}^{(3)}| \mathbf{E}|^{2}\mathbf{E}\right)}{\partial t}\times\mathbf{B})\) represents the radiation pressure or time-varying electromagnetic momentum. It arises from the time variation of the electric field (as captured by \(\frac{\partial}{\partial t}(\mathbf{\chi}^{(1)}\mathbf{E}+\mathbf{\chi}^{(3)}|\mathbf{ E}|^{2}\mathbf{E})\)) and its cross product with the magnetic field \(\mathbf{B}\). The radiation pressure results in a transfer of momentum from the electromagnetic field to the material, causing it to experience a force.
Overall, Eq. 43 combines the effects of the gradient force, which depends on the spatial variation of the electric field, and the radiation pressure, which arises from the time variation of the electric field and its interaction with the magnetic field. These phenomena play crucial roles in the interaction between electromagnetic fields and materials, particularly in the context of metamaterials and their response to electromagnetic waves. This equation represents the force acting on the dipole moment in terms of the electric field \(\mathbf{E}\) and its spatial and temporal derivatives, as well as the material properties characterized by the susceptibility tensors \(\mathbf{\chi}^{(1)}\) and \(\mathbf{\chi}^{(3)}\). Simplifying the expression further using the properties of complex numbers and phasors, we can write:
\[\begin{split}\frac{\partial\mathbf{p}}{\partial t}\times\mathbf{B} &=-\epsilon_{0}\Bigg{[}\Bigg{(}\frac{\partial}{\partial t}\Big{(} \mathbf{\chi}^{(1)}\mathbf{E}_{0}+\mathbf{\chi}^{(3)}|\mathbf{E}_{0}|^{2}\mathbf{E}_{0} \Big{)}\Bigg{)}\cdot\mathbf{E}_{0}\Bigg{]}\mathbf{B}_{0}\\ &\quad-\Bigg{(}\frac{\partial}{\partial t}\Big{(}\mathbf{\chi}^{(1)} \mathbf{E}_{0}+\mathbf{\chi}^{(3)}|\mathbf{E}_{0}|^{2}\mathbf{E}_{0}\Big{)}\Bigg{)} \cdot\mathbf{B}_{0}\mathbf{E}_{0}.\end{split} \tag{44}\]
It is important to note that Eq. 40, which assumes the complete conversion of electromagnetic energy to kinetic energy, gives an upper bound on the thrust that may be created. There are several processes that can generate micro-newtons of thrust in the framework of the equations given above for metamaterials. One such process is the use of plasmonic metamaterials, which are composed of metallic nanoparticles arranged in a specific pattern to manipulate light at the nanoscale. When these materials are illuminated by light, they generate plasmons, which are collective oscillations of electrons. The plasmons can induce forces on the nanoparticles, which can result in a net thrust on the material.
Another process is the use of optomechanical metamaterials, which are composed of mechanical resonators coupled to optical cavities. When light is injected into the cavity, it can interact with the mechanical resonators, inducing mechanical motion. This motion can generate a net thrust on the material.
Both of these processes involve the manipulation of light at the nanoscale to induce forces on metamaterials, which can result in micro-newtons of thrust.
It is possible to imagine a scenario where a pulsed electromagnetic wave traverses a metamaterial and gains intensity, leading to an amplification of thrust. This could potentially occur if the metamaterial is designed to have nonlinear properties, such as a high third-order susceptibility \(\mathbf{\chi}^{(3)}\). When an intense pulsed electromagnetic wave interacts with such a material, it can induce a nonlinear polarization response that can lead to an amplification of the electromagnetic field inside the material.
If we assume a typical metamaterial with a linear susceptibility on the order of \(0.1\) and a third-order susceptibility on the order of \(10^{-8}\), and a pulsed electromagnetic wave with an intensity on the order of \(10^{12}\) W/m\({}^{2}\), we can estimate the thrust to be on the order of nano newtons to micro newtons. However, this is a very rough estimate and the actual thrust generated will depend on the specific properties of the metamaterial and the pulsed electromagnetic wave.
There have been several studies on the use of metamaterials for propulsion. One notable example is the work by Alu et al. (2018), which proposed a metamaterial-based device capable of generating thrust by exploiting the nonlinear response of the material to an incident electromagnetic wave. The device consisted of a rectangular array of metallic split-ring resonators (SRRs) with a nonlinear material in the gaps between the SRRs. When an intense pulsed electromagnetic wave was incident on the device, the nonlinear material generated harmonics at frequencies different from the incident frequency, which in turn generated a net force on the device due to the asymmetric radiation of the harmonics. The authors estimated that the device could generate a thrust on the order of micro-Newtons.
Coulais et al. (2020) demonstrated the possibility of breaking reciprocity in static systems, enabling mechanical metamaterials to exhibit non-reciprocal behavior.
Another example is the work by Mihai et al. (2020), which proposed a metamaterial-based device consisting of an array of cylindrical pillars made of a nonlinear material. When an electromagnetic wave was incident on the device, the nonlinear material generated harmonics at frequencies different from the incident frequency, which in turn generated a net force on the device due to the asymmetric radiation of the harmonics. The authors estimated that the device could generate a thrust on the order of micro-Newtons.
## II Conclusion
In conclusion, this article explores various methods to enhance engine efficiency, considering the limitations imposed by the laws of thermodynamics. The use of nanomaterials and surface engineering capable of harnessing entropy-gradient forces is discussed, highlighting their potential to generate useful work by converting random thermal motion into directed motion. Additionally, the concept of information-burning engines, which extract energy from information processing, is discussed in the framework of (Bakli et al., 2022), with particular attention to engines fueled by entanglement, including theoretical proposals for quantum Otto engines and entropy-gradient engines. These engines rely on the exploitation of entanglement to extract work from heat baths, showcasing the potential of quantum principles in enhancing engine performance. There is a range of innovative approaches that could contribute to the development of more efficient and sustainable engines, pushing the boundaries of current engine technologies and exploring new frontiers in energy production for sustainable societies.
## III Acknowledgments
I would like to express my sincere gratitude to Professors Genito Maure, Dinelsa Machaieie, Volodymyr Valentyn Chernysh, and Marina Yuri Kotchkareva for their valuable insights and comments shared during a talk presented at the University Eduardo Mondlane, Maputo, Mozambique, in October 2022. Their contributions have greatly enriched this research project.
|
2306.07819 | False discovery proportion envelopes with m-consistency | We provide new non-asymptotic false discovery proportion (FDP) confidence
envelopes in several multiple testing settings relevant for modern high
dimensional-data methods. We revisit the multiple testing scenarios considered
in the recent work of Katsevich and Ramdas (2020): top-$k$, preordered
(including knockoffs), online. Our emphasis is on obtaining FDP confidence
bounds that both have non-asymptotic coverage and are asymptotically accurate
in a specific sense, as the number $m$ of tested hypotheses grows. Namely, we
introduce and study the property (which we call $m$-consistency) that the
confidence bound converges to or below the desired level $\alpha$ when applied
to a specific reference $\alpha$-level false discovery rate (FDR) controlling
procedure. In this perspective, we derive new bounds that provide improvements
over existing ones, both theoretically and practically, and are suitable for
situations where at least a moderate number of rejections is expected. These
improvements are illustrated with numerical experiments and real data examples.
In particular, the improvement is significant in the knockoffs setting, which
shows the impact of the method for a practical use. As side results, we
introduce a new confidence envelope for the empirical cumulative distribution
function of i.i.d. uniform variables, and we provide new power results in
sparse cases, both being of independent interest. | Iqraa Meah, Gilles Blanchard, Etienne Roquain | 2023-06-13T14:52:08Z | http://arxiv.org/abs/2306.07819v2 | # False discovery proportion envelopes with consistency
###### Abstract
We provide new false discovery proportion (FDP) confidence envelopes in several multiple testing settings relevant for modern high dimensional-data methods. We revisit the scenarios considered in the recent work of Katsevich and Ramdas (2020) (top-\(k\), preordered -- including knockoffs --, online) with a particular emphasis on obtaining FDP bounds that have both nonasymptotical coverage and asymptotical consistency, i.e. converge below the desired level \(\alpha\) when applied to a classical \(\alpha\)-level false discovery rate (FDR) controlling procedure. This way, we derive new bounds that provide improvements over existing ones, both theoretically and practically, and are suitable for situations where at least a moderate number of rejections is expected. These improvements are illustrated with numerical experiments and real data examples. In particular, the improvement is significant in the knockoffs setting, which shows the impact of the method for a practical use. As side results, we introduce a new confidence envelope for the empirical cumulative distribution function of i.i.d. uniform variables and we provide new power results in sparse cases, both being of independent interest.
**AMS 2000 subject classifications:** Primary 62G10; secondary 62G30.
**Keywords and phrases:** false discovery rate, post hoc inference, confidence envelope, knockoffs, online multiple testing, consistency.
###### Contents
* 1 Introduction
* 1.1 Background
* 1.2 New insight: consistency
* 1.3 Settings
* 1.4 Contributions
* 2 Results in the top-\(k\) case
* 2.1 Top-\(k\) setting
* 2.2 Existing envelopes
* 2.3 New envelope
* 2.4 FDP confidence bounds for BH and consistency
* 2.5 Adaptive envelopes
* 2.6 Interpolated bounds
* 3 Results in the pre-ordered case
* 3.1 Pre-ordered setting
* 3.2 New confidence envelopes
* 3.3 Confidence bounds for LF and consistency
* 4 Results in the online case
The study of the influence
Dockes et al., 2021). It should also be noted that methods providing bounds valid uniformly over _some particular_ selection subsets can also be used to provide bounds valid on _any_ subsets by using an 'interpolation' technique, see, e.g., Blanchard et al. (2020). This is the case for instance for bounds based upon an empirical distribution function confidence band, as investigated by Meinshausen and Buhlmann (2005); Meinshausen (2006); Meinshausen and Rice (2006); Dumbgen and Wellner (2023). Loosely, we will refer to such (potentially partial) FDP bounds as _FDP confidence envelopes_ in the sequel.
Recently, finding FDP confidence envelopes has been extended to different contexts of interest in Katsevich and Ramdas (2020) (KR below for short), including knockoffs (Barber and Candes, 2015; Candes et al., 2018) and online multiple testing (Aharoni and Rosset, 2014a). For this, their bounds are tailored on particular nested 'paths', and employ accurate martingale techniques. In addition, Li et al. (2022) have recently investigated specifically the case of the knockoffs setting by using a 'joint' \(k\)-FWER error rate control (see also Genovese and Wasserman, 2006; Meinshausen, 2006; Blanchard et al., 2020), possibly in combination with closed testing.
### New insight: consistency
The main point of this paper is to look at FDP confidence envelopes towards the angle of a particular property that we call _consistency_. First recall that the false discovery rate (FDR) is the expectation of the FDP, which is a type I error rate measure with increasing popularity since the seminal work of Benjamini and Hochberg (1995). Informally, an FDP confidence envelope is consistent, if its particular value on an FDR-controlling selection set is close to (or below) the corresponding nominal value, at least asymptotically. This property is important for several reasons:
* FDR controlling procedures are particular selection sets that are widely used in practice. Hence, it is very useful to provide an accurate FDP bound for these particular rejection sets. This is the case for instance for the commonly used Benjamini-Hochberg (BH) procedure at a level \(\alpha\) -- or even for a data dependent choice of the level \(\hat{\alpha}\) -- for which the FDP bound should be close to \(\alpha\) (or \(\hat{\alpha}\)), at least in 'favorable' cases;
* a zoo of FDP confidence envelopes have been proposed in previous literature, and we see the consistency as a principled way to discard some of them while putting the emphasis on others;
* searching for consistency can also lead to new bounds that are accurate for a moderate sample size.
It turns out that most of the existing bounds, while being accurate in certain regimes, are not consistent. In particular, this is the case for those of Katsevich and Ramdas (2020), because of a constant factor (larger than 1) in front of the FDP estimate. The present paper proposes to fill this gap by proposing new envelopes that are consistent. In a nutshell, we replace the constant in front of the FDP estimate by a function that tends to 1 in a particular asymptotical regime.
Since we evoke consistency, it is worth emphasizing that the envelopes developed in this
work have coverage holding in a _non-asymptotical_ sense. Here, consistency means that on top of this strong non-asymptotical guarantee, the bound satisfies an additional sharpness condition in an asymptotical sense and for some scenarios of interest, including sparse ones.
### Settings
Following Katsevich and Ramdas (2020), we consider the three following multiple testing settings for which a 'path' means a (possibly random) nested sequence of candidate rejection sets:
* Top-\(k\): the classical multiple testing setting where the user tests a finite number \(m\) of null hypotheses and observes simultaneously a family of corresponding \(p\)-values. This is the framework of the seminal paper of Benjamini and Hochberg (1995) and of the majority of the follow-up papers. In that case, the path is composed of the hypotheses corresponding to the top-\(k\) most significant \(p\)-values (i.e. ranked in increasing order), for varying \(k\).
* Pre-ordered: we observe \(p\)-values for a finite set of cardinal \(m\) of null hypotheses, which are _a priori_ arranged according to some ordering. In that setting, the signal (if any) is primarily carried by the ordering: alternatives are expected to be more likely to have a small rank. Correspondingly the path in that case is obtained by \(p\)-value thresholding (for fixed threshold) of the first \(k\) hypotheses w.r.t. that order, for varying \(k\). A typical instance is the knockoffs setting (Barber and Candes, 2015; Candes et al., 2018), where the null hypotheses come from a high-dimensional linear regression model and one wants to test whether each of the \(m\) variables is associated with the response. The ordering is data-dependent and comes from an ancillary statistic independent of the tests themselves, so that one can argue conditionally and consider the ordering (and path) as fixed.
* Online: the null hypotheses come sequentially, and there is a corresponding potentially infinite stream of \(p\)-values. An irrevocable decision (reject or not) has to be taken in turn for each new hypothesis, depending on past observations only. The path is naturally defined according to the set of rejections until time \(t\), for varying \(t\).
Let us introduce notation that encompasses the three settings mentioned above: the set of hypotheses is denoted by \(\mathcal{H}\) (potentially infinite), the set of null hypotheses \(\mathcal{H}_{0}\) is an unknown subset of \(\mathcal{H}\), and a path \(\Pi=(R_{k},k\geq 1)\) (with convention \(R_{0}=\emptyset\)) is an ordered sequence of nested subsets of \(\mathcal{H}\) that depends only on the observations. A confidence envelope is a sequence \((\overline{\mathrm{FDP}}_{k},k\geq 1)\) (with convention \(\overline{\mathrm{FDP}}_{0}=0\)) of random variables valued in \([0,1]\), depending only on the observations, such that, for some pre-specified level \(\delta\), we have
\[\mathbf{P}(\forall k\geq 1,\mathrm{FDP}(R_{k})\leq\overline{\mathrm{FDP}}_{k}) \geq 1-\delta, \tag{1}\]
where \(\mathrm{FDP}(R_{k})=\frac{|R_{k}\cap\mathcal{H}_{0}|}{|R_{k}|\lor 1}\) is the FDP of the set \(R_{k}\). In (1), the guarantee is uniform in \(k\), which means that it corresponds to confidence bounds valid uniformly over the subsets of the path. Also, distribution \(\mathbf{P}\) is relative to the \(p\)-value model, which will be specified further on and depends on the considered framework.
_Remark 1.1_ (Interpolation).: One can notice here that any FDP confidence envelope of the type (1) can also lead to a post hoc FDP bound valid uniformly for all \(R\subset\mathcal{H}\): specifically, by using the interpolation method (see, e.g., Blanchard et al., 2020; Goeman et al., 2021; Li et al., 2022), if (1) holds then the relation also holds with the sharper bound (\(\widehat{\mathrm{FDP}}_{k},k\geq 1\)) given by
\[\widehat{\mathrm{FDP}}_{k}=\frac{\min_{k^{\prime}\leq k}\{|R_{k}\cap(R_{k^{\prime}})^{c}|+|R_{k^{\prime}}|\overline{\mathrm{FDP}}_{k^{\prime}}\}}{|R_{k}|\lor 1}, \tag{2}\]
due to the fact that the number of false positives in \(R_{k}\) is always bounded by the number of false positives in \(R_{k^{\prime}}\subset R_{k}\) plus the number of elements of \(R_{k}\cap(R_{k^{\prime}})^{c}\).
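As an illustration of Remark 1.1, the following minimal Python sketch computes the interpolated bound (2) in the common special case where the path satisfies \(|R_{k}|=k\) (e.g. the top-\(k\) path of Section 2); the input envelope values are hypothetical:

```python
import numpy as np

def interpolate(fdp_env):
    """Interpolated FDP bound of Eq. (2) for a nested path with |R_k| = k.
    fdp_env[k-1] is the initial envelope value for R_k."""
    fdp_env = np.asarray(fdp_env, dtype=float)
    m = fdp_env.size
    out = np.empty(m)
    best = 0.0                                    # k' = 0 term (R_0 empty) gives the trivial bound 1
    for k in range(1, m + 1):
        best = min(best, k * fdp_env[k - 1] - k)  # running min of |R_k'| * env_k' - |R_k'|
        out[k - 1] = (k + best) / k               # min over k' <= k of (k - k' + k' * env_k') / k
    return out

print(interpolate([1.0, 0.6, 0.5, 0.9, 0.8]))     # e.g. the bound for R_4 improves from 0.9 to 0.625
```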
Particular subsets of \(\Pi=(R_{k},k\geq 1)\) that are of interest are those controlling the FDR. Given a nominal level \(\alpha\), a 'reference' procedure chooses a data-dependent \(\hat{k}_{\alpha}\) such that \(\mathbb{E}\Big{[}\mathrm{FDP}(R_{\hat{k}_{\alpha}})\Big{]}\leq\alpha\). Depending on the setting, we consider different reference procedures:
* Top-\(k\) setting: the reference FDR controlling procedure is the Benjamini-Hochberg (BH) step-up procedure, see Benjamini and Hochberg (1995);
* Pre-ordered setting: the reference procedure is the Lei-Fithian (LF) adaptive Selective sequential step-up procedure, see Lei and Fithian (2016) (itself being a generalization of the procedure of Li and Barber, 2017);
* Online setting: the reference procedure is the (LORD) procedure, see Javanmard and Montanari (2018) and more precisely the improved version of Ramdas et al. (2017).
As announced, for all these procedures, the _expectation_ of the FDP (that is, the FDR) is guaranteed to be below \(\alpha\). On the other hand, it is commonly the case that in an appropriate asymptotic setting, the FDP concentrates around its expectation, see, e.g., Genovese and Wasserman (2004); Neuvial (2008, 2013). Therefore, an adequate confidence bound on the FDP should asymptotically converge to (or below) \(\alpha\) when applied to a reference procedure. Furthermore, we emphasize once more that we aim at a bound which is valid non-asymptotically, and uniformly over the choice of \(\alpha\) (or equivalently \(k\)) to account for possible 'data snooping' from the user (that is, \(\alpha=\hat{\alpha}\) is possibly depending on the data).
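For concreteness, a minimal Python sketch of the BH step-up procedure (the reference procedure in the top-\(k\) setting) is given below; it returns the indices of the rejected hypotheses. The LF and LORD procedures require the additional ordering or streaming structure of their settings and are not sketched here.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha):
    """BH step-up: reject the k_hat smallest p-values, where
    k_hat = max{k : p_(k) <= alpha * k / m} (and 0 if no such k exists)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    passed = np.nonzero(p[order] <= alpha * np.arange(1, m + 1) / m)[0]
    k_hat = passed[-1] + 1 if passed.size else 0
    return order[:k_hat]

# toy usage with hypothetical p-values
print(benjamini_hochberg([0.001, 0.2, 0.013, 0.8, 0.04], alpha=0.05))
```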
Let us now make the definition of consistency more precise.
**Definition 1.2** (Consistency for top-\(k\) and pre-ordered settings).: Let \(\delta\in(0,1)\) be fixed. For each \(m\geq 1\), let \(\mathbf{P}^{(m)}\) be a multiple testing model over the hypotheses set \(\mathcal{H}=\{1,\ldots,m\}\), \(\Pi=(R_{k},k\geq 1)\) be a possibly random path of nested subsets of \(\mathcal{H}\), and \((\overline{\mathrm{FDP}}_{k},k\geq 1)\) a confidence envelope at level \(1-\delta\) over that path, i.e. satisfying (1) (for \(\mathbf{P}=\mathbf{P}^{(m)}\)). For any \(\alpha\in(0,1)\), let \(\hat{k}_{\alpha}\) be an FDR controlling procedure at level \(\alpha\), i.e. satisfying \(\mathbf{E}^{(m)}\Big{[}\mathrm{FDP}(R_{\hat{k}_{\alpha}})\Big{]}\leq\alpha\). Then the confidence envelope is said to be consistent for the sequence \((\mathbf{P}^{(m)},m\geq 1)\) and for the FDR controlling procedure \(R_{\hat{k}_{\alpha}}\in\Pi\) at a level \(\alpha\) in a range \([\alpha_{0},1)\subset(0,1)\), if for all \(\epsilon>0\),
\[\lim_{m\to\infty}\mathbf{P}^{(m)}\Bigg{(}\sup_{\alpha\in[\alpha_{0 },1)}\Big{\{}\overline{\mathrm{FDP}}_{\hat{k}_{\alpha}}-\alpha\Big{\}}\geq \epsilon\Bigg{)}=0. \tag{3}\]
In the above definition, \(\mathbf{P}^{(m)}\) stands for a multiple testing model with \(m\) hypotheses that is to be specified. We will be interested in standard model sequences that represent relevant practical situations, in particular sparse cases where a vanishing proportion of null hypotheses are false when \(m\) tends to infinity. This definition applies for the two first considered settings (top-\(k\) and pre-ordered). Note that due to (1), we have
\[\mathbf{P}(\forall\alpha\in(0,1),\mathrm{FDP}(R_{\hat{k}_{\alpha}})\leq \overline{\mathrm{FDP}}_{\hat{k}_{\alpha}})\geq 1-\delta. \tag{4}\]
Hence, (3) comes as an additional asymptotical accuracy guarantee to the non-asymptotical coverage property (4). Moreover, the uniformity in \(\alpha\) in (4)-(3) allows for choosing \(\alpha\) in a post hoc manner, while maintaining the false discovery control and without paying too much in accuracy, that is, for any data-dependent choice of \(\hat{\alpha}\), \(\mathrm{FDP}(R_{\hat{k}_{\hat{\alpha}}})\leq\overline{\mathrm{FDP}}_{\hat{k}_ {\hat{\alpha}}}\) with probability at least \(1-\delta\), with \(\overline{\mathrm{FDP}}_{\hat{k}_{\hat{\alpha}}}\lesssim\hat{\alpha}(1+o(1))\) in 'good' cases.
In the third setting, an online FDR controlling procedure provides in itself a sequence \((R_{k},k\geq 1)\) and not a single set \(R_{\hat{k}_{\alpha}}\). As a consequence, a confidence envelope \((\overline{\mathrm{FDP}}_{k},k\geq 1)\) is defined specifically for each procedure \((R_{k},k\geq 1)\). Hence, the definition should be slightly adapted:
**Definition 1.3** (Consistency for online setting).: Let \(\delta\in(0,1)\) be fixed and \(\mathbf{P}\) be an online multiple testing model over the infinite hypothesis set \(\mathcal{H}=\{1,2,\dots\}\). Let \((R_{k},k\geq 1)\) be an (online) FDR controlling procedure at level \(\alpha\), i.e. such that \(\sup_{k\geq 1}\mathbb{E}[\mathrm{FDP}(R_{k})]\leq\alpha\), and \((\overline{\mathrm{FDP}}_{k},k\geq 1)\) be a corresponding confidence envelope at level \(1-\delta\), i.e., satisfying (1). Then \((\overline{\mathrm{FDP}}_{k},k\geq 1)\) is said to be consistent for the model \(\mathbf{P}\) if for all \(\epsilon>0\),
\[\lim_{k\to\infty}\mathbf{P}\big{(}\overline{\mathrm{FDP}}_{k}-\alpha\geq \epsilon\big{)}=0. \tag{5}\]
Note that both in (1) and (5) no uniformity w.r.t. the level \(\alpha\) is imposed in the online setting.
### Contributions
Our findings are as follows:
* In each of the considered settings (top-\(k\), pre-ordered, online), we provide new (non-asymptotical) FDP confidence envelopes that are consistent under some mild conditions, including sparse configurations, see Proposition 2.5 (top-\(k\)), Proposition 3.4 (pre-ordered) and Proposition 4.5 (online). Table 1 provides a summary of the considered procedures in the different contexts, including the existing and new ones. It is worth noting that in the top-\(k\) setting, the envelope based on the DKW inequality (Massart, 1990) is consistent under moderate sparsity assumptions only, while the new envelope based on the Wellner inequality (Shorack and Wellner, 2009) covers all the sparsity range (Proposition 2.5).
* As a byproduct, our results provide (non-asymptotical) confidence bounds on the FDP for standard FDR-controlling procedures which are asymptotically sharp (consistency)
and for which a data-driven choice of the level \(\alpha\) is allowed. In particular, in the top-\(k\) setting, this gives a new sharp confidence bound for the achieved FDP of the BH procedure while tuning the level from the same data, see (18) below.
* In the top-\(k\) setting, we also develop _adaptive_ envelopes, for which the proportion of null hypotheses is simultaneously estimated, see Section 2.5. This is a novel approach with respect to the existing literature, and it is shown to significantly improve the bounds in simulations in 'dense' situations, see Section 5.
* In the pre-ordered setting, including the 'knockoff' case, we introduce new envelopes, called 'Freedman' and 'KR-U', which are, to our knowledge, the first two provably consistent confidence bounds in that context. This is an important contribution since the knockoff method is one of the leading methodologies in the literature of the last decade. In addition, KR-U is shown to behave well even for moderate sample sizes, see Section 5.
* Our study is based on dedicated tools of independent interest, based on uniform versions of classical deviation inequalities, see Corollary 2.1 (Wellner's inequality), Corollary C.5 (Freedman's inequality). Both can be seen as a form of'stitching' together elementary inequalities, see Howard et al. (2021) for recent developments of this principle. The bounds developed here are presented in a self-contained manner.
## 2 Results in the top-\(k\) case
### Top-\(k\) setting
We consider the classical multiple testing setting where we observe \(m\) independent \(p\)-values \(p_{1},\ldots,p_{m}\), testing \(m\) null hypotheses \(H_{1},\ldots,H_{m}\). The set of true nulls is denoted by \(\mathcal{H}_{0}\), which is of cardinality \(m_{0}\), and we denote \(\pi_{0}=m_{0}/m\in(0,1)\). We assume that the \(p\)-values are uniformly distributed under the null, that is, for all \(i\in\mathcal{H}_{0}\), \(p_{i}\sim U(0,1)\).
We consider here the task of building a \((1-\delta)\)-confidence envelope (1) for the top-\(k\) path
\[R_{k}=\{1\leq i\leq m\::\:p_{i}\leq p_{(k)}\},\ \ k=1,\ldots,m. \tag{6}\]
A rejection set of particular interest is the BH rejection set, given by \(R_{\hat{k}_{\alpha}}\) where
\[\hat{k}_{\alpha}=\max\Bigl{\{}k\in\mathbb{N}\::\:\widehat{\mathrm{FDP}}_{k}\leq\alpha\Bigr{\}},\ \widehat{\mathrm{FDP}}_{k}=mp_{(k)}/k, \tag{7}\]
(with the convention \(R_{0}=\emptyset\)).
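For concreteness, the BH cut-off \(\hat{k}_{\alpha}\) in (7) can be computed in a few lines; the following sketch (Python/NumPy, with a function name of our own choosing) is only illustrative.

```python
import numpy as np

def bh_k_hat(p, alpha):
    """Number of rejections k_hat of the BH procedure (7) at level alpha.

    p: array of m p-values; returns the largest k with m * p_(k) / k <= alpha
    (0 if no such k), where p_(k) is the k-th smallest p-value.
    """
    p_sorted = np.sort(p)
    m = len(p)
    k = np.arange(1, m + 1)
    fdp_hat = m * p_sorted / k          # estimated FDP along the top-k path
    below = np.nonzero(fdp_hat <= alpha)[0]
    return 0 if below.size == 0 else below[-1] + 1
```

The BH rejection set \(R_{\hat{k}_{\alpha}}\) then consists of the \(\hat{k}_{\alpha}\) smallest \(p\)-values.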
\begin{table}
\begin{tabular}{|c|c c c c c c|} \hline & Simes & DKW & KR & Wellner (new) & Freedman (new) & KR-U (new) \\ \hline Top-\(k\) & No & Yes & No & Yes & & \\ Pre-ordered & & & No & & Yes & Yes \\ Online & & & No & & Yes & Yes \\ \hline \end{tabular}
\end{table}
Table 1: Consistency property (Yes or No) for different envelopes, depending on the considered context. ‘Consistent’ means consistent at least in a particular (reasonable) configuration. An unfilled cell means that the envelope is undefined in that context.
### Existing envelopes
Let us first review the prominent confidence envelopes that have been considered in the literature. Let \(U_{1},\ldots,U_{n}\) be \(n\geq 1\) i.i.d. uniform random variables. For \(\delta\in(0,1)\), each of the following (uniform) inequalities holds with probability at least \(1-\delta\):
* Simes (or Robbins, 1954): for all \(t\in(0,1)\), \(n^{-1}\sum_{i=1}^{n}\mathbf{1}\{U_{i}\leq t\}\leq t/\delta\).
* DKW (Massart, 1990): for all \(t\in(0,1)\), \(n^{-1}\sum_{i=1}^{n}\mathbf{1}\{U_{i}\leq t\}\leq t+\sqrt{\log(1/\delta)/2}\,n^ {-1/2}\).
* KR (Katsevich and Ramdas, 2020) (for \(\delta\leq 0.31\)): for all \(t\in(0,1)\), \(n^{-1}\sum_{i=1}^{n}\mathbf{1}\{U_{i}\leq t\}\leq\frac{\log(1/\delta)}{\log(1+\log(1/\delta))}(1/n+t)\).
Taking \((U_{1},\ldots,U_{n})=(p_{i},i\in\mathcal{H}_{0})\), \(n=m_{0}\), and \(t=p_{(k)}\) in the bounds above gives the following confidence envelopes (in the sense of (1)) for the top-\(k\) path: for \(k\in\{1,\ldots,m\}\),
\[\overline{\mathrm{FDP}}_{k}^{\mathrm{Simes}} =1\wedge\frac{mp_{(k)}}{k\delta}; \tag{8}\] \[\overline{\mathrm{FDP}}_{k}^{\mathrm{DKW}} =1\wedge\bigg{(}\frac{mp_{(k)}}{k}+\frac{m^{1/2}\sqrt{0.5\log 1/ \delta}}{k}\bigg{)};\] (9) \[\overline{\mathrm{FDP}}_{k}^{\mathrm{KR}} =1\wedge\bigg{(}\frac{\log(1/\delta)}{\log(1+\log(1/\delta))} \Big{(}\frac{mp_{(k)}}{k}+1/k\Big{)}\bigg{)}, \tag{10}\]
the last inequality requiring in addition \(\delta\leq 0.31\). Please note that we can slightly improve these bounds by taking appropriate integer parts, but we will ignore this detail further on for the sake of simplicity.
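To fix ideas, here is a minimal sketch (Python/NumPy; the function name is ours and not from any package) computing the three envelopes (8)–(10) along the whole top-\(k\) path; the KR line assumes \(\delta\leq 0.31\).

```python
import numpy as np

def topk_envelopes(p, delta):
    """Simes, DKW and KR (1-delta)-confidence envelopes (8)-(10) on the top-k path."""
    m = len(p)
    p_sorted = np.sort(p)
    k = np.arange(1, m + 1)
    base = m * p_sorted / k                       # m * p_(k) / k
    simes = np.minimum(1.0, base / delta)
    dkw = np.minimum(1.0, base + np.sqrt(0.5 * np.log(1 / delta)) * np.sqrt(m) / k)
    c_kr = np.log(1 / delta) / np.log(1 + np.log(1 / delta))   # requires delta <= 0.31
    kr = np.minimum(1.0, c_kr * (base + 1.0 / k))
    return simes, dkw, kr
```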
### New envelope
In addition to the above envelopes, this section presents a new one deduced from a new 'uniform' variation of Wellner's inequality (recalled in Lemma D.2). Let us first define the function
\[h(\lambda)=\lambda(\log\lambda-1)+1,\ \ \lambda>1. \tag{11}\]
Lemma D.1 gathers some properties of \(h\), including explicit accurate bounds for \(h\) and \(h^{-1}\).
**Proposition 2.1** (Uniform version of Wellner's inequality).: _Let \(U_{1},\ldots,U_{n}\) be \(n\geq 1\) i.i.d. uniform random variables and \(\kappa=\pi^{2}/6\). For all \(\delta\in(0,1)\), we have with probability at least \(1-\delta\),_
\[\forall t\in(0,1),\ n^{-1}\sum_{i=1}^{n}\mathbf{1}\{U_{i}\leq t\}\leq t\,h^{- 1}\bigg{(}\frac{\log(\kappa/\delta)+2\log(\lceil\log_{2}(1/t)\rceil)}{ng(t)} \bigg{)}, \tag{12}\]
_for \(g(t)=2^{-\lceil\log_{2}(1/t)\rceil}/(1-2^{-\lceil\log_{2}(1/t)\rceil})\geq t/2\) and \(h(\cdot)\) defined by (11). In particular, with probability at least \(1-\delta\),_
\[\forall t\in(0,1),\ n^{-1}\sum_{i=1}^{n}\mathbf{1}\{U_{i}\leq t\}\leq t\,h^{- 1}\bigg{(}\frac{2\log(\kappa/\delta)+4\log(1+\log_{2}(1/t))}{nt}\bigg{)}. \tag{13}\]
The proof of Proposition 2.1 is given in Section B.1. It immediately leads to the following result.
**Theorem 2.2**.: _In the top-\(k\) setting of Section 2.1, the following quantity is a \((1-\delta)\)-confidence envelope in the sense of (1) for the top-\(k\) path:_
\[\overline{\operatorname{FDP}}_{k}^{\text{Well}}=1\wedge\Bigg{(}\frac{mp_{(k)} }{k}\;h^{-1}\Bigg{(}\frac{2\log(\kappa/\delta)+4\log\!\left(1+\log_{2}\!\left( 1/p_{(k)}\right)\right)}{mp_{(k)}}\Bigg{)}\Bigg{)}, \tag{14}\]
_with \(\kappa=\pi^{2}/6\)._
Proof.: We use (13) for \((U_{1},\ldots,U_{n})=(p_{i},i\in\mathcal{H}_{0})\), \(n=m_{0}\), and \(t=p_{(k)}\). We conclude by using \(m_{0}\leq m\) and the monotonicity property of Lemma D.1.
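As an illustration, the Wellner envelope (14) can be evaluated by inverting \(h\) numerically (Lemma D.1 gives explicit bounds for \(h^{-1}\), which we do not use here); the following sketch relies on SciPy's `brentq` root finder and uses our own function names.

```python
import numpy as np
from scipy.optimize import brentq

def h(lam):
    # h(lambda) = lambda*(log(lambda) - 1) + 1, increasing for lambda > 1, h(1) = 0
    return lam * (np.log(lam) - 1.0) + 1.0

def h_inv(y):
    """Numerical inverse of h on [1, +inf): returns lambda >= 1 with h(lambda) = y (y >= 0)."""
    if y <= 0:
        return 1.0
    hi = 2.0
    while h(hi) < y:                    # grow the bracket until it contains the root
        hi *= 2.0
    return brentq(lambda lam: h(lam) - y, 1.0, hi)

def wellner_envelope(p, delta):
    """Wellner (1-delta)-confidence envelope (14) on the top-k path."""
    m = len(p)
    p_sorted = np.sort(p)
    k = np.arange(1, m + 1)
    kappa = np.pi ** 2 / 6
    arg = (2 * np.log(kappa / delta) + 4 * np.log(1 + np.log2(1 / p_sorted))) / (m * p_sorted)
    bound = (m * p_sorted / k) * np.array([h_inv(a) for a in arg])
    return np.minimum(1.0, bound)
```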
_Remark 2.3_.: Denoting by \(\overline{F}_{n}(t)\) the RHS of (13), we can easily check
\[\sup_{t\in((\log\log n)/n,1)}\!\left(\sqrt{n}\frac{\overline{F}_{n}(t)-t}{ \sqrt{t\log(1+\log_{2}(1/t))}}\right)=O(1),\]
with a constant possibly depending on \(\delta\). The iterated logarithm in the denominator is known from classical asymptotic theory (convergence to a Brownian bridge) to be unimprovable for a uniform bound in the vicinity of \(0\); in this sense the above is a 'finite law of the iterated logarithm (LIL) bound' (Jamieson et al., 2014).
### FDP confidence bounds for BH and consistency
Applying the previous bounds for the particular BH rejection sets \(R_{\hat{k}_{\alpha}}\) (see (7)) leads to the following result.
**Corollary 2.4**.: _In the top-\(k\) setting of Section 2.1, for any \(\alpha,\delta\in(0,1)\), the following quantities are \((1-\delta)\)-confidence bounds for \(\operatorname{FDP}(R_{\hat{k}_{\alpha}})\), the FDP of the BH procedure at level \(\alpha\):_
\[\overline{\operatorname{FDP}}_{\alpha}^{\text{Simes}} =1\wedge(\alpha/\delta); \tag{15}\]
\[\overline{\operatorname{FDP}}_{\alpha}^{\text{DKW}} =1\wedge\Bigg{(}\alpha+\frac{m^{1/2}\sqrt{0.5\log 1/\delta}}{1\vee\hat{k}_{\alpha}}\Bigg{)}; \tag{16}\]
\[\overline{\operatorname{FDP}}_{\alpha}^{\text{KR}} =1\wedge\Bigg{(}\frac{\log(1/\delta)}{\log(1+\log(1/\delta))}\Big{(}\alpha+1/(1\vee\hat{k}_{\alpha})\Big{)}\Bigg{)}; \tag{17}\]
\[\overline{\operatorname{FDP}}_{\alpha}^{\text{Well}} =1\wedge\Bigg{(}\alpha\,h^{-1}\Bigg{(}\frac{2\log(\kappa/\delta)+4\log\!\left(1+\log_{2}\Big{(}\frac{m}{\alpha(1\vee\hat{k}_{\alpha})}\Big{)}\right)}{\alpha(1\vee\hat{k}_{\alpha})}\Bigg{)}\Bigg{)}, \tag{18}\]
_where \(\kappa=\pi^{2}/6\), \(\hat{k}_{\alpha}\) denotes the number of rejections of the BH procedure (7) at level \(\alpha\), and where the KR bound requires in addition \(\delta\leq 0.31\). Moreover, these bounds are also valid uniformly in \(\alpha\in(0,1)\), in the sense that_
\[\mathbf{P}(\forall\alpha\in(0,1),\operatorname{FDP}(R_{\hat{k}_{\alpha}}) \leq\overline{\operatorname{FDP}}_{\alpha}^{\text{Meth}})\geq 1-\delta,\;\; \text{Meth}\in\{\text{Simes},\text{DKW},\text{KR},\text{Well}\},\]
_and thus also when using a post hoc choice \(\alpha=\hat{\alpha}\) of the level._
Proof.: For (18), we use (13) for \((U_{1},\ldots,U_{n})=(p_{i},i\in\mathcal{H}_{0})\), \(n=m_{0}\), and \(t=\alpha(1\vee\hat{k}_{\alpha})/m\).
Let us now consider the consistency property (3). Among the four above bounds, it is apparent that Simes and KR are never consistent, because of the constant in front of \(\alpha\); namely, for all \(m\),
\[\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{Simes}}\wedge\overline{\mathrm{FDP }}_{\alpha}^{\mathrm{KR}}\geq 1\wedge(c\alpha),\]
for some constant \(c>1\). By contrast, \(\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{DKW}}\) and \(\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{Well}}\) are consistent in the sense of (3) in a regime such that \(m^{1/2}/\hat{k}_{\alpha_{0}}=o_{P}(1)\) and \((\log\log m)/\hat{k}_{\alpha_{0}}=o_{P}(1)\), respectively. The latter means that the BH procedure at level \(\alpha_{0}\) should make enough rejections. This is discussed for a particular setting in the next result.
**Proposition 2.5**.: _Let us consider the sequence of sparse one-sided Gaussian location models \((\mathbf{P}_{b,c,\beta}^{(m)},m\geq 1)\) with fixed parameters \(b\in\mathbb{R}\), \(c\in(0,1)\) and a sparsity parameter \(\beta\in[0,1)\), as defined in Section A.1. Then we have for all \(\alpha\in(0,1)\),_
\[\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{DKW}}-\alpha\asymp_{ \mathbf{P}_{b,c,\beta}^{(m)}}\ m^{-1/2+\beta};\] \[\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{Well}}-\alpha\asymp_{ \mathbf{P}_{b,c,\beta}^{(m)}}\sqrt{\log\log(m)}\,m^{-1/2+\beta/2},\]
_where \(u_{m}\asymp_{P}v_{m}\) stands for \(u_{m}=O_{P}(v_{m})\) and \(v_{m}=O_{P}(u_{m})\). In particular, concerning the consistency (3) for the sequence \((\mathbf{P}_{b,c,\beta}^{(m)},m\geq 1)\) and the BH procedure:_
* _for the DKW envelope (_9_) and the corresponding bound (_16_), the consistency (_3_) holds when_ \(\beta<1/2\) _but fails for_ \(\beta\geq 1/2\)_;_
* _for the Wellner envelope (_14_) and the corresponding bound (_18_), the consistency (_3_) holds for any arbitrary_ \(\beta\in(0,1)\)_._
Proof.: By Theorem A.2, we have \(\hat{k}_{\alpha}\asymp_{\mathbf{P}_{b,c,\beta}^{(m)}}m^{1-\beta}\). This gives the result (by applying in addition Lemma D.1 for the Wellner bound).
Proposition 2.5 shows the superiority of the Wellner bound over the DKW bound for achieving the consistency property on a particular sequence of sparse models: while the DKW bound needs a dense enough model (\(\beta<1/2\)), the Wellner bound covers the whole sparsity range \(\beta\in(0,1)\).
### Adaptive envelopes
Let us consider the following upper-bounds for \(m_{0}\):
\[\hat{m}_{0}^{\text{\tiny Simes}} =m\wedge\inf_{t\in(0,\delta)}\frac{V_{t}}{1-t/\delta}; \tag{19}\] \[\hat{m}_{0}^{\text{\tiny DKW}} =m\wedge\inf_{t\in(0,1)}\Biggl{(}\frac{C^{1/2}}{2(1-t)}+\sqrt{ \frac{C}{4(1-t)^{2}}+\frac{V_{t}}{1-t}}\Biggr{)}^{2};\] (20) \[\hat{m}_{0}^{\text{\tiny KR}} =m\wedge\inf_{t\in(0,1/C^{\prime})}\frac{C^{\prime}+V_{t}}{1-C^{ \prime}t};\] (21) \[\hat{m}_{0}^{\text{\tiny Well}} =m\wedge\inf_{t\in(0,1)}\Biggl{(}\sqrt{\frac{tC_{t}}{2(1-t)^{2}}} +\sqrt{\frac{C_{t}}{2(1-t)^{2}}+\frac{V_{t}}{1-t}}\Biggr{)}^{2}, \tag{22}\]
where \(V_{t}=\sum_{i=1}^{m}\mathbf{1}\{p_{i}>t\}\), \(C=\log(1/\delta)/2\), \(C^{\prime}=\frac{\log(1/\delta)}{\log(1+\log(1/\delta))}\), \(C_{t}=2\log(\kappa/\delta)+4\log(1+\log_{2}(1/t))\), \(\kappa=\pi^{2}/6\). Since \(V_{t}/(1-t)\) corresponds to the so-called Storey estimator (Storey, 2002), these four estimators can all be seen as Storey-type confidence bounds, each including a specific deviation term that takes the error probability \(\delta\) into account. Note that \(\hat{m}_{0}^{\text{\tiny DKW}}\) was already proposed in Durand et al. (2020).
**Proposition 2.6**.: _In the top-\(k\) setting of Section 2.1, the envelopes defined by (8), (9), (10) and (14) with \(m\) replaced by the corresponding bound \(\hat{m}_{0}^{\text{\tiny Simes}}\) (19), \(\hat{m}_{0}^{\text{\tiny DKW}}\) (20), \(\hat{m}_{0}^{\text{\tiny KR}}\) (21) or \(\hat{m}_{0}^{\text{\tiny Well}}\) (22), respectively, are also \((1-\delta)\)-confidence envelopes in the sense of (1) for the top-\(k\) path._
We can easily check that these four adaptive envelopes all uniformly improve their own non-adaptive counterpart. The proof of Proposition 2.6 is provided in Section B.2.
_Remark 2.7_.: In practice, the bounds \(\hat{m}_{0}^{\text{\tiny Simes}}\) (19), \(\hat{m}_{0}^{\text{\tiny DKW}}\) (20), \(\hat{m}_{0}^{\text{\tiny KR}}\) (21) or \(\hat{m}_{0}^{\text{\tiny Well}}\) (22) can be computed by taking an infimum over \(t=p_{(k)}\), \(1\leq k\leq m\) and by replacing \(V_{t}\) by \(m-k\).
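As an illustration of Remark 2.7, here is a sketch (Python/NumPy, our own function name) of the estimator \(\hat{m}_{0}^{\text{\tiny DKW}}\) in (20), with the infimum taken over \(t=p_{(k)}\) and \(V_{t}\) replaced by \(m-k\); the other estimators (19), (21), (22) can be computed analogously.

```python
import numpy as np

def m0_hat_dkw(p, delta):
    """Storey-type (1-delta)-confidence upper bound (20) on m0, computed over t = p_(k)
    with V_t replaced by m - k (Remark 2.7)."""
    m = len(p)
    p_sorted = np.sort(p)
    k = np.arange(1, m + 1)
    mask = p_sorted < 1.0                         # avoid division by zero at t = 1
    t, v = p_sorted[mask], (m - k)[mask]
    C = np.log(1 / delta) / 2
    vals = (np.sqrt(C) / (2 * (1 - t)) + np.sqrt(C / (4 * (1 - t) ** 2) + v / (1 - t))) ** 2
    return min(m, vals.min()) if vals.size else m
```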
Applying Proposition 2.6 for the BH procedure, this gives rise to the following adaptive confidence bounds.
**Corollary 2.8**.: _In the top-\(k\) setting of Section 2.1, for any \(\alpha,\delta\in(0,1)\), the following quantities are \((1-\delta)\)-confidence bounds for the FDP of the BH procedure at level \(\alpha\):_
\[\overline{\text{\rm FDP}}_{\alpha}^{\text{\tiny Simes-adapt}} =1\wedge\alpha(\hat{m}_{0}^{\text{\tiny Simes}}/m)/\delta; \tag{23}\] \[\overline{\text{\rm FDP}}_{\alpha}^{\text{\tiny DKW-adapt}} =1\wedge\Bigl{(}\alpha(\hat{m}_{0}^{\text{\tiny DKW}}/m)+\frac{( \hat{m}_{0}^{\text{\tiny DKW}})^{1/2}\sqrt{0.5\log 1/\delta}}{1\vee\hat{k}_{\alpha}} \Bigr{)};\] (24) \[\overline{\text{\rm FDP}}_{\alpha}^{\text{\tiny KR-adapt}} =1\wedge\Biggl{(}\frac{\log(1/\delta)}{\log(1+\log(1/\delta))} \Bigl{(}\alpha(\hat{m}_{0}^{\text{\tiny KR}}/m)+1/(1\vee\hat{k}_{\alpha}) \Bigr{)}\Biggr{)};\] (25) \[\overline{\text{\rm FDP}}_{\alpha}^{\text{\tiny Well-adapt}} =1\wedge\Biggl{(}\alpha(\hat{m}_{0}^{\text{\tiny Well}}/m)\;h^{-1 }\Biggl{(}\frac{2\log(\kappa/\delta)+4\log\Bigl{(}1+\log_{2}\Bigl{(}\frac{m}{ \alpha(1\vee\hat{k}_{\alpha})}\Bigr{)}\Bigr{)}}{\alpha(1\vee\hat{k}_{\alpha}) \hat{m}_{0}^{\text{\tiny Well}}/m}\Biggr{)}\Biggr{)}, \tag{26}\]
_where \(\kappa=\pi^{2}/6\), \(\hat{k}_{\alpha}\) denotes the number of rejections of BH procedure (7) at level \(\alpha\), and where the KR-adapt bound requires in addition \(\delta\leq 0.31\). Moreover, these bounds are also valid uniformly in \(\alpha\in(0,1)\) and thus also when using a post hoc choice \(\alpha=\hat{\alpha}\) of the level._
Proof.: For (26), we use (13) for \((U_{1},\ldots,U_{n})=(p_{i},i\in\mathcal{H}_{0})\), \(n=m_{0}\), \(t=\alpha(1\vee\hat{k}_{\alpha})/m\), and the fact that \(m_{0}\leq\hat{m}_{0}^{\text{\tiny{Well}}}\) on the considered event by the proof in Section B.2. The other bounds are proved similarly.
### Interpolated bounds
According to Remark 1.1, the coverage (1) is still valid after the interpolation operation given by (2). As a result, the above confidence envelopes can be improved as follows:
\[\widetilde{\text{\rm{FDP}}}_{k}^{\text{\tiny{Simes}}} =\min_{k^{\prime}\leq k}\{k-k^{\prime}+k^{\prime}\wedge(mp_{(k^{\prime})}/\delta)\}/k; \tag{27}\]
\[\widetilde{\text{\rm{FDP}}}_{k}^{\text{\tiny{DKW}}} =\min_{k^{\prime}\leq k}\{k-k^{\prime}+k^{\prime}\wedge(mp_{(k^{\prime})}+m^{1/2}\sqrt{0.5\log 1/\delta})\}/k; \tag{28}\]
\[\widetilde{\text{\rm{FDP}}}_{k}^{\text{\tiny{KR}}} =\min_{k^{\prime}\leq k}\biggl{\{}k-k^{\prime}+k^{\prime}\wedge\biggl{(}\frac{\log(1/\delta)}{\log(1+\log(1/\delta))}\bigl{(}mp_{(k^{\prime})}+1\bigr{)}\biggr{)}\biggr{\}}/k; \tag{29}\]
\[\widetilde{\text{\rm{FDP}}}_{k}^{\text{\tiny{Well}}} =\min_{k^{\prime}\leq k}\biggl{\{}k-k^{\prime}+k^{\prime}\wedge\biggl{(}mp_{(k^{\prime})}\,h^{-1}\biggl{(}\frac{2\log(\kappa/\delta)+4\log\bigl{(}1+\log_{2}(1/p_{(k^{\prime})})\bigr{)}}{mp_{(k^{\prime})}}\biggr{)}\biggr{)}\biggr{\}}/k, \tag{30}\]
respectively. When applied to the BH rejection set, this also provides new confidence bounds \(\widetilde{\text{\rm{FDP}}}_{\alpha}^{\text{\tiny{Simes}}}\), \(\widetilde{\text{\rm{FDP}}}_{\alpha}^{\text{\tiny{DKW}}}\), \(\widetilde{\text{\rm{FDP}}}_{\alpha}^{\text{\tiny{KR}}}\), \(\widetilde{\text{\rm{FDP}}}_{\alpha}^{\text{\tiny{Well}}}\), which can further be improved by replacing \(m\) by the corresponding estimator \(\hat{m}_{0}\).
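The interpolation operation underlying (27)–(30) can be implemented generically: given any envelope on the top-\(k\) path, the following sketch (Python/NumPy, our own function name) returns its interpolated version.

```python
import numpy as np

def interpolate_envelope(fdp_bar):
    """Interpolation (2) applied to an FDP envelope on the top-k path.

    fdp_bar: array of length m with the raw envelope at k = 1, ..., m.
    Returns FDP_tilde_k = min_{k' <= k} {k - k' + V_{k'}} / k, where
    V_{k'} = k' * min(1, fdp_bar[k'-1]) bounds the number of false positives among the top k'.
    """
    fdp_bar = np.minimum(1.0, np.asarray(fdp_bar, dtype=float))
    m = len(fdp_bar)
    k = np.arange(1, m + 1)
    v = k * fdp_bar                               # bound on #false positives among top k'
    running = np.minimum.accumulate(v - k)        # min_{k' <= k} (V_{k'} - k')
    return np.minimum(1.0, (running + k) / k)
```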
## 3 Results in the pre-ordered case
In this section, we build consistent envelopes in the case where the \(p\)-values are ordered a priori, which covers the famous 'knockoff' case.
### Pre-ordered setting
Let \(\pi:\{1,\ldots,m\}\to\{1,\ldots,m\}\) be some ordering of the \(p\)-values that is considered given and deterministic (possibly coming from independent data). The pre-ordered setting is formally the same as the one of Section 2.1, except that the \(p\)-value set is explored according to \(\pi(1),\pi(2),\ldots,\pi(m)\). The rationale behind this is that the false null hypotheses, indexed by \(\mathcal{H}_{1}=\{1,\ldots,m\}\backslash\mathcal{H}_{0}\), are implicitly expected to be more likely to have a small rank in the ordering \(\pi\) (although this is not needed for the control results below to hold).
Formally, the considered path is
\[R_{k}=\{\pi(i)\::\:1\leq i\leq k,\ p_{\pi(i)}\leq s\},\ \ k=1,\ldots,m, \tag{31}\]
for some fixed additional threshold \(s\in(0,1]\) (possibly coming from independent data), and can serve to make a selection. The aim is still to find envelopes \((\overline{\text{FDP}}_{k})_{k}\) satisfying (1) for this path while being consistent. To set up the consistency properly, we should consider an FDR controlling procedure that is suitable in this setting. For this, we consider the adaptive selective sequential step-up procedure of Lei and Fithian (LF) (Lei and Fithian, 2016). The latter is defined by \(R_{\hat{k}_{\alpha}}\) where
\[\hat{k}_{\alpha}=\max\Bigl{\{}k\in\{0,\ldots,m\}:\widehat{\text{FDP}}_{k}\leq \alpha\Bigr{\}},\ \widehat{\text{FDP}}_{k}=\frac{s}{1-\lambda}\frac{1+\sum_{i=1}^{k}\mathbf{1} \bigl{\{}p_{\pi(i)}>\lambda\bigr{\}}}{1\vee\sum_{i=1}^{k}\mathbf{1}\bigl{\{}p_ {\pi(i)}\leq s\bigr{\}}}, \tag{32}\]
where \(\lambda\in[0,1)\) is an additional parameter. The 'knockoff' setting of Barber and Candes (2015) can be seen as a particular case of this pre-ordered setting, where the \(p\)-values are independent and binary, the ordering is independent of the \(p\)-values and \(s=\lambda=1/2\). The LF procedure reduces in that case to the classical Barber and Candes (BC) procedure.
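For concreteness, here is a minimal sketch (Python/NumPy, our own function name) of the LF stopping rule (32), assuming the \(p\)-values are already given in the order \(\pi(1),\ldots,\pi(m)\); with \(s=\lambda=1/2\) it reduces to the BC procedure.

```python
import numpy as np

def lf_k_hat(p_ordered, alpha, s=0.5, lam=0.5):
    """Number of steps k_hat of the LF procedure (32).

    p_ordered: p-values already sorted according to the a priori ordering pi.
    Returns the largest k with FDP_hat_k <= alpha (0 if none).
    """
    above_lam = np.cumsum(p_ordered > lam)            # sum_{i<=k} 1{p_pi(i) > lambda}
    below_s = np.cumsum(p_ordered <= s)               # sum_{i<=k} 1{p_pi(i) <= s}
    fdp_hat = (s / (1 - lam)) * (1 + above_lam) / np.maximum(1, below_s)
    ok = np.nonzero(fdp_hat <= alpha)[0]
    return 0 if ok.size == 0 else ok[-1] + 1
```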
### New confidence envelopes
The first envelope is as follows.
**Theorem 3.1**.: _Consider the pre-ordered setting of Section 3.1 with \(s\in(0,1]\). For all \(\delta\in(0,1)\), \(\lambda\in[0,1)\), the following is a \((1-\delta)\)-confidence envelope for the ordered path (31) in the sense of (1):_
\[\overline{\text{FDP}}_{k}^{\text{Fred}}=1\wedge\frac{\frac{s}{1-\lambda}\sum _{i=1}^{k}\mathbf{1}\bigl{\{}p_{\pi(i)}>\lambda\bigr{\}}+\Delta(\nu k)}{\sum_ {i=1}^{k}\mathbf{1}\bigl{\{}p_{\pi(i)}\leq s\bigr{\}}},\,k\geq 1, \tag{33}\]
_where \(\Delta(u)=2\sqrt{\varepsilon_{u}}\sqrt{(u\lor 1)}+\frac{1}{2}\varepsilon_{u}\), \(\varepsilon_{u}=\log((1+\kappa)/\delta)+2\log(1+\log_{2}(u\lor 1))\), \(u>0\), \(\kappa=\pi^{2}/6\) and \(\nu=s(1+\min(s,\lambda)/(1-\lambda))\)._
The proof of Theorem 3.1 is a direct consequence of a more general result (Theorem C.1), itself being a consequence of a uniform version of Freedman's inequality (see Section C.2).
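As an illustration, the Freedman envelope (33) and its deviation term \(\Delta\) can be computed as follows (a Python/NumPy sketch with our own function names; the denominator is floored at \(1\), which is harmless since the envelope is capped at \(1\)).

```python
import numpy as np

KAPPA = np.pi ** 2 / 6

def delta_fn(u, delta):
    """Deviation term Delta(u) of Theorem 3.1."""
    u1 = np.maximum(u, 1.0)
    eps = np.log((1 + KAPPA) / delta) + 2 * np.log(1 + np.log2(u1))
    return 2 * np.sqrt(eps * u1) + 0.5 * eps

def freedman_envelope_preordered(p_ordered, delta, s=0.5, lam=0.5):
    """Freedman (1-delta)-confidence envelope (33) for the pre-ordered path (31)."""
    k = np.arange(1, len(p_ordered) + 1)
    above_lam = np.cumsum(p_ordered > lam)
    below_s = np.cumsum(p_ordered <= s)
    nu = s * (1 + min(s, lam) / (1 - lam))
    num = (s / (1 - lam)) * above_lam + delta_fn(nu * k, delta)
    return np.minimum(1.0, num / np.maximum(1, below_s))
```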
The second result comes from the KR envelope (Katsevich and Ramdas, 2020):
\[\overline{\text{FDP}}_{k}^{\text{KR}}=1\wedge\Biggl{(}\frac{\log(1/\delta)}{a \log(1+\frac{1-\delta^{B/a}}{B})}\frac{a+\frac{s}{1-\lambda}\sum_{i=1}^{k} \mathbf{1}\bigl{\{}p_{\pi(i)}>\lambda\bigr{\}}}{1\vee\sum_{i=1}^{k}\mathbf{1} \bigl{\{}p_{\pi(i)}\leq s\bigr{\}}}\Biggr{)}, \tag{34}\]
where \(a>0\) is some parameter, \(B=s/(1-\lambda)\) and it is assumed \(\lambda\geq s\). While the default choice in KR is \(a=1\), we can build up a new envelope by taking a union bound over \(a\in\mathbb{N}\backslash\{0\}\):
**Theorem 3.2**.: _Consider the pre-ordered setting of Section 3.1 with \(s\in(0,1]\). For all \(\delta\in(0,1)\) and \(\lambda\in[s,1]\), the following is a \((1-\delta)\)-confidence envelope for the ordered path (31) in the sense of (1):_
\[\overline{\text{FDP}}_{k}^{\text{KR-U}}=1\wedge\min_{a\in\mathbb{N}\backslash \{0\}}\Biggl{\{}\frac{\log(1/\delta_{a})}{a\log(1+\frac{1-\delta_{a}^{B/a}}{B} )}\frac{a+\frac{s}{1-\lambda}\sum_{i=1}^{k}\mathbf{1}\bigl{\{}p_{\pi(i)}> \lambda\bigr{\}}}{1\vee\sum_{i=1}^{k}\mathbf{1}\bigl{\{}p_{\pi(i)}\leq s \bigr{\}}}\Biggr{\}},\,k\geq 1, \tag{35}\]
_for \(\delta_{a}=\delta/(\kappa a^{2})\), \(a\geq 1\), for \(B=s/(1-\lambda)\), \(\kappa=\pi^{2}/6\)._
The envelope (35) is less explicit than (33) but behaves better in practice, as we will see in the numerical experiments of Section 5.
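Here is a sketch (Python/NumPy, our own function name) of the KR-U envelope (35), assuming \(\lambda\geq s\); the minimum over \(a\in\mathbb{N}\backslash\{0\}\) is restricted to a finite grid, which only makes the bound (weakly) larger and hence keeps its validity.

```python
import numpy as np

KAPPA = np.pi ** 2 / 6

def kru_envelope_preordered(p_ordered, delta, s=0.5, lam=0.5, a_max=None):
    """KR-U (1-delta)-confidence envelope (35), minimizing over a = 1, ..., a_max."""
    m = len(p_ordered)
    above_lam = np.cumsum(p_ordered > lam)
    below_s = np.maximum(1, np.cumsum(p_ordered <= s))
    B = s / (1 - lam)
    a_max = m if a_max is None else a_max
    best = np.full(m, np.inf)
    for a in range(1, a_max + 1):
        delta_a = delta / (KAPPA * a ** 2)
        const = np.log(1 / delta_a) / (a * np.log(1 + (1 - delta_a ** (B / a)) / B))
        best = np.minimum(best, const * (a + B * above_lam) / below_s)
    return np.minimum(1.0, best)
```

In practice a much smaller grid (e.g., up to the number of rejections) is enough, as the minimizing \(a\) is typically moderate.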
### Confidence bounds for LF and consistency
Recall that the LF procedure (32) is the reference FDR-controlling procedure in this setting. Applying the above envelopes for the LF procedure gives the following confidence bounds.
**Corollary 3.3**.: _In the pre-ordered setting of Section 3.1 with a selection threshold \(s\in(0,1]\), for any \(\alpha,\delta\in(0,1)\), \(\lambda\in[s,1]\) the following quantities are \((1-\delta)\)-confidence bounds for the FDP of the LF procedure with parameters \(s,\lambda\) at level \(\alpha\):_
\[\overline{\mathrm{FDP}}_{\alpha}^{\mbox{\tiny{KR}}} =1\wedge\left(\frac{\log(1/\delta)}{\log(1+\frac{1-\delta^{B}}{B})}(\alpha+1/(1\vee\hat{r}_{\alpha}))\right); \tag{36}\]
\[\overline{\mathrm{FDP}}_{\alpha}^{\mbox{\tiny{Fred}}} =1\wedge\left(\alpha+\Delta(\nu\hat{k}_{\alpha})/(1\vee\hat{r}_{\alpha})\right); \tag{37}\]
\[\overline{\mathrm{FDP}}_{\alpha}^{\mbox{\tiny{KR-U}}} =1\wedge\min_{1\leq a\leq 1\lor\hat{r}_{\alpha}}\Biggl{\{}\frac{\log(1/\delta_{a})}{a\log(1+\frac{1-\delta_{a}^{B/a}}{B})}(\alpha+a/(1\vee\hat{r}_{\alpha}))\Biggr{\}}, \tag{38}\]
_for \(\nu=s(1+s/(1-\lambda))\), \(B=s/(1-\lambda)\), \(\delta_{a}=\delta/(\kappa a^{2})\), \(a\geq 1\), \(\kappa=\pi^{2}/6\), \(\Delta(\cdot)\) defined in Theorem 3.1, and where \(\hat{k}_{\alpha}\) is as in (32) and \(\hat{r}_{\alpha}=\sum_{i=1}^{\hat{k}_{\alpha}}\mathbf{1}\bigl{\{}p_{\pi(i)}\leq s\bigr{\}}\) denotes the number of rejections of the LF procedure at level \(\alpha\). In addition, these bounds are also valid uniformly in \(\alpha\in(0,1)\) in the sense that_
\[\mathbf{P}(\forall\alpha\in(0,1),\mathrm{FDP}(R_{\hat{k}_{\alpha}})\leq\overline{\mathrm{FDP}}_{\alpha}^{\mbox{\tiny{Meth}}})\geq 1-\delta,\ \ \mbox{for Meth}\in\{\mbox{KR},\mbox{Fred},\mbox{KR-U}\},\]
_and thus also when using a post hoc choice \(\alpha=\hat{\alpha}\) of the level._
Proof.: This is direct by applying (34) (\(a=1\)), (33) and (35) to the rejection set \(R_{\hat{k}_{\alpha}}\).
Let us now study the consistency property (3). It is apparent that KR is never consistent: namely, for all \(m\geq 1\),
\[\overline{\mathrm{FDP}}_{\alpha}^{\mbox{\tiny{KR}}}\geq 1\wedge c\alpha,\]
for some constant \(c>1\). By contrast, \(\overline{\mathrm{FDP}}_{\alpha}^{\mbox{\tiny{Fred}}}\) is consistent if \(\Delta(\nu m)/\hat{r}_{\alpha}\) tends to \(0\) in probability, that is, \((m\log\log m)^{1/2}/\hat{r}_{\alpha}=o_{P}(1)\). For \(\overline{\mathrm{FDP}}_{\alpha}^{\mbox{\tiny{KR-U}}}\), we always have
\[\overline{\mathrm{FDP}}_{\alpha}^{\mbox{\tiny{KR-U}}}\leq\frac{\log(1/\delta_ {\hat{a}})}{\hat{a}\log(1+\frac{1-\delta_{\hat{a}}^{B/\hat{a}}}{B})}\Bigl{(} \alpha+1/(1\vee\hat{r}_{\alpha})^{1/2}\Bigr{)}\]
by considering \(\hat{a}=\lfloor(1\vee\hat{r}_{\alpha})^{1/2}\rfloor\). By Lemma D.3, this provides consistency (3) as soon as \(1/\hat{r}_{\alpha}=o_{P}(1)\). The following proposition gives an example where the latter condition holds in the varying coefficient two-groups (VCT) model of Lei and Fithian (2016), that we generalize to the possible sparse case in Section A.2.
**Proposition 3.4**.: _Consider the sequence of generalized VCT models \((\mathbf{P}^{(m)}_{\pi,\beta,F_{0},F_{1}},m\geq 1)\), as defined in Section A.2. Assume that the parameters \(\pi,\beta,F_{0},F_{1}\) satisfy the assumptions of Theorem A.4 given in Appendix A.2 (assuming in particular that \(\alpha_{0}>\underline{\alpha}\) where \(\underline{\alpha}\) is defined by (56)). Then the consistency (3) holds for the sequence \((\mathbf{P}^{(m)}_{\pi,\beta,F_{0},F_{1}},m\geq 1)\) and for any \(LF\) procedure using \(\lambda\geq s\) in either of the two following cases:_
* _for the KR-U envelope (_35_) and the corresponding bound (_38_);_
* _for the Freedman envelope (_33_) and the corresponding bound (_37_) if either_ \(\lambda=s\) _or_ \(\beta<1/2\)_._
Proof.: This is a direct consequence of Theorem A.4 because \(m^{1-\beta}/\hat{r}_{\alpha}=O_{P}(1)\) in that context and \(\hat{r}_{\alpha}\) is nondecreasing in \(\alpha\). To see why the Freedman envelope is consistent when \(\lambda=s\), we note that in this case \(\hat{k}_{\alpha}=\sum_{i=1}^{\hat{k}_{\alpha}}\mathbf{1}\big{\{}p_{\pi(i)} \leq s\big{\}}+\sum_{i=1}^{\hat{k}_{\alpha}}\mathbf{1}\big{\{}p_{\pi(i)}> \lambda\big{\}}\leq(1+\alpha s/(1-\lambda))(1\vee\hat{r}_{\alpha})\), hence the quantity \(\Delta(\nu\hat{k}_{\alpha})/(1\vee\hat{r}_{\alpha})\) is \(o_{P}(1)\) as \(1/\hat{r}_{\alpha}=o_{P}(1)\).
We would like to emphasize that the power analysis made in Appendix A.2 provides new insights with respect to Lei and Fithian (2016). First, it accommodates the sparse case, for which the probability of generating an alternative tends to zero as \(m\) tends to infinity. Second, it introduces a new criticality-type assumption (see (56)), which was overlooked in Lei and Fithian (2016) but is necessary to get a nonzero power in the limit (even in the dense case). Finally, it allows us to deal with binary \(p\)-values, which corresponds to the usual 'knockoff' situation.
_Remark 3.5_.: Similarly to Section 2.6 in the top-\(k\) setting, the bounds KR, Freedman and KR-U can be improved by performing the interpolation operation (2) in the pre-ordered setting.
## 4 Results in the online case
### Online setting
We consider an infinite stream of \(p\)-values \(p_{1},p_{2},\dots\) testing null hypotheses \(H_{1},H_{2},\dots\), respectively. In the online setting, these \(p\)-values come one at a time and a decision should be made at each time immediately and irrevocably, possibly on the basis of past decisions.
The decision at time \(k\) is to reject \(H_{k}\) if \(p_{k}\leq\alpha_{k}\) for some critical value \(\alpha_{k}\) only depending on the past decisions. An online procedure is thus defined by a sequence of critical values \(\mathcal{A}=(\alpha_{k},k\geq 1)\), that is predictable in the following sense
\[\alpha_{k+1}\in\mathcal{G}_{k}=\sigma(\mathbf{1}\{p_{i}\leq\alpha_{i}\},i\leq k ),\,\,k\geq 1.\]
A classical assumption is that each null \(p\)-value is super-uniform conditionally on past decisions, that is,
\[\mathbf{P}(p_{k}\leq x\,|\,\mathcal{G}_{k})\leq x,\,\,k\in\mathcal{H}_{0}, \tag{39}\]
where \(\mathcal{H}_{0}=\{k\geq 1\,|\,H_{k}=0\}\). Condition (39) is for instance satisfied if the \(p\)-values are all mutually independent and marginally super-uniform under the null.
For a _fixed_ procedure \(\mathcal{A}\), we consider the path
\[R_{k}=\{1\leq i\leq k\,:\,p_{i}\leq\alpha_{i}\},\,\,\,k\geq 1. \tag{40}\]
We will also denote
\[R(k)=\sum_{i=1}^{k}\mathbf{1}\{p_{i}\leq\alpha_{i}\},\,\,\,k\geq 1, \tag{41}\]
the number of rejections before time \(k\) of the considered procedure. A typical procedure controlling the online FDR is the LORD procedure
\[\alpha_{k}=W_{0}\gamma_{k}+(\alpha-W_{0})\gamma_{k-\tau_{1}}+\alpha\sum_{j\geq 2 }\gamma_{k-\tau_{j}}, \tag{42}\]
where \(W_{0}\in[0,\alpha]\), each \(\tau_{j}\) is the first time with \(j\) rejections, \((\gamma_{k})_{k}\) is a non-negative ('spending') sequence with \(\sum_{k\geq 0}\gamma_{k}\leq 1\) and \(\gamma_{k}=0\) for \(k<0\). The latter has been extensively studied in the literature (Foster and Stine, 2008; Aharoni and Rosset, 2014; Javanmard and Montanari, 2018), and further improved by Ramdas et al. (2017). Under independence of the \(p\)-values and super-uniformity of the \(p\)-values under the null, the LORD procedure controls the online FDR in the sense of
\[\sup_{k\geq 1}\mathbf{E}[\mathrm{FDP}(R_{k})]\leq\alpha,\]
see Theorem 2 (b) in Ramdas et al. (2017). Here, we consider the different (and somewhat more demanding) task of finding a bound on the realized online FDP, by deriving confidence envelopes (1). Note that this will be investigated for any online procedure and not only for LORD, see Section 4.2. Also, we will study the consistency of the envelope for any LORD-type procedure in Section 4.3.
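For illustration, here is a sketch of a LORD-type procedure in the spirit of (42) (Python/NumPy, our own function names); the indexing and normalization conventions of the spending sequence are simplified and may differ slightly from the exact definition, so this is not a reference implementation.

```python
import numpy as np

def lord_critical_values(p_stream, alpha, w0=None, gamma_exp=1.6):
    """Sequential computation of LORD-type critical values and decisions, cf. (42).

    Spending sequence gamma_t proportional to t^{-gamma_exp} (t >= 1), normalized to
    sum (approximately) to one; gamma_t is set to 0 for t <= 0.  Illustrative only.
    """
    w0 = alpha / 2 if w0 is None else w0
    n = len(p_stream)
    t = np.arange(1, n + 2, dtype=float)
    gamma = t ** (-gamma_exp)
    gamma /= gamma.sum()

    def g(i):                          # gamma_i, with gamma_i = 0 for i <= 0
        return gamma[i - 1] if i >= 1 else 0.0

    rej_times, alphas, decisions = [], [], []
    for k in range(1, n + 1):
        a_k = w0 * g(k)
        if rej_times:
            a_k += (alpha - w0) * g(k - rej_times[0])
            a_k += alpha * sum(g(k - tau) for tau in rej_times[1:])
        alphas.append(a_k)
        rejected = bool(p_stream[k - 1] <= a_k)
        decisions.append(rejected)
        if rejected:
            rej_times.append(k)
    return np.array(alphas), np.array(decisions)
```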
### New confidence envelopes
The first envelope is a consequence of the general result stated in Theorem C.1.
**Theorem 4.1**.: _In the online setting described in Section 4.1, consider any online procedure \(\mathcal{A}=(\alpha_{k},k\geq 1)\) and assume (39). Then for any \(\delta\in(0,1)\), the following is a \((1-\delta)\)-confidence envelope for the path (40) in the sense of (1):_
\[\overline{\mathrm{FDP}}_{\mathcal{A},k}^{\text{Fred}}=1\wedge\frac{\sum_{i=1 }^{k}\alpha_{i}+\Delta\Bigl{(}\sum_{i=1}^{k}\alpha_{i}\Bigr{)}}{1\lor R(k)},\, \,\,k\geq 1, \tag{43}\]
_where \(R(k)\) is given by (41), \(\Delta(u)=2\sqrt{\varepsilon_{u}}\sqrt{u\lor 1}+\frac{1}{2}\varepsilon_{u}\), \(\varepsilon_{u}=\log((1+\kappa)/\delta)+2\log(1+\log_{2}(u\lor 1))\), \(u>0\) and \(\kappa=\pi^{2}/6\)._
Proof.: We apply Theorem C.1 in the online setting for \(\lambda=0\) (and further upper-bounding each term \(\mathbf{1}\bigl{\{}p_{\pi(i)}>0\bigr{\}}\) by 1), \(\pi(k)=k\), because (64) is satisfied by (39).
Next, the envelope of Katsevich and Ramdas (2020) is as follows
\[\overline{\mathrm{FDP}}^{{}^{\mathrm{KR}}}_{\mathcal{A},k}=1\wedge\left(\frac{ \log(1/\delta)}{a\log(1+\log(1/\delta)/a)}\frac{\left(a+\sum_{i=1}^{k}\alpha_{i }\right)}{1\lor R(k)}\right), \tag{44}\]
for some parameter \(a>0\) to choose. While the default choice in Katsevich and Ramdas (2020) is \(a=1\), applying a union bound w.r.t. \(a\in\mathbb{N}\backslash\{0\}\) provides the following result.
**Theorem 4.2**.: _In the online setting described in Section 4.1, and for any online procedure \(\mathcal{A}=(\alpha_{k},k\geq 1)\), for any \(\delta\in(0,1)\), the following is a \((1-\delta)\)-confidence envelope for the path (40) in the sense of (1):_
\[\overline{\mathrm{FDP}}^{{}^{\mathrm{KR}\cdot U}}_{\mathcal{A},k}=1\wedge \min_{a\in\mathbb{N}\backslash\{0\}}\Bigg{\{}\frac{\log(1/\delta_{a})}{a\log (1+\log(1/\delta_{a})/a)}\frac{\left(a+\sum_{i=1}^{k}\alpha_{i}\right)}{1\lor R (k)}\Bigg{\}},\,\,\,k\geq 1, \tag{45}\]
_where \(R(k)\) is given by (41), \(\delta_{a}=\delta/(\kappa a^{2})\), \(a\geq 1\), for \(\kappa=\pi^{2}/6\)._
_Remark 4.3_.: Note that the guarantee (1) is not uniform in the procedure \(\mathcal{A}\) (by contrast with the envelopes in top-\(k\) and preordered cases which were uniform in \(k\) and thus also in the cut-off procedure).
### Confidence envelope for LORD-type procedures and consistency
We now turn to the special case of online procedures satisfying the following condition:
\[\sum_{i=1}^{k}\alpha_{i}\leq\alpha(1\lor R(k)),\,\,\,\,k\geq 1. \tag{46}\]
Classically, this condition is sufficient to control the online FDR (if the \(p\)-values are independent and under an additional monotonicity assumption), see Theorem 2 (b) in Ramdas et al. (2017). In particular, it is satisfied by LORD (42).
**Corollary 4.4**.: _In the online setting described in Section 4.1, consider any online procedure \(\mathcal{A}=(\alpha_{k},k\geq 1)\), satisfying (46) for some \(\alpha\in(0,1)\), and assume (39). Then for any \(\delta\in(0,1)\), the following quantities are \((1-\delta)\)-confidence bounds for the FDP of the procedure: for all \(k\geq 1\),_
\[\overline{\mathrm{FDP}}_{\alpha,k}^{\mathrm{KR}} =1\wedge\bigg{(}\frac{\log(1/\delta)}{\log(1+\log(1/\delta))}\big{(}\alpha+1/(1\lor R(k))\big{)}\bigg{)}; \tag{47}\]
\[\overline{\mathrm{FDP}}_{\alpha,k}^{\mathrm{Fred}} =1\wedge\big{(}\alpha+\Delta(\alpha(1\lor R(k)))/(1\lor R(k))\big{)}; \tag{48}\]
\[\overline{\mathrm{FDP}}_{\alpha,k}^{\mathrm{KR\text{-}U}} =1\wedge\min_{a\in\mathbb{N}\backslash\{0\}}\Bigg{\{}\frac{\log(1/\delta_{a})}{a\log(1+\log(1/\delta_{a})/a)}\big{(}\alpha+a/(1\lor R(k))\big{)}\Bigg{\}}, \tag{49}\]
_where \(R(k)\) is given by (41), and \(\Delta(\cdot)\), \(\delta_{a}=\delta/(\kappa a^{2})\), \(\kappa=\pi^{2}/6\), are as in Theorems 4.1 and 4.2._
Proof.: This is direct by applying (44) (\(a=1\)), (43) and (45), and by using the inequality (46) in the corresponding bound.
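As an illustration, the general online envelopes (43) and (45) can be computed along the stream from the critical values and the rejection indicators; in the following sketch (Python/NumPy, our own function name) the minimum over \(a\) in (45) is restricted to a finite grid, which only makes the bound larger and hence keeps its validity.

```python
import numpy as np

KAPPA = np.pi ** 2 / 6

def online_envelopes(alphas, rejections, delta, a_grid=range(1, 51)):
    """Freedman (43) and KR-U (45) (1-delta)-confidence envelopes along an online path.

    alphas: critical values alpha_i used by the online procedure;
    rejections: 0/1 rejection indicators 1{p_i <= alpha_i} for the same time steps.
    """
    alphas = np.asarray(alphas, dtype=float)
    s_alpha = np.cumsum(alphas)                    # sum_{i <= k} alpha_i
    r = np.maximum(1, np.cumsum(rejections))       # 1 v R(k)
    # Freedman envelope (43): (sum alpha_i + Delta(sum alpha_i)) / (1 v R(k))
    u1 = np.maximum(s_alpha, 1.0)
    eps = np.log((1 + KAPPA) / delta) + 2 * np.log(1 + np.log2(u1))
    fred = np.minimum(1.0, (s_alpha + 2 * np.sqrt(eps * u1) + 0.5 * eps) / r)
    # KR-U envelope (45): minimum over a finite grid of a (valid, possibly conservative)
    kru = np.full(len(alphas), np.inf)
    for a in a_grid:
        delta_a = delta / (KAPPA * a ** 2)
        const = np.log(1 / delta_a) / (a * np.log(1 + np.log(1 / delta_a) / a))
        kru = np.minimum(kru, const * (a + s_alpha) / r)
    return fred, np.minimum(1.0, kru)
```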
Let us now consider these bounds for the LORD procedure (42), and study the consistency property (5). Clearly, we have \(\overline{\mathrm{FDP}}_{\alpha,k}^{\mathrm{KR}}\geq 1\wedge(c\alpha)\) for all \(k\geq 1\), where \(c>1\) is a constant. Hence, the envelope KR is not consistent. By contrast, it is apparent that both the Freedman envelope and the uniform KR envelope are consistent provided that \(1/R(k)=o_{P}(1)\) as \(k\) tends to infinity (consider \(a=\lfloor\sqrt{1\lor R(k)}\rfloor\) and use Lemma D.3 for the KR-U envelope). This condition is met in classical online models, as the following result shows.
**Proposition 4.5**.: _Consider the online one-sided Gaussian mixture model \(\mathbf{P}_{\pi_{1},F_{1}}\) of Section A.3 and the LORD procedure with \(W_{0}\in(0,\alpha)\) and a spending sequence \(\gamma_{k}=\frac{1}{k(\log(k))^{\gamma}}\), \(k\geq 1\) for \(\gamma>1\). Then both the Freedman envelope (48) and the uniform KR envelope (49) are consistent in the sense of (5) for the model \(\mathbf{P}_{\pi_{1},F_{1}}\)._
Proof.: This is a direct consequence of Theorem A.8, which provides that \(k^{1/2}/R(k)=O_{\mathbf{P}_{\pi_{1},F_{1}}}(1)\) when \(k\) tends to infinity.
_Remark 4.6_.: Similarly to Section 2.6 in the top-\(k\) setting, the bounds KR, Freedman and KR-U can be improved by performing the interpolation operation (2) in the online setting.
## 5 Numerical experiments
In this section, we illustrate our findings by conducting numerical experiments in each of the considered settings: top-\(k\), pre-ordered and online. Throughout the experiments, the default value for \(\delta\) is \(0.25\) and the default number of replications to evaluate each FDP bound is \(1000\).
### Top-\(k\)
Here, we consider the top-\(k\) setting of Section 2.1, for alternative \(p\)-values distributed as \(F_{1}(x)=\overline{\Phi}(\overline{\Phi}^{-1}(x)-\mu)\) (one-sided Gaussian location model), and for different values of \(\mu\) and of \(\pi_{0}\). To investigate the consistency property, we take \(m\) varying in the range \(\{10^{i},2\leq i\leq 6\}\), and we consider the FDP bounds \(\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{Simes}}\) (15), \(\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{DKW}}\) (16), \(\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{KR}}\) (17), \(\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{Well}}\) (18) for \(\alpha\in\{0.05,0.1,0.15,0.2\}\). We also add for comparison the hybrid bound
\[\overline{\mathrm{FDP}}_{\alpha,\delta}^{\mathrm{Hybrid}}=\min\Bigl{(} \overline{\mathrm{FDP}}_{\alpha,\delta/2}^{\mathrm{KR}},\overline{\mathrm{ FDP}}_{\alpha,\delta/2}^{\mathrm{Well}}\Bigr{)},\]
which also provides the correct coverage while being close to the better of the Wellner and KR bounds.
Figure 1 displays boxplots of the different FDP bounds in the dense case for which \(\pi_{0}=1/2\), \(\mu=1.5\). When \(m\) gets large, we clearly see the inconsistency of the bounds Simes, KR and the consistency of the bounds Wellner, Hybrid, DKW, which corroborates the theoretical findings (Proposition 2.5). In sparser scenarios, Figure 2 shows that the consistency is less obvious for the Wellner and Hybrid bounds and gets violated for the DKW bound when \(m_{1}\propto m^{0.55}\), as predicted from Proposition 2.5 (regime \(\beta\geq 1/2\)). Overall, the new bounds are expected to be
better as the number of rejections gets larger, while the KR bounds remain better when the number of rejections is expected to be small. The hybrid bound hence might be a good compromise for practical use.
The adaptive versions of the bounds (Section 2.5) are displayed in Figure 3. By comparing the left and the right panels, we see that the uniform improvement can be significant, especially for the Wellner and DKW bounds. By contrast, the improvement for KR is more modest. This can be explained from Figure 4, which evaluates the quality of the different \(\pi_{0}\) estimators: DKW, which is close to an optimized Storey estimator, is the best, followed closely by the Wellner estimator.
Figure 1: Top-\(k\) dense case (\(\pi_{0}=0.5\), \(\mu=1.5\)).
_Remark 5.1_.: For clarity, the bounds are displayed without the interpolation improvement (2) (for top-\(k\) and pre-ordered). The figures are reproduced together with the interpolated bounds in Appendix E for completeness. In a nutshell, the interpolation operation improves the bounds significantly mainly when they are not very sharp (typically small \(m\) or very sparse scenarios). Hence, while it can be useful in practice, it does not seem particularly relevant for studying the consistency phenomenon.
### Pre-ordered
We consider data generated as in the pre-ordered model presented in Section 3.1 and, more specifically, as in the VCT model of Section A.2. The true/false statuses of the null hypotheses are generated independently, and the probability of generating an alternative decreases with the position \(1\leq k\leq m\): it is given by \(\pi(m^{\beta-1}k)\), where \(\pi:[0,\infty)\to[0,1)\) is some function (see below) and \(\beta\in[0,1)\) is the sparsity parameter. Once the process of true/false nulls is given, the \(p\)-values are generated according to either:
* LF setting: \(\pi(t)=\pi_{1}e^{-bt}\frac{b}{1-e^{-b}}\), \(t\geq 0\), so that \(\Pi(1)=\pi_{1}\). Here \(\pi_{1}\) is equal to \(0.4\) and \(b\), measuring the quality of the prior ordering, is equal to \(2\). In addition, the alternative \(p\)-values are one-sided Gaussian with \(\mu=1.5\). Note that this is the setting considered in the numerical experiments of Lei and Fithian (2016).
* Knockoff setting: \(\pi(t)=1/2+(0\lor 1/2(\frac{z-t}{z-1}))\), \(t\geq 0\), with \(z>1\) a parameter that determines how slowly the probability of observing signal deteriorates, taken equal to \(30\). Then, the binary \(p\)-values are as follows: under the null \(p_{i}=1/2\) or \(1\) with equal
probability. Under the alternative, \(p_{i}=1/2\) with probability \(0.9\) and \(p_{i}=1\) otherwise.
In both settings, the dense (resp. sparse) case refers to the sparsity parameter value \(\beta=0\) (resp. \(\beta=0.25\)).
We consider the bounds \(\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{KR}}\) (36), \(\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{Fred}}\) (37) and \(\overline{\mathrm{FDP}}_{\alpha}^{\mathrm{KR.U}}\) (38) for the LF procedure across different values of \((\lambda,s)\in\{(1/2,0.1\alpha),(1/2,1/2)\}\), \(m\in\{10^{i},2\leq i\leq 6\}\), and \(\alpha\in\{0.05,0.1,0.15,0.2\}\). The procedure LF with \((\lambda,s)=(1/2,1/2)\) is referred to as the Barber and Candes (BC) procedure.
Figure 5 displays the boxplots of these FDP bounds for the LF procedure with \((\lambda,s)=(1/2,0.1\alpha)\) in the LF setting with \(\beta=0\) (dense case). It is apparent that KR is not consistent, while the new bounds Freedman and KR-U are. Also, the bound KR-U is overall the best, losing almost nothing w.r.t. KR when the number of rejections is very small (say \(m=100\) and \(\alpha=0.05\) or \(0.1\)) and providing a very significant improvement otherwise. Similar conclusions hold for the BC procedure, see Figure 7. Next, to stick with a very common scenario, we also investigate the sparse situation where the fraction of signal in the data is small, see Figures 6 and 8. As expected, while the conclusion is qualitatively the same, the rejection number gets smaller, so that consistency is reached only for larger values of \(m\) (i.e., the convergence is 'slowed down').
### Online
We now consider the online case, by applying our method to the real data example coming from the International Mouse Phenotyping Consortium (IMPC) (Munoz-Fuentes et al., 2018), a consortium interested in the effect of genotype on phenotype. These data are collected in an online fashion for each gene of interest and are classically used in online detection works (see Ramdas et al. (2017) and references therein).
Figure 9 displays the FDP time-wise envelopes \(k\mapsto\overline{\mathrm{FDP}}_{\alpha,k}^{\mathrm{KR}}\) (47), \(k\mapsto\overline{\mathrm{FDP}}_{\alpha,k}^{\mathrm{Fred}}\) (48) and \(k\mapsto\overline{\mathrm{FDP}}_{\alpha,k}^{\mathrm{KR\text{-}U}}\) (49), for the LORD procedure (42) (\(W_{0}=\alpha/2\) with the spending sequence \(\gamma_{k}=k^{-1.6}\), \(k\geq 1\)). As we can see, the Freedman and KR-U envelopes both tend to the nominal level \(\alpha\), as opposed to the KR envelope, which is expected from the consistency theory. In addition, KR-U seems to outperform the Freedman envelope, and while KR is (slightly) better than KR-U in the initial segment of the process (\(k<300\)), KR-U rapidly becomes more accurate.
### Comparison to Li et al. (2022)
In this section, we compare the performances of the KR-U bound with respect to the recent bounds proposed in Li et al. (2022). For this, we reproduce the high dimensional Gaussian linear regression setting of Section 5.1 (a) therein, which generates binary \(p\)-values by applying the fixed-\(X\)'sdp' knockoffs and the signed maximum lambda knockoff statistic of Barber and Candes (2015). Doing so, the \(p\)-values follow the preordered setting of Section 3.1 and thus our bounds are non-asymptotically valid (note however that the \(p\)-values do not follow strictly speaking the VCT model of Section A.2). To be more specific, the considered Gaussian linear model \(Y\sim\mathcal{N}(X\beta,I_{n})\) is obtained by first generating \(X\) and \(\beta\) as follows: the correlated design matrix \(X\) of size \(n\times m\) is obtained by drawing \(n=1500\) i.i.d. samples from the multivariate \(m\)-dimensional distribution \(\mathcal{N}_{m}(0,\Sigma)\) where \(\Sigma_{i,j}=0.6^{|i-j|}\), \(1\leq i,j\leq m\); the signal vector \(\beta\in\mathbb{R}^{m}\) is obtained by first randomly sampling a subset of \(\{1,\ldots,m\}\) of size \(\lfloor\pi_{1}m\rfloor\) for the non-zero entries of \(\beta\) and then by setting all non-zero entries of \(\beta\) equal to \(a/\sqrt{n}\) for a given amplitude \(a>0\).
First, in the spirit of Figure 3 in Li et al. (2022), we display in Figure 10 the envelope \((\widetilde{\text{FDP}}_{k}^{\text{KR-U}},k\geq 1)\) given by the interpolation (2) of the envelope \((\overline{\text{FDP}}_{k}^{\text{KR-U}},k\geq 1)\) defined by (35) (with \(s=\lambda=1/2\)), and compare it to those obtained in Li et al. (2022) (namely, KJI A/B/C/D) for \(\pi_{1}\in\{0.1,0.5\}\), \(a\in\{15,25\}\). We also set here \(\delta=0.05\) to stick with the choice of Li et al. (2022) (note that this requires further calibrating the parameters of their method according to this value of \(\delta\)), and the number of replications is here taken equal to only \(10\) for computational reasons. Markedly, the KR-U envelope becomes much better than KR and is competitive w.r.t. KJI A/B/C/D, at least when \(k\) is moderately large. As expected, the most favorable case for KR-U is when the signal has a large amplitude and is dense.
Second, to stick with the consistency-oriented plots of the previous sections, we also display the corresponding FDP bounds for the BC procedure at level \(\alpha\in\{0.15,0.2\}\) in Figure 11. The conclusions are qualitatively similar.
Figure 8: Pre-ordered sparse (\(\beta=0.25\)) knockoff setting with BC procedure (i.e., LF procedure with \(s=\lambda=0.5\)).
Figure 10: Comparing the envelope \(\widetilde{\text{FDP}}_{k}^{\text{KR-U}}\), \(k\geq 1\) given by (2)-(35) (\(s=\lambda=0.5\)) to those of Li et al. (2022) in the Gaussian linear regression setting of Section 5.4 for \(m=1000\) (see text for more details).
Figure 11: Comparing the FDP bound \(\widetilde{\text{FDP}}_{\hat{k}_{\alpha}}^{\text{KR-U}}\) for \(\hat{k}_{\alpha}\) the BC procedure (32) (\(s=\lambda=0.5\)) to those of Li et al. (2022) with respect to \(\alpha\in\{0.15,0.2\}\) in the Gaussian linear regression setting of Section 5.4 for \(m=1000\) (see text for more details).
## 6 Conclusion
The main point of this paper is to provide another point of view on FDP confidence bounds: we introduced a notion of consistency, a desirable asymptotic property which should act as a guiding principle when building such bounds, by ensuring that the bound is sharp enough on particular FDR controlling rejection sets. With this criterion, some previous bounds, such as the original KR bounds, were shown to be inconsistent; while some other known FDP confidence bounds, in particular those based on the DKW inequality, are consistent under certain assumptions, we have introduced new ones shown to satisfy this condition under more general conditions (in particular high sparsity). The new bounds based on the classical Wellner/Freedman inequalities show interesting behavior; however, simple modifications of the KR bounds by 'stitching' (Hybrid/KR-U) have been shown to be the most efficient, both asymptotically and for moderate sample sizes.
Overall, this work shows that consistency is a simple and fruitful criterion, and we believe that using it will be beneficial in the future to make wise choices among the rapidly increasing literature on FDP bounds.
## Acknowledgements
GB acknowledges support from: Agence Nationale de la Recherche (ANR), ANR-19-CHIA-0021-01 'BiSCottE', and IDEX REC-2019-044; Deutsche Forschungsgemeinschaft (DFG) - SFB1294 - 318763901.
IM and ER have been supported by ANR-21-CE23-0035 (ASCAI) and by the GDR ISIS through the 'projets exploratoires' program (project TASTY). It is part of project DO 2463/1-1, funded by the Deutsche Forschungsgemeinschaft.
## References
* Abraham et al. (2021) Abraham, K., Castillo, I., and Roquain, E. (2021). Sharp multiple testing boundary for sparse sequences. _arXiv preprint arXiv:2109.13601_.
* Aharoni and Rosset (2014) Aharoni, E. and Rosset, S. (2014). Generalized \(\alpha\)-investing: definitions, optimality results and application to public databases. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, 76(4):771-794.
* Barber and Candes (2015) Barber, R. F. and Candes, E. J. (2015). Controlling the false discovery rate via knockoffs. _The Annals of Statistics_, 43(5):2055-2085.
* Benjamini and Hochberg (1995) Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. _Journal of the Royal Statistical Society. Series B_, 57(1):289-300.
* Blanchard et al. (2020) Blanchard, G., Neuvial, P., and Roquain, E. (2020). Post hoc confidence bounds on false positives using reference families. _The Annals of Statistics_, 48(3):1281-1303.
* Bogdan et al. (2011) Bogdan, M., Chakrabarti, A., Frommlet, F., and Ghosh, J. K. (2011). Asymptotic bayes-optimality under sparsity of some multiple testing procedures. _The Annals of Statistics_, 39(3):1551-1579.
* Candes et al. (2018) Candes, E., Fan, Y., Janson, L., and Lv, J. (2018). Panning for gold:'model-x' knockoffs for high dimensional controlled variable selection. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, 80(3):551-577.
* Cui et al. (2021) Cui, X., Dickhaus, T., Ding, Y., and Hsu, J. C. (2021). _Handbook of multiple comparisons_. CRC Press.
* Dumbgen and Wellner (2023) Dumbgen, L. and Wellner, J. A. (2023). A new approach to tests and confidence bands for distribution functions. _The Annals of Statistics_, 51(1):260-289.
* Durand et al. (2020) Durand, G., Blanchard, G., Neuvial, P., and Roquain, E. (2020). Post hoc false positive control for structured hypotheses. _Scandinavian journal of Statistics_, 47(4):1114-1148.
* Foster and Stine (2008) Foster, D. P. and Stine, R. A. (2008). Alpha-investing: a procedure for sequential control of expected false discoveries. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, 70(2):429-444.
* Freedman (1975) Freedman, D. A. (1975). On tail probabilities for martingales. _The Annals of Probability_, pages 100-118.
* Genovese and Wasserman (2004) Genovese, C. and Wasserman, L. (2004). A stochastic process approach to false discovery control. _The annals of statistics_, 32(3):1035-1061.
* Genovese and Wasserman (2006) Genovese, C. R. and Wasserman, L. (2006). Exceedance control of the false discovery proportion. _Journal of the American Statistical Association_, 101(476):1408-1417.
* Goeman et al. (2019) Goeman, J. J., Meijer, R. J., Krebs, T. J., and Solari, A. (2019). Simultaneous control of all false discovery proportions in large-scale multiple hypothesis testing. _Biometrika_, 106(4):841-856.
* Goeman and Solari (2011) Goeman, J. J. and Solari, A. (2011). Multiple testing for exploratory research. _Statistical Science_, 26(4):584-597.
* Hemerik et al. (2019) Hemerik, J., Solari, A., and Goeman, J. J. (2019). Permutation-based simultaneous confidence bounds for the false discovery proportion. _Biometrika_, 106(3):635-649.
* Jamieson et al. (2014) Jamieson, K., Malloy, M., Nowak, R., and Bubeck, S. (2014). lil'UCB: An optimal exploration algorithm for multi-armed bandits. In _Conference on Learning Theory_, pages 423-439. PMLR.
* Javanmard and Montanari (2018) Javanmard, A. and Montanari, A. (2018). Online rules for control of false discovery rate and false discovery exceedance. _The Annals of statistics_, 46(2):526-554.
* Katsevich and Ramdas (2020) Katsevich, E. and Ramdas, A. (2020). Simultaneous high-probability bounds on the false discovery proportion in structured, regression and online settings. _The Annals of Statistics_, 48(6):3465-3487.
* Lei and Fithian (2016) Lei, L. and Fithian, W. (2016). Power of ordered hypothesis testing. In _International conference on machine learning_, pages 2924-2932. PMLR.
* Li and Barber (2017) Li, A. and Barber, R. F. (2017). Accumulation tests for fdr control in ordered hypothesis
testing. _Journal of the American Statistical Association_, 112(518):837-849.
* Li et al. (2022) Li, J., Maathuis, M. H., and Goeman, J. J. (2022). Simultaneous false discovery proportion bounds via knockoffs and closed testing. _arXiv preprint arXiv:2212.12822_.
* Massart (1990) Massart, P. (1990). The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. _Ann. Probab._, 18(3):1269-1283.
* Meinshausen (2006) Meinshausen, N. (2006). False discovery control for multiple tests of association under general dependence. _Scandinavian Journal of Statistics_, 33(2):227-237.
* Meinshausen and Buhlmann (2005) Meinshausen, N. and Buhlmann, P. (2005). Lower bounds for the number of false null hypotheses for multiple testing of associations under general dependence structures. _Biometrika_, 92(4):893-907.
* Munoz-Fuentes et al. (2018) Munoz-Fuentes, V., Cacheiro, P., Meehan, T. F., Aguilar-Pimentel, J. A., Brown, S. D. M., Flenniken, A. M., Flicek, P., Galli, A., Mashhadi, H. H., Hrabe De Angelis, M., Kim, J. K., Lloyd, K. C. K., McKerlie, C., Morgan, H., Murray, S. A., Nutter, L. M. J., Reilly, P. T., Seavitt, J. R., Seong, J. K., Simon, M., Wardle-Jones, H., Mallon, A.-M., Smedley, D., and Parkinson, H. E. (2018). The International Mouse Phenotyping Consortium (IMPC): a functional catalogue of the mammalian genome that informs conservation. _Conservation Genetics_, 3(4):995-1005.
* Neuvial (2008) Neuvial, P. (2008). Asymptotic properties of false discovery rate controlling procedures under independence. _Electron. J. Statist._, 2:1065-1110.
* Neuvial (2013) Neuvial, P. (2013). Asymptotic results on adaptive false discovery rate controlling procedures based on kernel estimators. _Journal of Machine Learning Research_, 14:1423-1459.
* Neuvial and Roquain (2012) Neuvial, P. and Roquain, E. (2012). On false discovery rate thresholding for classification under sparsity. _The Annals of Statistics_, 40(5):2572-2600.
* Perrot-Dockes et al. (2021) Perrot-Dockes, M., Blanchard, G., Neuvial, P., and Roquain, E. (2021). Post hoc false discovery proportion inference under a hidden markov model. _arXiv preprint arXiv:2105.00288_.
* Ramdas et al. (2017) Ramdas, A., Yang, F., Wainwright, M. J., and Jordan, M. I. (2017). Online control of the false discovery rate with decaying memory. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, _Advances in Neural Information Processing Systems_, volume 30. Curran Associates, Inc.
* Robbins (1954) Robbins, H. (1954). A one-sided confidence interval for an unknown distribution function. _Annals of Mathematical Statistics_, 25(2):409-409.
* Robertson et al. (2022) Robertson, D. S., Wason, J., and Ramdas, A. (2022). Online multiple hypothesis testing for reproducible research. _arXiv preprint arXiv:2208.11418_.
* Shorack and Wellner (2009) Shorack, G. R. and Wellner, J. A. (2009). _Empirical processes with applications to statistics_. SIAM.
* Storey (2002) Storey, J. D. (2002). A direct approach to false discovery rates. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, 64(3):479-498.
* Vesely et al. (2021) Vesely, A., Finos, L., and Goeman, J. J. (2021). Permutation-based true discovery guarantee by sum tests. _arXiv preprint arXiv:2102.11759_.
## Appendix A Power results
### Top-\(k\) setting
**Definition A.1**.: The sparse one-sided Gaussian location model of parameter \(m,b,c,\beta,\) denoted as \(\mathbf{P}_{b,c,\beta}^{(m)},\) is defined as follows: \(p_{i}=\overline{\Phi}(X_{i}),\)\(1\leq i\leq m,\) the \(X_{i}\)'s are independent, with \(X_{i}\sim\mathcal{N}(0,1)\) for \(i\in\mathcal{H}_{0}\) and \(X_{i}\sim\mathcal{N}(\mu_{m},1)\) otherwise, for \(\mu_{m}=\sqrt{2\beta\log m}+b,\)\(b>0,\) and \(m_{1}=|\mathcal{H}_{1}|=cm^{1-\beta},\)\(c\in(0,1),\)\(\beta\in[0,1).\)
Note that \(\beta=0\) is the dense case, for which the alternative mean \(\mu_{m}=b>0\) is a fixed quantity, whereas \(\beta>0\) corresponds to the sparse case, for which \(\mu_{m}=\sqrt{2\beta\log m}+b\) tends to infinity. In both cases, the magnitude of the alternative mean is chosen to be on the 'verge of detectability', where the BH procedure has some non-zero power, see Bogdan et al. (2011); Neuvial and Roquain (2012); Abraham et al. (2021) for instance.
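For illustration purposes, the following minimal sketch simulates one draw from \(\mathbf{P}_{b,c,\beta}^{(m)}\), applies the BH procedure, and compares \(\alpha\hat{k}_{\alpha}/m\) with the interval \([t_{m}^{*},t_{m}^{\sharp}]\) of Theorem A.2 below; the numerical values of \(m,b,c,\beta,\alpha\) are illustrative choices only, not values used elsewhere in the paper.

```python
# Minimal simulation sketch of the sparse one-sided Gaussian location model
# (Definition A.1) and of the quantities appearing in Theorem A.2.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)
m, b, c, beta, alpha = 20_000, 1.0, 0.5, 0.3, 0.2
m1 = int(c * m ** (1 - beta))                      # number of alternatives
m0 = m - m1
mu_m = np.sqrt(2 * beta * np.log(m)) + b

# X_i ~ N(0,1) under the null, N(mu_m,1) under the alternative; p_i = Phi_bar(X_i).
x = np.concatenate([rng.normal(0.0, 1.0, m0), rng.normal(mu_m, 1.0, m1)])
p = norm.sf(x)

# BH procedure: k_hat = max{k : p_(k) <= alpha * k / m}.
p_sorted = np.sort(p)
below = np.nonzero(p_sorted <= alpha * np.arange(1, m + 1) / m)[0]
k_hat = below[-1] + 1 if below.size else 0

# t_m^* and t_m^# solve G_m(t) = 2t/alpha and G_m(t) = 0.5t/alpha respectively.
G = lambda t: (m0 / m) * t + (m1 / m) * norm.sf(norm.isf(t) - mu_m)
t_star = brentq(lambda t: G(t) - 2 * t / alpha, 1e-12, 1 - 1e-12)
t_sharp = brentq(lambda t: G(t) - 0.5 * t / alpha, 1e-12, 1 - 1e-12)

print(f"alpha*k_hat/m = {alpha * k_hat / m:.5f}, interval = [{t_star:.5f}, {t_sharp:.5f}]")
```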
**Theorem A.2**.: _Let \(\alpha\in(0,1)\). In the above one-sided Gaussian location model \(\mathbf{P}_{b,c,\beta}^{(m)}\), the number of rejections \(\hat{k}_{\alpha}\) of the BH procedure is such that, as \(m\) grows to infinity,_
\[\mathbf{P}_{b,c,\beta}^{(m)}(t_{m}^{*}\leq\alpha\hat{k}_{\alpha}/m\leq t_{m}^{ \sharp})\geq 1-2e^{-dm_{1}},\text{ for }m_{1}/m\lesssim t_{m}^{*}\leq t_{m}^{\sharp}\lesssim m_{1}/m, \tag{50}\]
_for some constant \(d>0\) (depending on \(\alpha\), \(\beta\), \(b\)), where \(t_{m}^{*}\in(0,1)\) is the unique solution of \(G_{m}(t)=2t/\alpha\), \(t_{m}^{\sharp}\in(0,1)\) is the unique solution of \(G_{m}(t)=0.5t/\alpha\), and where \(G_{m}(t)=(m_{0}/m)t+(m_{1}/m)\overline{\Phi}(\overline{\Phi}^{-1}(t)-\mu_{m})\), with \(\overline{\Phi}(z)=\mathbf{P}(Z\geq z)\), \(z\in\mathbb{R}\)._
Proof.: First let \(F_{m}(t)=\overline{\Phi}(\overline{\Phi}^{-1}(t)-\mu_{m}),\)\(\Psi_{m}(t)=F_{m}(t)/t\) and observe that \(\Psi_{m}\) is continuous decreasing on \((0,1]\) with \(\lim_{0}\Psi_{m}=+\infty.\) This implies that \(t_{m}^{*},t_{m}^{\sharp}\in(0,1)\) as described in the statement both exist, with
\[t_{m}^{*}=\Psi_{m}^{-1}(\alpha/2),\text{ }t_{m}^{\sharp}=\Psi_{m}^{-1}(\tau_{ m}(2\alpha)),\text{ }\tau_{m}(\alpha)=\frac{m}{m_{1}}\bigg{(}\frac{1}{\alpha}-\frac{m_{0}}{m} \bigg{)}.\]
We first establish
\[t_{m}^{*}\gtrsim m_{1}/m \tag{51}\] \[t_{m}^{\sharp}\lesssim m_{1}/m. \tag{52}\]
If \(\beta=0\), then \(m_{0}/m=1-c,\)\(m_{1}/m=c,\)\(\mu_{m}=b,\)\(\tau_{m}>0,\)\(F_{m}(t)=\overline{\Phi}(\overline{\Phi}^{-1}(t)-b),\)\(\Psi_{m}(t)=\overline{\Phi}(\overline{\Phi}^{-1}(t)-b)/t,\)\(\tau_{m}(\alpha)\) all do not depend on \(m.\) Hence, \(t_{m}^{*}\) and \(t_{m}^{\sharp}\) are both constant, which establishes (51) and (52). Let us now turn to the sparse case, for which \(\beta\in(0,1).\) The inequality (52) follows from the upper bound
\[0.5t_{m}^{\sharp}/\alpha=G_{m}(t_{m}^{\sharp})\leq t_{m}^{\sharp}+m_{1}/m.\]
For (51), the analysis is slightly more involved. We first prove that for \(m\) large enough
\[\overline{\Phi}^{-1}(t_{m}^{*})\leq\mu_{m}-b. \tag{53}\]
This will establish (51), since it implies \(F_{m}(t_{m}^{*})\geq F_{m}(\overline{\Phi}(\mu_{m}-b))=\overline{\Phi}(-b)>0\) and also \(t_{m}^{*}=(\tau_{m}(\alpha/2))^{-1}F_{m}(t_{m}^{*})\gtrsim m_{1}/m\). On the one hand,
\[\Psi_{m}(\overline{\Phi}(\mu_{m}-b))=\frac{\overline{\Phi}(-b)}{\overline{\Phi}(\mu_{m}-b)}\geq\overline{\Phi}(-b)\frac{\mu_{m}-b}{\phi(\mu_{m}-b)}=\overline{\Phi}(-b)m^{\beta}\sqrt{2\beta\log m}\]
because \(\mu_{m}-b=\sqrt{2\beta\log m}\) and \(\phi(\mu_{m}-b)=m^{-\beta}\), and by using \(\overline{\Phi}(x)\leq\phi(x)/x\) for all \(x>0\). On the other hand,
\[\Psi_{m}(t_{m}^{*})=\tau_{m}(\alpha/2)\leq\frac{2}{\alpha}m^{\beta}.\]
Hence, for \(m\) large enough, we have \(\Psi_{m}(\overline{\Phi}(\mu_{m}-b))\geq\Psi_{m}(t_{m}^{*})=\Psi_{m}( \overline{\Phi}(\overline{\Phi}^{-1}(t_{m}^{*})))\), which in turn implies (53).
We now turn to prove the result (50), following a classical concentration argument. Let
\[\hat{G}_{m}(t)=m^{-1}\sum_{i=1}^{m}\mathbf{1}\{p_{i}\leq t\},\ \ t\in[0,1],\]
so that \(G_{m}(t)=\mathbf{E}\hat{G}_{m}(t)\) for all \(t\in[0,1]\). Hence, for all \(t\in(0,1)\),
\[\mathbf{P}(\alpha\hat{k}_{\alpha}/m<t) \leq\mathbf{P}\Big{(}\hat{G}_{m}(t)\leq t/\alpha\Big{)}\] \[=\mathbf{P}\Big{(}\hat{G}_{m}(t)-G_{m}(t)\leq t/\alpha-G_{m}(t) \Big{)},\]
because \(\alpha\hat{k}_{\alpha}/m=\max\{t\in(0,1)\ :\ \hat{G}_{m}(t)\geq t/\alpha\}\) by definition of \(\hat{k}_{\alpha}\). Applying this with \(t=t_{m}^{*}\), this gives
\[\mathbf{P}(\alpha\hat{k}_{\alpha}/m<t_{m}^{*}) =\mathbf{P}\Big{(}\hat{G}_{m}(t_{m}^{*})-G_{m}(t_{m}^{*})\leq-G_{ m}(t_{m}^{*})\Big{)}\] \[\leq\exp(-cmG_{m}(t_{m}^{*}))\leq\exp(-Cm_{1}F_{m}(t_{m}^{*})),\]
for some constant \(C>0\), by applying Bernstein's inequality. Since \(F_{m}(t_{m}^{*})\geq\overline{\Phi}(-b)>0\), this gives \(\mathbf{P}(\alpha\hat{k}_{\alpha}/m<t_{m}^{*})\leq e^{-dm_{1}}\) for \(m\) large enough and some constant \(d>0\).
Next, for all \(t\in[t_{m}^{\sharp},1)\), still applying Bernstein's inequality,
\[\mathbf{P}(\alpha\hat{k}_{\alpha}/m>t)\] \[\leq\sum_{k=1}^{m}\mathbf{1}\{\alpha k/m>t\}\exp\!\left(-m\frac{( k/m-G_{m}(\alpha k/m))^{2}}{G_{m}(\alpha k/m)+(1/3)(k/m-G_{m}(\alpha k/m))} \right)\leq m\exp\!\left(-Cmt_{m}^{\sharp}\right)\!,\]
because for all \(\alpha k/m\geq t_{m}^{\sharp}\), \(k/m-G_{m}(\alpha k/m)\geq G_{m}(\alpha k/m)\geq G_{m}(t_{m}^{\sharp})=0.5t_{m} ^{\sharp}/\alpha\) (given the monotonicity of \(t\mapsto G_{m}(t)/t\)). Applying this for \(t=t_{m}^{\sharp}\in(0,1)\), we obtain
\[\mathbf{P}(\alpha\hat{k}_{\alpha}/m>t_{m}^{\sharp})\leq e^{-dm_{1}},\]
because \(t_{m}^{\sharp}\geq t_{m}^{*}\gtrsim m_{1}/m\). This proves the result.
### Pre-ordered setting
We introduce below a model generalizing the one of Lei and Fithian (2016) to the possibly sparse case. Here, without loss of generality we assume that the ordering \(\pi\) is identity, that is, \(\pi(i)=i\) for all \(i\in\{1,\ldots,m\}\). Below, with some abuse, the notation \(\pi\) will be re-used to stick with the notation of Lei and Fithian (2016).
**Definition A.3**.: The sparse VCT model of parameters \(m,\pi,\beta,F_{0},F_{1}\), denoted as \(\mathbf{P}_{\pi,\beta,F_{0},F_{1}}^{(m)}\), is the \(p\)-value mixture model where \((p_{k},H_{k})\in[0,1]\times\{0,1\}\), \(1\leq k\leq m\), are independent and generated as follows:
* the \(H_{k}\), \(1\leq k\leq m\), are independent and \(\mathbf{P}(H_{k}=1)=\pi_{m}(k/m)\), \(1\leq k\leq m\), with \(\pi_{m}(x)=\pi(m^{\beta}x)\), \(x\geq 0\), where \(\pi:[0,\infty)\to[0,1)\) is some measurable function (instantaneous signal probability function) with \(\pi(0)>0\) and \(\pi(x)=\pi(1)\) for \(x\geq 1\) and \(\beta\in[0,1)\) is a sparsity parameter.
* conditionally on \(H_{1},\ldots,H_{m}\), the \(p\)-values \(p_{k}\), \(1\leq k\leq m\), are independent, with a marginal distribution super-uniform under the null: \(p_{k}\,|\,H_{k}=0\sim F_{0}\), \(1\leq k\leq m\), where \(F_{0}\) is a c.d.f. with \(F_{0}(t)\leq t\) for all \(t\in[0,1]\); and \(p_{k}\,|\,H_{k}=1\sim F_{1}\), \(1\leq k\leq m\), where \(F_{1}\) is some alternative c.d.f.
We denote \(\Pi(t):=t^{-1}\int_{0}^{t}\pi(s)ds\), with \(\Pi(0)=\pi(0)\) and
\[\Pi_{m}(t):=t^{-1}\int_{0}^{t}\pi_{m}(s)ds=t^{-1}\int_{0}^{t}\pi(m^{\beta}s)ds =m^{-\beta}t^{-1}\int_{0}^{m^{\beta}t}\pi(s)ds=\Pi(m^{\beta}t)\]
the expected fraction of signal before time \(mt\). We also let \(\pi_{1}:=\Pi_{m}(1)=\int_{0}^{1}\pi_{m}(s)ds=m^{-\beta}\Pi(1)\) denote the overall expected fraction of signal. We consider the asymptotic regime where \(m\) tends to infinity and \(F_{0},F_{1}\) are fixed.
When \(\beta=0\), \(\pi_{m}\), \(\Pi_{m}\) are fixed and we recover the dense VCT model introduced in Lei and Fithian (2016) (also noting that we are slightly more general because \(F_{0}\) is possibly non-uniform and \(F_{1}\) not concave). Interestingly, the above formulation can also handle the sparse case, for which \(\beta\in(0,1)\) and the probability to generate a signal is shrunk to \(0\) by a factor \(m^{\beta}\). For instance, if \(\pi(1)=0\), the model only generates null \(p\)-values \(p_{k+1},\ldots,p_{m}\) for \(k\geq m^{1-\beta}\).
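For concreteness, here is a minimal sketch generating one draw from the sparse VCT model; the particular choices of \(\pi\), \(F_{1}\) and of the numerical parameters are illustrative assumptions only.

```python
# Minimal simulation sketch of the sparse VCT model of Definition A.3.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
m, beta, mu = 10_000, 0.3, 2.0
pi = lambda x: 0.9 * np.exp(-3.0 * np.minimum(x, 1.0))   # pi(0) = 0.9 > 0, pi(x) = pi(1) for x >= 1
pi_m = lambda x: pi(m ** beta * x)                        # signal probability shrunk by m^beta

k = np.arange(1, m + 1)
H = rng.random(m) < pi_m(k / m)                           # H_k ~ Bernoulli(pi_m(k/m)), independent
p = np.where(H,
             norm.sf(rng.normal(mu, 1.0, m)),             # alternative: one-sided Gaussian F1
             rng.random(m))                                # null: F0 uniform
print("empirical fraction of signal:", H.mean())
```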
We now analyze the asymptotic behavior of the number of rejections of the LF procedure. By following the same heuristic as in Lei and Fithian (2016) (which follows by a concentration argument), we have from (32) that for \(k=\lfloor mt\rfloor\),
\[\widehat{\mathrm{FDP}}_{k} =\frac{s}{1-\lambda}\frac{1+\sum_{i=1}^{k}\mathbf{1}\{p_{i}> \lambda\}}{1\vee\sum_{i=1}^{k}\mathbf{1}\{p_{i}\leq s\}}\] \[\approx\frac{s}{1-\lambda}\frac{\Bigl{(}\sum_{i=1}^{k}(1-\pi_{m}( i/m))\Bigr{)}(1-F_{0}(\lambda))+\Bigl{(}\sum_{i=1}^{k}\pi_{m}(i/m)\Bigr{)}(1-F_{1}( \lambda))}{\Bigl{(}\sum_{i=1}^{k}(1-\pi_{m}(i/m))\Bigr{)}F_{0}(s)+\Bigl{(}\sum_ {i=1}^{k}\pi_{m}(i/m)\Bigr{)}F_{1}(s)}\] \[\approx\frac{1+\Pi_{m}(t)\Bigl{(}\frac{1-F_{1}(\lambda)}{1- \lambda}-1\Bigr{)}}{1+\Pi_{m}(t)\Bigl{(}\frac{F_{1}(s)}{s}-1\Bigr{)}}= \mathrm{FDP}^{\infty}(m^{\beta}t),\]
by assuming \(F_{0}(s)=s\), \(F_{0}(\lambda)=\lambda\), \(F_{1}(s)>s\), \(F_{1}(\lambda)>\lambda\) and by letting
\[\mathrm{FDP}^{\infty}(t)=\frac{1+\Pi(t)\Big{(}\frac{1-F_{1}(\lambda)}{1-\lambda} -1\Big{)}}{1+\Pi(t)\Big{(}\frac{F_{1}(s)}{s}-1\Big{)}},\ \ t\geq 0. \tag{54}\]
By (32), the quantity \(\hat{k}_{\alpha}/m^{1-\beta}\) should be asymptotically close to
\[t_{\alpha}^{*}=\max\{t\in[0,+\infty)\::\:\mathrm{FDP}^{\infty}(t)\leq\alpha\}, \tag{55}\]
with the convention \(t_{\alpha}^{*}=+\infty\) if the set is not upper bounded. We should however ensure that the latter set is not empty. For this, we let
\[\underline{\alpha}=\frac{1+\pi(0)\Big{(}\frac{1-F_{1}(\lambda)}{1-\lambda}- 1\Big{)}}{1+\pi(0)\Big{(}\frac{F_{1}(s)}{s}-1\Big{)}}. \tag{56}\]
Hence, \(\hat{r}_{\alpha}=\sum_{i=1}^{\hat{k}_{\alpha}}\mathbf{1}\{p_{i}\leq s\}\), the number of rejections of the LF procedure, should be close to \(\Big{(}\sum_{i=1}^{\hat{k}_{\alpha}}(1-\pi_{m}(i/m))\Big{)}F_{0}(s)+\Big{(} \sum_{i=1}^{\hat{k}_{\alpha}}\pi_{m}(i/m)\Big{)}F_{1}(s)\gtrsim\hat{k}_{\alpha }s\approx m^{1-\beta}t_{\alpha}^{*}s\). This heuristic is formalized in Theorem A.4 below.
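Before stating the formal result, here is a minimal sketch of the rule defined by the display above: it computes \(\widehat{\mathrm{FDP}}_{k}\) along the given ordering, takes the largest \(k\) with \(\widehat{\mathrm{FDP}}_{k}\leq\alpha\), and rejects the \(p\)-values below \(s\) among the first \(k\); the values of \(s\), \(\lambda\) and \(\alpha\) are illustrative.

```python
# Minimal sketch of the LF-type procedure described around (32).
import numpy as np

def lf_rejections(p, alpha=0.2, s=0.5, lam=0.5):
    """p: p-values already sorted according to the (pre-)ordering."""
    p = np.asarray(p)
    n_above = np.cumsum(p > lam)                      # running count of p_i > lambda
    n_below = np.cumsum(p <= s)                       # running count of p_i <= s
    fdp_hat = (s / (1 - lam)) * (1 + n_above) / np.maximum(n_below, 1)
    ok = np.nonzero(fdp_hat <= alpha)[0]
    k_hat = ok[-1] + 1 if ok.size else 0              # largest k with FDP-hat_k <= alpha
    return np.nonzero(p[:k_hat] <= s)[0]              # indices of rejected hypotheses
```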
**Theorem A.4**.: _Consider a sparse VCT model \(\mathbf{P}_{\pi,\beta,F_{0},F_{1}}^{(m)}\) with parameters \(\beta,\pi,F_{0},F_{1}\) (see Definition A.3) and the LF procedure with parameter \(s,\lambda\) (see (32)), with the assumptions:_
1. \(\Pi:t\in[0,\infty)\to\mathbb{R}_{+}\) _is continuous decreasing and_ \(L\)_-Lipschitz;_
2. \(F_{0}(s)=s\)_,_ \(F_{0}(\lambda)=\lambda\)_,_ \(F_{1}(s)>s\)_,_ \(F_{1}(\lambda)>\lambda\)_;_
3. \(\alpha>\underline{\alpha}\) _where_ \(\underline{\alpha}\) _is defined by (_56_)._
_Let \(\alpha^{\prime}=(\underline{\alpha}+\alpha)/2\in(\underline{\alpha},\alpha)\), \(t_{\alpha^{\prime}}^{*}\in(0,+\infty]\) given by (55), \(t_{m}^{*}=t_{\alpha^{\prime}}^{*}\wedge m^{\beta}\) and let \(a\geq 1\) be an integer with \(a\leq m^{1-\beta}t_{m}^{*}\) such that \(r=\frac{4}{a^{1/4}}\Big{(}\frac{1}{s}+\frac{1}{1-\lambda}\Big{)}\) is small enough to ensure \(r\leq(\alpha-\underline{\alpha})/4\). Then the number of rejections \(\hat{r}_{\alpha}=\sum_{i=1}^{\hat{k}_{\alpha}}\mathbf{1}\{p_{i}\leq s\}\) of the LF procedure (32) is such that_
\[\mathbf{P}_{\pi,\beta,F_{0},F_{1}}^{(m)}(\hat{r}_{\alpha}<r_{m}^{*})\leq 2(2+a^{ 1/2})e^{-2a^{1/2}},\ \ \ r_{m}^{*}=\lfloor m^{1-\beta}t_{m}^{*}\rfloor s/2. \tag{57}\]
_In particular, choosing \(a=1+\lfloor(\log m)^{2}\rfloor\), we have as \(m\) grows to infinity, \(m^{1-\beta}/\hat{r}_{\alpha}=O_{P}(1)\)._
Condition (ii) is more general than in Lei and Fithian (2016) and makes it possible to handle binary \(p\)-values, as in the 'knockoffs' situation (for which \(F_{0}\) and \(F_{1}\) are not continuous). Condition (iii) was overlooked in Lei and Fithian (2016), but it is needed to ensure the existence of \(t_{\alpha}^{*}\). It reads equivalently
\[\pi(0)>\frac{1-\alpha}{1-\frac{1-F_{1}(\lambda)}{1-\lambda}+\alpha\Big{(} \frac{F_{1}(s)}{s}-1\Big{)}}, \tag{58}\]
which provides that the probability to generate a null is sufficiently large at the beginning of the \(p\)-value sequence, with a minimum amplitude that is a function of \(F_{1}(s)\) and \(F_{1}(\lambda)\). Note that in the 'knockoffs' case where \(s=\lambda=1/2\), we have \(\underline{\alpha}=\frac{1-\pi(0)M}{1+\pi(0)M}\) where \(M=2F_{1}(1/2)-1>0\) can be interpreted as a 'margin'. Hence, the critical level \(\underline{\alpha}\) is decreasing in \(\pi(0)M\), and the setting is more favorable either when \(\pi(0)\) increases, or when the margin \(M\) increases.
Proof.: First note that \(\mathrm{FDP}^{\infty}(t)\) is a decreasing function of \(\Pi(t)\) because \(\frac{1-F_{1}(\lambda)}{1-\lambda}<1<\frac{F_{1}(s)}{s}\), see (54). Since \(\Pi(t)\) is decreasing from \(\pi(0)\) to \(\pi(1)=\Pi(+\infty)\), we have that \(\mathrm{FDP}^{\infty}:[0,+\infty)\to[\underline{\alpha},\overline{\alpha}]\) is continuous increasing, where \(\overline{\alpha}=\Big{(}1+\pi(1)\Big{(}\frac{1-F_{1}(\lambda)}{1-\lambda}-1\Big{)}\Big{)}/\Big{(}1+\pi(1)\Big{(}\frac{F_{1}(s)}{s}-1\Big{)}\Big{)}\). Hence, if \(\alpha^{\prime}<\overline{\alpha}\), we have \(0<t_{\alpha^{\prime}}^{*}<+\infty\), \(t_{m}^{*}=t_{\alpha^{\prime}}^{*}\) for \(m\) large enough, and thus \(\mathrm{FDP}^{\infty}(t_{m}^{*})=\alpha^{\prime}\). If \(\alpha^{\prime}\geq\overline{\alpha}\), \(t_{\alpha^{\prime}}^{*}=+\infty\), \(t_{m}^{*}=m^{\beta}\) and \(\mathrm{FDP}^{\infty}(t_{m}^{*})\leq\alpha^{\prime}\). Both cases are considered in what follows. Consider the events
\[\Omega_{1} =\Bigg{\{}\sup_{a\leq k\leq m}\Bigg{|}k^{-1}\sum_{i=1}^{k}\mathbf{ 1}\{p_{i}>\lambda\}-k^{-1}\sum_{i=1}^{k}\mathbf{P}(p_{i}>\lambda)\Bigg{|}\leq 1 /a^{1/4}\Bigg{\}};\] \[\Omega_{2} =\Bigg{\{}\sup_{a\leq k\leq m}\Bigg{|}k^{-1}\sum_{i=1}^{k}\mathbf{ 1}\{p_{i}\leq s\}-k^{-1}\sum_{i=1}^{k}\mathbf{P}(p_{i}\leq s)\Bigg{|}\leq 1 /a^{1/4}\Bigg{\}}.\]
By Lemma A.6, the event \(\Omega_{1}\cap\Omega_{2}\) occurs with probability larger than \(1-2(2+a^{1/2})e^{-2a^{1/2}}\). Let
\[e_{1} =1+\Pi_{m}(m^{-\beta}t_{m}^{*})\bigg{(}\frac{1-F_{1}(\lambda)}{1- \lambda}-1\bigg{)}=1+\Pi(t_{m}^{*})\bigg{(}\frac{1-F_{1}(\lambda)}{1-\lambda} -1\bigg{)};\] \[e_{2} =1+\Pi_{m}(m^{-\beta}t_{m}^{*})\bigg{(}\frac{F_{1}(s)}{s}-1\bigg{)} =1+\Pi(t_{m}^{*})\bigg{(}\frac{F_{1}(s)}{s}-1\bigg{)},\]
be the numerator and denominator of \(\mathrm{FDP}^{\infty}(t_{m}^{*})\), so that \(e_{1}/e_{2}=\mathrm{FDP}^{\infty}(t_{m}^{*})\leq\alpha^{\prime}\). Let \(k_{0}=\lfloor m^{1-\beta}t_{m}^{*}\rfloor\leq m\). Provided that \(k_{0}\geq a\), we have
\[\Bigg{|}k_{0}^{-1}\sum_{i=1}^{k_{0}}\mathbf{P}(p_{i}>\lambda)-(1- \lambda)e_{1}\Bigg{|} \leq\Bigg{|}k_{0}^{-1}\sum_{i=1}^{k_{0}}\pi_{m}(i/m)-\Pi_{m}(m^{- \beta}t_{m}^{*})\Bigg{|}\Big{|}(1-F_{1}(\lambda))-(1-\lambda)\Bigg{|}\] \[\leq\Bigg{|}k_{0}^{-1}\sum_{i=1}^{k_{0}}\pi_{m}(i/m)-\Pi_{m}(k_{0} /m)\Bigg{|}+\Big{|}\Pi_{m}(k_{0}/m)-\Pi_{m}(m^{-\beta}t_{m}^{*})\Big{|}\] \[\leq 1/a+L/m^{1-\beta},\]
by applying Lemma A.5 and using that \(\Pi(\cdot)\) is \(L\)-Lipschitz. Similarly,
\[\Bigg{|}k_{0}^{-1}\sum_{i=1}^{k_{0}}\mathbf{P}(p_{i}\leq s)-se_{2}\Bigg{|}\leq 1 /a+L/m^{1-\beta}.\]
We deduce that on \(\Omega_{1}\cap\Omega_{2}\) and when \(k_{0}\geq a\), we have
\[\widehat{\mathrm{FDP}}_{k_{0}}\leq\frac{e_{1}+\frac{1}{a(1-\lambda)}+\frac{L}{m^{1-\beta}(1-\lambda)}+\frac{1}{k_{0}(1-\lambda)}+\frac{1}{a^{1/4}(1-\lambda)}}{\frac{1}{as}\vee\Big{(}e_{2}-\frac{1}{as}-\frac{L}{m^{1-\beta}s}-\frac{1}{k_{0}s}-\frac{1}{a^{1/4}s}\Big{)}}\leq\frac{e_{1}+r}{e_{2}-r}\leq\frac{e_{1}}{e_{2}}+4r,\]
provided that \(e_{2}\geq 2r\), because \(e_{1}\leq 1\), \(e_{2}\geq 1\), and by considering \(r\) as in the statement. Since \(e_{1}/e_{2}\leq\alpha^{\prime}\leq\alpha-4r\) and \(e_{2}\geq 1\geq 2r\) by assumption, we have \(\widehat{\mathrm{FDP}}_{k_{0}}\leq\alpha\) and thus \(\hat{k}_{\alpha}\geq k_{0}\) on \(\Omega_{1}\cap\Omega_{2}\). The result is proved by noting that \(\hat{r}_{\alpha}=\sum_{i=1}^{\hat{k}_{\alpha}}\mathbf{1}\{p_{i}\leq s\}\geq \sum_{i=1}^{k_{0}}\mathbf{1}\{p_{i}\leq s\}\geq(e_{2}-r)k_{0}s\geq k_{0}s/2\) on this event.
**Lemma A.5**.: _In the setting of Theorem A.4, we have for all \(a\geq 1\), \(m\geq a\),_
\[\sup_{a\leq k\leq m}\biggl{|}k^{-1}\sum_{i=1}^{k}\pi_{m}(i/m)-\Pi_{m}(k/m)\biggr{|} \leq 1/a. \tag{59}\]
Proof.: First note that because \(\pi_{m}\) is nonnegative continuous decreasing, we have for all \(k\geq 1\),
\[(1/k)\sum_{i=1}^{k}\pi_{m}(i/m)\leq\Pi_{m}(k/m)=(m/k)\int_{0}^{k/m}\pi_{m}(s)ds \leq(1/k)\sum_{i=0}^{k-1}\pi_{m}(i/m).\]
Since \(\pi_{m}(0)\leq 1\), the result is clear.
The following lemma is similar to Lemma 1 in Lei and Fithian (2016).
**Lemma A.6**.: _Let \(X_{i}\sim\mathcal{B}(p_{i})\), \(1\leq i\leq m\), be independent Bernoulli variables for \(p_{i}\in[0,1]\), \(1\leq i\leq m\). Then we have for all \(a\geq 1\) and \(m\geq a\),_
\[\mathbf{P}\Biggl{(}\sup_{a\leq k\leq m}\biggl{|}k^{-1}\sum_{i=1}^{k}(X_{i}-p_{i})\biggr{|}\geq 1/a^{1/4}\Biggr{)}\leq(2+a^{1/2})e^{-2a^{1/2}}. \tag{60}\]
Proof.: By Hoeffding's inequality, we have for all \(x>0\),
\[\mathbf{P}\Biggl{(}\sup_{a\leq k\leq m}\biggl{|}k^{-1}\sum_{i=1}^{k}(X_{i}-p_{i})\biggr{|}\geq x\Biggr{)}\leq 2\sum_{k\geq a}e^{-2kx^{2}}=\frac{2}{1-e^{-2x^{2}}}e^{-2ax^{2}}\leq(2+1/x^{2})e^{-2ax^{2}}.\]
We deduce the result by considering \(x=1/a^{1/4}\).
### Online setting
**Definition A.7**.: The online one-sided Gaussian mixture model of parameters \(\pi_{1},F_{1}\), denoted by \(\mathbf{P}_{\pi_{1},F_{1}}\), is given by the \(p\)-value stream \((p_{k},H_{k})\in[0,1]\times\{0,1\}\), \(k\geq 1\), which is i.i.d. with
* \(\mathbf{P}(H_{k}=1)=\pi_{1}\) for some _fixed_\(\pi_{1}\in(0,1)\);
* \(p\)-values are uniform under the null: \(p_{k}\mid H_{k}=0\sim U(0,1)\);
* \(p\)-values have the same alternative distribution: \(p_{k}\mid H_{k}=1\sim F_{1}\), where \(F_{1}\) is the c.d.f. corresponding to the one-sided Gaussian problem, that is, \(F_{1}(x)=\bar{\Phi}(\bar{\Phi}^{-1}(x)-\mu)\), \(x\in[0,1]\), for some \(\mu>0\).
Here, we make no sparsity assumption: \(\pi_{1}\) is assumed to be constant across time. This will ensure that the online procedure maintains a chance to make discoveries even when the time grows to infinity.
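For illustration, the sketch below simulates the online one-sided Gaussian mixture stream and applies the simplified rule with critical values (62) used in the proof of Theorem A.8 below; the spending sequence is evaluated with \(\log(k+1)\) instead of \(\log(k)\) purely to avoid the singularity at \(k=1\), and all numerical constants are illustrative assumptions.

```python
# Minimal sketch of the online one-sided Gaussian mixture model (Definition A.7)
# together with the simplified online rule alpha_T = c * max_j gamma_{T - tau_j}.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
pi1, mu, alpha, W0, g_exp, T = 0.2, 2.0, 0.1, 0.05, 1.5, 5_000
c = min(alpha - W0, W0)
gamma = lambda k: 1.0 / (k * np.log(k + 1.0) ** g_exp)    # spending sequence (shifted log)

H = rng.random(T) < pi1
p = np.where(H, norm.sf(rng.normal(mu, 1.0, T)), rng.random(T))

rejection_times = [0]                                      # tau_0 = 0 by convention
for t in range(1, T + 1):
    alpha_t = c * max(gamma(t - tau) for tau in rejection_times)
    if p[t - 1] <= alpha_t:
        rejection_times.append(t)
print("number of rejections R(T):", len(rejection_times) - 1)
```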
**Theorem A.8**.: _Consider the one-sided Gaussian online mixture model and the LORD procedure with \(W_{0}\in(0,\alpha)\) and a spending sequence \(\gamma_{k}=\frac{1}{k(\log(k))^{\gamma}}\), \(\gamma>1\). Then its rejection number \(R(k)\) at time \(k\) satisfies: for all \(a\in(0,1)\), \(k\geq 1\),_
\[\mathbf{P}(R(k)<k^{1-a})\leq ck^{-a}, \tag{61}\]
_where \(c\) is some constant only depending on \(\alpha\),\(W_{0}\), \(\gamma\), \(\mu\) and \(\pi_{1}\). In particular, \(k^{1-a}/R(k)=O_{P}(1)\) when \(k\) tends to infinity._
Proof.: We get inspiration from the power analysis of Javanmard and Montanari (2018). Let \(c=\min(\alpha-W_{0},W_{0})\). By definition (42), the LORD procedure makes (point-wise) more rejections than the procedure given by the critical values
\[\alpha_{T}=c\max\{\gamma_{T-\tau_{j}},j\geq 0\}, \tag{62}\]
where, for any \(j\geq 1\), \(\tau_{j}\) is the first time that the procedure makes \(j\) rejections, that is,
\[\tau_{j}=\min\{t\geq 0\,:\,R(t)\geq j\}\quad(\tau_{j}=+\infty\text{ if the set is empty}), \tag{63}\]
(note that \(\tau_{0}=0\)) for \(R(T)=\sum_{t=1}^{T}\mathbf{1}\{p_{t}\leq\alpha_{t}\}\). Let \(\Delta_{j}=\tau_{j}-\tau_{j-1}\) be the time between the \(j\)-th rejection and the \((j-1)\)-th rejection. It is clear that \((R(t))_{t\geq 1}\) is a renewal process with holding times \((\Delta_{j})_{j\geq 1}\) and jump times \((\tau_{j})_{j\geq 1}\). In particular, the \(\Delta_{j}\)'s are i.i.d. As a result, we have for all \(r,k\geq 1\),
\[\mathbf{P}(R(k)<r)\leq\mathbf{P}(\tau_{r}\geq k)=\mathbf{P}(\Delta_{1}+\cdots+ \Delta_{r}\geq k)\leq r\mathbf{E}\Delta_{1}/k,\]
where
\[\mathbf{E}\Delta_{1}=\sum_{m\geq 1}\mathbf{P}(\Delta_{1}\geq m)=\sum_{m\geq 1} \prod_{\ell=1}^{m}(1-G(c\gamma_{\ell}))\leq\sum_{m\geq 1}e^{-mG(c\gamma_{m})}.\]
In addition, since \(G\) is concave,
\[\frac{G(x)}{x}\geq G^{\prime}(x)=\pi_{0}+\pi_{1}c\,e^{\mu\overline{\Phi}^{-1}(x)}\geq e^{c^{\prime}\sqrt{2\log(1/x)}}\geq(\log(1/x))^{\gamma+2},\]
for \(x\) small enough and \(c,c^{\prime}>0\) some constants. This gives for large \(m\geq M\), \(e^{-mG(c\gamma_{m})}\leq e^{-cm\gamma_{m}(\log(1/(c\gamma_{m})))^{2+\gamma}} \leq e^{-2\log m}\), for some \(M>0\), by the choice made for \(\gamma_{m}\). As a result,
\[\mathbf{E}\Delta_{1}\leq C+\sum_{m\geq M}e^{-mG(c\gamma_{m})}\leq C+\sum_{m \geq M}e^{-cm\gamma_{m}(\log(1/(c\gamma_{m})))^{\gamma+2}}\leq C+\sum_{m\geq 1 }e^{-2\log m}=C+\pi^{2}/6,\]
for some constant \(C>0\). This gives
\[\mathbf{P}(R(k)<r)\leq r(C+\pi^{2}/6)/k.\]
Taking \(r=k^{1-a}\) then gives (61).
## Appendix B Proofs
### Proof of Proposition 2.1
For \(j\geq 1\), let \(\delta_{j}=\delta j^{-2}\), \(\tau_{j}=2^{-j}\) and
\[A_{j} =\bigg{\{}\forall t\in[\tau_{j},1],\,n^{-1}\sum_{i=1}^{n} \mathbf{1}\{p_{i}\leq t\}\leq t\,\lambda_{j}\bigg{\}};\] \[\lambda_{j} =h^{-1}\bigg{(}\frac{\log(1/\delta_{j})}{n\tau_{j}/(1-\tau_{j}) }\bigg{)},\]
so that by Wellner's inequality, we have \(\mathbf{P}(A_{j})\geq 1-\delta_{j}\) and with a union bound \(\mathbf{P}(\cap_{j\geq 1}A_{j})\geq 1-\delta\pi^{2}/6\). Now let \(t\in(0,1)\) and \(j_{0}=\min\{j\geq 1\,:\,t\geq\tau_{j}\}=\min\{j\geq 1\,:\,j\geq\log_{2}(1/t)\}\), so that \(j_{0}=\lceil\log_{2}(1/t)\rceil\geq 1\). This yields
\[\log(1/\delta_{j_{0}})=\log(1/\delta)+2\log(\lceil\log_{2}(1/t)\rceil).\]
On the event \(\cap_{j\geq 1}A_{j}\), we have, since \(t\in[\tau_{j_{0}},1]\) by definition,
\[n^{-1}\sum_{i=1}^{n}\mathbf{1}\{p_{i}\leq t\}\leq t\,\lambda_{j_{0}}=th^{-1} \bigg{(}\frac{\log(1/\delta_{j_{0}})}{n\tau_{j_{0}}/(1-\tau_{j_{0}})}\bigg{)}= th^{-1}\bigg{(}\frac{\log(1/\delta)+2\log(\lceil\log_{2}(1/t)\rceil)}{ng(t)} \bigg{)},\]
because \(\tau_{j_{0}}=2^{-\lceil\log_{2}(1/t)\rceil}\). The result then comes from replacing \(\delta\) by \(\delta 6/\pi^{2}\).
### Proof of Proposition 2.6
Let us prove it for the adaptive uniform Wellner envelope (the other ones being either simpler or provable by using a similar argument). The idea is to prove that on an event where the (non-adaptive) Wellner envelope (14) is valid, we also have \(m_{0}\leq\hat{m}_{0}^{\text{\tiny Well}}\). The result is implied just by monotonicity (Lemma D.1).
For this, we come back to apply (13) with \((U_{1},\ldots,U_{n})=(p_{i},i\in\mathcal{H}_{0})\), \(n=m_{0}\). Hence, on an event with probability at least \(1-\delta\), we have for all \(t\in(0,1)\),
\[m_{0}^{-1}\sum_{i\in\mathcal{H}_{0}}\mathbf{1}\{p_{i}\leq t\}\leq t\,h^{-1} \bigg{(}\frac{C_{t}}{tm_{0}}\bigg{)}\leq t\,\Big{(}1+\sqrt{C_{t}/(2tm_{0})} \Big{)}^{2},\]
where we apply an upper bound coming from Lemma D.1. This gives
\[V_{t}/m_{0}\geq 1-t\,\Big{(}1+\sqrt{C_{t}/(2tm_{0})}\Big{)}^{2}=1-t-\sqrt{2tC_ {t}/m_{0}}-C_{t}/(2m_{0}).\]
As a result, \(V_{t}\geq m_{0}(1-t)-\sqrt{2tC_{t}m_{0}}-C_{t}/2\) and thus \((1-t)m_{0}-\sqrt{2tC_{t}}m_{0}^{1/2}-C_{t}/2-V_{t}\leq 0\), which gives
\[m_{0}\leq\Bigg{(}\frac{\sqrt{2tC_{t}}+\sqrt{2tC_{t}+4(1-t)(C_{t}/2+V_{t})}}{2 (1-t)}\Bigg{)}^{2}=\Bigg{(}\sqrt{\frac{tC_{t}}{2(1-t)^{2}}}+\sqrt{\frac{C_{t} }{2(1-t)^{2}}+\frac{V_{t}}{1-t}}\Bigg{)}^{2}.\]
Since this is uniform in \(t\), we can take the minimum over \(t\), which gives the \(m_{0}\) confidence bound \(\hat{m}_{0}^{\text{\tiny Well}}\).
## Appendix C Tools of independent interest
### A general envelope for a sequence of tests
An important basis for our work is the following theorem, which has the flavor of Lemma 1 of Katsevich and Ramdas (2020), but based on a different martingale inequality, derived from a Freedman type bound (see Section C.2).
**Theorem C.1**.: _Consider a potentially infinite set of null hypotheses \(H_{1},H_{2},\dots\) for the distribution \(P\) of an observation \(X\), with associated \(p\)-values \(p_{1},p_{2},\dots\) (based on \(X\)). Consider an ordering \(\pi(1),\pi(2),\dots\) (potentially depending on \(X\)) and a set of critical values \(\alpha_{1},\alpha_{2},\dots\) (potentially depending on \(X\)). Let \(\lambda\in[0,1)\) be a parameter and assume that there exists a filtration_
\[\mathcal{F}_{k}=\sigma\big{(}(\pi(i))_{1\leq i\leq k},(\mathbf{1}\big{\{}p_{ \pi(i)}\leq\alpha_{i}\big{\}})_{1\leq i\leq k},(\mathbf{1}\big{\{}p_{\pi(i)}> \lambda\big{\}})_{1\leq i\leq k}\big{)},\,\,\,k\geq 1,\]
_such that for all \(k\geq 2\),_
\[\mathbf{P}(p_{\pi(k)}\leq t\,|\,\mathcal{F}_{k-1},H_{\pi(k)}=0)\leq t\text{ for all }t\in[0,1]. \tag{64}\]
_Then, for any \(\delta\in(0,1)\), with probability at least \(1-\delta\), it holds_
\[\forall k\geq 1,\,\,\sum_{i=1}^{k}(1-H_{\pi(i)})\mathbf{1}\big{\{}p_{\pi(i)} \leq\alpha_{i}\big{\}}\leq\overline{V}_{k},\]
_for_
\[\overline{V}_{k}=\sum_{i=1}^{k}(1-H_{\pi(i)})\mathbf{1}\big{\{}p_{\pi(i)}> \lambda\big{\}}\frac{\alpha_{i}}{1-\lambda}+\Delta\Bigg{(}\sum_{i=1}^{k}(1-H_ {\pi(i)})\nu_{i}\Bigg{)}, \tag{65}\]
_where \(\Delta(u)=2\sqrt{\varepsilon_{u}}\sqrt{u\lor 1}+\frac{1}{2}\varepsilon_{u}\), \(\varepsilon_{u}=\log((1+\kappa)/\delta)+2\log(1+\log_{2}(u\lor 1))\), \(u>0\), \(\kappa=\pi^{2}/6\), and \(\nu_{i}=\alpha_{i}(1+\min(\alpha_{i},\lambda)/(1-\lambda))\), for \(i\geq 1\)._
Proof.: By Lemma C.2, we can apply Corollary C.5 (itself coming from Freedman's inequality) with
\[\xi_{i}=(1-H_{\pi(i)})\Bigg{(}\mathbf{1}\big{\{}p_{\pi(i)}\leq\alpha_{i} \big{\}}-F_{i}(\alpha_{i})\frac{\mathbf{1}\big{\{}p_{\pi(i)}>\lambda\big{\}}} {1-F_{i}(\lambda)}\Bigg{)},\]
where \(F_{i}(\alpha_{i})\) and \(F_{i}(\lambda)\) are defined by (67). First note that \(\xi_{i}\leq 1=:B\) almost surely. Let us now prove
\[\mathbf{E}(\xi_{i}^{2}\,|\,\mathcal{F}_{i-1})\leq(1-H_{\pi(i)})\nu_{i}. \tag{66}\]
Indeed, assuming first \(\alpha_{i}\leq\lambda\), we have by (64),
\[\mathbf{E}(\xi_{i}^{2}\,|\,\mathcal{F}_{i-1})=(1-H_{\pi(i)})\Bigg{(} \mathbf{E}(\mathbf{1}\big{\{}p_{\pi(i)}\leq\alpha_{i}\big{\}}\,|\,\mathcal{F} _{i-1})+(F_{i}(\alpha_{i}))^{2}\frac{\mathbf{E}(\mathbf{1}\big{\{}p_{\pi(i)}> \lambda\big{\}}\,|\,\mathcal{F}_{i-1})}{(1-F_{i}(\lambda))^{2}}\Bigg{)}\] \[\leq(1-H_{\pi(i)})(\alpha_{i}+\alpha_{i}^{2}/(1-\lambda))=(1-H_{ \pi(i)})\nu_{i}.\]
which gives (66). Now, if \(\alpha_{i}>\lambda\), still by (64),
\[\mathbf{E}(\xi_{i}^{2}\mid\mathcal{F}_{i-1}) =(1-H_{\pi(i)})\Bigg{(}\mathbf{E}(\mathbf{1}\big{\{}p_{\pi(i)}\leq \alpha_{i}\big{\}}\mid\mathcal{F}_{i-1})+(F_{i}(\alpha_{i}))^{2}\frac{\mathbf{E }(\mathbf{1}\big{\{}p_{\pi(i)}>\lambda\big{\}}\mid\mathcal{F}_{i-1})}{(1-F_{i} (\lambda))^{2}}\] \[\quad-2\frac{F_{i}(\alpha_{i})}{1-F_{i}(\lambda)}\mathbf{E}( \mathbf{1}\big{\{}\lambda<p_{\pi(i)}\leq\alpha_{i}\big{\}}\mid\mathcal{F}_{i-1 })\Bigg{)}\] \[=(1-H_{\pi(i)})\big{[}F_{i}(\alpha_{i})+(F_{i}(\alpha_{i}))^{2}/( 1-F_{i}(\lambda))-2F_{i}(\alpha_{i})(F_{i}(\alpha_{i})-F_{i}(\lambda))/(1-F_{i }(\lambda))\big{]}\] \[=(1-H_{\pi(i)})F_{i}(\alpha_{i})\big{[}1+(2F_{i}(\lambda)-F_{i}( \alpha_{i}))/(1-F_{i}(\lambda))\big{]}\] \[\leq(1-H_{\pi(i)})F_{i}(\alpha_{i})\big{[}1+F_{i}(\lambda)/(1-F_{ i}(\lambda))\big{]}\leq(1-H_{\pi(i)})\nu_{i},\]
which implies (66) also in that case. Finally, (66) is established, which yields
\[\forall k\geq 1,\,\,\,S_{k}\leq 2\sqrt{\varepsilon_{k}(\delta)}\sqrt{\sum_{i=1 }^{k}(1-H_{\pi(i)})\nu_{i}}+4\varepsilon_{k}(\delta)\]
and thus (65).
**Lemma C.2**.: _In the setting of Theorem C.1, let_
\[F_{k}(\alpha_{k})=\mathbf{P}(p_{\pi(k)}\leq\alpha_{k}\mid\mathcal{F}_{k-1},H_ {\pi(k)}=0),\,\,\,F_{k}(\lambda)=\mathbf{P}(p_{\pi(k)}\leq\lambda\mid\mathcal{ F}_{k-1},H_{\pi(k)}=0) \tag{67}\]
_Then the process \((S_{k})_{k\geq 1}\) defined by_
\[S_{k}=\sum_{i=1}^{k}(1-H_{\pi(i)})\Bigg{(}\mathbf{1}\big{\{}p_{\pi(i)}\leq \alpha_{i}\big{\}}-F_{i}(\alpha_{i})\frac{\mathbf{1}\big{\{}p_{\pi(i)}>\lambda \big{\}}}{1-F_{i}(\lambda)}\Bigg{)},\quad k\geq 1,\]
_is a martingale with respect to the filtration \((\mathcal{F}_{k})_{k\geq 1}\)._
Proof.: First, \(S_{k}\) is clearly \(\mathcal{F}_{k}\) measurable. Second, we have for all \(k\geq 2\),
\[\mathbf{E}(S_{k}\mid\mathcal{F}_{k-1}) =\mathbf{E}\Bigg{(}S_{k-1}+(1-H_{\pi(k)})\Bigg{(}\mathbf{1}\big{\{} p_{\pi(k)}\leq\alpha_{k}\big{\}}-F_{k}(\alpha_{k})\frac{\mathbf{1}\big{\{}p_{ \pi(k)}>\lambda\big{\}}}{1-F_{k}(\lambda)}\Bigg{)}\mid\mathcal{F}_{k-1}\Bigg{)}\] \[=S_{k-1}+(1-H_{\pi(k)})(F_{k}(\alpha_{k})-F_{k}(\alpha_{k}))=S_{k -1}.\]
### Uniform-Empirical version of Freedman's inequality
We establish a time-uniform, empirical Bernstein-style confidence bound for bounded martingales. Various related inequalities have appeared in the literature, in particular in the online learning community. The idea is based on 'stitching' together time-uniform bounds that are accurate on different segments of (intrinsic) time. The use of the stitching principle has been
further pushed and developed into many refinements by Howard et al. (2021), who also propose a uniform empirical Bernstein bound as a byproduct. The version given here, based on a direct stitching of Freedman's inequality, has the advantage of being self-contained with an elementary proof (though the numerical constants may be marginally worse than Howard et al.'s).
We first recall Freedman's inequality in its original version (Freedman, 1975). Let \((\xi_{i},\mathcal{F}_{i})_{i\geq 1}\) be a supermartingale difference sequence, i.e. \(\mathbb{E}[\xi_{i}|\mathcal{F}_{i-1}]\leq 0\) for all \(i\). Define \(S_{n}:=\sum_{i=1}^{n}\xi_{i}\) (then \((S_{n},\mathcal{F}_{n})\) is a supermartingale), and \(V_{n}:=\sum_{i=1}^{n}\operatorname{Var}[\xi_{i}|\mathcal{F}_{i-1}]\).
**Theorem C.3** (Freedman's inequality; Freedman, 1975, Theorem 4.1).: _Assume \(\xi_{i}\leq 1\) for all \(i\geq 1\). Then for all \(t,v>0\):_
\[\mathbb{P}[S_{n}\geq t\text{ and }V_{n}\leq v\text{ for some }n\geq 1]\leq\exp(-\varphi(v,t)), \tag{68}\]
_where_
\[\varphi(v,t):=(v+t)\log\biggl{(}1+\frac{t}{v}\biggr{)}-t. \tag{69}\]
We establish the following corollary (deferring the proof for now):
**Corollary C.4**.: _Assume \(\xi_{i}\leq 1\) for all \(i\geq 1\). Then for all \(\delta\in(0,1)\) and \(v>0\):_
\[\mathbb{P}\biggl{[}S_{n}\geq\sqrt{2v\log\delta^{-1}}+\frac{\log\delta^{-1}}{2} \text{ and }V_{n}\leq v\text{ for some }n\geq 1\biggr{]}\leq\delta. \tag{70}\]
Following the stitching principle applied to the above we obtain the following.
**Corollary C.5**.: _Assume \(\xi_{i}\leq B\) for all \(i\geq 1\), where \(B\) is a constant. Put \(\widetilde{V}_{k}:=(V_{k}\lor B^{2})\) and \(\kappa=\pi^{2}/6\). Then for all \(\delta\in(0,1/(1+\kappa))\), with probability at least \(1-(1+\kappa)\delta\) it holds_
\[\forall k\geq 1:S_{k}\leq 2\sqrt{\widetilde{V}_{k}\varepsilon(\delta,k)}+\frac{1 }{2}B\varepsilon(\delta,k),\]
_where \(\varepsilon(\delta,k):=\log\delta^{-1}+2\log(1+\log_{2}(\widetilde{V}_{k}/B^{ 2}))\)._
Proof.: Denote \(v_{j}^{2}:=2^{j}B^{2}\), \(\delta_{j}:=(j\lor 1)^{-2}\delta\), \(j\geq 0\), and define the nondecreasing sequence of stopping times \(\tau_{-1}=1\) and \(\tau_{j}:=\min\left\{k\geq 1:V_{k}>v_{j}^{2}\right\}\) for \(j\geq 0\). Define the events for \(j\geq 0\):
\[A_{j} :=\biggl{\{}\exists k\geq 1:S_{k}\geq\sqrt{2v_{j}^{2}\log\delta_{j }^{-1}}+\frac{1}{2}B\log\delta_{j}^{-1}\text{ and }V_{k}\leq v_{j}^{2}\biggr{\}},\] \[A_{j}^{\prime} :=\biggl{\{}\exists k\text{ with }\tau_{j-1}\leq k<\tau_{j}:S_{k} \geq 2\sqrt{\widetilde{V}_{k}\varepsilon(\delta,k)}+\frac{1}{2}B\varepsilon( \delta,k)\biggr{\}}.\]
From the definition of \(v_{j}^{2},\delta_{j}\), we have \(j=\log_{2}(v_{j}^{2}/B^{2})\) for \(j\geq 1\). For \(j\geq 1\), \(\tau_{j-1}\leq k<\tau_{j}\) implies \(\widetilde{V}_{k}=V_{k}\), \(v_{j-1}^{2}=v_{j}^{2}/2<\widetilde{V}_{k}\leq v_{j}^{2}\), and further
\[\log\delta_{j}^{-1}=\log\delta^{-1}+2\log\log_{2}(v_{j}^{2}/B^{2})\leq \varepsilon(\delta,k).\]
Therefore it holds \(A_{j}^{\prime}\subseteq A_{j}.\) Furthermore, for \(j=0\), we have \(v_{0}^{2}=B^{2},\delta_{0}=\delta.\) Moreover, if \(k<\tau_{0}\), then \(V_{k}\leq B^{2}\) and therefore \(\widetilde{V}_{k}=B^{2}\), thus \(\varepsilon(\delta,k)=\log\delta^{-1}.\) Hence
\[A_{0}^{\prime}\subseteq\left\{\exists k\text{ with }k<\tau_{0}:S_{k} \geq 2\sqrt{B^{2}\log\delta_{0}^{-1}}+\frac{1}{2}B\log\delta_{0}^{-1}\right\}\\ \subseteq\left\{\exists k\geq 1:S_{k}\geq\sqrt{2v_{0}^{2}\log \delta_{0}^{-1}}+\frac{1}{2}B\log\delta_{0}^{-1}\text{ and }V_{k}\leq v_{0}^{2}\right\}=A_{0}.\]
Therefore, since by (70) it holds \(\mathbb{P}[A_{j}]\leq\delta_{j}\) for all \(j\geq 0\):
\[\mathbb{P}\Big{[}\exists k\leq n:S_{k}\geq 2\sqrt{V_{k}\varepsilon(\delta,k )}+B\varepsilon(\delta,k)\Big{]}=\mathbb{P}\Bigg{[}\bigcup_{j\geq 0}A_{j}^{ \prime}\Bigg{]}\leq\mathbb{P}\Bigg{[}\bigcup_{j\geq 0}A_{j}\Bigg{]}\leq \delta\sum_{j\geq 0}(j\lor 1)^{-2}\leq 3\delta.\]
Proof of Corollary C.4.: It can be easily checked that \(\varphi(v,t)\) is increasing in \(t\) (for \(v,t>0\)). Thus \(S_{n}\geq t\Leftrightarrow\varphi(v,(S_{n})_{+})\geq\varphi(v,t).\) Since \(\varphi(v,0)=0\), and \(\lim_{t\to\infty}\varphi(v,t)=\infty\), it follows that for any \(\delta\in(0,1]\), there exists a unique real \(t(v,\delta)\) such that \(\varphi(v,t(v,\delta))=-\log\delta.\) It follows that (68) is equivalent to:
\[\forall v>0,\forall\delta\in(0,1]:\qquad\mathbb{P}[A_{v,\delta}]\leq\delta, \tag{71}\]
where
\[A_{v,\delta}:=\{\varphi(v,(S_{n})_{+})\geq-\log\delta\text{ and }V_{n}\leq v\text{ for some }n\geq 1\}.\]
Observe that \(\varphi(v,t)=vh\big{(}\frac{v+t}{v}\big{)}\), where \(h\) is the function defined by (11). Since \(h(\lambda)\geq 2(\sqrt{\lambda}-1)^{2}\) from Lemma D.1, we deduce \(\varphi(v,t)\geq 2(\sqrt{v+t}-\sqrt{v})^{2}\) thus, whenever \(\varphi(v,(S_{n})_{+})\leq-\log\delta\), we have:
\[\sqrt{v+(S_{n})_{+}}\leq\sqrt{v}+\sqrt{\frac{\log\delta^{-1}}{2}};\]
taking squares on both sides entails
\[S_{n}\leq\sqrt{2v\log\delta^{-1}}+\frac{\log\delta^{-1}}{2},\]
proving (70).
## Appendix D Auxiliary results
**Lemma D.1**.: _The function \(h\) defined by (11) is increasing strictly convex from \((1,\infty)\) to \((0,\infty)\), while \(h^{-1}\) is increasing strictly concave from \((0,\infty)\) to \((1,\infty)\). The functions \(h\) and \(h^{-1}\) satisfy the following upper/lower bounds:_
\[2(\sqrt{\lambda}-1)^{2} \leq h(\lambda)\leq(\lambda-1)^{2}/2,\quad\lambda>1\] \[1+\sqrt{2y} \leq h^{-1}(y)\leq(1+\sqrt{y/2})^{2},\quad y>0\]
_In particular, \(h^{-1}(y)-1\leq\sqrt{2y}+\mathcal{O}(y)\) as \(y\to 0\). In addition, for any \(c>0\), \(x\in(1,+\infty)\mapsto xh^{-1}(c/x)\) is increasing._
Proof.: Clearly, \(h^{\prime}=\log\), which is positive and increasing on \((1,\infty)\). This gives the desired property for \(h\) and \(h^{-1}\). Next, the bounds can be easily obtained by studying the functions \(\lambda\mapsto(\lambda-1)^{2}/2-h(\lambda)\) and \(\lambda\mapsto h(\lambda)-2(\sqrt{\lambda}-1)^{2}\). For the last statement, since \(h^{-1}\) is strictly concave and \(h^{-1}(0)=1\), we have that \(y\in(0,\infty)\mapsto(h^{-1}(y)-1)/y\) is decreasing. Since \(y\in(0,\infty)\mapsto 1/y\) is also decreasing, this gives that \(y\in(0,\infty)\mapsto h^{-1}(y)/y\) is decreasing. This gives the last statement.
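As a quick numerical sanity check of these bounds, the sketch below evaluates them on a small grid, assuming the standard form \(h(\lambda)=\lambda\log\lambda-\lambda+1\) (consistent with \(h^{\prime}=\log\) and \(h(1)=0\) used in the proof; the definition (11) itself is not reproduced here) and computing \(h^{-1}\) by root finding.

```python
# Numerical check of the bounds of Lemma D.1 (assumed form h(l) = l*log(l) - l + 1).
import numpy as np
from scipy.optimize import brentq

h = lambda lam: lam * np.log(lam) - lam + 1.0

def h_inv(y, upper=1e8):
    # inverse of h on (1, infinity), obtained by root finding
    return brentq(lambda lam: h(lam) - y, 1.0 + 1e-12, upper)

for lam in [1.5, 2.0, 5.0, 20.0]:
    assert 2 * (np.sqrt(lam) - 1) ** 2 <= h(lam) <= (lam - 1) ** 2 / 2
for y in [0.01, 0.1, 1.0, 10.0]:
    assert 1 + np.sqrt(2 * y) <= h_inv(y) <= (1 + np.sqrt(y / 2)) ** 2
print("Lemma D.1 bounds hold on the tested grid")
```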
**Lemma D.2** (Wellner's inequality, Inequality 2, page 415, with the improvement of Exercise 3 page 418 of Shorack and Wellner, 2009).: _Let \(U_{1},\ldots,U_{n}\) be \(n\geq 1\) i.i.d. uniform random variables. For all \(\lambda\geq 1\), \(a\in[0,1)\), we have_
\[\mathbf{P}\Biggl{(}\exists t\in[a,1],n^{-1}\sum_{i=1}^{n}\mathbf{1}\{U_{i}\leq t\}/t\geq\lambda\Biggr{)}\leq e^{-nah(\lambda)/(1-a)},\]
_for \(h(\cdot)\) defined by (11)._
**Lemma D.3**.: _The KR constants in (34) and (44) satisfy, as \(a\to\infty\),_
\[\frac{\log(1/\delta_{a})}{a\log(1+\frac{1-\delta_{a}^{B/a}}{B})} =1+O\biggl{(}\frac{\log(a)}{a}\biggr{)};\] \[\frac{\log(1/\delta_{a})}{a\log(1+\log(1/\delta_{a})/a)} =1+O\biggl{(}\frac{\log(a)}{a}\biggr{)},\]
_where \(\delta_{a}=c\delta/a\), \(c=\pi^{2}/6\) and the \(O(\cdot)\) depends only on the constants \(\delta>0\) and \(B>0\)._

Fig 12: Displaying \(h\) (left) and \(h^{-1}\) (right). Bounds of Lemma D.1 are displayed in blue.
## Appendix E Additional experiments
We reproduce here the figures of the numerical experiments in the top-\(k\) and pre-ordered settings, adding the interpolated bounds. On each graph, the median of the generated interpolated bound is marked by a star symbol, shown in addition to the boxplot of the non-interpolated bound. By doing so, we can evaluate the gain brought by the interpolation operation in each case. Note that the interpolated bound is not computed for \(m\geq 10^{5}\) for computational cost reasons.
2307.08803 | An Exploration Study of Mixed-initiative Query Reformulation in
Conversational Passage Retrieval | In this paper, we report our methods and experiments for the TREC
Conversational Assistance Track (CAsT) 2022. In this work, we aim to reproduce
multi-stage retrieval pipelines and explore one of the potential benefits of
involving mixed-initiative interaction in conversational passage retrieval
scenarios: reformulating raw queries. Before the first ranking stage of a
multi-stage retrieval pipeline, we propose a mixed-initiative query
reformulation module, which achieves query reformulation based on the
mixed-initiative interaction between the users and the system, as the
replacement for the neural reformulation method. Specifically, we design an
algorithm to generate appropriate questions related to the ambiguities in raw
queries, and another algorithm to reformulate raw queries by parsing users'
feedback and incorporating it into the raw query. For the first ranking stage
of our multi-stage pipelines, we adopt a sparse ranking function: BM25, and a
dense retrieval method: TCT-ColBERT. For the second-ranking step, we adopt a
pointwise reranker: MonoT5, and a pairwise reranker: DuoT5. Experiments on both
TREC CAsT 2021 and TREC CAsT 2022 datasets show the effectiveness of our
mixed-initiative-based query reformulation method on improving retrieval
performance compared with two popular reformulators: a neural reformulator:
CANARD-T5 and a rule-based reformulator: historical query reformulator(HQE). | Dayu Yang, Yue Zhang, Hui Fang | 2023-07-17T19:38:40Z | http://arxiv.org/abs/2307.08803v2 | # An Exploration Study of Mixed-initiative Query Reformulation in Conversational Passage Retrieval
###### Abstract
In this paper, we report our methods and experiments for the TREC Conversational Assistance Track (CAsT) 2022. In this work, we aim to reproduce multi-stage retrieval pipelines and explore one of the potential benefits of involving mixed-initiative interaction in conversational passage retrieval scenarios: reformulating raw queries. Before the first ranking stage of a multi-stage retrieval pipeline, we propose a mixed-initiative query reformulation module, which achieves query reformulation based on the mixed-initiative interaction between the users and the system, as the replacement for the neural reformulation method. Specifically, we design an algorithm to generate appropriate questions related to the ambiguities in raw queries, and another algorithm to reformulate raw queries by parsing users' feedback and incorporating it into the raw query. For the first ranking stage of our multi-stage pipelines, we adopt a sparse ranking function: BM25, and a dense retrieval method: TCT-ColBERT. For the second-ranking step, we adopt a pointwise reranker: MonoT5, and a pairwise reranker: DuoT5. Experiments on both TREC CAsT 2021 and TREC CAsT 2022 datasets show the effectiveness of our mixed-initiative-based query reformulation method on improving retrieval performance compared with two popular reformulators: a neural reformulator: CANARD-T5 and a rule-based reformulator: historical query reformulator(HQE).
## 1 Introduction
The TREC Conversational Assistance Track (CAsT) is a task to facilitate the study of conversational information systems, which are information systems that adopt a conversational modality to enable conversational exchanges between the system and its users. The main objective of conversational information seeking is to satisfy users' information needs in an evolutionary fashion, formalized or expressed through conversation turns. It can be beneficial for many information retrieval tasks, such as sophisticated information searching, exploratory information collecting, multi-turn retrieval task completion, and recommendation. Although conversation can also exhibit other types of interactions with different characteristics and modalities, such as clicks, multi-choice selections, and other forms of feedback [6], we mainly focus on natural language conversation in the Text REtrieval Conference (TREC) Conversational Assistance Track (CAsT). Specifically, following the problem settings of TREC CAsT, a user can initiate an open-domain information request to the system, and the system is expected to retrieve relevant passages from a gigantic corpus. During the conversation, the user is free to continue on the previous topic, provide feedback on the previously retrieved passage, or shift from one topic to another.
Our overall approach is a multi-stage retrieval architecture that contains four main stages: the query reformulation stage, the first-ranking stage, and the second-ranking stage, together with an affiliated stage called fusion. In addition to adopting existing query reformulation methods (CANARD-T5 and HQE [3]), we design an algorithm that enables mixed-initiative interaction and makes it helpful to the query reformulation task: it generates questions seeking clarification of three types of ambiguities in the raw queries: incomplete, reference, and descriptive. After the question is generated, it is sent to the user. Once the answer from the user is received, the answer is parsed by another algorithm, and the new clarification information is combined with the raw query to formulate the reformulated query.
## 2 Method
### Multi-stage Retrieval Pipeline
First, in order to achieve state-of-the-art retrieval performance, we construct an efficient and effective multi-stage retrieval pipeline. Specifically, we use a four-stage cascade structure. The first stage is the query reformulation stage, where we implement two popular query reformulation methods, the CANARD-T5 rewriter and HQE [3], to eliminate ambiguities in the raw utterances. The pipeline we build is illustrated in Figure 1.
To improve the efficiency of the multi-stage pipeline, instead of reformulating each query when we run the pipeline, we first run query reformulation on all the queries and store the reformulated queries for later usage. This reduces the intensity of communication between the CPU and GPU and avoids unnecessary repeated query reformulation work when trying different ranking functions. For the CANARD-T5 rewriter [3], which we use for query reformulation, instead of using the original parameters we fine-tune it on TREC 2019 and 2020 data to further improve retrieval with the reformulated queries. The experiment on the TREC CAsT 2021 dataset shows that our fine-tuned T5 rewriter outperforms the T5 with the original weights.
After the ambiguities are resolved by the query reformulation stage, the first ranking stage generates the initial ranked document list and delivers it to the re-ranking stage; \(\gamma_{1}\) documents are retrieved as candidates for the following stage. We apply multiple ranking methods, both sparse and dense, to overcome the limitations of the lexical and semantic matching capability of a single ranking function. Each \(stage_{i}\) takes \(\gamma_{i-1}\) documents as input and outputs \(\gamma_{i}\) documents to the subsequent stage, where \(\gamma_{i}\leq\gamma_{i-1}\). Specifically, we use the BM25 ranking function as the sparse retrieval method in the first ranking stage. For the dense ranking in the first ranking stage, we use TCT-ColBERT [2] to independently encode queries and documents and formulate their dense representations. After the first ranking stage is finished, we use a pointwise re-ranker, MonoT5 [4], and a pairwise re-ranker, DuoT5 [5], to re-rank the relevant documents (passages) output from the first ranking stage. Finally, we apply reciprocal rank fusion1, which is efficient for combining the ranked lists obtained from re-ranking. (If an early fusion were applied, the fusion stage would take place after the first ranking and before re-ranking.) After fusion, the final ranked document list is output as the result. Figure 1 shows our multi-stage conversational text retrieval pipeline for creating non-mixed-initiative runs.

Figure 1: Demonstration of our multi-stage conversational text retrieval pipeline for creating non-mixed-initiative runs
Footnote 1: We do not use early fusion. The fusion step is always employed after the second-stage ranking
### Mixed-initiative Query Reformulation
We observe that, for the TREC CAsT 2021 dataset, certain ambiguities are in some cases not successfully clarified by the CANARD-T5 reformulator. Since CANARD-T5 is a generative model that samples the reformulated queries from high-dimensional distributions, it is hard for us to explicitly explain why CANARD-T5 sometimes fails to correctly clarify ambiguities. However, what we can observe is that the automatic reformulator performs well on certain queries but not on the rest. Therefore, with the introduction of the mixed-initiative task in TREC CAsT 2022, we intend to explore the potential of clarifying the ambiguity with the help of users. In order to generate the question, we design an algorithm to identify the ambiguity a raw query has and formulate the corresponding question.
Specifically, we design three questions that correspond to three types of ambiguities: reference, descriptive, and incomplete sentences. When the algorithm detects an ambiguity, the system generates a corresponding question from the template and sends it to the user for further clarification. _Incomplete_ ambiguity is defined as the raw query not containing any nouns. For example, the user may ask "How's that?" after the previous retrieval turns. _Reference_ ambiguity is defined as the algorithm finding at least one pronoun in the raw query (and it is not at the beginning of the raw query). _Descriptive_ ambiguity means that a noun with a high BM25 score in the raw query does not have any descriptive information attached to it. For example, the raw query could be "What kind of innovation do we have?", which is missing the description of the noun "innovation". The descriptive information of "innovation" could be "innovation of US banks".
Since only one interaction is allowed for each raw query, if multiple types of ambiguities are detected, the algorithm will follow the priority order incomplete \(>\) reference \(>\) descriptive to generate questions. Here are some more concrete examples from the TREC CAsT 2022 dataset. In _utterance 3-1, turn 142_, the raw query is _You've mentioned that several times now. Tell me more._ The algorithm first identifies whether this sentence is a complete query. In this case, the raw query is complete, so the algorithm moves on to detect reference ambiguity. Since there is a pronoun "that" in the raw query, the raw query has a reference ambiguity. The algorithm therefore extracts the pronoun "that" and asks the user, _What does "that" refer to in your raw query?_ If no reference ambiguity is found by the algorithm, it continues to identify whether the raw utterance has a descriptive ambiguity. Taking utterance 3-3, turn 132 as an example, the raw utterance is _How did other parties respond?_ The algorithm detects that an important word, _parties_, does not have any descriptive information in the raw utterance. Therefore, the system asks the user to specify the word _parties_. In this case, the answer received by the system is _parties of the Paris Agreement_.

Figure 2: A workflow for asking clarifying questions in an open-domain conversational search system
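The following sketch illustrates one possible implementation of this question-generation step. The use of spaCy for POS tagging, the `term_importance` scoring function and the wording of the incomplete/descriptive templates are our own assumptions; the text above only fixes the priority order and the reference-ambiguity question.

```python
# Sketch of ambiguity detection and clarifying-question generation
# (priority: incomplete > reference > descriptive).
import spacy

nlp = spacy.load("en_core_web_sm")

def generate_question(raw_query, term_importance, importance_threshold=5.0):
    doc = nlp(raw_query)
    nouns = [t for t in doc if t.pos_ in ("NOUN", "PROPN")]
    pronouns = [t for t in doc if t.pos_ == "PRON" and t.i > 0]   # ignore sentence-initial pronouns

    if not nouns:                                                 # incomplete query: no nouns at all
        return "incomplete", "Could you rephrase your request as a complete question?"
    if pronouns:                                                  # reference ambiguity
        return "reference", f'What does "{pronouns[0].text}" refer to in your raw query?'
    for noun in nouns:                                            # descriptive ambiguity
        has_description = any(ch.dep_ in ("amod", "prep", "compound") for ch in noun.children)
        if term_importance(noun.text) >= importance_threshold and not has_description:
            return "descriptive", f'Could you be more specific about "{noun.text}"?'
    return None, None                                             # no clarification needed
```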
After the answer is obtained, a reformulating algorithm adds the newly-obtained information from the answer to the original query. For an original query with reference ambiguity, we expect the user's answer to be the entity the pronoun is referring to, so the pronoun is directly replaced by the answer. For an incomplete original query, we expect the user's answer to be the complete query, so the entire original query is replaced by the answer. For an original query with descriptive ambiguity, the algorithm appends the descriptive information to the corresponding noun or verb. In some cases, the user may refuse to answer the question; in TREC CAsT 2022 the answer is "I don't know" when the user refuses to answer. The algorithm keeps the original query intact if it receives the answer "I don't know".
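A corresponding sketch of the answer-parsing step is given below; the string operations are our own reconstruction of the behaviour described above, and the example mirrors the _parties_ case.

```python
# Sketch of folding the user's clarification back into the raw query.
def reformulate(raw_query, ambiguity_type, target_term, answer):
    if answer is None or answer.strip().lower() == "i don't know":
        return raw_query                                    # keep the original query intact
    if ambiguity_type == "incomplete":
        return answer                                       # the answer is the complete query
    if ambiguity_type in ("reference", "descriptive"):
        return raw_query.replace(target_term, answer, 1)    # replace pronoun / expand the noun
    return raw_query

# e.g. reformulate("How did other parties respond?", "descriptive", "parties",
#                  "parties of the Paris Agreement")
# -> "How did other parties of the Paris Agreement respond?"
```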
In summary, the algorithm can enable mixed-initiative interaction with users while resolving the ambiguities that existing generative and classification query reformulation methods have difficulty resolving. The overall workflow of our mixed-initiative algorithm is shown in Figure 2.
Overall, the original automatic query reformulation step of the multi-stage retrieval pipeline introduced in Figure 1 is replaced with the mixed-initiative query reformulation module, which means the change is made only in the "query reformulation" stage. The multi-stage conversational text retrieval pipeline we used for creating mixed-initiative runs for TREC CAsT 2022 is illustrated in Figure 3.
## 3 Experimental Setup
### CAsT Datasets
Since the qrel file of the TREC CAsT 2022 dataset was not available when we participated in the CAsT track, we carried out the hyperparameter tuning experiments on the TREC CAsT 2021 dataset.
### Query Reformulation Setup
The original T5 rewriter was trained on the CANARD [3] dataset. Although many experiments from the TREC CAsT 2021 runs show that the knowledge T5 learns from CANARD transfers well to the TREC CAsT dataset, we still want to figure out _whether it is beneficial to fine-tune the T5 rewriter on the previous TREC CAsT datasets (2019 and 2020)_. Therefore, we start with the weights borrowed from [1] and fine-tune the T5 on structured TREC 2019 and 2020 data. Table 1 shows that, by using the queries reformulated by the fine-tuned T5 rewriter, the retrieval performance in the first ranking stage surpasses that of the original T5 rewriter.

Figure 3: Demonstration of our multi-stage conversational text retrieval pipeline for creating mixed-initiative runs
In the default setting of a T5 rewriter, it considers all the canonical passages \(A_{<i}:A_{1},A_{2},\ldots,A_{i-1}\) before the focal query \(Q_{i}\). However, when applying it to the TREC CAsT dataset, the canonical passages are usually much longer than those in the CANARD dataset. The excessive length brings up the issue that the total length of the input of T5 will sometimes surpass the maximum token input length of T5, which is 512, and T5 will cut off all the tokens after the \(512_{th}\) token. This means that some of the latest user utterances may be discarded by T5's tokenizer. However, the intuition is that the user's information needs are more likely to be contained in the user utterances \(Q_{<i}:Q_{1},Q_{2},\ldots,Q_{i-1}\) than in the canonical passages \(A_{<i}\). Another issue is that long canonical passages may make it harder for T5 to locate, within the context, the information about users' implicit information needs. Therefore, we think concatenating all previous canonical passages may harm the retrieval performance. The results of experiments on the TREC CAsT 2021 dataset are shown in Table 2. As we can see, the best-performing reformulated queries only take the most recent canonical passage into consideration.
Another experiment we did was to explore _whether it is beneficial to consider not only the most probable output of the generative reformulator but also other outputs_. The intuition behind this experiment is that the target of a generative model is to generate the sentence with the highest probability rather than the sentence carrying the most information about users' information needs. By our observation, we find that, sometimes, the probability of generating the reformulated sentence "_What is the price of the bike?_" is larger than the probability of "_What is the price of the sport bike of Trek?_" Therefore, we design an experiment that fuses the top-probable sentences from the generative reformulator. The results are shown in Table 3. We can see that the recall can be largely improved if we consider multiple top-probable outputs and fuse them at the end of the first ranking stage. The results indicate that it may be beneficial to consider multiple generated sentences with the largest probabilities from a generative reformulator such as the T5 rewriter.
Historical query reformulation (HQE) is a method that uses BM25 scores to identify whether a word in the previous passage retrieval log is important or not. The words that are classified as important are appended at the end of the raw query. We use the same BM25 threshold setting as the paper introducing HQE [3].
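As a rough illustration of this idea (not a faithful reimplementation of HQE [3]), the sketch below appends to the raw query the terms from previous turns whose importance score exceeds a threshold; the scoring function and threshold value are placeholders.

```python
# Sketch of HQE-style query expansion with important terms from the conversation history.
def hqe_expand(raw_query, previous_turns, term_score, threshold=10.0):
    keywords = []
    for turn in previous_turns:                 # earlier utterances / retrieved passages
        for term in turn.split():
            if term_score(term) >= threshold and term not in keywords:
                keywords.append(term)
    return raw_query + " " + " ".join(keywords) if keywords else raw_query
```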
### Tuning BM25 Parameters
For sparse ranking functions, we use the BM25 ranking function with the following parameter setting: k1=1.24, b=0.9, obtained after hyperparameter tuning on the TREC CAsT 2021 dataset. Table 4 shows the improvement in retrieval performance using the aforementioned BM25 parameters compared with the default settings on the TREC CAsT 2021 dataset. For both sparse retrieval and dense retrieval in the first ranking stage, our pipeline only returns the top 2000 ranked passages to improve efficiency.
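For reference, a self-contained sketch of the BM25 scoring formula with the tuned parameters is shown below; in our pipeline the actual retrieval is carried out with a standard toolkit over the CAsT corpus, so this is only meant to make the parameter setting concrete.

```python
# BM25 scoring sketch with the tuned parameters k1 = 1.24, b = 0.9.
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, doc_freq, n_docs, avg_doc_len, k1=1.24, b=0.9):
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf:
            continue
        idf = math.log(1 + (n_docs - doc_freq[term] + 0.5) / (doc_freq[term] + 0.5))
        norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc_terms) / avg_doc_len))
        score += idf * norm
    return score
```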
\begin{table}
\begin{tabular}{c|c c c c} \hline Run Name & Recall@1000 & Recall@500 & MAP@2000 & NDCG@3 \\ \hline The original T5 rewriter & 0.5873 & 0.5502 & 0.1378 & 0.2335 \\ \hline Fine-tuned T5 rewriter & **0.6365** & **0.5892** & **0.1484** & **0.2525** \\ \hline \end{tabular}
\end{table}
Table 1: The retrieval performance on TREC CasT 2021 dataset of the reformulated queries reformulated by the original T5 rewriter from [1] and our fine-tuned one.
\begin{table}
\begin{tabular}{c|c c c c} \hline No. Canonical Psg* & Recall@500 & MAP@500 & NDCG@500 & NDCG@3 \\ \hline
0 & 0.5330 & 0.1237 & 0.3384 & 0.2180 \\ \hline
1 & **0.5459** & **0.1356** & **0.3591** & **0.2474** \\ \hline
2 & 0.4688 & 0.1111 & 0.3012 & 0.2102 \\ \hline
3 & 0.4688 & 0.1111 & 0.3012 & 0.2102 \\ \hline \end{tabular}
\end{table}
Table 2: The retrieval performances on TREC CasT 2021 dataset of the reformulated queries considering the different numbers of canonical passages(*No. Canonical Psg = Number of canonical passages to consider in the T5 rewriter)
Also, during the first retrieval, instead of waiting for the retrieval process to finish for a single query, we delay all the retrieval processes for many reformulated queries and process them together to take full advantage of the multiprocessing capability of our CPU.
### Re-ranking Setup
After the first ranking stage, we implement MonoT5 as the pointwise re-ranker and DuoT5 as the pairwise re-ranker. Due to the expensive computational requirements that come with the re-ranking process, and because re-rankers require online computation, we only use MonoT5 to re-rank the first 1000 documents and use DuoT5 to re-rank the first 200 documents from MonoT5.
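The cascade can be sketched as follows, with `score_pointwise` and `score_pairwise` standing in for MonoT5 and DuoT5; the pairwise aggregation below (summing preferences over all pairs) is one simple choice, and wiring up the actual models is omitted.

```python
# Sketch of the two-step re-ranking cascade (pointwise on top 1000, pairwise on top 200).
def rerank_cascade(query, candidates, score_pointwise, score_pairwise,
                   k_pointwise=1000, k_pairwise=200):
    # Stage 1: pointwise re-ranking of the head of the first-stage ranking.
    pool = sorted(candidates[:k_pointwise],
                  key=lambda d: score_pointwise(query, d), reverse=True)

    # Stage 2: pairwise re-ranking of the top documents from stage 1.
    head, tail = pool[:k_pairwise], pool[k_pairwise:]
    agg = {d: 0.0 for d in head}            # documents assumed hashable (e.g. passage ids)
    for a in head:
        for b in head:
            if a is not b:
                agg[a] += score_pairwise(query, a, b)
    head = sorted(head, key=lambda d: agg[d], reverse=True)
    return head + tail
```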
### Fusion Setup
After the re-ranking stage, we have a total of six re-ranking runs, since we have three different versions of reformulated queries (G0, G1, and Hqe) and two different first-ranking methods (sparse and dense), where "G" stands for the generative neural reformulator; in our case, we use the T5 rewriter specifically. We consider both "G0" and "G1", where "G0" stands for the most probable output and "G1" stands for the second most probable output. Although we observe from Table 3 that fusing more runs using different outputs of the generative reformulator could benefit retrieval performance, due to the time restriction and for the simplicity of our method we do not fuse runs using top-probable sentences generated by the reformulator when creating our submitted runs. The following section includes a detailed description of the four submitted runs and how we performed the fusion step.
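The reciprocal rank fusion step itself can be sketched as follows; \(k=60\) is a commonly used constant for RRF, and since the exact constant used in our runs is not stated here, it should be read as an assumption.

```python
# Reciprocal rank fusion of several ranked lists of document ids.
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    scores = defaultdict(float)
    for ranking in ranked_lists:                      # each list is ordered best-first
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# e.g. fused = reciprocal_rank_fusion([run_g0_dense, run_g1_sparse, run_hqe_dense])
```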
## 4 Submitted Runs
For TREC CAsT 2022 participation, our team finally submitted two automatic runs, UDInfo-best2021 and UDInfo-onlyd, and two mixed-initiative runs, UDInfo-mi-b2021 and UDInfo-onlyd-mi.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline & Recall@500 & Recall@1000 & Recall@3000 \\ \hline A non-tuning run & 0.6664 & 0.7121 & 0.7790 \\ \hline The best run(k1=1.24, b=0.9). & **0.6730** & **0.7518** & **0.8101** \\ \hline \hline \end{tabular}
\end{table}
Table 4: The improvement in retrieval performance using the aforementioned BM25 parameters compared with the default settings in the TREC CAsT 2021 dataset
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline No. sentences to fusion & Recall@500 & MAP@500 & NDCG@500 & NDCG@3 \\ \hline
1 & 0.4854 & 0.1294 & 0.3252 & 0.2429 \\ \hline
3 & 0.5663 & 0.1376 & 0.3662 & 0.2466 \\ \hline
5 & 0.5793 & 0.1431 & 0.3758 & 0.2607 \\ \hline
7 & 0.5845 & **0.1503** & **0.3833** & **0.2722** \\ \hline
10 & **0.6037** & 0.1423 & 0.3804 & 0.2587 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The retrieval performances on TREC CAsT 2021 dataset of runs fusing different number of top-probable sentences in the first ranking stage (using BM25 as the ranking function)
obtain user feedback for the TREC CAsT 2021 dataset, we only report the two pipelines using the CANARD-T5 or HQE query reformulators. The details of how these runs were created are described later.
From these results we can observe the following. First, although the two methods we used for creating runs achieve a higher Recall@1000 on the TREC CAsT 2021 dataset than on the TREC CAsT 2022 dataset, we obtain a higher NDCG@3 on the TREC CAsT 2022 dataset. This could indicate that the ranking methods we used in the first ranking stage, which mainly focus on increasing recall, handle the TREC CAsT 2021 dataset better than the TREC CAsT 2022 one. Conversely, the ranking methods we used in the second ranking stage, which mainly focus on increasing NDCG, handle the TREC CAsT 2022 dataset better than the 2021 one. Secondly, the methods using the mixed-initiative query reformulation module achieve much higher retrieval performance than the runs using CANARD-T5 and HQE, which indicates that incorporating mixed-initiative interaction into conversational passage retrieval systems has the potential to improve retrieval performance2.
Footnote 2: Since some of the original answers we received were of unexpectedly bad quality (for example, many answers were "This question is not related to my search."), we manually replaced those bad-quality answers by mimicking the behavior of a user. The answer file, which includes the answers we used for query reformulation and our generated questions, can be found in this link.
* Run #1 (UDInfo-best2021): reciprocal fusion of the three methods ranked highest on NDCG@3 on the TREC CAsT 2021 dataset (we fuse only the top three runs because, in our experiments on the TREC CAsT 2021 dataset, fusing more runs only harmed performance), which are:
* Using "G1" as the query reformulation method; sparse first ranking stage; Pointwise and Pairwise re-ranking.
* Using "G0" as the query reformulation method; dense first ranking stage; Pointwise and Pairwise re-ranking.
* Using "Hqe" as the query reformulation method; dense first ranking stage; Only re-ranking on Pointwise method.
* Run #2 (UDInfo-mi-b2021): uses the clarification answers from users after the system proactively elicits them; everything else remains the same as Run #1.
* Run #3 (UDInfo-onlyd): reciprocal fusion of all three dense methods, which are:
* Using "G1" as the query reformulation method; dense first ranking stage; Pointwise and Pairwise re-ranking.
* Using "G0" as the query reformulation method; dense first ranking stage; Pointwise and Pairwise re-ranking.
* Using "Hqe" as the query reformulation method; dense first ranking stage; Pointwise and Pairwise re-ranking.
* Run #4 (UDInfo-onlyd-mi): uses the clarification answers from users after the system proactively elicits them; everything else remains the same as Run #3.
## 5 Conclusion
In this paper, we introduced our multi-stage retrieval pipeline that can tackle conversational search tasks. Our pipeline consists of four stages: query reformulation, first ranking, re-ranking, and fusion. In addition to the multi-stage retrieval pipeline, we also introduced our implementation of mixed-initiative interaction on query reformulation, where we design an algorithm to generate questions and seek answers from users to explicitly resolve the ambiguities in the raw queries. In the future, we will explore more methods that can enable mixed-initiative interactions, which can possibly benefit retrieval performance in conversational search. |
2303.04516 | Time-Optimal Control via Heaviside Step-Function Approximation | Least-squares programming is a popular tool in robotics due to its simplicity
and availability of open-source solvers. However, certain problems like sparse
programming in the $\ell_0$- or $\ell_1$-norm for time-optimal control are not
equivalently solvable. In this work, we propose a non-linear hierarchical
least-squares programming (NL-HLSP) for time-optimal control of non-linear
discrete dynamic systems. We use a continuous approximation of the heaviside
step function with an additional term that avoids vanishing gradients. We use a
simple discretization method by keeping states and controls piece-wise constant
between discretization steps. This way, we obtain a comparatively easily
implementable NL-HLSP in contrast to direct transcription approaches of optimal
control. We show that the NL-HLSP indeed recovers the discrete time-optimal
control in the limit for resting goal points. We confirm the results in
simulation for linear and non-linear control scenarios. | Kai Pfeiffer, Quang-Cuong Pham | 2023-03-08T11:13:49Z | http://arxiv.org/abs/2303.04516v3 | # Time-Optimal Control via Heaviside Step-Function Approximation
###### Abstract
Least-squares programming is a popular tool in robotics due to its simplicity and availability of open-source solvers. However, certain problems like sparse programming in the \(\ell_{0}\)- or \(\ell_{1}\)-norm for time-optimal control are not equivalently solvable. In this work we propose a non-linear hierarchical least-squares programming (NL-HLSP) for time-optimal control of non-linear discrete dynamic systems. We use a continuous approximation of the heaviside step function with an additional term that avoids vanishing gradients. We use a simple discretization method by keeping states and controls piece-wise constant between discretization steps. This way we obtain a comparatively easily implementable NL-HLSP in contrast to direct transcription approaches of optimal control. We show that the NL-HLSP indeed recovers the discrete time-optimal control in the limit for resting goal points. We confirm the results in simulation for linear and non-linear control scenarios.
## I Introduction
Time-optimal control can be considered a powerful tool when as-fast-as-possible task fulfillment of a dynamic system is desired. However, optimal control methods based on direct methods for problem discretization are not easily implementable [1] or their solution relies on proprietary software [2]. In this work we propose a non-linear hierarchical least-squares programming (NL-HLSP) that can be easily implemented (\(\sim\)20 lines of code if a non-linear solver and a task library are available) and that is provably convergent to the true discrete time-optimal control in the limit and for resting goal points (that is, the system is able to remain at these points; for example, a robot is not able to remain at a Cartesian point if at the same time the desired velocity is not zero).
Optimal control is the problem of identifying a control \(u\) such that a dynamic system with states \(x\) is driven to a desired goal state \(f_{d}\) while minimizing a defined cost on control \(u\), the state \(x\) or linear / non-linear functions of it. A desired goal is thereby a state that the system should end up in while considering constraints on the controls \(u\) and / or the states \(x\). This boils down to a constrained optimization problem with Ordinary Differential Equations (ODE), that describe the dynamic system behavior, and algebraic equations, that describe physical relations [3].
Analytic solutions to certain specifications of the continuous optimal control problem have been proposed. A solution to the linear quadratic regulator (LQR) with linear dynamics can be found in [4]. The authors in [5] propose a solution with limits on the controls. However, for complicated non-linear systems, and especially constrained problems, analytical solutions are usually too difficult or impossible to formulate. Instead, in order to be machine solvable the original optimal control problem can be discretized and then be solved as a non-linear programming (NLP), commonly referred to as direct transcription method [6]. Different methods have been proposed which differ in the polynomials and collocation points (at which the functions are evaluated) that are used to approximate the continuous controls and states. The authors in [7] use Legendre-Gauss-Radau quadrature as it allows easy constraint formulation and is shown to possess high stability for systems of high order ODE's. Legendre-Gauss-Lobatto points however provide the smallest interpolation error in a least-squares sense [8]. The open-source matlab implementation GPOPS of a direct transcription method is described in [1]. However, the corresponding C++ implementation [2] is proprietary and furthermore relies on the proprietary NLP solver SNOPT [9].
A specific form of optimal control is time-optimal control [10]. Here the cost function specifically aims to minimize the time at which a desired goal state is reached. The resulting control usually exhibits a 'bang-bang' profile, that is a limit control for given bounds on the control inputs [11].
Time-optimal control is a complex problem, especially when the controls and states are considered at the same time, for example for whole-body robot trajectory optimization [12]. One way to reduce its complexity is to reduce the discrete time-optimal control problem to a simpler one by only considering the controls while the states are assumed to be known. This method, referred to as time-optimal path parametrization, has seen significant leaps in the recent past in terms of accuracy, convergence and computational complexity [13]. However, how to choose the states is oftentimes not entirely clear but might be taken for example from kinematic solutions or motion capture.
Aside direct collocation methods, optimal control can also be discretized to reasonable accuracy for example by the explicit Euler-method [14, 15, 16]. In this case time-optimal control can be considered a mixed-integer non-linear programming [17] where the discrete optimal time is represented as an integer and the corresponding states and controls are continuous. Such problems, even in the linear case [18] or for example expressed as optimization problems with \(\ell_{0}\)-norm cost functions for sparsity enhancing linear regression [19], are very expensive to solve due to their combinatorial nature. A simplified approach only considering a robot's end-effector position in a traveling salesman scenario has been treated in [20]. Another approach is to turn the \(\ell_{0}\)- into a
weighted \(\ell_{1}\)-norm optimization problem. Thereby, different weights have been proposed in the literature [19, 21, 22] which aim to represent the original problem as closely as possible. This approach has been borrowed for discrete linear dynamic systems for example in [14] with a globally converging weight series. A similar approach is followed in [23], confirming that \(\ell_{1}\)-norm optimization can retrieve an equivalent solution to discrete time-optimal control. The authors in [24] propose a similar method based on a sliding window but formulated as an \(\ell_{2}\)-norm optimization problem. The window is iteratively shifted until the time-optimal control is contained within. However, no global convergence guarantee is given.
Control problem formulation as least-squares problems (\(\ell_{2}\)-norm) is oftentimes appropriate and sufficient in robot control and planning [25, 26] and enjoys great popularity because of its simplicity and availability of open-source solvers [27]. A special formulation of least-squares programming is hierarchical least-squares programming [28]. Here constraints and objectives can be further prioritized within such that a more efficient robot control can be achieved [29]. Especially the separation of regularization tasks is oftentimes very helpful as the actual objectives can be fulfilled to higher accuracy as has been demonstrated in [30]. However, there are problem settings that are not equivalently solvable in the \(\ell_{2}\)-norm. An example would be the above mentioned problem in the \(\ell_{1}\)-norm for discrete time-optimal control [23, 14]. In this work we propose a globally converging approximation of the discrete time-optimal control problem in the \(\ell_{2}\)-norm based on the approximate heaviside function [31]. The heaviside function has been treated in different works. The authors in [32] use a heaviside step-function approximation in order to indicate mechanical stress violations in topology optimization. The authors in [33] directly optimize over the discontinuous Heaviside step-function and propose an appropriate Newton's method to do so.
Our contribution is therefore threefold:
* Discrete time-optimal control as an easily implementable NL-HLSP that can be solved by off the shelf non-linear least-squares solvers.
* Applicability to both linear and non-linear systems.
* Global convergence for discrete time-optimal control formulated as a least-squares problem.
First, we outline continuous time-optimal control and our discretization of it, see sec. II. Secondly, we describe our approximated discrete time-optimal control problem and the weight function based on the heaviside step-function approximation (sec. III). In sec. IV we show the equivalence to the true discrete time-optimal control problems in the limit and for resting goal points. Lastly, the algorithm is evaluated for linear and non-linear discrete dynamic systems (sec. V).
## II Problem definition
In this work we consider time-optimal control (TOC) problems of the form
\[\begin{aligned}\min_{x,u,T}\quad&\int_{0}^{T}dt\qquad\text{(TOC)}\\ \text{s.t.}\quad&\dot{x}(t)=f_{\mathsf{dyn}}(x(t),u(t))\\ &f_{\mathsf{ter}}(x(T))=0\\ &x(t),u(t)\in\Omega\end{aligned}\]
The states \(x(t)\in\mathbb{R}^{n_{x}}\) and \(u(t)\in\mathbb{R}^{n_{u}}\) are continuous in time \(t\). The goal is to minimize the time \(T\) that it takes to reach the terminal (ter.) state \(f_{\mathsf{ter}}(x(T))=0\). We define
\[f_{\mathsf{ter}}(x(t))\coloneqq f_{\mathsf{task}}(x(t))-f_{d}(t) \tag{1}\]
where \(f_{\mathsf{task}}(x(t))\) is some task function and \(f_{d}\) is the corresponding desired value. The dynamics \(f_{\mathsf{dyn}}\) determine the behavior of the control system by some possibly non-linear relationship. \(\Omega\) is a constraint polytope which both the states and controls are constrained to.
The problem can be discretized, for example, by direct transcription methods. These methods provide an equivalent solution to the original continuous optimal control problem (TOC) with strong convergence and stability properties [7]. However, they are not easily implementable or rely on proprietary software. We therefore discretize our problem with the explicit Euler method by assuming piece-wise constant states and controls over each discretization instance \(i=0,\ldots,N-1\) [14, 15, 16] (in contrast to the higher-order polynomials used in direct transcription methods). We refer to the following optimization problem as the 'true discrete time-optimal control' (DTOC) throughout the remainder of the paper:
\[\begin{aligned}\min_{x,u,N^{*}}\quad&T=N^{*}\Delta t\qquad\text{(DTOC)}\\ \text{s.t.}\quad&f_{\mathsf{dyn}}(x(0),u(0),x(1),\ldots,u(N-1),x(N))=0\\ &f_{\mathsf{ter}}(x(N^{*}+1),x)=0\\ &x(0)=x_{0}\\ &x(i+1),u(i)\in\Omega,\quad i=0,\ldots,N-1\end{aligned}\]
\(\Delta t\) is the discretization time step. \(N\) is the number of collocation points of the discrete problem. \(N^{*}\) is the last time step at which \(f_{\mathsf{ter}}(x(N^{*}),x)\neq 0\), while at the subsequent step we have \(f_{\mathsf{ter}}(x(N^{*}+1),x)=0\). The dependence \(f_{\mathsf{ter}}(x(i),x)\) indicates that \(f_{\mathsf{ter}}\) necessarily depends on \(x(i)\) but possibly also on \(x\) (for example in the case of finite differences or Euler integration schemes). The discrete states \(x\) and controls \(u\) are defined as
\[x\coloneqq\big[x(1)^{T}\ \cdots\ x(N)^{T}\big]^{T},\qquad u\coloneqq\big[u(0)^{T}\ \cdots\ u(N-1)^{T}\big]^{T}\]
The initial state is given by \(x(0)=x_{0}\in\mathbb{R}^{n_{x}}\).
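A minimal sketch of this discretization, with placeholder names for the continuous dynamics, is:

```python
# Illustrative sketch of the discretization used in (DTOC): states and controls are
# held piece-wise constant over each step and propagated with the explicit Euler
# method. `f` is the continuous dynamics x_dot = f(x, u); all names are placeholders.
import numpy as np

def euler_rollout(f, x0, controls, dt):
    """Return the discrete state trajectory x(0), ..., x(N) for piece-wise constant controls."""
    xs = [np.asarray(x0, dtype=float)]
    for u in controls:                            # u(i), i = 0, ..., N-1
        xs.append(xs[-1] + dt * f(xs[-1], u))     # x(i+1) = x(i) + dt * f(x(i), u(i))
    return np.stack(xs)
```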
DTOC can be solved by means of \(\ell_{0}\) optimization. However, due to its combinatorial nature an approximation based on the \(\ell_{1}\)-norm is usually solved instead. Different weight functions have been proposed which aim to approximate the original \(\ell_{0}\)-problem as closely as possible. However, such \(\ell_{1}\)-problems cannot be equivalently solved by least-squares programming. The reason is that the Hessian of the Taylor approximation of the problem is always positive-definite and not zero as required by \(\ell_{1}\) programming (\(\ell_{1}\) problems can, for example, be solved by quadratic programming,
or specialized linear programming solvers). Least-squares programming is a very popular tool due to its simplicity and the availability of open-source solvers. Furthermore, it allows for hierarchical optimization, a tool that has gained considerable attention in the recent past, especially in the context of robot control. In this work we therefore propose an NL-HLSP for time-optimal control which is applicable to both linear and non-linear dynamic systems and task functions.
## III Approximate discrete time-optimal control via heaviside step-function approximation
In this section we recast DTOC into an NL-HLSP of the following form:
\[\begin{aligned}\min_{z,v_{l}}\quad&\frac{1}{2}\left\|v_{l}\right\|^{2},\qquad l=1,\ldots,p\qquad\text{(NL-HLSP)}\\ \text{s.t.}\quad&f_{l}(z)\;\leqq\;v_{l}\\ &\underline{f}_{l-1}(z)\;\leqq\;\underline{v}_{l-1}^{*}\end{aligned}\]
Each of the \(p\) priority levels contains constraints of the form \(f_{l}(z)\;\leqq\;v_{l}\). The symbol \(\leqq\) indicates both equality and inequality constraints. \(v_{l}\) is a slack variable of level \(l\) that is minimized in a least-squares sense while being subject to the constraints of the levels \(1\) to \(l\) and while not increasing the already identified optimal slacks \(\underline{v}_{l-1}^{*}=\left[{v_{1}^{*}}^{T}\;\cdots\;{v_{l-1}^{*}}^{T}\right]^{T}\) of the previous levels \(1\) to \(l-1\) [30, 34]. This problem can be solved by sequential hierarchical least-squares programming (S-HLSP) as proposed in [29, 35].
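To illustrate the priority structure only, the following toy sketch solves a two-level hierarchy of linear equality tasks by restricting the second level to the null space of the first; it is not the S-HLSP solver of [29, 35] and handles neither inequalities nor non-linearities.

```python
# Toy illustration of lexicographic (hierarchical) least squares for two levels of
# linear equality tasks A1 z ≈ b1 (priority 1) and A2 z ≈ b2 (priority 2):
# level 2 is optimized only within the null space of A1, so the optimal level-1
# residual v1* cannot be increased.
import numpy as np

def two_level_hlsp(A1, b1, A2, b2, tol=1e-10):
    z1, *_ = np.linalg.lstsq(A1, b1, rcond=None)     # fixes the optimal slack v1*
    _, s, Vt = np.linalg.svd(A1)                     # null-space basis of A1
    N = Vt[(s > tol).sum():].T
    if N.size == 0:                                  # A1 has full column rank: z is unique
        return z1
    y, *_ = np.linalg.lstsq(A2 @ N, b2 - A2 @ z1, rcond=None)
    return z1 + N @ y                                # improves level 2, keeps level 1 optimal
```

The full solver additionally requires trust-region and step-filter machinery to handle the inequality constraints and non-linear task functions of the NL-HLSP above.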
Concretely, the approximate discrete time-optimal control (ADTOC) as an NL-HLSP is given in tab. I. The variable vector is chosen as \(z=\begin{bmatrix}x^{T}&u^{T}&N^{*}\end{bmatrix}^{T}\).
with a small weight. However, this would negatively influence the task performance of the second level [30].
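For intuition, a sketch of the weighting idea is given below; the exact weight of tab. I is not reproduced here, and the form \(h(i,N^{*})=0.5(1+\tanh(k(i-N^{*})))\) together with the factor \((i-N^{*}+1)\) is an assumption consistent with the derivatives used in sec. IV.

```python
# Sketch of the weighting idea: a smooth step h that "switches on" the terminal
# constraint after N*, multiplied by the linear term (i - N* + 1) that avoids
# vanishing gradients. The exact weight w used in tab. I is not reproduced;
# the tanh-based step below is an assumed approximation.
import numpy as np

def smooth_step(i, n_star, k=4.0):
    return 0.5 * (1.0 + np.tanh(k * (i - n_star)))

def weight(i, n_star, k=4.0):
    return smooth_step(i, n_star, k) * (i - n_star + 1.0)
```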
## IV Convergence of the approximate discrete time-optimal control problem
In this section we investigate the convergence behavior of ADTOC (tab. I). Similarly to [3] we proceed from a numerical standpoint. We show that the true time-optimal control \(\hat{u}\) and \(\hat{x}\) pose a global KKT point to tab. I. For this we look at the following simplified optimization problem by assuming \(x,u\in\Omega\), where \(\Omega\) is the feasible constraint polytope with respect to the control, state and dynamics constraints:
\[\min_{x,u\in\Omega,\,N^{*}}\ \frac{1}{2}\left\|\begin{bmatrix}N^{*}\Delta t\\ w(0,N^{*})f_{\text{ter}}(x)\\ \vdots\\ w(N-1,N^{*})f_{\text{ter}}(x)\end{bmatrix}\right\|^{2} \tag{7}\]
We assume that \(N>\hat{N}^{*}\) is reasonably chosen such that a feasible solution to the time-optimal control problem exists. Therefore, in the following we only consider the unconstrained version of (7) (we could argue that, for example, controls at the limits are simply removed from the optimization problem in the sense of null-space methods [35]). The corresponding first order optimality conditions \(K_{x,u,N^{*}}\coloneqq\nabla_{x,u,N^{*}}\mathcal{L}=0\) derived from the Lagrangian \(\mathcal{L}\coloneqq\frac{1}{2}\|\begin{bmatrix}N^{*}\Delta t&\cdots&w(N-1,N^{*})f_{\text{ter}}(x(N-1))^{T}\end{bmatrix}^{T}\|^{2}\) are
\[\begin{bmatrix}K_{x}\\ K_{u}\\ K_{N^{*}}\end{bmatrix}\coloneqq\begin{bmatrix}\sum_{i=0}^{N-1}w(i,N^{*})^{2}f_{\text{ter}}(x(i))^{T}\nabla_{x}f_{\text{ter}}(x(i))\\ \sum_{i=0}^{N-1}w(i,N^{*})^{2}f_{\text{ter}}(x(i))^{T}\nabla_{u}f_{\text{ter}}(x(i))\\ N^{*}\Delta t^{2}+\Sigma(x,N^{*})\end{bmatrix}=0\]
\(\Sigma(x,N^{*})\) is defined as
\[\Sigma(x,N^{*})\coloneqq\sum_{i=0}^{N-1}w(i,N^{*})\nabla_{N^{*}}w(i,N^{*})\|f_{\text{ter}}(x(i))\|^{2} \tag{8}\]
We now proceed as follows. We first show that the solution \(N^{*}\) is contained within the horizon \(0\leq N^{*}\leq N\), see sec. IV-A. This is important for algorithm coherence since a negative \(N^{*}\) would be an irrational solution. Similarly, \(N^{*}>N\) would mean that the terminal constraint \(f_{\text{ter}}\) vanishes from the optimization problem. We then proceed by showing the convergence of ADTOC (tab. I) to DTOC in the limit and for resting goal points, see sec. IV-B.
### _Coherent solution \(0\leq N^{*}\leq N\)_
We can make following statement about \(\Sigma(x,N^{*})\):
**Theorem 1**: _The sum \(\Sigma(x,N^{*})\) is negative for all \(i\) for given \(N^{*}\) and \(k=k_{\epsilon}\)._
We look at the expression
\[\sum_{i=0}^{N-1}w(i,N^{*})\nabla_{N^{*}}w(i,N^{*})=\sum_{i=0}^{N-1}\left(\nabla_{N^{*}}h(i,N^{*})-\frac{h(i,N^{*})^{2}}{i-N^{*}+1}\right)(i-N^{*}+1)^{2k}<0 \tag{9}\]
The first term is negative since \(\nabla_{N^{*}}h(i,N^{*})<0\) for \(k=k_{\epsilon}\). Secondly, the expression \((i-N^{*}+1)^{2k-1}\) is negative for all \(i<N^{*}-1\). Since \(h(i<N^{*}-1,N^{*})<\epsilon\) vanishes for \(i<N^{*}-1\) with \(k_{\epsilon}\) the second expression is negative as well.
We show in the evaluation section V that we achieve good convergence even with a rather large \(\epsilon\) (and therefore without strict negativity of \(\Sigma(x,N^{*})\)).
Importantly, due to the additional term \((i-N^{*}+1)\) in \(w\), any \(f_{\text{ter}}(x(i))\neq 0\) with \(i>N^{*}\) contributes to the sum \(\Sigma(x,N^{*})\). This would not be the case for the pure heaviside function with \(k=k_{\epsilon}\). There, any Newton step \(K_{N^{*}}(u+\Delta u)\), \(K_{N^{*}}(x+\Delta x)\) or \(K_{N^{*}}(N^{*}+\Delta N^{*})\) would not lead to a reduction of the error \(w(i,N^{*})f_{\text{ter}}(x(i))\neq 0\) with \(i>N^{*}\).
The reason why we do not make the expression \((i-N^{*}+1)^{k}\) fully positive over the range \(0\leq N^{*}\leq N\) (for example by \((i-N^{*}+N)\)) is that we would create a function without a crossing at \((N^{*},0.5)\) (see fig. 1 for \(h(t-N^{*})\)). With our chosen value 1 we still approximate the original desired function \(h(t-N^{*})\) while not neglecting the value \(h(N^{*},N^{*})(N^{*}-N^{*}+1)=0.5\) at \(i=N^{*}\).
We now make a statement about the range of the obtained solution \(N^{*}\).
**Theorem 2**: _The obtained \(N^{*}\) is within the range \(0\leq N^{*}\leq N\) for \(k=k_{\epsilon}\)._
The Newton step \(\Delta N^{*}\) with respect to \(N^{*}\) is
\[K_{N^{*}}(N^{*}+\Delta N^{*})\approx K_{N^{*}}(N^{*})+\Delta N^{*}\nabla_{N^{*}}K_{N^{*}}(N^{*})=N^{*}\Delta t^{2}+\Sigma(x,N^{*})+\Delta N^{*}\Big(\Delta t^{2}+\sum_{i=0}^{N}\left(\nabla_{N^{*}}w(i,N^{*})^{2}+w(i,N^{*})\nabla_{N^{*}}^{2}w(i,N^{*})\right)\|f_{\text{ter}}(x(i))\|^{2}\Big) \tag{10}\]
For \(k=k_{\epsilon}\) and \(N^{*}<0\) the above sum is positive for all \(i\) and given \(N^{*}\). More specifically, we have for the second derivative \(\nabla_{N^{*}}^{2}w(i,N^{*})=0.5k(1-\tanh(k(i-N^{*}))^{2})>0\) since with (5) and \(k=k_{\epsilon}\) any (higher order) derivative of
\(h\) is zero for \(i\neq N^{*}\). The Newton step with respect to \(N^{*}\) then becomes
\[\Delta N^{*}=\frac{-N^{*}\Delta t^{2}-\Sigma(x,N^{*})}{\sum_{i=0}^{N}\Delta t^{2}+\nabla_{N^{*}}^{2}w(i,N^{*})}>0 \tag{11}\]
with both positive numerator (using theorem 1 for \(\Sigma(x,N^{*})<0\)) and denominator. This means that for any negative \(N^{*}\) we get a positive Newton step \(\Delta N^{*}\) until \(N^{*}+\Delta N^{*}\geq 0\). For \(N^{*}>N\) the sum in (10) vanishes entirely. The resulting Newton step \(\Delta N^{*}<0\) is negative such that \(N^{*}+\Delta N^{*}=N\).
The latter ensures that the constraint \(f_{\text{ter}}=0\) never entirely vanishes from the optimization problem.
### _Convergence to the true discrete time-optimal control_
We first make a statement about the possible set of KKT points within the given control horizon.
**Theorem 3**: _Any \(0\leq N^{*}\leq N\) poses a KKT point to ADTOC (tab. I) if any \(\|f_{\text{ter}}(x(i))\|^{2}>0\), with \(i\geq N^{*}\) and some \(x\), poses a feasible point to the constraint polytope \(\Omega\)._
Since \(N^{*}\Delta t^{2}>0\) and \(\Sigma(x,N^{*})<0\) we get \(K_{N^{*}}=0\) for any \(N^{*}\) if we can find a corresponding \(x\), \(u\in\Omega\) (and therefore \(\|f_{\text{ter}}(x)\|\)).
With this foundation let's look at the overall convergence behavior of ADTOC (tab. I).
**Theorem 4**: _ADTOC (tab. I) represents DTOC if \(\Delta t\to 0\), \(k=k_{\epsilon}\) and the desired point \(f_{d}\) is a resting point such that \(\|f_{\text{ter}}(i>\hat{N}^{*})\|^{2}=0\)._
Assume the time-optimal control \(\hat{u}\) and state \(\hat{x}\) such that \(\|f_{\text{ter}}(i>\hat{N}^{*})\|^{2}=0\) (which implies a resting goal point \(f_{d}\)).
First, we consider the case \(N^{*}\geq\hat{N}^{*}+1\). Then the sum \(\Sigma_{k_{\epsilon}}(x,N^{*})\) (8) vanishes for all \(N^{*}\geq\hat{N}^{*}+1\) since \(\|f_{\text{ter}}(i>\hat{N}^{*})\|^{2}=0\). The resulting Newton step (10) is negative, \(\Delta N^{*}<0\), and drives \(N^{*}\to\hat{N}^{*}+1\).
Secondly, we consider the case \(N^{*}\leq\hat{N}^{*}\). The corresponding Lagrangian becomes \(\mathcal{L}(N^{*})=0.5(N^{*}\Delta t)^{2}+\sum_{i=0}^{N^{*}}w(i,N^{*})^{2}\|f_{\text{ter}}(x(i))\|^{2}\). We now consider the limit case \(\Delta t\to 0\). We then have \(\mathcal{L}(N^{*})=\sum_{i=0}^{N^{*}}w(i,N^{*})^{2}\|f_{\text{ter}}(x(i))\|^{2}>0\). Furthermore, for \(N^{*}=\hat{N}^{*}\) we have \(\mathcal{L}(\hat{N}^{*})=(\hat{N}^{*}\Delta t)^{2}\) and in the limit \(\mathcal{L}(\hat{N}^{*})\to 0\). Since \(\mathcal{L}(N^{*})\geq 0\) for all \(N^{*}\in\mathbb{Z}_{\geq 0}\) this is a global minimum \(\min\mathcal{L}\). This means that a global KKT point lies within the interval \(N^{*}\in]\hat{N}^{*},\hat{N}^{*}+1[\) where \(\min\mathcal{L}\) is attained. Due to theorem 3 we therefore find an optimal point \(\hat{x},\hat{u}\in\Omega\) if it exists.
Now assume a sub-optimal (s) \(\hat{u}_{s}\) and \(\hat{x}_{s}\) that is optimal to \(\hat{N}^{*}_{s}>\hat{N}^{*}\) with \(\|f_{\text{ter}}(i>\hat{N}^{*}_{s})\|^{2}=0\) but sub-optimal to \(\hat{N}^{*}\). We then clearly have for the sub-optimal Lagrangian \(\mathcal{L}_{s}(\hat{N}^{*}_{s})=\hat{N}^{*}_{s}\Delta t^{2}>\mathcal{L}(\hat{N}^{*})=\hat{N}^{*}\Delta t^{2}\). This means that the true time-optimal \(\hat{N}^{*}\) poses a global optimum to ADTOC (tab. I).
Note that in the above we do not make any assumptions about the structure of \(f_{\text{dyn}}\) or \(f_{\text{ter}}\). Our algorithm is therefore applicable to both linear and non-linear systems, as we demonstrate in the evaluation sec. V.
## V Evaluation
We apply our method to both a linear point-mass and a non-linear robot manipulator control example. In order to solve the NL-HLSP's we use the globally converging sequential hierarchical least-squares programming solver S-HLSP with trust-region and hierarchical step-filter proposed in [35]. It is based on the sparse HLSP solver s-\(\diagup\)VIPM-HLSP [36, 35]. All gradients and Hessians of the dynamics and task functions are computed analytically, for example according to [37, 38]. We choose \(\Delta t=0.1\) s, \(N=25\) (or \(\Delta t=0.01\) s with \(N=100\) and \(N=80\)) and \(k=4\) with \(\epsilon=3.4\cdot 10^{-4}\). While \(\epsilon\) is relatively large (meaning that \(\Sigma(x,N^{*})\) can become positive, in contrast to theorem 1) we did not experience any negative influence on the algorithm convergence.
In order to confirm our results we compare our algorithm to the two optimization problems given in tab. II. Both problems' decision variables do not include \(N^{*}\) which is rather fixed at a chosen value. The first problem therefore corresponds to DTOC with the terminal constraint at \(N^{*}+1\). The other optimization problem is a 'padded' version of DTOC with terminal constraints on the collocation points \(N^{*}+1\) to \(N\). This reproduces the behavior of ADTOC (tab. I) which can solve the original problem (DTOC) equivalently only if the desired state \(f_{d}\) is a resting goal point (see theorem 4). Furthermore, we report the results of GPOPS [1] which implements a direct transcription method for optimal control problems of the form (TOC). The resulting NLP is solved by SNOPT [9].
In order to preserve sparsity we set \(w\) to zero when it is close to 0 up to a numerical threshold
\[w(t,N^{*})\leftarrow\begin{cases}0&\text{if }w(t,N^{*})<10^{-20}\\ w(t,N^{*})&\text{otherwise}\end{cases} \tag{12}\]
Note that we do not cut values that are close to 1 since this would reintroduce the discontinuity that we are trying to circumvent from the begin with.
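A sketch of this thresholding rule:

```python
# Sketch of the sparsity thresholding in (12): weights that are numerically zero are
# clipped to exactly zero, while values near 1 are kept to preserve smoothness.
import numpy as np

def threshold_weights(w, cutoff=1e-20):
    w = np.asarray(w, dtype=float)
    return np.where(w < cutoff, 0.0, w)
```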
We display the variable \(T^{*}=N^{*}\Delta t\), which indicates the optimal time at which still \(f_{\text{ter}}\neq 0\) (the time \((N^{*}+1)\Delta t\) is the first time at which \(f_{\text{ter}}=0\)).
\begin{table}
\begin{tabular}{l l} \hline \hline \(l\) & \(f_{l}(x,u)\leq v_{l}\) \\ \hline \multicolumn{3}{c}{with \(t=0,\ldots,N-1\)} \\
1 & \(\begin{bmatrix}u_{t}-u_{\max}\\ u_{\min}-u_{t}\end{bmatrix}\leq v_{1,t+1}\) \\
1 & \(f_{\text{dyn}}(x_{0},u_{0},\ldots,u_{N-1},x_{N})=v_{1}\) \\
2 & \(f_{\text{ter}}(x(N^{*}+1),x)=v_{2}\) \\
2 & \(\begin{bmatrix}f_{\text{ter}}(x(N^{*}+1),x)\\ \vdots\\ f_{\text{ter}}(x(N),x)\end{bmatrix}=v_{2}\) \\
3 & Regularization of \(x\), \(u\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: DTOC (blue) and ‘padded’ DTOC (red); \(N^{*}\) is fixed.
### _Point mass_
First we apply our method on a simple point mass of \(m=1\) kg moving along a horizontal line with single-input force \(u\) and state \(x=\begin{bmatrix}q&\dot{q}\end{bmatrix}\). The state \(q\) is the position of the point-mass. The state \(\dot{q}\) is the point-mass velocity. The initial state is chosen as \(x_{0}=\begin{bmatrix}1&0\end{bmatrix}\). The point-mass dynamics are linear in the state \(x\) and control \(u\) (using explicit Euler integration \(x_{i+1}=x_{i}+\Delta t\dot{x}_{i}\) and \(m\ddot{q}_{i}=u_{i}\) with \(i=0,\ldots,N-1\))
\[f_{\text{dyn}}(i)=x_{i+1}+\begin{bmatrix}-1&-\Delta t\\ 0&-1\end{bmatrix}x_{i}-\begin{bmatrix}0\\ \Delta t/m\end{bmatrix}u_{i} \tag{13}\]
The control limit is chosen as \(10\) N. We set our task function to \(f_{\text{ter}}=x\) with \(f_{d}=0\).
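For illustration, the following sketch evaluates a candidate bang-bang profile on the discrete model (13); the switch and stop times are the analytic continuous-time values \(t_{s}=\sqrt{m\,q_{0}/u_{\max}}\approx 0.316\) s and \(2t_{s}\approx 0.632\) s, used here only as a sanity check and not as the solver output.

```python
# Sketch: rolling out the discrete point-mass model (13) under a candidate
# bang-bang control and printing the resulting terminal error.
import numpy as np

m, dt, N = 1.0, 0.01, 100
t_switch = np.sqrt(m * 1.0 / 10.0)     # analytic continuous-time switch time
t_stop = 2.0 * t_switch                # analytic continuous-time stop time

x = np.array([1.0, 0.0])               # initial state [q, qdot]
for i in range(N):
    t = i * dt
    u = -10.0 if t < t_switch else (10.0 if t < t_stop else 0.0)
    x = x + dt * np.array([x[1], u / m])   # explicit Euler step of (13)
print(np.linalg.norm(x))                   # terminal error of this candidate profile
```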
The results of ADTOC (tab. I) are given in fig. 2. \(N^{*}\) is identified as \(N^{*}=5.1\) for \(\Delta t=0.1\) s and \(N^{*}=6.2\) for \(\Delta t=0.01\) s. As can be seen from the top graph, the identified control trajectories (\(T^{*}_{\text{ADTOC},\Delta t=0.1}=0.51\) and \(T^{*}_{\text{ADTOC},\Delta t=0.01}=0.62\)) follow a bang-bang profile, which indicates time-optimality. For \(\Delta t=0.01\) s (computation time 4.5 s with 75 solver iterations, \(N=100\)) the control is at the limit \(-10\) N until control time 0.3 s and then at the limit \(10\) N from time 0.33 s. The control is zero from time 0.65 s onwards. A similar but less pronounced profile is observed for \(\Delta t=0.1\) s (computation time 0.7 s with 64 solver iterations, \(N=25\)), seemingly confirming our convergence theorem 4 of equivalence of ADTOC (tab. I) with DTOC for \(\Delta t\to 0\) (note that \(\Delta t\to 0\) is prohibitive due to memory limitations since \(N\rightarrow\infty\)).
For comparison, we give the results of GPOPS (100 transcription nodes). The corresponding control profile is very similar to that of our method with \(\Delta t=0.01\) s but with a slightly sharper switching point from control on the lower bound \(-10\) N to the upper bound \(10\) N. This leads to a sharp drop of \(\|f_{\text{ter}}\|\) at \(t=0.63\) s. On the other hand, the task error for our method \(T^{*}_{\text{ADTOC},\Delta t=0.01}=0.62\) stays elevated until 0.62 s but then declines sharply to \(\|f_{\text{ter}}\|\approx 10^{-11}\). For \(\Delta t\to 0\) (and therefore DTOC according to theorem 4) we would expect a behavior similar to GPOPS with a finite jump of \(\|f_{\text{ter}}\|\) to zero between the corresponding control iterations \(\hat{N}^{*}\) and \(\hat{N}^{*}+1\).
Furthermore, we give the results of the problem tab. II with fixed \(N^{*}=4\) and \(N^{*}=5\). For \(N^{*}=4\) (\(T^{*}_{tab.2,\Delta t=0.1}=0.4\)) the control profile is sharper than for \(T^{*}_{\text{ADTOC},\Delta t=0.1}=0.51\). The control then drops to zero at time 0.7 s. However, from the bottom graph it can be observed that the task error is only reduced to \(\|f_{\text{ter}}\|=0.1\). This is in contrast to the results for \(T^{*}_{tab.2,\Delta t=0.1}=0.5\) where the task error is reduced to \(\|f_{\text{ter}}\|=10^{-13}\) at time 0.6 s. However, the corresponding control profile is very conservative and does not follow a bang-bang profile. A good middle ground between time optimality and error reduction is identified for ADTOC (tab. I) \(T^{*}_{\text{ADTOC},\Delta t=0.1}=0.51\) (while not being as computationally expensive as \(T^{*}_{\text{ADTOC},\Delta t=0.01}=0.62\)). The task error at time 0.6 s is approximately of order \(\|f_{\text{ter}}\|=5\cdot 10^{-4}\) which can be considered sufficiently accurate for this control application.
The task error for \(T^{*}_{tab.2,\Delta t=0.1}=0.5\) declines the fastest with \(\|f_{\text{ter}}\|\approx 10^{-13}\) at time 0.6 s. However, while this is closest to the desired solution of DTOC it does not fulfill higher order constraints like zero acceleration or jerk at convergence. These are implicitly fulfilled for ADTOC (tab. I) and padded DTOC with fixed \(N^{*}\) in tab. II, depending on how much bigger \(N>N^{*}\). Note that this is a matter of defining a more embracing task function \(f_{\text{task}}\) including acceleration and jerk regularization. On the other hand, due to theorem 4 requiring a resting goal point \(f_{d}\), ADTOC (tab. I) is not able to achieve the same results as DTOC with fixed \(N^{*}\) (tab. II).
### _Planar manipulator with two joints_
In this section we want to identify the time-optimal control for moving a 2D planar manipulator with fixed base and two joints from its initial task space position \(f_{\text{task}}(x)=\begin{bmatrix}2&0\end{bmatrix}^{T}\) m to \(f_{d}=\begin{bmatrix}1&1\end{bmatrix}^{T}\) m. The robot's link lengths are given by \(L_{1}=1.25\) m and \(L_{2}=0.75\) m. The robot state is given by \(x=\begin{bmatrix}q_{0,0}&q_{1,0}&\dot{q}_{0,0}&\dot{q}_{1,0}&\cdots&\dot{q}_{0, T-1}&\dot{q}_{1,T-1}\end{bmatrix}\) where \(q_{0}\) and \(q_{1}\) are the joint angles of joint one and two, respectively. The initial state is given by \(x_{0}=\begin{bmatrix}0&0&0&0\end{bmatrix}\) rad. Both joints are actuated by the torques \(u=\begin{bmatrix}\tau_{1}&\tau_{2}\end{bmatrix}^{T}\). The joint torques are limited to \(5\) Nm. We use the dynamics
Fig. 2: Control \(u\), states \(x\) and norm of task error \(\|f_{\text{ter}}\|\) for a linear point mass moving on a plane for ADTOC (tab. I) (\(T^{*}_{\text{ADTOC}}\)) and problem tab. II (\(T^{*}_{tab.2}\)).
as described in [39] with link masses \(m_{1}=m_{2}=1\) kg for the Euler explicit integration scheme \(x_{i+1}=x_{i}+\Delta t\dot{x}_{i}\). Due to the more generic form of the dynamics of DTOC compared to the continuous one (TOC) we do not rely on matrix inversion in order to obtain an explicit formulation of the joint accelerations \(\ddot{q}_{1}\) and \(\ddot{q}_{2}\) (instead we use \(f_{\text{dyn}}=M(\dot{q}_{i+1}-\dot{q}_{i})+\Delta t\{M\ddot{q}_{i}\}\) where \(q=\begin{bmatrix}q_{1}&q_{2}\end{bmatrix}^{T}\), \(M\) is the mass matrix for example computed by the Newton-Euler equations together with the vector \(\{M\ddot{q}_{i}\}\) representing the forces acting on the robot).
The results are given in fig. 3. Our algorithm ADTOC (tab. I) with \(\Delta t=0.01\) s (\(T^{*}_{\text{ADTOC},\Delta t=0.01}=0.66\)) recovers a sharp bang-bang control profile for both \(\tau_{1}\) and \(\tau_{2}\) (computation time 22 s with 121 solver iterations, \(N=80\)). The control vanishes at approximately 0.72 s with \(\|f_{\text{ter}}\|\!\approx 10^{-15}\). This is in contrast to \(\Delta t=0.1\) s (\(T^{*}_{\text{ADTOC},\Delta t=0.1}=0.57\), computation time 1.5 s with 88 solver iterations, \(N=25\)) with the control and task error vanishing only at approximately 0.8 s.
Next, we report the results for GPOPS with 50 collocation points. We have \(T^{*}=1.04\) s (the last time with non-zero control and task error). As can be seen from the corresponding joint trajectory, a different inverse kinematics solution to the boundary condition \(f_{\text{ter}}=0\) is found which is less time-optimal than the one found for our method (note that there is an infinite number of solutions due to robot redundancy [40]). This is due to the non-linear optimizer not finding a better (global) solution to the problem. Nonetheless, the bang-bang control profile is time-optimal with respect to the identified joint trajectory.
Furthermore, the results of the problem tab. II with fixed \(N^{*}=5\) and \(N^{*}=6\) are given. For \(N^{*}=5\) (\(T^{*}_{tab.2,\Delta t=0.1}=0.5\)) we have a limit control at the upper bound 5 Nm. However, the task error at 0.6 s is only decreased to \(7\cdot 10^{-3}\). This is in contrast to the control profile of \(T^{*}_{tab.2,\Delta t=0.1}=0.6\) with the more conservatively chosen terminal point \(N^{*}=6\). There is enough time to reduce the error to \(2\cdot 10^{-7}\). However, from the conservative control profile it can be concluded that \(N^{*}=6\) is not time-optimal. Unlike for ADTOC (tab. I) and padded DTOC with known \(N^{*}\) (tab. II), acceleration and jerk constraints are not implicitly fulfilled for DTOC with known \(N^{*}\) (tab. II), which therefore enables quicker task convergence. This means that the point \(f_{d}\) is reached only instantaneously at times 0.6 s and 0.7 s with divergence afterwards, as can be seen from the non-stationary state trajectories in the second graph from the bottom.
The results of the padded DTOC (tab. II) with fixed \(N^{*}=6\) and \(N^{*}=7\) follow closely the ones of our method ADTOC (tab. I) with \(\Delta t=0.1\) s in terms of controls, states and task error reduction. Thereby, the same pattern as with the results for tab. II is observed: for too low \(N^{*}=6\) a bang-bang control profile is obtained but the task error is reduced insufficiently (\(\|f_{\text{ter}}\|=5\cdot 10^{-3}\)); for too high \(N^{*}=7\) the task error is reduced significantly (\(\|f_{\text{ter}}\|=8\cdot 10^{-9}\)) but the control profile is not time-optimal / not a limit one.
## VI Conclusion
In this work we have formulated, implemented and evaluated an easily implementable NL-HLSP for approximate discrete time-optimal control. We showed that the method corresponds to the true discrete time-optimal control problem in the limit and for resting goal points. This behavior was confirmed in simulations with linear and non-linear systems, recovering the bang-bang control profiles typically seen in time-optimal control.
In future work we would like to investigate whether the current limitation to resting goal points can be relaxed for a broader applicability of our method. Furthermore, while we demonstrated the applicability of our method to whole-body motion control it is limited by its computational complexity. We therefore would like to investigate methods to reduce the computational burden, especially with respect to large robotic systems like humanoid robots.
Fig. 3: Controls \(u\), states \(x\) and norm of task error \(\|f_{\text{ter}}\|\) for a non-linear 2D manipulator with two joints for ADTOC (tab. I), DTOC and padded DTOC (tab. II).
## VII Acknowledgments
We would like to thank Dr. Bing Song for her insightful comments on our work.
|
2302.13343 | Adaptive Control of IoT/M2M Devices in Smart Buildings using
Heterogeneous Wireless Networks | With the rapid development of wireless communication technology, the Internet
of Things (IoT) and Machine-to-Machine (M2M) are becoming essential for many
applications. One of the most emblematic IoT/M2M applications is smart
buildings. The current Building Automation Systems (BAS) are limited by many
factors, including the lack of integration of IoT and M2M technologies,
unfriendly user interfacing, and the lack of a convergent solution. Therefore,
this paper proposes a better approach of using heterogeneous wireless networks
consisting of Wireless Sensor Networks (WSNs) and Mobile Cellular Networks
(MCNs) for IoT/M2M smart building systems. One of the most significant outcomes
of this research is to provide accurate readings to the server, and very low
latency, through which users can easily control and monitor remotely the
proposed system that consists of several innovative services, namely smart
parking, garden irrigation automation, intrusion alarm, smart door, fire and
gas detection, smart lighting, smart medication reminder, and indoor air
quality monitoring. All these services are designed and implemented to control
and monitor from afar the building via our free mobile application named Raniso
which is a local server that allows remote control of the building. This
IoT/M2M smart building system is customizable to meet the needs of users,
improving safety and quality of life while reducing energy consumption.
Additionally, it helps prevent the loss of resources and human lives by
detecting and managing risks. | Rania Djehaiche, Salih Aidel, Ahmad Sawalmeh, Nasir Saeed, Ali H. Alenezi | 2023-02-26T16:30:39Z | http://arxiv.org/abs/2302.13343v1 | # Adaptive Control of IoT/M2M Devices in Smart Buildings using Heterogeneous Wireless Networks
###### Abstract
With the rapid development of wireless communication technology, the Internet of Things (IoT) and Machine-to-Machine (M2M) are becoming essential for many applications. One of the most emblematic IoT/M2M applications is smart buildings. The current Building Automation Systems (BAS) are limited by many factors, including the lack of integration of IoT and M2M technologies, unfriendly user interfacing, high costs, using no more than one or two wireless communication networks, limited wireless transmission range, and the lack of a convergent solution. Therefore, this paper proposes a better approach of using heterogeneous wireless networks consisting of Wireless Sensor Networks (WSNs) and Mobile Cellular Networks (MCNs) for IoT/M2M smart building systems. The proposed system is an inexpensive embedded system that comprises Arduino and NodeMCT boards with several compatible sensors, actuators, and modules for controlling and collecting data over heterogeneous communication technologies (RFID, Bluetooth, Wi-Fi, GSM, LTE). All collected data is uploaded to the Thinging Speak platform, allowing the building system to be monitored via the Thing Speak webpage or the Raniso app. One of the most significant outcomes of this research is to provide accurate readings to the server, and very low latency, through which users can easily control and monitor remotely the proposed system that consists of several innovative services, namely smart parking, garden irrigation automation, intrusion alarm, smart door, fire and gas detection, smart lighting, smart medication reminder, and indoor air quality monitoring. All these services are designed and implemented to control and monitor from afar the building via our free mobile application named "Raniso" which is a local server that allows remote control of the building via RFID/Bluetooth/Wi-Fi connectivity and cellular networks. This IoT/M2M smart building system is customizable to meet the needs of users, improving safety and quality of life while reducing energy consumption. Additionally, it helps prevent the loss of resources and human lives by detecting and managing risks.
Smart building, IoT/M2M, MCNs, WSNs, Converged Networks, Mobile Application, Heterogeneous Networks.
## I Introduction
The Internet of Things (IoT) and Machine-to-Machine (M2M) are the fastest growing areas of cutting-edge technologies, with the number of connected devices now massively outnumbering humans. The International Telecommunication Union (ITU) defines the IoT as a global infrastructure for the information society, enabling advanced services by interconnecting physical and virtual things based on existing and evolving interoperable information and communication technologies [1]. M2M is considered an integral part of the IoT ecosystem, defined as a communication that allows devices to connect with one another across a wired or wireless communication network without the need for human intervention [2]. Intelligent sensors, advanced wireless technologies, and autonomic computing software are used to assist a network device in understanding and transmitting data and making decisions in an M2M system [3]. Recently, the evolution of numerous wireless communication technologies, such as intelligent reflecting surfaces [4], massive MIMO [5], and ambient backscattering [6] have substantially increased the potential capabilities of IoT/M2M technologies and made them more common than ever, as their convergence has become necessary to meet the ever-increasing requirements of IoT/M2M. Among different wireless communication technologies, ambient backscatter has been emerging for large-scale IoT/M2M applications that can allow transmission of data without requiring power from the IoT device and facilitates device-to-device (D2D) and even multi-hop communications. By reflecting far-field electromagnetic (EM) waves from ambient radio frequency (RF) transmitters, a tiny passive gadget can send data with very low power consumption. This cutting-edge technology with can effectively solve the battery problem faced by massive low-power devices in large-scale IoT/M2M communications [7, 8, 9]. The heterogeneous networks (HetNets) operating on different wireless communication technologies, especially wireless sensor networks (WSNs) and mobile cellular networks (MCNs), offer numerous applications in many industrial sectors such as home automation, industry, healthcare, agriculture, and smart cities [10].
Among these applications, smart buildings are emerging, which support the flow of information throughout the building, providing advanced functionalities and services, allowing the automatic control, monitoring, management, and maintenance of the various subsystems or applications of the building in an optimal and integrated way, locally and/or remotely [11]. There are many research papers dealing with wireless network-based IoT/M2M smart building systems, whether MCN or WSN [12, 13, 14, 15, 16, 17, 18, 19, 20], which will be discussed in detail in the related work part. Comparing the proposed system with all
these research works, the drawbacks of existing building management systems include the lack of using IoT and M2M technologies together, reliance on one or two wireless communication technologies at most, a limited wireless transmission range, mostly inconveniently designed user interface, and excessive costs. Therefore, our approach presents a hybrid (local and remote) and a low-cost IoT/M2M-based smart building system with a user-pleasant smartphone interface. The proposed system comprises several main services: smart parking, garden irrigation automation, intrusion alarm, smart door, fire and gas detection, smart lighting, smart medication reminder, and indoor air quality monitoring. All these services can be controlled and monitored remotely by our mobile application named 'Raniso', which is used as a local server to control the building via different HetNets such as RFID/Bluetooth/WiFi connectivity and cellular networks like GSM, 4G, or 5G. In addition, the proposed system connects all smart devices in an energy-efficient, secure, convenient, and cost-effective manner. This paper is an extension of our previous works in [11, 21] to develop an IoT/M2M smart building prototype using several compatible sensors, actuators, and modules, besides the Raniso App. In addition to receiving data from the smart building, the system also stores data in the cloud database and displays it visually on the ThingSpeeak webpage, as well as remotely monitoring it via the Raniso App.
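As an illustration of this cloud-logging step, a minimal sketch of pushing one reading to ThingSpeak over its public HTTP update API is given below; the write API key and the field mapping are placeholders, and the snippet is an illustration rather than the firmware running on the Arduino/NodeMCU boards.

```python
# Hypothetical sketch: upload one sensor reading to a ThingSpeak channel via the
# public HTTP update endpoint. The API key and field assignments are placeholders.
import requests

def push_reading(write_api_key: str, temperature: float, humidity: float) -> bool:
    response = requests.get(
        "https://api.thingspeak.com/update",
        params={"api_key": write_api_key, "field1": temperature, "field2": humidity},
        timeout=10,
    )
    # ThingSpeak returns the new entry id on success and "0" if the update is rejected.
    return response.status_code == 200 and response.text.strip() != "0"
```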
## II State of the Art
### _Motivations and Research Contributions_
Heterogeneous wireless networks are of growing interest in several areas, including IoT/M2M smart building automation which has developed at a rapid pace over the past few years. The main benefits of smart buildings are reduced maintenance expenses, reduced energy consumption, comfort, tranquility, entertainment, safety, security, improved productivity, more livable structures, and higher resale value. Despite the great applicability of the IoT in the implementation of smart building systems, there is a lack of strategies that involve the integration of IoT and M2M technologies in most existing systems. Besides relying on only one or two wireless communication technologies, most of the existing studies also suffer from limited wireless range connectivity, poorly designed user interfaces, and high costs. As a means of addressing these issues and reducing existing smart building system limitations, this study proposes an adaptive control of IoT/M2M devices in smart buildings using heterogeneous wireless networks to expand the scope of connectivity and allow users to control their building easily and efficiently by choosing the appropriate connection, whether it is WSN or MCN, through a user-friendly interface using smartphones via the Raniso App, regardless of time and location. Costs are reduced by using effective and affordable components, as well as the Raniso App, which is free to manage, monitor, and control building devices and conditions over various heterogeneous networks. The research contribution is summarized as a design for adaptive control of IoT/M2M devices in smart buildings using our developed mobile application Raniso via heterogeneous wireless networks. As a result, we have listed the contributions of this research work below:
* Discussing and comparing related work on IoT/M2M smart building-based wireless technologies with the proposed system.
* Development, implementation, and design of adaptive control for IoT/M2M devices connected to heterogeneous networks in smart building systems using Arduino and NodeMCU boards with heterogeneous low-cost and small-sized sensors and actuators.
* Dynamic use and intelligent management of IoT/M2M devices to facilitate monitoring and control of smart building systems via IoT platform ThingSpeak and via our free mobile app "Raniso".
* Proposing eight innovative services for smart buildings, including smart parking, garden irrigation automation, intrusion alarm, smart door, fire and gas detection, smart lighting, smart medication reminder, and indoor air quality monitoring.
* Validation of the functionality of the proposed system in terms of adaptation, control, automation, safety, comfort, energy efficiency, and performance.
### _Related Works_
Recently, with the evolution of IoT/M2M applications based on heterogeneous wireless networks, there has been a rapid progression of smart home applications with a gradual increase in the size of the automated environments, starting from the living room and moving to the apartment, then to entire buildings, and finally to the more general scenario of smart cities. In this section, previous research works related to IoT/M2M smart building-based wireless technologies, either WSN or MCNs or both, are reviewed. Researchers in [12] present the development of an Artificial Intelligence-Based Smart Building Automation Controller (AIBSBAC) that includes an intelligent user identification subsystem, an intelligent decision-making subsystem, internal and external environment observation subsystems, and a universal infrared communication system. The AIBSBAC system can dynamically adapt to the user's choices, which are aimed at improving energy efficiency, user comfort, and security. Heterogeneous wireless connectivity, represented by RF, Ethernet, and Bluetooth technologies, connects AIBSBAC to electrical devices. In contrast, the proposed system only included a few applications and ignored many of the other important applications (such as fire detection and gas detection). The work in [13], presents a smart building system adaptable to office environments. The proposed system is based on the ZigBee wireless sensor network has been coupled with Java and Android; to develop a platforme for automated and manual control of devices to track real-time environmental data of smart building systems. However, using only ZigBee as a WSN in the system has disadvantages, including short range, low data transmission speed, high maintenance cost, low transmission, as well as low network stability.
In [14], the authors focused on monitoring home appliances via IoT using various sensors to monitor temperature, fire, and gas, and an LCD screen to display the sensor's values. When gas leaks or fires are discovered, the system quickly
sends a text message to the user's cell phone, activates the siren and the spray engine, and displays a warning message on an LCD screen. The IoT is used to improve security, with communication between sensors and transducers accomplished wirelessly using a single chip via Wi-Fi. Despite this, the system has some limitations: the WiFi data transfer rate decreases as the number of users increases, and GSM is expensive due to SMS charges.
A home prototype system based on heterogeneous wireless networks (Wi-Fi, Bluetooth, and RFID) for control and monitoring is proposed in [15]. The system is based on two components: the first one is an automation system that was built using an Arduino UNO, which is responsible for reading and processing different types of sensor values. The second component is the security system and outdoor lighting using NodeMCU to monitor the home security status from anywhere through a specialized Graphical User Interface (GUI) that was programmed in HTML and allows the user to monitor the home security and turn ON and OFF the outdoor lights using a particular internet protocol (IP) address provided by NodeMCU. However, the applications implemented are very basic and not innovative, in addition to neglecting the user's right to choose their wireless communication preferences.
In [16], an IoT-based portable automation system called IoT@HoMe for smart homes was designed and manufactured using NodeMCU as the microcontroller and Internet gateway. In addition to using several sensors to monitor various parameters related to the building, various actuators perform control activities of home devices. The main objective of this research is to create an efficient, affordable, and portable system that can easily monitor home conditions and operate home devices over the Internet at any time and from any location. Similarly, another research work [17] presented a multifunctional smart home system using a micro-web server based on an Arduino Yun microcontroller with Internet connectivity via an Android-based mobile application. The system can be controlled even if there is no WiFi connection available, as it can be accessed via MCNs (e.g. 3G or 4G networks). Despite this, both of these researches [16, 17] remain limited in that they rely solely on the Internet for their communications and did not take into account the unavailability or interruptions of the Internet in some regions.
In [18], an IoT-based smart building system for indoor environmental management was proposed and implemented using a Raspberry Pi 3 with a camera, an obstacle sensor, and several environmental sensors to detect smart building parameters such as temperature, humidity, brightness, and air quality. The Raspberry Pi was programmed to collect data from the sensors and a photo taken by the camera every 15 minutes. In addition, the Raspberry Pi ran a lightweight neural network that could count the number of people in the room from the photos taken by the camera. However, when users are outside the coverage area of the Wi-Fi AP, they are unable to contact the server and cannot directly use the smartphone to transmit commands to the Raspberry Pi controller. Researchers in [19] discuss an affordable, safe, and energy-efficient Wi-Fi-based smart home system that enables homeowners to monitor their appliances from their mobile phones at home or remotely. This system uses a Raspberry Pi and an Arduino Mega as microcontrollers and Internet gateways for IoT automation monitoring and control. Numerous sensors and actuators are used in the system to monitor multiple home-related parameters via the Blynk app. However, the Blynk app sometimes becomes unresponsive and displays data that can be incorrect, and in some cases its notifications reach the user with a delay. In [20], a smart building fire and gas leakage alert system, namely SB112, was proposed; it combines a small multisensor-based scheme with an open-source edge computing framework and automated next-generation (NG) 112 emergency call functionality, using ESP32 microcontroller units and a Raspberry Pi as the edge gateway. As part of an end-to-end scenario, crucial actors such as IoT devices, public safety answering points (PSAP), middleware for a smart city platform, and relevant operators are involved. However, only the fire and gas leakage alert system was considered, while other important systems that should be part of a smart building, related for example to energy consumption and comfort, were ignored.
Table I summarizes the previous research works related to wireless-network-based IoT/M2M smart building systems. The comparison of our proposed IoT/M2M smart building using heterogeneous wireless networks with existing systems also highlights its main advantages over these systems. As shown in Table I, the proposed system aims to overcome the limitations of the existing systems. It is based on a simulated scenario with a real implementation for validation purposes. In addition, the main advantages of our system include the ability to connect to the Internet, cloud platforms, and various heterogeneous wireless networks, with support for multiple heterogeneous devices (sensors, actuators, and shields). Users also have the option to control IoT/M2M devices using push buttons, WSN (Bluetooth or Wi-Fi), MCN (GSM or LTE), or voice commands via Google Assistant. The system also provides the Raniso app, which consists of user-friendly graphical interfaces that control and monitor the building devices and alert users in case of danger, as well as a visual display that allows users to monitor all building parameters in real time without the Internet, both indoors and outdoors, such as temperature, humidity, air quality, and whether the building is safe or not. Furthermore, the proposed system consists of many smart applications and services that should be part of any smart building system, including smart parking, garden irrigation automation, intrusion alarm, smart door, fire and gas detection, smart lighting, smart medication reminder, and indoor air quality monitoring. All of these proposed services contribute to improving safety, comfort, energy savings, and indoor and outdoor care in the building system.
## III Proposed IoT/M2M-based smart building system using heterogeneous networks
The proposed heterogeneous network infrastructure for the IoT/M2M smart building consists of wireless sensor networks and mobile-cellular networks. Wireless sensor networks are concerned with communication between devices in the building and include wireless local area networks (WLANs) such as
Wi-Fi (IEEE 802.11x), which support small-area connectivity, and wireless personal area networks (WPANs) like Bluetooth (IEEE 802.15.1) and RFID, which support short-range connectivity for communication between personal devices. In contrast, mobile cellular networks provide the communication network between devices in the building and devices in the outside network, and include wireless wide area networks (WWANs) that use mobile telecommunication cellular network technologies such as 2G (GSM), 3G (UMTS), 4G (LTE), and 5G. The following expression describes the proposed heterogeneous network infrastructure for the IoT/M2M smart building system.
\[\begin{split} HetNets=WSNs_{WLANs_{WiFi}}+WPANs_{Bluetooth,RFID} \\ +MCNs_{WWANs_{GSM,UMTS,LTE}}\end{split} \tag{1}\]
The convergence of two heterogeneous networks, like WSNs and MCNs, enhances M2M communication and IoT technology. Converged networks can be divided into four categories: device convergence, protocol convergence, service convergence, and full convergence of any user's devices, communication protocols, and network services [22, 23]. The full convergence can be expressed as:
\[\begin{split} Full_{Convergence}=Device_{Convergence}\\ +Protocol_{Convergence}+Service_{Convergence}\end{split} \tag{2}\]
In WSNs, the smart mobile user equipment (UE) gateway moves into the coverage area of the sensor nodes, broadcasts beacon packets to these nodes, and provides backhaul access to them, so that the MCN can send the collected WSN data directly to a central data center [24]. The main disadvantages of WSNs are limited mobility robustness, small coverage, and weak terminals. MCNs, on the other hand, offer reliable mobility, broad coverage, and powerful user terminals, and can provide additional layer control, a longer network lifetime, and quality of service (QoS) for WSN applications, but they are expensive and difficult to deploy and manage [25]. The convergence of MCNs and WSNs can therefore benefit both sides: MCNs provide a greater level of layer control and optimization that increases network lifetime, WSN performance, and QoS; WSNs act as cognitive and intelligent enablers of MCNs; the converged architecture enables wireless services and increasingly data-centric applications; MCNs have the potential to make WSNs more energy efficient and to improve network performance; and mobile MCN terminals act both as sensor nodes and as gateways for WSNs [26]. In addition, sensor nodes in WSN and MCN convergence networks collect data and send it to a data server through the MCN [27].
### _System Architecture Design_
The proposed architecture design for the IoT/M2M smart building system based on heterogeneous networks uses well-known hardware to collect data and manage it according to the building's functions and services. The main services offered are smart parking, garden irrigation automation, intrusion alarm, smart door, fire and gas detection, smart lighting, smart medication reminder, and indoor air quality monitoring. All these services are designed and implemented to remotely control and monitor the building through the Raniso App via RFID/Bluetooth/Wi-Fi connectivity and cellular networks such as GSM, 4G, or 5G. This IoT/M2M smart building infrastructure design helps owners, operators, and facility managers improve asset reliability and performance, and is beneficial in preventing the loss of resources and human lives caused by undesired events. Moreover, the proposed system is energy-efficient and low-cost and can be used in various buildings, including hospitals, hotels, universities, businesses, etc. Figure 1 shows the proposed architectural design.
### _System Hardware and Software_
In the proposed IoT/M2M smart building system based on HetNets networks, we use different sensors and actuators
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Reference & [12] & [13] & [14] & [15] & [16] & [17] & [18] & [19] & [20] & Proposed System \\ \hline Smart system & Building & Building & Home & Home & Home & Building & Home & Building & Building \\ \hline Year & 2015 & 2018 & 2018 & 2018 & 2019 & 2021 & 2021 & 2022 & 2022 \\ \hline Controller & Arduino & STM32 & PC Server & Arduino & NodeMCU & Arduino & RaspberryPi & Arduino & ESP3P2 & Arduino \\ & & & NodeMCU & & & & & & & \\ \hline Protocol & AHSBAC & ZigBee & Wi-FiGSM & RFID/Wi-Fi & Wi-Fi-Fi & Wi-Fi & Wi-Fi & Wi-Fi & RFID-Wi-Fi \\ & & Bluetooth (BT) & & & & & & & & & BTGSMATE \\ \hline WSN & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline MCN & & ✓ & ✓ & ✓ & ✓ & ✓ & & & ✓ \\ \hline Outdoor Control & ✓ & & & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Indoor Control & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Outdoor Care & & & ✓ & & & & & & ✓ \\ \hline Indoor Care & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Safety & ✓ & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Energy Efficient & ✓ & ✓ & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Control & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Monitoring & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline Smartphone & ✓ & & ✓ & ✓ & ✓ & ✓ & ✓ & & ✓ \\ \hline Web-based & & & ✓ & ✓ & ✓ & & & & ✓ \\ \hline Google Assistant & & & & ✓ & ✓ & & & & & ✓ \\ \hline User Preferences & ✓ & & & & & & & & ✓ \\ \hline Virtual Simulation & & ✓ & & & & & & ✓ & & ✓ \\ of system design & & & & & & & & & ✓ & \\ \hline Real Implementation & ✓ & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular}
\end{table} TABLE I: COMPARISON OF RELATED WORK ON.
to collect data and manage it according to services, where the Arduino microcontroller represents the brain. Besides, the NodeMCU board and other modules like Bluetooth HC-06, Sim800l, and RFID are used for wireless communication. The main hardware components used are described in Table II whereas the software used is defined in Table III. Next, we discuss the design and implementation of the proposed system.
## IV Designing and Implementing IoT/M2M Smart Building services
This section is devoted to the simulations and practical implementation of the IoT/M2M smart building services. Two different main processing units are used in the deployment of all these services: the Arduino and the NodeMCU. The Arduino is used as the controller of the devices, while the NodeMCU acts as the interface to the user's phone so that the appliances can be controlled from far away. In the following, we discuss each smart application prototype explicitly.
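As a simple illustration of this division of labour, a minimal Arduino-side sketch that receives single-character commands from the NodeMCU (or the HC-06 Bluetooth module) over the serial link and maps them to device actions could look as follows; the command set and the pin numbers are illustrative assumptions rather than the exact firmware of the prototypes.

```cpp
// Arduino side of the serial link: the NodeMCU / HC-06 forwards one-character
// commands, and the Arduino drives the corresponding devices.
// Pin numbers and the command alphabet are hypothetical.
const int LIGHT_PIN = 12;
const int FAN_PIN   = 13;

void setup() {
  Serial.begin(9600);              // serial link to NodeMCU / HC-06
  pinMode(LIGHT_PIN, OUTPUT);
  pinMode(FAN_PIN, OUTPUT);
}

void loop() {
  if (Serial.available()) {
    char cmd = Serial.read();
    switch (cmd) {
      case 'L': digitalWrite(LIGHT_PIN, HIGH); break;   // light ON
      case 'l': digitalWrite(LIGHT_PIN, LOW);  break;   // light OFF
      case 'F': digitalWrite(FAN_PIN, HIGH);   break;   // fan ON
      case 'f': digitalWrite(FAN_PIN, LOW);    break;   // fan OFF
      default: break;                                   // ignore unknown bytes
    }
  }
}
```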
### _Smart Parking_
The smart parking service is an IoT/M2M-based system that sends data on the availability of all parking places in real time and selects the optimal one. The system comprises an Arduino Uno microcontroller, a NodeMCU board, infrared sensors (TCRT5000), a servo motor, an LCD, a speaker, and a battery. Figure 3 shows the proposed system prototype for smart parking.
Using infrared sensors, the system identifies whether parking spaces are occupied, counts the number of available spaces, and updates the data on the cloud server every 30 seconds. Parking slot availability may be checked online from anywhere, and hassle-free parking can be performed through the smart parking interface of the Raniso App. The LCD placed in front of the gate shows which slots are free; in addition, when a vehicle is detected in a specific slot, the corresponding LED in the Raniso App lights up. If all spaces are occupied by vehicles, the parking gate will not open, and the audio system will say: "Sorry, this parking lot is full; you can search for another available parking lot via the Raniso app." If any spaces are available, the parking gate will open, allowing the vehicle to pass, and the audio system will welcome the user. As a result, this system addresses the city parking problem and provides users with a reliable M2M/IoT-based parking management solution. With this solution, users can easily find available, nearby, and cheap parking using the interface of the Raniso app equipped with GPS technology, wherever they are and whenever they want, as shown in Figure 4. The following figure shows the overall proposed solution for a smart parking system.
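A minimal Arduino-style sketch of the slot-detection and gate logic described above is given below; the pin assignments, the number of slots, and the way the free-slot count is forwarded to the cloud layer are illustrative assumptions, not the exact firmware of the prototype.

```cpp
#include <Servo.h>

// Hypothetical pin assignments: three TCRT5000 slot sensors, one entrance
// sensor, and the gate servo.
const int SLOT_PINS[3] = {2, 3, 4};
const int ENTRANCE_PIN = 5;
const int GATE_SERVO_PIN = 9;

Servo gate;
unsigned long lastCloudUpdate = 0;

int freeSlots() {
  int n = 0;
  for (int i = 0; i < 3; i++) {
    // TCRT5000 modules typically pull the pin LOW when a car is detected.
    if (digitalRead(SLOT_PINS[i]) == HIGH) n++;
  }
  return n;
}

void setup() {
  Serial.begin(9600);              // link to the NodeMCU / Raniso app layer
  for (int i = 0; i < 3; i++) pinMode(SLOT_PINS[i], INPUT);
  pinMode(ENTRANCE_PIN, INPUT);
  gate.attach(GATE_SERVO_PIN);
  gate.write(0);                   // gate closed
}

void loop() {
  int available = freeSlots();

  // Open the gate only if a car is waiting and at least one slot is free.
  if (digitalRead(ENTRANCE_PIN) == LOW) {      // car detected at the gate
    if (available > 0) {
      gate.write(90);                          // open
      delay(5000);                             // let the car pass
      gate.write(0);                           // close
    }
    // otherwise the gate stays closed; the audio message is handled elsewhere
  }

  // Push the slot count to the cloud/app every 30 seconds.
  if (millis() - lastCloudUpdate > 30000UL) {
    Serial.print("FREE_SLOTS=");
    Serial.println(available);
    lastCloudUpdate = millis();
  }
}
```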
Fig. 1: IoT/M2M smart building architecture.
Fig. 4: Smart parking interface on the Raniso App.
Fig. 3: The proposed system prototype for smart parking.
Fig. 2: The different interfaces of the Raniso App used for the proposed IoT/M2M smart building.
### _Garden Irrigation Automation System_
The proposed irrigation automation system for gardens operates with no or only minimal manual intervention, monitoring the plants' temperature, humidity, light levels, and soil moisture (see Figure 6). The system includes a soil moisture sensor, which measures the percentage of soil moisture to optimize the irrigation dosage and eliminate water waste; a DHT11 sensor to monitor temperature and humidity; an LDR sensor to sense the intensity of light; two yellow LEDs for supporting the photosynthesis process during the night to make the plants grow faster; an ultrasonic sensor (HC-SR04) to measure the level of water inside the tank; a
\begin{table}
\begin{tabular}{|p{56.9pt}|p{28.5pt}|p{28.5pt}|} \hline
**Components** & **Type** & **Specifications** \\ \hline Arduino & Mega & It is an ATmega2560-based microcontroller board. There are 54 digital input/output pins, 16 analog inputs, 4 UARTs ( Hardware serial ports), a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button on this board [28]. \\ \hline Arduino & UNO & It is an open-source microcontroller board based on the ATmega328 processor and it is created by “Arduino.cc”. The board contains 14 digital input/output pins, six analog inputs, a 16 MHz ceramic resonator, a USB connection, a power jack, an ICSP header, and a reset button [29]. \\ \hline NodeMCU & V3 & It is an open-source board comprised of a physical programmable circuit board [16] created by a Chinese firm named Espressent makes a System on Chip (SoC) called ESP8266. This board is defined as a low-cost Wi-Fi chip in which uses the Lua programming language and includes a full TCP/IP stack and a microcontroller. \\ \hline Modules & RCS22 RFID & The term RFID stands for Radio Frequency Identification. The RFID reader can be used to read and retrieve information from RFID cards. \\ \hline Modules & SIM800L & It is a miniature GSM modem that helps in making calls and sending messages and it can access the internet through GPRS, TCP/IP. This module can be used in a wide variety of IoT/M2M projects. \\ \hline Modules & GPS & The global positioning system (GPS) module uses satellite technology to determine continuously data like longitude, latitude, speed, distance traveled, etc. \\ \hline Modules & SD Card & It enables communication with the memory cards as well as the writing and reading of data on them. \\ \hline Modules & Bluetooth HC-06 & is a basic Bluetooth SPP (Serial Port Protocol) module that is used to transmit data within a small area in the ISM band between 2.4 and 2.48 GHz [15]. \\ \hline Modules & DS3231 & The DS3231 Real Time Clock (RTC) is a low-power clock/calendar with a battery backup SRAM of 56 bytes. The clock/calendar gives certified data for seconds, minutes, hours, days, dates, months, and years. \\ \hline Sensors & TCRT5000 & It is a sensor that detects infrared light. It is simply a photodiode and a phototransistor combined [11, 30]. \\ \hline Sensors & Ultrason HC-SR04 & It is an electronic device that calculates the distance between physical objects using sonar. It has a wide object detecting range of 2 cm to 400 cm with great accuracy [21]. \\ \hline Sensors & Soil moisture & It measures the content of water and humidity in the soil. As the soil moisture increases, the sensor allows current to pass through, and the greater the moisture, the greater the proportion of the connection [15, 31]. \\ \hline Sensors & DHT11 & It is a digital sensor that detects the temperature and humidity of the place. It provides high levels of dependability, accuracy, and long-term stability [11]. \\ \hline Sensors & SW-420 & It measures the vibration level and tilts motion continuously. This sensor is used for security purposes. \\ \hline Sensors & MQ2 & It is a gas detection sensor that senses or measures various types of gasses like LPG, Alcohol, Propane, H2, CO, and even methane [15]. \\ \hline Sensors & Flame & It is designed to find out if there is a flame as well as regular light with a detection range of up to 100 cm and a wavelength ranging from 760 to 1100 nm. 
\\ \hline Sensors & PIR HC-SR501 & PIR sensor refers to Passive Infrared that is designed to sense the motion reliant on the infrared generated by the human body and other living creatures. \\ \hline Sensors & MQ135 & It is a gas sensor used for air quality in which it detects or measures NOx, Nh3, CO2, Benzene, Alcohol, and Smoke. \\ \hline Actuators & Servo motor & It is an electrical engine that can be utilized to push or rotate an object with high precision [15] \\ \hline Actuators & LED & It is a type of tool for displaying characters in which the primary viewer is a liquid crystal [5]. \\ \hline Actuators & Fan & It is a device used to create air movement for ventilation in hot climates to create a relaxing environment. \\ \hline Actuators & Keyboard & It is a contact matrix of buttons made up of rows and columns. It is used to enter numbers and letters. \\ \hline Actuators & Speaker & It consists of power amplification and voice outputs, it is used to emit a variety of sounds. \\ \hline Actuators & Buzzer & It is an audio signaling gadget that produces sound. In the proposed system, the buzzer is utilized as a notification alarm. \\ \hline Actuators & LDR & Light Dependent Resistor (LDR) is an electrical component whose resistance varies in response to the perceived light [11]. \\ \hline Actuators & LED & A light-emitting diode (LED) is a semiconductor light source that emits erratic radiation as a result of spontaneous photon emissions. It is utilized to represent light [21]. \\ \hline \end{tabular}
\end{table}
Table II: THE MAIN SPECIFICATIONS OF Hardware USED.
GSM module to make calls and send text messages to the farmer when the pump changes its state or when there is any watering problem; an LCD screen to display the water level and moisture content together with the pump status; and two pumps, one for watering the plants and the other for supplying water to the tank. If the soil is dry, the irrigation fountain operates, watering the garden when needed, and switches OFF when the soil is wet to save water. In addition, this solution allows for remote control and monitoring of the garden irrigation system via the Raniso app (see Figure 7) and via our channel in ThingSpeak, which is used to view data from the proposed smart garden system remotely (see Figure 8), making it easier to manage the irrigation system and make the necessary adjustments in real time. This system optimizes resources (water, energy, and fertilizers) and irrigation scheduling through an intelligent monitoring system.
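The core control loop of this irrigation service can be sketched as follows; the pin map, the dryness and darkness thresholds, and the use of the Adafruit DHT library are assumptions made for illustration.

```cpp
#include <DHT.h>   // Adafruit DHT sensor library (assumed available)

// Hypothetical pin map and thresholds for the irrigation controller.
const int SOIL_PIN   = A0;   // analog soil-moisture probe
const int LDR_PIN    = A1;   // light sensor
const int PUMP_PIN   = 7;    // watering pump relay
const int GROW_LED   = 8;    // yellow LEDs for night-time lighting
const int DRY_LEVEL  = 400;  // below this raw reading the soil is treated as dry
const int DARK_LEVEL = 200;  // below this the garden is treated as dark

DHT dht(6, DHT11);

void setup() {
  Serial.begin(9600);
  pinMode(PUMP_PIN, OUTPUT);
  pinMode(GROW_LED, OUTPUT);
  dht.begin();
}

void loop() {
  int moisture = analogRead(SOIL_PIN);
  int light    = analogRead(LDR_PIN);
  float t = dht.readTemperature();
  float h = dht.readHumidity();

  // Water only while the soil is dry; stop as soon as it is wet again.
  digitalWrite(PUMP_PIN, moisture < DRY_LEVEL ? HIGH : LOW);

  // Switch the grow LEDs on at night to assist photosynthesis.
  digitalWrite(GROW_LED, light < DARK_LEVEL ? HIGH : LOW);

  // Report the readings so the LCD / GSM / Raniso app layers can use them.
  Serial.print("T="); Serial.print(t);
  Serial.print(" H="); Serial.print(h);
  Serial.print(" SOIL="); Serial.println(moisture);

  delay(2000);   // the DHT11 needs about 2 s between readings
}
```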
### _Intrusion Alarm System_
The increasing number of robberies in buildings makes people worried about losing their property. Hence, the proposed service presented in Figure 9 protects the building from theft or attack by using a GSM module, an ultrasonic sensor, a vibration sensor, a speaker, and a red LED. The ultrasonic sensor is used to measure the distance and the vibration sensor to detect vibrations. When the distance is short and vibrations are detected, the GSM module sends calls and short messages to the owner of the building, the LCD screen shows an alarm state, the red LED lights up, and the speaker emits a sound indicating that a theft is in progress; the sound is played from an MP3 file on a micro SD memory card, which allows users to choose the sound of screams, a guard dog, or a siren to scare away the thief. Additionally, users can monitor their building from anywhere at any time via our channel in the ThingSpeak platform, as illustrated in Figure 11, and via the intrusion alarm interface in the Raniso app, which sends real-time alerts in case of any danger, as shown in Figure 10.
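A minimal sketch of the detection logic is shown below: an object closer than a threshold distance combined with a vibration event triggers the alarm and an SMS through the SIM800L. The pin numbers, the distance threshold, and the phone number are illustrative assumptions.

```cpp
#include <SoftwareSerial.h>

const int TRIG_PIN = 2, ECHO_PIN = 3;   // HC-SR04 ultrasonic sensor
const int VIB_PIN  = 4;                 // SW-420 digital output
const int RED_LED  = 5;
const long DIST_CM = 50;                // "close" threshold (assumed)

SoftwareSerial sim800(10, 11);          // RX, TX to the GSM module

long readDistanceCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000UL);
  return us / 58;                       // microseconds to centimetres
}

void sendAlertSms(const char *number, const char *text) {
  sim800.println("AT+CMGF=1");          // SMS text mode
  delay(200);
  sim800.print("AT+CMGS=\""); sim800.print(number); sim800.println("\"");
  delay(200);
  sim800.print(text);
  sim800.write(26);                     // Ctrl+Z terminates the SMS
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT); pinMode(ECHO_PIN, INPUT);
  pinMode(VIB_PIN, INPUT);   pinMode(RED_LED, OUTPUT);
  sim800.begin(9600);
}

void loop() {
  bool intruder = (readDistanceCm() < DIST_CM) && (digitalRead(VIB_PIN) == HIGH);
  digitalWrite(RED_LED, intruder ? HIGH : LOW);
  if (intruder) {
    sendAlertSms("+1234567890", "Intrusion detected in the building!");
    delay(60000);   // avoid flooding the owner with messages
  }
}
```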
### _Smart Door System_
The proposed smart door system is an automatic identification and authentication system deployed at the doors of various buildings, including banks, corporate offices, financial institutions, jewelry stores, government organizations, etc. It is designed to prevent unauthorized access by using a Bluetooth module, an RFID module, a servo motor, a keypad, an LCD screen, a speaker, a buzzer, and LEDs. As soon as the user approaches the door, the sound system reminds them to disinfect their hands and take all precautions against COVID-19. The LCD screen also displays all the options by which the user can control the keyless door: by RFID card, by password, or remotely via the smart door system interface of the Raniso app using the Bluetooth connection or the Internet via Wi-Fi or 4G/5G (see Figure 13). The system grants access via the Raniso app, on scanning the right tag, or on entering the correct password; on scanning a wrong tag or entering a wrong password, the system denies access, a red LED lights up, and the buzzer emits a beep. This solution is intended to control doors in the building with a relatively low-cost design, a user-friendly interface, and ease of installation. It also provides a protection system and building security. Figure 12 shows the prototype of the proposed smart door system.
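The access-control decision itself can be sketched as follows, assuming the common MFRC522 RFID library, an example authorised tag UID, and illustrative pin assignments; the keypad and LCD handling are omitted for brevity.

```cpp
#include <SPI.h>
#include <MFRC522.h>
#include <Servo.h>
#include <SoftwareSerial.h>

MFRC522 rfid(10, 9);                   // SS, RST (assumed wiring)
Servo lock;
SoftwareSerial bt(2, 3);               // HC-06 RX, TX

const byte AUTHORISED_UID[4] = {0xDE, 0xAD, 0xBE, 0xEF};   // example tag only
const int LOCK_PIN = 6, RED_LED = 7, BUZZER = 8;

bool tagIsAuthorised() {
  if (rfid.uid.size != 4) return false;
  for (byte i = 0; i < 4; i++)
    if (rfid.uid.uidByte[i] != AUTHORISED_UID[i]) return false;
  return true;
}

void openDoor() { lock.write(90); delay(4000); lock.write(0); }

void setup() {
  SPI.begin();
  rfid.PCD_Init();
  bt.begin(9600);
  lock.attach(LOCK_PIN);
  lock.write(0);                        // locked
  pinMode(RED_LED, OUTPUT); pinMode(BUZZER, OUTPUT);
}

void loop() {
  // 1) Remote command from the app over Bluetooth.
  if (bt.available() && bt.readStringUntil('\n') == "OPEN") openDoor();

  // 2) RFID tag presented at the reader.
  if (rfid.PICC_IsNewCardPresent() && rfid.PICC_ReadCardSerial()) {
    if (tagIsAuthorised()) {
      openDoor();
    } else {                            // deny access
      digitalWrite(RED_LED, HIGH);
      tone(BUZZER, 1000, 500);
      delay(1000);
      digitalWrite(RED_LED, LOW);
    }
    rfid.PICC_HaltA();
  }
}
```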
### _Fire and gas alarm system_
This proposed system for extinguishing fires and detecting gas leaks is implemented to protect lives and property from gas and fire hazards that lead to serious accidents, human injuries, and material losses (see Figure 14). This service is based on a
Figure 11: Intrusion alarm system monitoring on Thingspeak.
Figure 12: Prototype of a smart door system.
Figure 10: Intrusion alarm interface on the Raniso App.
Figure 13: Smart door interface on the Raniso App.
Figure 9: Intrusion alarm system.
GSM module, a GPS shield, servo motors, a flame sensor, a gas leakage sensor, an LCD screen, a speaker, and LEDs. The sensors are used to detect fire and gas in the building. Whenever fire or gas is detected, the alarm rings, the LCD screen displays "There is Danger, Not safe here", a red LED lights up, one servo motor opens the water sprayer, and the other one opens the window. In addition, the system not only sends calls and SMS alerts to the owner but also sends an SMS with the location of the incident to the fire station and civil protection. The system also guides the building owner to a safe, fire-free route through an audio system and a green light. Besides, users can monitor the gas and fire detection data of their buildings from anywhere at any time via our channel in the ThingSpeak platform, as shown in Figure 16, and via the Raniso app's fire and gas alarm interface, which sends real-time alerts in case of fire and gas leaks, as illustrated in Figure 15.
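A condensed sketch of the fire/gas response logic is given below; the pins, the gas threshold, the phone number, and the placeholder for the GPS fix are assumptions for illustration and not the exact firmware used in the prototype.

```cpp
#include <Servo.h>
#include <SoftwareSerial.h>

const int FLAME_PIN = 2;        // digital flame-sensor output (assumed LOW on flame)
const int MQ2_PIN   = A0;       // analog gas reading
const int BUZZER    = 5;
const int GAS_LIMIT = 400;      // raw ADC threshold for a leak (assumed)

Servo sprinkler, window;
SoftwareSerial sim800(10, 11);  // GSM module

void sendSms(const char *msg) {
  sim800.println("AT+CMGF=1");
  delay(200);
  sim800.println("AT+CMGS=\"+1234567890\"");   // hypothetical number
  delay(200);
  sim800.print(msg);
  sim800.write(26);             // Ctrl+Z
}

void setup() {
  pinMode(FLAME_PIN, INPUT);
  pinMode(BUZZER, OUTPUT);
  sprinkler.attach(8);
  window.attach(9);
  sprinkler.write(0);
  window.write(0);
  sim800.begin(9600);
}

void loop() {
  bool flame = (digitalRead(FLAME_PIN) == LOW);
  bool gas   = (analogRead(MQ2_PIN) > GAS_LIMIT);

  if (flame || gas) {
    sprinkler.write(90);                  // open the water sprayer
    window.write(90);                     // open the window to ventilate
    tone(BUZZER, 2000);                   // continuous alarm
    sendSms("Danger: fire/gas detected. Location: <last GPS fix>");
    delay(30000);                         // wait before re-sending the alert
  } else {
    sprinkler.write(0);
    window.write(0);
    noTone(BUZZER);
  }
}
```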
### _Smart Medicine Reminder_
Medicines help us live longer and healthier lives. However, taking them the wrong way, mixing certain medications, or forgetting to take them at the right time can be dangerous. The proposed IoT/M2M Smart Medicine Reminder solves such problems by reminding and alerting patients to take the proper dose at the right moment (see Figure 21).
This system includes a GSM module, a DS3231 real-time clock module, an LCD screen, push buttons, a speaker, and LEDs. The speaker and LEDs are used to alert and remind the patient that it is time to take their medicine. The LCD screen cycles through three screens. The first screen shows the message "Please take care of your health". The second screen is a help screen that asks the user to press the select push button to choose a reminder schedule (once/twice/thrice a day). The time slots are changeable in the program and can be configured accordingly. Four push buttons are used, each with a distinct function: the first push button selects a reminder once a day, the second twice a day, and the third three times a day. The fourth push button stops the notifications once the user has taken their medication. If the patient is half an hour late in taking their medicine, the GSM module sends calls and short messages to alert them to hurry up and take their medication. In addition, the user can use the Raniso app's smart medication reminder interface illustrated in Figure 22, which helps them take their medicine on time according to their treatment plan and allows them to remotely manage and control medication/pill schedules and usage data. It also allows direct communication anytime and anywhere between patients and caregivers, as it quickly alerts the caregiver if the patient needs help. The proposed IoT/M2M smart medication reminder system is extremely versatile and can be used by patients at home, doctors in hospitals, and in many other places.
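The reminder logic can be sketched as follows using the DS3231 through the RTClib library; the dose times, pin numbers, and the 30-minute escalation window are illustrative assumptions.

```cpp
#include <Wire.h>
#include <RTClib.h>

RTC_DS3231 rtc;

const int BTN_ONCE = 2, BTN_TWICE = 3, BTN_THRICE = 4, BTN_TAKEN = 5;
const int ALERT_LED = 6, BUZZER = 7;

int dosesPerDay = 1;                         // selected schedule
const int DOSE_HOURS[3] = {8, 14, 20};       // assumed dose times
bool alertActive = false;
int lastAlertHour = -1;
unsigned long alertStarted = 0;

void setup() {
  pinMode(BTN_ONCE, INPUT_PULLUP);   pinMode(BTN_TWICE, INPUT_PULLUP);
  pinMode(BTN_THRICE, INPUT_PULLUP); pinMode(BTN_TAKEN, INPUT_PULLUP);
  pinMode(ALERT_LED, OUTPUT);        pinMode(BUZZER, OUTPUT);
  Wire.begin();
  rtc.begin();
}

void loop() {
  // Schedule selection via the push buttons (active LOW).
  if (digitalRead(BTN_ONCE)   == LOW) dosesPerDay = 1;
  if (digitalRead(BTN_TWICE)  == LOW) dosesPerDay = 2;
  if (digitalRead(BTN_THRICE) == LOW) dosesPerDay = 3;

  DateTime now = rtc.now();
  for (int i = 0; i < dosesPerDay; i++) {
    if (now.hour() == DOSE_HOURS[i] && now.minute() == 0 &&
        now.hour() != lastAlertHour) {       // start a new alert once per slot
      alertActive = true;
      lastAlertHour = now.hour();
      alertStarted = millis();
    }
  }

  if (alertActive) {
    digitalWrite(ALERT_LED, HIGH);
    tone(BUZZER, 1500);
    if (digitalRead(BTN_TAKEN) == LOW) {     // dose confirmed by the patient
      alertActive = false;
      digitalWrite(ALERT_LED, LOW);
      noTone(BUZZER);
    } else if (millis() - alertStarted > 30UL * 60UL * 1000UL) {
      // 30 minutes without confirmation: escalate here via the GSM module.
      alertActive = false;
      digitalWrite(ALERT_LED, LOW);
      noTone(BUZZER);
    }
  }
  delay(200);
}
```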
### _Indoor Air Quality Monitoring System_
Indoor air pollution has become a real-life problem and a common phenomenon in buildings. With the spread of COVID-19, people spend most of their time indoors, and poor indoor air quality (IAQ) is a significant public health
Figure 19: Smart lighting interface on the Raniso App.
Figure 20: Smart lighting monitoring on ThingSpeak.
Figure 18: Automatic lights.
Figure 21: Smart medicine reminder.
risk, which can cause short-term health problems such as fatigue and nausea, as well as chronic respiratory disease, heart disease, and lung cancer. It is now necessary to monitor air quality in real time in most buildings. The air quality monitoring system developed in this work is based on the MQ135 sensor, a DHT11 sensor, a servo motor, a DC fan, an air purifier, a speaker, a buzzer, an LCD screen, and LEDs (see Figure 23).
IAQ is measured in parts per million (PPM), where a lower PPM value indicates good air quality and a higher value indicates polluted air containing toxic gases. When the value is less than 130 PPM, the LCD and the Raniso App display "Air Quality Level Good". When the value is between 130 PPM and 250 PPM, the fan turns on and the LCD and the Raniso App display "Air Quality Level Medium". Whenever the value rises above 250 PPM, the buzzer starts beeping, the red LED lights up, the fan and the air purifier turn on, and the LCD and the Raniso App display "Air Quality Level Danger". This service is connected to the Internet, and as a result, anyone can remotely visualize the air quality index from anywhere via the air quality monitoring interface in the Raniso App (see Figure 24). This system aims to perform real-time monitoring of IAQ parameters via our channel in the ThingSpeak platform (see Figure 25) and to generate alerts via the Raniso App to the building occupants to avoid hazardous conditions.
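The three-level threshold logic described above can be summarised in a short sketch; the direct use of the raw MQ135 reading as a PPM value is a simplification, and the pin numbers are assumptions.

```cpp
const int MQ135_PIN = A0;
const int FAN_PIN = 6, PURIFIER_PIN = 7, RED_LED = 8, BUZZER = 9;

void setup() {
  Serial.begin(9600);
  pinMode(FAN_PIN, OUTPUT); pinMode(PURIFIER_PIN, OUTPUT);
  pinMode(RED_LED, OUTPUT); pinMode(BUZZER, OUTPUT);
}

void loop() {
  int ppm = analogRead(MQ135_PIN);    // treated directly as PPM for simplicity

  if (ppm < 130) {                    // good air quality
    digitalWrite(FAN_PIN, LOW);  digitalWrite(PURIFIER_PIN, LOW);
    digitalWrite(RED_LED, LOW);  noTone(BUZZER);
    Serial.println("Air Quality Level Good");
  } else if (ppm < 250) {             // medium: ventilate
    digitalWrite(FAN_PIN, HIGH); digitalWrite(PURIFIER_PIN, LOW);
    digitalWrite(RED_LED, LOW);  noTone(BUZZER);
    Serial.println("Air Quality Level Medium");
  } else {                            // danger: fan + purifier + alarm
    digitalWrite(FAN_PIN, HIGH); digitalWrite(PURIFIER_PIN, HIGH);
    digitalWrite(RED_LED, HIGH); tone(BUZZER, 1800);
    Serial.println("Air Quality Level Danger");
  }
  delay(1000);
}
```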
## V Results and Discussion
This section presents the findings from the functional testing of the prototype of the proposed IoT/M2M smart building system, which consists of several innovative services and functionalities such as smart parking, garden irrigation automation, intrusion alarm, smart door, fire and gas detection, smart lighting, smart medication reminder, and indoor air quality monitoring. The services were tested individually in the previous section; in this section, the effectiveness of the designed system is confirmed by testing all of its services and functions on the final IoT/M2M smart building model, which was created to evaluate the performance and functionality of the proposed approach.
The large number of sensors, actuators, and shields used resulted in an insufficient power supply and a limited number of ports. As a result, we added an Arduino Mega board alongside the Arduino UNO and the NodeMCU to ensure proper operation and fulfil the requirements of the IoT/M2M smart building. Our goal was to design and implement an IoT/M2M smart building based on the convergence of two heterogeneous networks: wireless sensor networks operating over RFID, Bluetooth, and Wi-Fi, and mobile cellular networks like GSM, 4G, or 5G. All data collected from the proposed system is stored in the cloud database, sent to the ThingSpeak server, and visualized in four channels. The channels receive data from the building sensors at an interval of 10 seconds and are
Figure 23: Indoor air quality monitoring system.
Figure 22: Smart medicine reminder interface on the Raniso.
Figure 24: Indoor air quality monitoring interface on the Raniso.
Figure 25: Indoor air quality monitoring graph on ThingSpeak and the Serial Plotter tool.
visualized as line graphs on the ThingSpeak webpage and in the Raniso App in real time. Figure 26 shows the graphs obtained from our channels in the ThingSpeak platform. In addition to showing that building data can be monitored in real time from anywhere via the Internet, these graphs also demonstrate the proposed approach's good performance and fast response. For example, the garden irrigation automation graphs show how quickly, within seconds, the system responds when a plant needs to be watered, when the grow lights are needed, or when the water tank needs to be refilled. The same applies to the other services: the indoor air quality and fire and gas detection graphs also show the system's speed in mitigating fire and gas leak risks, and the automatic lighting graphs show real-time brightness, lux, and movement values. The results are very satisfactory, as we have achieved the desired goal of providing accurate readings to the server. Moreover, the latency is very low, so users can easily control and monitor the smart building remotely, anytime and anywhere, using mobile phones through the Raniso App. Another of this study's accomplishments is the use of artificial intelligence to control appliances through Google Assistant voice commands. The proposed smart building is customizable according to the user's preferences. For example, the user can control the lights via push buttons, Bluetooth, Wi-Fi, 4G/5G networks, voice commands through Google Assistant, or automatically based on the PIR sensor's feedback. Similarly, in the proposed smart door, the user can control the keyless door by RFID card, by password, or remotely via the Raniso app using the Bluetooth connection or the Internet over Wi-Fi or 4G/5G; the same holds for the rest of the proposed applications (smart parking, smart garden, etc.). Furthermore, to examine the proposed system's error rate, we used the data collected in Table IV, including temperature, humidity, and indoor air quality over 15 days, and calculated an error rate of less than 1%. Table IV presents the collected data, including the temperature observed by the proposed system (\(T_{ps}\)), the temperature from official data (\(T_{od}\)), the humidity observed by the proposed system (\(H_{ps}\)), the humidity from official data (\(H_{od}\)), the IAQ observed by the proposed system (\(IAQ_{ps}\)), and the IAQ from official data (\(IAQ_{od}\)). The data was collected over 15 days, which represents the number of observations (N).
For the temperature, the mean of \(T_{ps}\) is given as
\[MT_{ps}=\frac{\sum_{i=1}^{N=15}T_{ps}}{N}. \tag{3}\]
The mean of \(T_{od}\) is given as
\[MT_{od}=\frac{\sum_{i=1}^{N=15}T_{od}}{N}, \tag{4}\]
and the temperature error rate \(e_{T}\) is expressed as
\[e_{T}=\frac{MT_{od}-MT_{ps}}{MT_{od}}*100. \tag{5}\]
For humidity, the mean of \(H_{ps}\) is given as
\[MH_{ps}=\frac{\sum_{i=1}^{N=15}H_{ps}}{N}. \tag{6}\]
The mean of \(H_{od}\) is given as
\[MH_{od}=\frac{\sum_{i=1}^{N=15}H_{od}}{N}, \tag{7}\]
and the humidity error rate \(e_{H}\) is expressed as
\[e_{H}=\frac{MH_{od}-MH_{ps}}{MH_{od}}*100. \tag{8}\]
Similarly, for indoor air quality, the mean of \(IAQ_{ps}\) is given by
\[MIAQ_{ps}=\frac{\sum_{i=1}^{N=15}IAQ_{ps}}{N}. \tag{9}\]
The mean of \(IAQ_{od}\) is given by
\[MIAQ_{od}=\frac{\sum_{i=1}^{N=15}IAQ_{od}}{N}, \tag{10}\]
and the error percentage is expressed as
\[e_{IAQ}=\frac{MIAQ_{od}-MIAQ_{ps}}{MIAQ_{od}}*100. \tag{11}\]
Based on the data collected in Table IV and the mathematical equations from (3) to (11), we were able to calculate the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Date & \(T_{ps}\) & \(T_{od}\) & \(H_{ps}\) & \(H_{od}\) & \(IAQ_{ps}\) & \(IAQ_{od}\) \\ \hline \hline
01/11/2022 & 24\({}^{\circ}\)C & 25\({}^{\circ}\)C & 99\% & 89\% & 240 & 239 \\ \hline
02/11/2022 & 27\({}^{\circ}\)C & 26\({}^{\circ}\)C & 76\% & 78\% & 225 & 228 \\ \hline
03/11/2022 & 26\({}^{\circ}\)C & 25\({}^{\circ}\)C & 84\% & 83\% & 279 & 275 \\ \hline
04/11/2022 & 23\({}^{\circ}\)C & 24\({}^{\circ}\)C & 77\% & 78\% & 459 & 470 \\ \hline
05/11/2022 & 17\({}^{\circ}\)C & 19\({}^{\circ}\)C & 75\% & 77\% & 760 & 257 \\ \hline
06/11/2022 & 18\({}^{\circ}\)C & 18\({}^{\circ}\)C & 77\% & 76\% & 500 & 499 \\ \hline
07/11/2022 & 22\({}^{\circ}\)C & 23\({}^{\circ}\)C & 49\% & 51\% & 447 & 450 \\ \hline
08/11/2022 & 23\({}^{\circ}\)C & 24\({}^{\circ}\)C & 59\% & 53\% & 330 & 333 \\ \hline
09/11/2022 & 26\({}^{\circ}\)C & 25\({}^{\circ}\)C & 54\% & 55\% & 109 & 112 \\ \hline
10/11/2022 & 24\({}^{\circ}\)C & 24\({}^{\circ}\)C & 56\% & 56\% & 83 & 85 \\ \hline
11/11/2022 & 24\({}^{\circ}\)C & 25\({}^{\circ}\)C & 47\% & 46\% & 100 & 96 \\ \hline
12/11/2022 & 19\({}^{\circ}\)C & 20\({}^{\circ}\)C & 61\% & 59\% & 69 & 71 \\ \hline
13/11/2022 & 21\({}^{\circ}\)C & 20\({}^{\circ}\)C & 78\% & 78\% & 168 & 163 \\ \hline
14/11/2022 & 20\({}^{\circ}\)C & 19\({}^{\circ}\)C & 80\% & 82\% & 251 & 249 \\ \hline
15/11/2022 & 19\({}^{\circ}\)C & 19\({}^{\circ}\)C & 99\% & 98\% & 270 & 266 \\ \hline \end{tabular}
\end{table} TABLE IV: COLLECTED DATA OVER 15 DAYS.
Fig. 26: Quantitative sensing results for all the proposed applications.
estimated error rate for temperature as 0.89%, humidity as 0.56%, and indoor air quality as 0.06%. These error rates are very low, indicating the good performance and efficiency of the proposed system.
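For reference, the error-rate computation of Eqs. (3)-(11) amounts to the following small routine; the arrays shown hold only an illustrative subset of the 15-day data listed in Table IV, and the helper names are hypothetical.

```cpp
// Mean of N paired observations and the percentage error between the means,
// following Eqs. (3)-(11) of the text.
#include <cstdio>

double meanOf(const double *x, int n) {
  double s = 0.0;
  for (int i = 0; i < n; i++) s += x[i];
  return s / n;
}

double errorRate(const double *sys, const double *official, int n) {
  double mSys = meanOf(sys, n), mOff = meanOf(official, n);
  return (mOff - mSys) / mOff * 100.0;   // e.g. Eq. (5) for temperature
}

int main() {
  const int N = 5;                        // illustrative subset, not all 15 days
  double tSys[N] = {24, 27, 26, 23, 17};  // first T_ps values from Table IV
  double tOff[N] = {25, 26, 25, 24, 19};  // first T_od values from Table IV
  std::printf("temperature error rate: %.2f %%\n", errorRate(tSys, tOff, N));
  return 0;
}
```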
## VI Conclusion
This paper focuses on developing an IoT/M2M smart building paradigm based on the convergence of wireless sensor networks and mobile cellular networks. The proposed system used open hardware (Arduino, NodeMCU, sensors, modules, etc.) and open-source software (IDE, Proteus, and the Raniso App). Based on our findings in this paper, we show that the proposed solution offers a novel architectural design for a low-cost and flexible system that can be deployed for various smart IoT/M2M systems, including smart grids, smart retail, smart cities, etc. As an example, the architectural design was presented in more detail for the case of a smart building, where we proposed several main services and functionalities such as smart parking, garden irrigation automation, intrusion alarm, smart door, fire and gas detection, smart lighting, smart medication reminder, and indoor air quality monitoring. All these services can be controlled and monitored remotely via our channels in the ThingSpeak platform and through our multi-platform mobile application named 'Raniso', a local server that allows remote building control via RFID/Bluetooth/Wi-Fi connectivity and cellular networks like GSM, 4G, or 5G. The proposed IoT/M2M smart building was designed, implemented, deployed, and tested, and yielded the expected results. This smart building system can be extended in future research by integrating machine learning techniques to make it more robust and advanced.
|
2308.06073 | Emergent inductance from spin fluctuations in strongly correlated
magnets | Recently, the intriguing phenomenon of emergent inductance has been
theoretically proposed and experimentally observed in nanoscale spiral spin
systems subjected to oscillating currents. Building upon these recent
developments, we put forward the concept of emergent inductance in strongly
correlated magnets in the normal state with spin fluctuations. It is argued
that the inductance shows a positive peak at temperatures above the ordering
temperature. As for the frequency dependence, in systems featuring a
single-band structure or a gapped multi-band, we observe a Drude-type, while in
gapless multi-band systems, a non-Drude inductance with a sharp dip near zero
frequency. These results offer valuable insights into the behavior of strongly
correlated magnets and open up new possibilities for harnessing emergent
inductance in practical applications. | Taekoo Oh, Naoto Nagaosa | 2023-08-11T11:25:32Z | http://arxiv.org/abs/2308.06073v1 | # Emergent inductance from spin fluctuations in strongly correlated magnets
###### Abstract
Recently, the intriguing phenomenon of emergent inductance has been theoretically proposed and experimentally observed in nanoscale spiral spin systems subjected to oscillating currents. Building upon these recent developments, we put forward the concept of emergent inductance in strongly correlated magnets in the normal state with spin fluctuations. It is argued that the inductance shows a positive peak at temperatures above the ordering temperature. As for the frequency dependence, in systems featuring a single-band structure or a gapped multi-band, we observe a Drude-type, while in gapless multi-band systems, a non-Drude inductance with a sharp dip near zero frequency. These results offer valuable insights into the behavior of strongly correlated magnets and open up new possibilities for harnessing emergent inductance in practical applications.
_Introduction.--_ The noncollinear magnets show a variety of intriguing phenomena such as multiferroics of spin origin,[1; 2] topological protection of spin textures,[3; 4] and various kinds of Hall effects.[5; 6; 7; 8; 9; 10; 11] The underlying principle of these phenomena is the emergent electromagnetic field (magnetic (\(\mathbf{h}\)) and electric (\(\mathbf{e}\)) fields)[12] associated with the spin Berry connection \(a_{\mu}\) (\(\mu=0(t),i=1,2,3\)) defined by the spin direction field \(\mathbf{n}(r,t)\) as
\[h_{i}=(\mathbf{\nabla}\times\mathbf{a})_{i}=\frac{\hbar c}{2e}\left(\varepsilon_{ijk} \mathbf{n}\cdot\partial_{j}\mathbf{n}\times\partial_{k}\mathbf{n}\right), \tag{1}\]
where \(\varepsilon_{ijk}\) is the totally antisymmetric tensor, and
\[e_{i}=\frac{1}{c}\frac{\partial a_{0}}{\partial x_{i}}-\frac{\partial a_{i}} {\partial t}=\frac{\hbar c}{2e}\left(\mathbf{n}\cdot\partial_{i}\mathbf{n}\times \partial_{t}\mathbf{n}\right). \tag{2}\]
Applying these formula to the spiral magnet, the emergent inductance has been theoretically proposed,[13] and experimentally demonstrated in Gd\({}_{3}\)Ru\({}_{4}\)Al\({}_{12}\).[14] Later, the emergent inductance has been observed in YMn\({}_{6}\)Sn\({}_{6}\) beyond room temperature.[15] Emergent inductance in Rashba spin-orbit-coupled system has been theoretically proposed.[16]
With respect to applications, the frequency dependence and the quality factor \(Q\) are important issues. At present, the experiments show a rapid decay of the inductance as the frequency increases above \(\sim 10\) kHz, and the quality factor \(Q\) is less than a few \(\%\). This characteristic frequency is considered to be due to the collective dynamics of the ordered spin system. Typically, the dynamics is characterized by the energy scale \(\alpha J\), with the Gilbert damping constant \(\alpha\) (\(\sim 0.01\)) and the exchange coupling \(J\) of the order of 1 meV. It is noteworthy that this energy scale corresponds to the order of 1 GHz, which is much larger than the observed one but much smaller than that of the conduction electrons, which is typically of the order of 1 THz.
Therefore, in the present paper, we propose a novel way to improve the frequency dependence of the emergent inductance by utilizing the rapid quantum/thermal spin fluctuations with higher energy than that of the ordered moments. To explore this phenomenon, we employ the U(1) Slave-fermion theory, where the spin Berry connection \(\mathbf{a}\) appears naturally as the phase of the singlet correlation of neighboring spins, which can also be interpreted as the gauge potential. In this formalism, the electrons undergo fractionalization into distinct entities known as spinons and holons. Importantly, the spinons possess significantly longer lifetimes compared to the holons, leading us to anticipate the emergence of spinonic inductance in the low-frequency regime. Remarkably, the spinon inductance is physically observable according to the Ioffe-Larkin composition rule,[17] which is implied by the coupling between spinons and holons through the gauge field \(\mathbf{a}\) in the Schwinger boson method[18] or the \(U(1)\) Slave-fermion theory.[19]
Our findings, which are illustrated by several systems including the 1D spin chain, 1D spin ladder, 2D square, 2D honeycomb, and 2D Kagome lattices, can be summarized as follows. First, we observe that the inductance displays a positive peak at temperatures higher than the "ordering temperature." [See the upper panel of Fig. 1(a).] Note that the ordering temperature here means the characteristic temperature where the correlation length grows rapidly, since there is no long-range ordering in 1D and 2D Heisenberg models at finite temperatures. This behavior is attributed to the increased resistivity and associated inductance as the system becomes less metallic near the spinon phase transition temperature. At much higher temperatures, as the system shows thermally assisted hopping conduction, we anticipate that the inductance decreases again, so that a peak is formed. Second, as for the frequency dependence, we distinguish between Drude-type inductance in single-band or gapped multi-band systems and non-Drude-type inductance in gapless multi-band sys
Figure 1: Our main findings of emergent inductance in strongly correlated magnets at high temperatures. (a) The schematics of inductance in temperatures (upper panel) and in frequencies (lower panel). (b) The classification of inductance: Drude and non-Drude types.
tems. [See Fig. 1(b).] Drude-type inductance remains independent of frequency, while non-Drude-type inductance exhibits a sharp negative dip near \(\omega=0\), as depicted in the lower panel of Fig. 1(a).
_Method --_ To investigate the phenomenon of inductance in strongly correlated magnets, we employ the Slave-fermion method.[20; 21; 22; 23] The behavior of strongly correlated magnets can be effectively captured by the widely studied \(t\)-\(J\) model, defined by the Hamiltonian:
\[H=-\sum_{\langle ij\rangle}J(\mathbf{S}_{i}\cdot\mathbf{S}_{j}-\frac{1}{4}n_{ i}n_{j})-\sum_{\langle ij\rangle}t_{ij}(c_{i\alpha}^{\dagger}c_{j\alpha}+ \text{h.c.}). \tag{3}\]
Within the Slave-fermion method, the electron operator can be expressed as \(c_{i\alpha}^{\dagger}=f_{i\alpha}^{\dagger}b_{i}+\epsilon_{\alpha\beta}f_{i \beta}d_{i}^{\dagger}\), subject to the constraint \(\sum_{\alpha}f_{i\alpha}^{\dagger}f_{i\alpha}+b_{i}^{\dagger}b_{i}+d_{i}^{ \dagger}d_{i}=1\). Here, \(b_{i}^{\dagger}\) (holon) represents the vacancy, \(f_{i\alpha}^{\dagger}\) (spinon) denotes the single-electron state with spin \(\alpha\), and \(d_{i}^{\dagger}\) (doublon) corresponds to the double-occupancy. Since the electron is a fermion, either \(f_{i\alpha}\) or \(b_{i}/d_{i}\) must be fermionic, while the other is bosonic. Specifically, when \(b_{i}/d_{i}\) exhibits fermionic (bosonic) behavior, it is referred to as the Slave-fermion (Slave-boson) theory.
Due to strong correlation effects, the presence of double occupancy is prohibited. Therefore, the electron operators can be expressed as \(c_{i\alpha}^{\dagger}=f_{i\alpha}^{\dagger}b_{i}\), subject to the constraint \(\sum_{\alpha}f_{i\alpha}^{\dagger}f_{i\alpha}+b_{i}^{\dagger}b_{i}=1\). Introducing the operators \(\hat{\chi}_{ij}=\sum_{\alpha}f_{i\alpha}f_{j\alpha}\) and \(\hat{\Delta}_{ij}=\sum_{\alpha\beta}\epsilon_{\alpha\beta}f_{i\alpha}f_{j\beta}\), we find that \(\mathbf{S}_{i}\cdot\mathbf{S}_{j}=\frac{1}{2}(\hat{\chi}_{ij}^{\dagger}\hat{ \chi}_{ij}-2S(S+1))\) for ferromagnets, while \(\mathbf{S}_{i}\cdot\mathbf{S}_{j}=\frac{1}{2}(2S^{2}-\hat{\Delta}_{ij}^{ \dagger}\hat{\Delta}_{ij})\) for antiferromagnets. \(\hat{\chi}_{ij}\) represents coherent spinon propagation and \(\hat{\Delta}_{ij}\) represents spinon singlet-coupling.
By employing Slave-fermion mean-field theory (SFMFT) and introducing the order parameters \(\chi_{ij}=\langle\hat{\chi}_{ij}\rangle\) in ferromagnets, we arrive at the following Hamiltonian:
\[H= -\tilde{J}\sum_{\langle ij\rangle}(\chi_{ij}\hat{\chi}_{ij}^{ \dagger}+\chi_{ij}^{*}\hat{\chi}_{ij})+\sum_{\langle ij\rangle}(t_{ij}\chi_{ ij}b_{i}^{\dagger}b_{j}+\text{h.c.})\] \[+\sum_{i}\lambda_{i}(\sum_{\alpha}f_{i\alpha}^{\dagger}f_{i\alpha }+b_{i}^{\dagger}b_{i}-1). \tag{4}\]
Here \(\tilde{J}=J/2\). It should be noted that the last line represents the Lagrange multiplier \(\lambda_{i}\) associated with the constraint, and the fermionic and bosonic components are treated separately in this formulation.
Using a \(U(1)\) gauge theory, the physical conductivity is determined by \(\sigma^{-1}=\sigma_{f}^{-1}+\sigma_{b}^{-1}\), where \(\sigma_{f,b}\) represent the conductivity of spinons and holons, respectively. This is known as the Ioffe-Larkin composition rule.[17] The rule arises from the fact that the spinons flow against the holons because of the strong coupling of holons and spinons by the gauge field. We assume that the system is appreciably away from half-filling, so the holon conductivity \(\sigma_{b}\) follows a Drude-type behavior, \(\sigma_{b}=\sigma_{0}/(1-i\omega\tau_{b})\), which is relatively temperature-insensitive and much larger than that of the spinons. Here, the transport lifetime \(\tau_{b}\) is typically \(\sim 1\) THz\({}^{-1}=1\) ps, and the contribution to the inductance from the holons can also be neglected. Therefore, the remaining part of the Hamiltonian describes the Schwinger-boson theory for the spinons.
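As a direct consequence of this composition rule, the measured resistivity, and hence the inductive response \(-\text{Im}\,\rho(\omega)/\omega\), separates additively into spinon and holon parts. With the Drude form assumed above for the holons, the holon contribution is frequency independent and equal to \(\tau_{b}/\sigma_{0}\), which is small since \(\sigma_{0}\) is large and \(\tau_{b}\sim 1\) ps, and is therefore neglected below:
\[\rho(\omega)=\rho_{f}(\omega)+\rho_{b}(\omega),\qquad-\frac{\operatorname{Im}\,\rho(\omega)}{\omega}=-\frac{\operatorname{Im}\,\rho_{f}(\omega)}{\omega}+\frac{\tau_{b}}{\sigma_{0}}.\]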
Spinons exhibit conductivity at half-filling only in ferromagnetic systems (\(J>0\)), since \(\chi=\chi_{ij}=0\) in antiferromagnetic systems (\(J<0\)). Thus, we consider the ferromagnetic model on various lattices, such as the spin chain, honeycomb, and Kagome lattices, and determine the order parameters self-consistently as functions of temperature, as shown in the upper panels of Fig. 2. Subsequently, we compute the current-current correlation function \(\Pi(\mathbf{q},\tau)=-\langle T_{\tau}J(\mathbf{q},\tau)J(-\mathbf{q})\rangle\) and obtain Re \(\sigma(\omega)=-\text{Im}\ \Pi(\omega)/\omega\) by the analytic continuation \(i\omega_{n}\rightarrow\omega+i\eta\). We mostly set \(\mathbf{q}=0\), as the external electric field is constant in space while oscillating in time. The imaginary part of the conductivity is evaluated using the Kramers-Kronig relation, Im \(\sigma(\omega)=-\frac{1}{\pi}\,\mathcal{P}\!\int d\omega^{\prime}\frac{\text{Re}\ \sigma(\omega^{\prime})}{\omega^{\prime}-\omega}\). The inductivity is then calculated as \(\mathcal{L}=-\text{Im}\ \rho(\omega)/\omega\), where \(\rho(\omega)=1/\sigma(\omega)\). The inductance can be obtained by \(L=\mathcal{L}l/A\), where \(l\) is the length and \(A\) is the cross-sectional area of the system. We set \(\tilde{J}=1\) and present energies and frequencies in units of \(\tilde{J}\). The frequency range is \(\omega\in[-3,3]\) (up to \(\sim 1\) GHz), and the spinon lifetime parameter is \(\eta\approx dk\), where \(dk\) is the \(k\)-mesh size. We suppose that the system is a cube with 100 nm edges in all figures, so the unit of \(L\) is \(\sim 0.1\)\(\mu\)H. We will discuss how the units are determined later. Further computational details can be found in the Supplementary Materials (SM).
_The inductance peak at high T --_ In the upper panels of Fig. 2, the order parameters \(\chi\) and \(\lambda\) are plotted as functions of temperature. The phase transition to finite \(\chi\) occurs at \(T_{\chi}\approx 0.77\) for the 1D spin chain, \(T_{\chi}\approx 1.45\) for the 2D honeycomb lattice, and \(T_{\chi}\approx 1.43\) for the 2D Kagome lattice. The inductance \(L\) can only be determined below \(T_{\chi}\) by SFMFT, since \(\chi\) is finite only in this regime. This phase transition is an artifact of the mean-field theory, and describes the crossover from the coherent propagation of the spinons to their thermal hopping conduction. This limitation arises
Figure 2: The order parameters (upper panel) and log-scale inductance (lower panel) in temperatures from the self-consistent SFMFT with \(\tilde{J}=1\) on (a) 1D spin chain, (b) 2D honeycomb lattice, and (c) 2D Kagome lattice. \(T\) is in units of \(\tilde{J}\), and \(L\) is in units of \(\sim 0.1\)\(\mu\)H for the 100 nm cubic sample.
because the model introduces artificial behavior where \(\chi\) approaches zero at high temperatures and fails to capture the short-range spin correlations, which persist at finite values at any finite temperature.[23] It is also noted here that the finite energy gap \(E_{g}\) of the lowest spinon dispersion is similar to or smaller than the temperature below \(T_{\chi}\). [See SM.]
In the lower panels of Fig. 2, the increased inductance near \(T_{\chi}\) within the frequency range \(\omega\in[-3,3]\) is shown. The inductance reaches its highest value, \(L=10^{3}\sim 10^{5}\), near \(T_{\chi}\) in every case. Despite the anticipated exaggeration of these values due to the SFMFT artifact, the increasing tendency of the inductance near \(T_{\chi}\) remains valid. It is also important to note that \(T_{\chi}>T_{C}\) (the Curie-Weiss temperature), considering the presence of short-range spin correlations near \(T_{\chi}\).
On the other hand, beyond \(T_{\chi}\), where SFMFT fails, we anticipate that the conductivity rises again and the inductance decreases, as thermally activated hopping motion increases the conductivity with temperature. At higher temperatures, where lattice vibrations are more significant, the transfer of spinons could be primarily governed by incoherent thermal excitations. These excitations come from the electron-phonon coupling and the consequent polaron effect, which are not included in the present model.[24; 25] It is argued that the conductivity has a minimum at the crossover from coherent propagation to phonon-assisted hopping conduction. Therefore, the positive peak of the inductance is expected near \(T_{\chi}\).
_The inductance in frequencies_-- The Drude-type inductance can be briefly reviewed as follows.[26] In a normal metal subjected to an AC electric field \({\bf E}(\omega)\), the Drude conductivity is given by \(\sigma=\sigma_{0}/(1-i\omega\tau)\), where \(\sigma_{0}=ne^{2}\tau/m_{e}\). Here, \(n\) represents the number density, \(e\) is the charge of the carriers, \(\tau\) is the relaxation time, and \(m_{e}\) is the mass of the carriers. Consequently, the resistivity is given by \(\rho=(1-i\omega\tau)/\sigma_{0}\), and the associated inductance is given by \(L=\tau l/\sigma_{0}A=m_{e}l/ne^{2}A\). Importantly, the inductance \(L\) remains frequency-independent. In the present case, the thermally activated spinons across the gap contribute to the Drude-like transport due to \(E_{g}\lesssim T\).
In the upper panels of Fig. 3, we show the energy band structures near \(T_{\chi}\) for the spin chain, honeycomb lattice, and Kagome lattice. The 1D spin chain exhibits a single-band structure. In contrast, the 2D honeycomb and Kagome lattices are multi-band systems with band crossing points, where the inter-band contribution to the conductivity is also finite. In the honeycomb lattice, the band crossing occurs at the \(K\) points. For the Kagome lattice, band crossings are observed at both the \(\Gamma\) and \(K\) points.
In the lower panels of Fig. 3, we present the numerically computed inductance \(L\) near \(T_{\chi}\) for the spin chain, honeycomb lattice, and Kagome lattice. Notably, the inductance exhibits frequency independence only for the spin chain, following a Drude-type behavior. However, for the honeycomb and Kagome lattices, the inductance displays a sharp negative dip structure near \(\omega=0\). The characteristics of this sharp dip, including its depth and width, are closely connected to the lifetime parameter \(\eta\) of the spinons. This indicates that the resonance structure is rooted in the transport phenomena of these systems. The energy gap \(E_{g}\lesssim T\) does not appear in the frequency dependence of the inductance because of the thermally activated bosons. Further details can be found in the SM.
The distinction between the spin chain and other systems arises from the interband transitions at \({\bf q}=0\) in spinon transport phenomena. This can be demonstrated through analytic calculations of inductance in two scenarios.
First, consider the 1D spin chain. The current-current correlation function is given by
\[\Pi^{1}({\bf q},\omega)=2(\tilde{J}\chi)^{2}\int\frac{dk}{2\pi}\sin^{2}k\frac {-\Delta\omega_{k,q}\Delta n}{(\omega+i\eta)^{2}-\Delta\omega_{k,q}^{2}}, \tag{5}\]
where \(\Delta\omega_{k,q}=\omega_{k-q/2}-\omega_{k+q/2}\), \(\Delta n=n(\omega_{k-q/2})-n(\omega_{k+q/2})\), and \(\omega_{k}=\lambda-\tilde{J}|\chi|\cos k\) represents the energy. Here, \(n\) denotes the Bose-Einstein distribution. Notably, the intraband transition described by \(\Delta\omega_{k,q}\) dominates because of a single band in the system. By taking the limits \(\eta\to 0\) and \({\bf q}\to 0\) successively, the total conductivity becomes
\[\sigma^{1}(\omega)=-2(\tilde{J}\chi)^{2}\int\frac{dk}{2\pi}\sin^{2}k\left( \pi\delta(\omega)+\frac{i}{\omega}\right)n^{\prime}(\omega_{k}). \tag{6}\]
The integration in \(k\) yields \(\sigma^{1}({\bf q},\omega)=A(\pi\delta(\omega)+i/\omega)\) with \(A>0\), rendering the inductance \(L=1/A\) frequency-independent.
Second, consider the 2D honeycomb lattice. The current-current correlation function at \({\bf q}=0\) is given by
\[\Pi^{2}(0,\omega)=4(\tilde{J}\chi)^{2}\int\frac{d^{2}k}{(2\pi)^{2}}\frac{ \Delta n}{\omega_{k}}\frac{g({\bf k})^{2}}{(\omega+i\eta)^{2}-(2\omega_{k})^ {2}}, \tag{7}\]
where \({\bf k}=(k_{1},k_{2})\), \(g({\bf k})=(\tilde{J}\chi)[1+\cos k_{1}+\cos(k_{1}-k_{2})]\), \(\omega_{k}^{2}=(\tilde{J}\chi)^{2}[3+2\cos k_{1}+2\cos k_{2}+2\cos(k_{1}-k_{2})]\), and \(\Delta n=n(\lambda-\omega_{k})-n(\lambda+\omega_{k})\). Remarkably, the dispersion of the energy band in the honeycomb lattice is \(\lambda\pm\omega_{k}\), so \(2\omega_{k}=(\lambda+\omega_{k})-(\lambda-\omega_{k})\) corresponds to interband transitions. Upon
Figure 3: The energy band (upper panel) and inductance in frequencies (lower panel) near \(T_{\chi}\), for (a) 1D spin chain, (b) 2D honeycomb lattice, and (c) 2D Kagome lattice. \(E(k)\) and \(\omega\) are in units of \(\tilde{J}\), and \(L\) has the same unit as above. The 1D spin chain is a single-band system, exhibiting Drude-type inductance. The honeycomb and Kagome lattices are gapless multi-band systems, exhibiting a sharp negative dip structure near \(\omega=0\).
performing some algebraic manipulations, we arrive at
\[\sigma^{2}(\omega)=2(\tilde{J}\chi)^{2}\int_{k}\frac{g(k)^{2}\Delta n}{\omega_{k}^ {3}}\frac{i(\omega+i\eta)}{(\omega+i\eta)^{2}-(2\omega_{k})^{2}}, \tag{8}\]
where \(\int_{k}=\int d^{2}k/(2\pi)^{2}\). The integrand exhibits resonance behavior observed in numerical computations. If a band crossing point exists such that \(\omega_{k}=0\), in the vicinity of \(\omega=0\), the denominator of the integrand approaches zero, leading to a significant increase in conductivity. Consequently, the resistivity decreases near \(\omega=0\), resulting in a sharp negative dip structure in the corresponding inductance.
Three additional aspects are worth noting. First, the result obtained for the 1D spin chain can be generalized to higher dimensions with a single band. Consequently, we anticipate that higher-dimensional single-band systems would exhibit Drude-type inductance; we briefly show in the SM that the 2D square lattice hosts Drude-type inductance. Second, in a multi-band system, the sharp dip structure near \(\omega=0\) may not exist when there is a gap in the energy bands, considering that the denominator in Eq. (8) would not approach zero in such a case. We illustrate this further with a 1D spin ladder model in the SM. Lastly, in the presence of impurities or disorder, even single-band systems or gapped multi-band systems demonstrate non-Drude inductance. This arises from the contribution of intraband transitions to the transport phenomenon. We utilize the Mattis-Bardeen scheme[27] for the spin chain and provide the details in the SM.
_Discussion_-- In summary, we have examined the emergence of inductance in strongly correlated magnets with fractionalized spins. At temperatures above the ordering temperature, the dispersion of spinons decreases, leading to a significant increase in inductance. The type of inductance, whether Drude or non-Drude, depends on the system's characteristics such as the number of bands and the presence of band gaps. In non-Drude cases, a sharp dip structure near \(\omega=0\) is observed, the width of which is determined by the spinon's relaxation rate, which is typically \(\sim J\alpha\) with \(\alpha\) being the Gilbert damping constant, and is much smaller than the usual transport relaxation rate \(\tau_{b}^{-1}\).
Here, we discuss the units of physical quantities and the range of the estimated inductance in our theory. We assume that the unit of the exchange interaction \(J\) is of the order of \(\sim 1\) meV and that the unit of the lattice constant is \(a=1\)\(\AA\). Thus, considering that \(\hbar=e=1\), the unit of \(k_{B}T\) is \(11.6\) K, that of the frequency \(\omega\) is \(242\) MHz, that of the spinon lifetime \(\eta^{-1}\) is \(4.13\) ns, that of the resistivity \(\rho\) is \(258\)\(\mu\Omega\cdot\) cm, and that of the inductivity \(\mathcal{L}\) is \(1.07\) pH\(\cdot\)cm. Accordingly, in a cubic system with \(100\) nm edges, the unit of inductance is approximately \(\sim 0.1\mu\)H. Near \(T_{\chi}\), \(L\sim 10^{2}-10^{4}\)\(\mu\)H, which can be contrasted with previous experimental findings of \(L\sim 1-10\)\(\mu\)H.[15] Although the SFMFT exaggerates the computed inductance, we predict that the positive inductance peak at high temperatures is experimentally observable in strongly correlated magnets.
Lastly, it is important to distinguish our work from a previous theoretical study.[28] That investigation focused on the emergence of inductance in spiral magnetic orders with weakly correlated electrons and a strong Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction. In that case, the spin density wave drives the emergent inductance. We believe that our results provide useful intuition into transport in strongly correlated systems, complementary to Ref.[28], and reveal practical prospects for utilizing emergent inductance.
## Acknowledgment
This work was supported by JST, CREST Grant Number JPMJCR1874, Japan.
## Appendix A Ioffe-Larkin Rule in U(1) Gauge Theory of \(t-J\) model
A strongly correlated system is well described by the so-called \(t-J\) model,
\[H=\sum_{\langle ij\rangle}J({\bf S}_{i}\cdot{\bf S}_{j}-\frac{1}{4}n_{i}n_{j})-\sum_{ij,\alpha}t_{ij}(c^{\dagger}_{i\alpha}c_{j\alpha}+h.c.), \tag{10}\]
where \(t_{ij}\) represents the hopping between sites \(i\) and \(j\), and \(J\) is the exchange strength. \(J<0\) (\(J>0\)) represents ferromagnetic (antiferromagnetic) exchange. Notably, in the manuscript we mainly consider \(J<0\) systems. In this section, we review the Ioffe-Larkin rule of the \(U(1)\) gauge theory of the \(t\)-\(J\) model.
Due to the strong Coulomb repulsion in the system, we impose the constraint that the double occupancy at a site is forbidden, \(\sum_{\alpha}c^{\dagger}_{i\alpha}c_{i\alpha}\leq 1\). Then, the Slave-boson or Slave-fermion method can be applied here. The operator can be replaced by
\[c^{\dagger}_{i\alpha}=f^{\dagger}_{i\alpha}b_{i}+\epsilon_{\alpha\beta}f_{i \beta}d^{\dagger}_{i}. \tag{11}\]
Here, \(\epsilon_{\alpha\beta}\) is the anti-symmetric tensor, \(b^{\dagger}_{i}\) and \(b_{i}\) are the creation/annihilation operators for the 0-electron state (holon) at site \(i\), \(f^{\dagger}_{i\alpha}\) and \(f_{i\alpha}\) are the creation/annihilation operators for the 1-electron state with spin \(\alpha\) (spinon) at site \(i\), and \(d^{\dagger}_{i}\) and \(d_{i}\) are the creation/annihilation operators for the 2-electron state (doublon) at site \(i\). When \(c^{\dagger}_{i\alpha}\) is applied to the 0-electron state, the 1-electron state with spin \(\alpha\) is created. When \(c^{\dagger}_{i\alpha}\) is applied to the 1-electron state with spin \(\beta\neq\alpha\), it creates the 2-electron state. Since \(c\) is fermionic, one of \(b/d\) and \(f\) is fermionic and the other is bosonic. If \(b,d\) are bosons (fermions) and \(f\) is a fermion (boson), we call it "Slave-boson (Slave-fermion) theory." A constraint exists for both theories,
\[\sum_{\alpha}f^{\dagger}_{i\alpha}f_{i\alpha}+b^{\dagger}_{i}b_{i}+d^{ \dagger}_{i}d_{i}=1. \tag{12}\]
This means that each site can only be in one of the following states: vacancy, the 1-electron state with spin \(\uparrow\), the 1-electron state with spin \(\downarrow\), or the 2-electron state. Double occupancy can be forbidden by simply deleting the \(d\) operators,
\[c^{\dagger}_{i\alpha}=f^{\dagger}_{i\alpha}b_{i}. \tag{13}\]
The constraint changes into
\[\sum_{\alpha}f^{\dagger}_{i\alpha}f_{i\alpha}+b^{\dagger}_{i}b_{i}=1. \tag{14}\]
Here, we derive the Ioffe-Larkin rule by applying Slave-boson theory to Eq. 10. Please note that this argument applies equally to the Slave-fermion theory, which we use in the manuscript. Projecting the Hilbert space onto holon and spinon states, we need to find the correct matrix elements. As the spins are represented by \({\bf S}_{i}=\frac{1}{2}f^{\dagger}_{i\alpha}\sigma_{\alpha\beta}f_{i\beta}\), the dot product is represented by
\[{\bf S}_{i}\cdot{\bf S}_{j}= -\frac{1}{4}(B^{\dagger}_{ij}B_{ij}+A^{\dagger}_{ij}A_{ij})+ \frac{1}{4}f^{\dagger}_{i\alpha}f_{i\alpha}. \tag{15}\]
Here,
\[A^{\dagger}_{ij} =\epsilon_{\alpha\beta}f^{\dagger}_{i\alpha}f^{\dagger}_{j\beta },A_{ij}=\epsilon_{\alpha\beta}f_{j\beta}f_{i\alpha},\] \[B^{\dagger}_{ij} =f^{\dagger}_{i\alpha}f_{j\alpha},B_{ij}=f^{\dagger}_{j\alpha}f_{i \alpha}. \tag{16}\]
\(A_{ij}\) is the spin-singlet coupling, and \(B_{ij}\) is the hopping of the spinons. On the other hand, \(n_{i}n_{j}=(1-b^{\dagger}_{i}b_{i})(1-b^{\dagger}_{j}b_{j})\). Ignoring the hole interaction \(b^{\dagger}_{i}b_{i}b^{\dagger}_{j}b_{j}\), we have
\[H= -\sum_{\langle ij\rangle}J^{\prime}(A^{\dagger}_{ij}A_{ij}+B^{ \dagger}_{ij}B_{ij})+\mu_{B}\sum_{i}b^{\dagger}_{i}b_{i}\] \[-\sum_{ij}t_{ij}(f^{\dagger}_{i\alpha}b_{i}b^{\dagger}_{j}f_{j \alpha}+h.c.). \tag{17}\]
Here, \(J^{\prime}=J/4\), \(\mu_{B}=zJ^{\prime}\) is the chemical potential for holons, and \(z\) is the coordination number. Applying the mean-field theory such as \(\Delta_{ij}=\langle A_{ij}\rangle\) and \(\chi_{ij}=\langle B_{ij}\rangle\),
\[H_{MF}= -J^{\prime}\sum_{\langle ij\rangle}(B_{ij}^{\dagger}\chi_{ij}+B_{ ij}\chi_{ij}^{*}+A_{ij}^{\dagger}\Delta_{ij}+A_{ij}\Delta_{ij}^{*}\] \[-|\chi_{ij}|^{2}-|\Delta_{ij}|^{2})+\mu_{B}\sum_{i}b_{i}^{\dagger }b_{i}\] \[-t\chi\sum_{\langle ij\rangle}(b_{i}b_{j}^{\dagger}+h.c.) \tag{100}\]
In path integral formalism, the partition function is given by
\[Z=\int DfD\bar{f}DbDb^{*}D\lambda D\chi D\Delta\exp\Biggl{(}-\int_{0}^{\beta}L _{1}\Biggr{)}, \tag{101}\]
where
\[L_{1}= \sum_{i}[\bar{f}_{i\alpha}(\partial_{\tau}-i\lambda)f_{i\alpha}+b_{i}^{*}(\partial_{\tau}-i\lambda+\mu_{B})b_{i}]\] \[+J^{\prime}\sum_{\langle ij\rangle}(|\chi_{ij}|^{2}+|\Delta_{ij}|^{2})-J^{\prime}\chi\sum_{\langle ij\rangle}(\bar{f}_{i\alpha}f_{j\alpha}+h.c.)\] \[-J^{\prime}\Delta\sum_{\langle ij\rangle}(\bar{f}_{i\uparrow}\bar{f}_{j\downarrow}-\bar{f}_{i\downarrow}\bar{f}_{j\uparrow}+h.c.)\] \[-t\chi\sum_{ij}(b_{j}^{*}b_{i}+h.c.). \tag{102}\]
For convenience, let us think that the doping \(\langle b_{i}^{\dagger}b_{i}\rangle=x\) is small, and the phase is uniform resonating valence bond (uRVB), where \(\Delta=0\), \(\chi,\lambda\neq 0\). Since the \(U(1)\) gauge symmetry is broken, we utilize the \(U(1)\) gauge field that obeys
\[\mathbf{a}_{ij}\rightarrow\mathbf{a}_{ij}+\theta_{j}-\theta_{i},a_{0}(i) \to a_{0}(i)+\partial_{\tau}\theta_{i}, \tag{103}\]
where the gauge transform is given by \(f_{i\alpha}\to f_{i\alpha}e^{i\theta_{i}}\) and \(b_{i}\to b_{i}e^{i\theta_{i}}\). Then, we obtain
\[L_{1}= \sum_{i}[\bar{f}_{i\alpha}(\partial_{\tau}+\mu_{F}-ia_{0}(i))f_{i\alpha}\] \[+b_{i}^{*}(\partial_{\tau}+\mu_{B}-ia_{0}(i))b_{i}]\] \[-J^{\prime}\chi\sum_{\langle ij\rangle}(e^{-i\mathbf{a}_{ij}}B_{ ij}^{*}+e^{i\mathbf{a}_{ij}}B_{ij})\] \[-t\chi\sum_{\langle ij\rangle}(e^{i\mathbf{a}_{ij}}b_{i}b_{j}^{*} +h.c.). \tag{104}\]
To obtain the effective theory for the gauge field, perturbation theory is applied to the continuum version of \(L_{1}\), which is
\[L_{1}= \int d^{d}r\;[\bar{f}_{\alpha}(\mathbf{r})(\partial_{\tau}+\mu_{F}-ia_{0}(\mathbf{r}))f_{\alpha}(\mathbf{r})\] \[+b^{*}(\mathbf{r})(\partial_{\tau}+\mu_{B}-ia_{0}(\mathbf{r}))b(\mathbf{r})]\] \[-\frac{1}{2m_{F}}\bar{f}_{\alpha}(\mathbf{r})(\mathbf{\nabla}+i\mathbf{a})^{2}f_{\alpha}(\mathbf{r})\] \[-\frac{1}{2m_{B}}b^{*}(\mathbf{r})(\mathbf{\nabla}+i\mathbf{a})^{2}b(\mathbf{r}), \tag{105}\]
in \(d\)-dimension. The coupling between the fermions, bosons and gauge field is
\[L_{int}=\int d^{d}r(j_{\mu}^{F}+j_{\mu}^{B})\cdot a_{\mu}. \tag{106}\]
Integrating over \(a_{\mu}\), we have \(j_{\mu}^{F}+j_{\mu}^{B}=0\).
The Ioffe-Larkin rule can be derived from this condition. Suppose that we have an external field \(\mathbf{E}\), the gauge field \(\mathbf{a}\), and the associated internal field \(\mathbf{e}\). The external field couples only to the holons. Then the effective fields for the fermions and bosons are
\[\mathbf{e}_{B}=\mathbf{e}+\mathbf{E},\mathbf{e}_{F}=\mathbf{e}. \tag{107}\]
The current density is given by
\[j_{F}=\sigma_{F}\mathbf{e}_{F},j_{B}=\sigma_{B}\mathbf{e}_{B} \tag{108}\]
and \(j_{F}+j_{B}=0\) gives
\[\mathbf{e}=-\frac{\sigma_{B}}{\sigma_{F}+\sigma_{B}}\mathbf{E}. \tag{109}\]
The physical current is
\[j=j_{B}=-j_{F}=\frac{\sigma_{F}\sigma_{B}}{\sigma_{F}+\sigma_{B}}\mathbf{E}. \tag{110}\]
Thus,
\[\sigma^{-1}=\sigma_{F}^{-1}+\sigma_{B}^{-1}. \tag{111}\]
That is, the physical resistivity is the sum of the fermionic and bosonic resistivities. It is also noteworthy that in the \(SU(2)\) gauge theory of the \(t-J\) model the Ioffe-Larkin rule is no longer applicable, since the \(SU(2)\) gauge field cannot simply be added to the external field, which is \(U(1)\).
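To make the composition rule concrete, the following minimal sketch (not part of the original derivation; the Drude weights and relaxation rates below are purely illustrative) combines spinon and holon conductivities of Drude form and shows that the inductivities, like the resistivities, simply add.

```python
import numpy as np

# Minimal sketch of the Ioffe-Larkin composition sigma^{-1} = sigma_F^{-1} + sigma_B^{-1}.
# C_F, C_B, eta_F, eta_B are illustrative numbers, not values taken from the text.
omega = np.linspace(-5, 5, 2001) + 1e-9        # small offset avoids division by zero
C_F, eta_F = 1.0, 0.05                         # spinon Drude weight and relaxation rate
C_B, eta_B = 50.0, 0.5                         # holon Drude weight and relaxation rate

sigma_F = C_F / (eta_F - 1j * omega)
sigma_B = C_B / (eta_B - 1j * omega)
rho = 1.0 / sigma_F + 1.0 / sigma_B            # Ioffe-Larkin: resistivities add
calL = -rho.imag / omega                       # inductivity: -Im(rho)/omega

# For Drude forms rho = (eta_F - i w)/C_F + (eta_B - i w)/C_B, so calL = 1/C_F + 1/C_B:
# the spinon and holon inductivities add, and the spinon part dominates when C_F << C_B.
print(calL[0], 1.0 / C_F + 1.0 / C_B)
```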
## Appendix B 1D spin chain and its generalization to 2D square lattice
We begin from the 1D helical spin chain model, which is given by
\[H=-J_{1}\sum_{i}\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}+J_{2}\sum_{i}\mathbf{S}_{i }\cdot\mathbf{S}_{i+2}, \tag{112}\]
We utilize the Slave-fermion (Schwinger-boson) theory in this model, where \(a\equiv f_{i\uparrow}\) and \(b\equiv f_{i\downarrow}\). Then,
\[H= -\frac{J_{1}}{2}\sum_{i}\chi_{i,i+1}^{\dagger}\chi_{i,i+1}-\frac{ J_{2}}{2}\sum_{i}\Delta_{i,i+2}^{\dagger}\Delta_{i,i+2}\] \[+\sum_{i}\lambda_{i}(n_{i}-2S). \tag{113}\]
where
\[\chi_{i,i+1}=a_{i}^{\dagger}a_{i+1}+b_{i}^{\dagger}b_{i+1},\] \[\chi_{i,i+1}^{\dagger}=a_{i+1}^{\dagger}a_{i}+b_{i+1}^{\dagger}b_ {i},\] \[\Delta_{i,i+2}=a_{i}b_{i+2}-b_{i}a_{i+2},\] \[\Delta_{i,i+2}^{\dagger}=b_{i+2}^{\dagger}a_{i}^{\dagger}-a_{i+2}^{ \dagger}b_{i}^{\dagger},\] \[n_{i}=a_{i}^{\dagger}a_{i}+b_{i}^{\dagger}b_{i}, \tag{114}\]
\(\chi_{i,i+1}\) is the nearest-neighbor hopping and \(\Delta_{i,i+2}\) is the next-nearest-neighbor spin-singlet coupling of Schwinger bosons. \(\lambda_{i}\) is the Lagrange multiplier or chemical potential that imposes the constraint on the number of Schwinger bosons at each site.
The mean-field theory renders \(\langle\chi_{i,i+1}\rangle=\langle\chi_{i,i+1}^{\dagger}\rangle=\chi,\langle \Delta_{i,i+2}\rangle=\langle\Delta_{i,i+2}^{\dagger}\rangle=\Delta\), \(\lambda_{i}\) = \(\lambda\). The Hamiltonian becomes
\[H_{MF}= -\frac{J_{1}}{2}\chi\sum_{i}(\chi_{i,i+1}+\chi_{i,i+1}^{\dagger})\] \[-\frac{J_{2}}{2}\Delta\sum_{i}(\Delta_{i,i+2}+\Delta_{i,i+2}^{ \dagger})\] \[+\lambda\sum_{i}(n_{i}-2S), \tag{20}\]
The transform onto \(k\)-space gives
\[H=\sum_{k}\eta_{\alpha}^{\dagger}(k)H_{\alpha\beta}(k)\eta_{\beta}(k), \tag{21}\]
where
\[\eta^{\dagger}(k)=\left(a_{k}^{\dagger}\ \ \tilde{b}_{-k}\right),\eta(k)= \begin{pmatrix}a_{k}\\ \tilde{b}_{-k}^{\dagger}\end{pmatrix}, \tag{22}\]
and
\[H(k)=(\lambda-J_{1}\chi\cos k)\mathbf{1}+J_{2}\Delta\sin 2k\;\tau_{1}. \tag{23}\]
Here, \(\tilde{b}_{-k}=-ib_{-k},\tilde{b}_{-k}^{\dagger}=ib_{-k}^{\dagger}\). This can be considered as the bosonic Bogoliubov-de Gennes (BdG) Hamiltonian.
### Green function
The Green function is defined as
\[G_{\alpha\beta}(\tau,k)= -\langle T_{\tau}\eta_{\alpha}(\tau)\eta_{\beta}^{\dagger}\rangle. \tag{24}\]
This can be obtained from the equation of motion of Green functions.
\[\frac{d}{d\tau}G_{\alpha\beta}(\tau,k)= -\delta(\tau)[\eta_{\alpha},\eta_{\beta}^{\dagger}]+\langle T_{ \tau}\frac{d\eta_{\alpha}(\tau)}{d\tau}\eta_{\beta}^{\dagger}\rangle. \tag{25}\]
Considering that
\[\frac{d\eta_{\alpha}(\tau)}{d\tau}=[H,\eta_{\alpha}(\tau)], \tag{26}\]
where
\[H= \sum_{k}h_{11}a_{k}^{\dagger}a_{k}+h_{12}a_{k}^{\dagger}b_{-k}^{ \dagger}+h_{21}b_{-k}a_{k}\] \[+h_{22}b_{-k}b_{-k}^{\dagger}, \tag{27}\]
we obtain
\[[H,a_{q}]=-h_{11}a_{q}-h_{12}b_{-q}^{\dagger},\] \[[H,b_{-q}^{\dagger}]=h_{21}a_{q}+h_{22}b_{-q}^{\dagger}. \tag{28}\]
Thus, the equations of motion are
\[\frac{d}{d\tau}G_{1\beta}(\tau,k)= -\delta(\tau)\delta_{1\beta}-h_{11}G_{1\beta}(\tau,k)\] \[-h_{12}G_{2\beta}(\tau,k),\] \[\frac{d}{d\tau}G_{2\beta}(\tau,k)= \delta(\tau)\delta_{2\beta}+h_{21}G_{1\beta}(\tau,k)\] \[+h_{22}G_{2\beta}(\tau,k). \tag{29}\]
Fourier transform of imaginary time (\(d/d\tau\rightarrow-i\omega_{n}\)) gives
\[-i\omega_{n}G_{1\beta}(i\omega_{n},k)= -\delta_{1\beta}-h_{11}G_{1\beta}(i\omega_{n},k)\] \[-h_{12}G_{2\beta}(i\omega_{n},k),\] \[-i\omega_{n}G_{2\beta}(i\omega_{n},k)= \delta_{2\beta}+h_{21}G_{1\beta}(i\omega_{n},k)\] \[+h_{22}G_{2\beta}(i\omega_{n},k). \tag{30}\]
In matrix form, this is equivalent to
\[i\omega_{n}\tau_{3}G(i\omega_{n},k)=1+H_{k}G(i\omega_{n},k), \tag{31}\]
where \(\tau_{i}\) (\(i=0,1,2,3\)) is the Pauli matrix. Hence, the Green function is
\[G(i\omega_{n},k)=\frac{-i\omega_{n}\tau_{3}-(\lambda-J_{1}\chi \cos k)1+J_{2}\Delta\sin 2k\tau_{1}}{\omega_{n}^{2}+\omega_{k}^{2}}, \tag{32}\]
where \(\omega_{k}=\sqrt{(\lambda-J_{1}\chi\cos k)^{2}-(J_{2}\Delta\sin 2k)^{2}}\) is the quasi-particle energy in the system. Note that in the ground state, \(\omega_{k}^{2}>0\) for all \(k\).
### Self-consistent ground state
The inverse Fourier transform gives
\[\lim_{\tau\to 0^{-}}G(\tau,k)= \frac{1}{\beta}\sum_{i\omega_{n}}e^{i\omega_{n}0^{+}}G(i\omega_{n },k)\] \[= -\oint\frac{dz}{2\pi i}n(z)e^{z0^{+}}G(z,k)\] \[= \frac{1}{2}\tau_{3}-\frac{1+2n(\omega_{k})}{2\omega_{k}}[( \lambda-J_{1}\chi\cos k)1\] \[-J_{2}\Delta\sin 2k\tau_{1}]. \tag{33}\]
According to the definition,
\[\lim_{\tau\to 0^{-}}G(\tau,k)=-\begin{pmatrix}\langle a_{k}^{\dagger}a_{k} \rangle&\langle b_{-k}a_{k}\rangle\\ \langle a_{k}^{\dagger}b_{-k}^{\dagger}\rangle&\langle b_{-k}b_{-k}^{\dagger} \rangle\end{pmatrix}. \tag{34}\]
From this, we obtain the self-consistent equations for order parameters. First, the nearest-neighbor hopping parameter \(\chi\) is given by
\[\chi= \langle\chi_{i,i+1}\rangle=\frac{1}{N}\sum_{k}\langle\hat{a}_{k}^ {\dagger}\hat{a}_{k}+\hat{b}_{k}^{\dagger}\hat{b}_{k}\rangle e^{ik}\] \[= \int_{-\pi}^{\pi}\frac{dk}{2\pi}\cos k\frac{1+2n(\omega_{k})}{ \omega_{k}}(\lambda-J_{1}\chi\cos k). \tag{35}\]
Next, the next-nearest-neighbor spin-singlet coupling is given by
\[\Delta= \langle\Delta_{i,i+2}\rangle=\frac{1}{N}\sum_{k}\langle\hat{a}_{k} \hat{b}_{-k}-\hat{b}_{k}\hat{a}_{-k}\rangle e^{-2ik}\] \[= \int_{-\pi}^{\pi}\frac{dk}{2\pi}\frac{J_{2}\Delta\sin^{2}2k}{ \omega_{k}}(1+2n(\omega_{k})). \tag{111}\]
At last, the chemical potential can be determined by
\[2S+1= \int_{-\pi}^{\pi}\frac{dk}{2\pi}\frac{\lambda-J_{1}\chi\cos k}{\omega_{k}}(2n(\omega_{k})+1). \tag{112}\]
The calculated order parameters as functions of temperature at \(J_{2}/J_{1}=0.5\) and \(0\) are shown in Fig. S1. At \(J_{2}/J_{1}=0.5\), the ordering temperature for \(\chi\) is \(T_{\chi}\approx 0.75\), and that for \(\Delta\) is \(T_{\Delta}\approx 0.52\). At \(J_{2}/J_{1}=0\), the ordering temperature for \(\chi\) is \(T_{\chi}\approx 0.75\). The temperature dependence of the energy gap (from zero energy to the bottom of the spinon dispersion) is also given in Fig. S1.
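The self-consistent equations above can be solved by simple fixed-point iteration. The sketch below is an illustration rather than the code used for Fig. S1: it treats the simplest case \(J_{2}=0\), \(\Delta=0\), where the equations reduce to \(\chi=\int\frac{dk}{2\pi}\cos k\,(1+2n(\omega_{k}))\) and \(2S+1=\int\frac{dk}{2\pi}(1+2n(\omega_{k}))\) with \(\omega_{k}=\lambda-J_{1}\chi\cos k\). The bisection for \(\lambda\) and the damped update for \(\chi\) are crude but serviceable choices.

```python
import numpy as np

def bose(w, T):
    return 1.0 / np.expm1(w / T)

def solve_chain(T, J1=1.0, S=0.5, nk=4001, iters=200):
    """Fixed-point iteration for the J2 = 0, Delta = 0 self-consistent equations
    of the ferromagnetic Schwinger-boson chain (illustrative sketch only)."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    chi = 1.0
    for _ in range(iters):
        # solve the constraint 2S+1 = mean(1 + 2 n(omega_k)) for lambda by bisection
        lo, hi = J1 * abs(chi) + 1e-12, J1 * abs(chi) + 50.0 * max(T, 1.0)
        for _ in range(80):
            lam = 0.5 * (lo + hi)
            dens = np.mean(1.0 + 2.0 * bose(lam - J1 * chi * np.cos(k), T))
            lo, hi = (lam, hi) if dens > 2 * S + 1 else (lo, lam)
        chi_new = np.mean(np.cos(k) * (1.0 + 2.0 * bose(lam - J1 * chi * np.cos(k), T)))
        if abs(chi_new - chi) < 1e-10:
            return chi_new, lam
        chi = 0.5 * (chi + chi_new)   # damped update for stability
    return chi, lam

# chi is expected to be finite below T_chi (about 0.75 according to the text) and to vanish above it
print(solve_chain(T=0.3))
```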
### Current-current correlation function
The current operator from the Hamiltonian is given by
\[J(r_{j})= \frac{J_{1}\chi}{2}i[\chi_{j-1,j}-\chi_{j-1,j}^{\dagger}]. \tag{113}\]
The Fourier transform of current operator is
\[J(q)=-J_{1}\chi\sum_{k}[a_{k_{-}}^{\dagger}a_{k_{+}}+b_{k_{-}}^{ \dagger}b_{k_{+}}]\sin k, \tag{114}\]
where \(k_{\pm}=k\pm q/2\).
The current-current (auto-)correlation function is given by
\[\Pi(q;\tau)= -\left\langle T_{\tau}J(q,\tau)J(-q)\right\rangle \tag{115}\]
After some algebra, we arrive at
\[\Pi(q,\tau)= -(J_{1}\chi)^{2}\sum_{k}\sin^{2}k\] \[\times\mathrm{Tr}[G(-\tau,k_{-})\tau_{3}G(\tau,k_{+})\tau_{3}]. \tag{116}\]
The Fourier transform of correlation function is
\[\Pi(q,i\omega_{n})= -\frac{(J_{1}\chi)^{2}}{\beta}\sum_{k}\sin^{2}k\sum_{i\omega_{l}}\] \[\times\mathrm{Tr}[G(i\omega_{l},k_{-})\tau_{3}G(i\omega_{l}+i \omega_{n},k_{+})\tau_{3}]. \tag{117}\]
The tedious algebra gives
\[\Pi(q,i\omega_{n})=(J_{1}\chi)^{2}\sum_{k}\sin^{2}k\ \frac{f}{g}, \tag{118}\]
where
\[g= \omega_{k_{-}}\omega_{k_{+}}((\omega_{k_{-}}-\omega_{k_{+}})^{2}+ \omega_{n}^{2})\] \[\times((\omega_{k_{-}}+\omega_{k_{+}})^{2}+\omega_{n}^{2}) \tag{119}\]
and
\[f= (\omega_{k_{-}}+\omega_{k_{+}})(\omega_{k_{-}}\omega_{k_{+}}-\xi) ((\omega_{k_{-}}-\omega_{k_{+}})^{2}+\omega_{n}^{2})\] \[+2n(\omega_{k_{-}})\omega_{k_{+}}[(\omega_{k_{-}}^{2}-\omega_{k_{ +}}^{2})(\xi+\omega_{k_{-}}^{2})\] \[+\omega_{n}^{2}(-\xi+\omega_{k_{-}}^{2})]+2n(\omega_{k_{+}})\omega _{k_{-}}\] \[\times[(\omega_{k_{+}}^{2}-\omega_{k_{-}}^{2})(\xi+\omega_{k_{+} }^{2})+\omega_{n}^{2}(-\xi+\omega_{k_{+}}^{2})]. \tag{120}\]
The retarded correlation function is obtained by the analytic continuation \(i\omega_{n}\rightarrow\omega+i\eta\), where \(\eta\) is the inverse of the spinon lifetime.
When \(J_{2}/J_{1}=0\), the system only has the ferromagnetic exchange, and the correlation function is reduced to
\[\Pi^{R}(q,\omega)=2(J_{1}\chi)^{2}\int_{k}\sin^{2}k\frac{\Delta \omega_{k}\Delta n_{k}}{\Delta\omega_{k}^{2}-(\omega+i\eta)^{2}}, \tag{121}\]
where \(\Delta\omega_{k}=\omega_{k_{-}}-\omega_{k_{+}}\), \(\Delta n_{k}=n(\omega_{k_{-}})-n(\omega_{k_{+}})\), \(n\) is the Bose-Einstein distribution, and \(\int_{k}=\int d^{d}k/(2\pi)^{d}\). Here \(d=1\).
The inductance can be obtained as follows. First, the real conductivity can be obtained from the imaginary part of correlation function,
\[\mathrm{Re}\,\sigma(\omega)=-\frac{\mathrm{Im}\,\Pi^{R}}{\omega}. \tag{122}\]
Then, the imaginary conductivity can be computed by the Kramers-Kronig relation.
\[\mathrm{Im}\,\sigma(\omega)=-\frac{1}{\pi}\int d\omega^{\prime} \frac{\mathrm{Re}\,\sigma(\omega^{\prime})}{\omega^{\prime}-\omega}, \tag{123}\]
Then, the resistivity is given by \(\rho(\omega)=\sigma(\omega)^{-1}\), and the inductivity is found as
\[\mathcal{L}(\omega)=-\frac{\mathrm{Im}\,\rho(\omega)}{\omega}. \tag{124}\]
and the inductance is \(L=\mathcal{L}l/A\), where \(l\) is the length and \(A\) is the area of the system. We let \(l=1,A=1\) for all cases.
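The numerical pipeline described above (real conductivity from \(\mathrm{Im}\,\Pi^{R}\), Kramers-Kronig for the imaginary part, then \(\rho=1/\sigma\) and \(\mathcal{L}=-\mathrm{Im}\,\rho/\omega\)) can be sketched as follows. This is an illustrative implementation with a crude discretized principal value, not the production code used for the figures.

```python
import numpy as np

def inductivity_from_re_sigma(omega, re_sigma):
    """Kramers-Kronig for Im(sigma), then rho = 1/sigma and L(omega) = -Im(rho)/omega.
    omega must be a uniform grid; the principal value is approximated by simply
    skipping the singular point, which is adequate for a smooth Re(sigma)."""
    dw = omega[1] - omega[0]
    im_sigma = np.empty_like(re_sigma)
    for i, w in enumerate(omega):
        kernel = re_sigma / (omega - w)
        kernel[i] = 0.0                      # drop the w' = w point (principal value)
        im_sigma[i] = -dw * kernel.sum() / np.pi
    rho = 1.0 / (re_sigma + 1j * im_sigma)
    return -rho.imag / omega

# Toy check with a Drude-like Re(sigma) of weight C and relaxation rate eta:
omega = np.linspace(-20.0, 20.0, 4001) + 1e-6    # avoid omega = 0 exactly
C, eta = 2.0, 0.5
re_sigma = C * eta / (omega**2 + eta**2)
L = inductivity_from_re_sigma(omega, re_sigma)
# Away from the grid edges this reproduces the constant value L = 1/C expected
# for Drude-type inductance (cf. the next subsection).
```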
### Drude-type inductance
In Eq. 16, take the limit \(\eta\to 0\). Using the identity
\[\lim_{\eta\to 0}\frac{1}{\omega+i\eta}=P\frac{1}{\omega}-i\pi\delta(\omega), \tag{173}\]
we obtain
\[\Pi^{R}(q,\omega)= -2(J_{1}\chi)^{2}\int_{k}\sin^{2}k\frac{\Delta\omega_{k}\Delta n_ {k}}{2\omega}[\frac{2\omega}{\omega^{2}-\Delta\omega_{k}^{2}}\] \[-i\pi\delta(\frac{\omega^{2}-\Delta\omega_{k}^{2}}{2\omega})]. \tag{174}\]
Using another fact
\[\delta(g(x))=\sum_{i}\frac{\delta(x-x_{i})}{|g^{\prime}(x_{i})|}, \tag{175}\]
we have
\[\operatorname{Re}\sigma(q,\omega)= 2\pi(J_{1}\chi)^{2}\int_{k}\sin^{2}k(-\Delta\omega_{k}\Delta n_ {k})\] \[\times[\frac{\delta(\omega+\Delta\omega_{k})+\delta(\omega- \Delta\omega_{k})}{2\omega^{2}}]. \tag{176}\]
The Kramers-Kronig relation leads to
\[\operatorname{Im}\sigma(q,\omega)= -(J_{1}\chi)^{2}\int_{k}\sin^{2}k(-\frac{\Delta n_{k}}{\Delta \omega_{k}})\] \[\times\frac{2\omega}{\Delta\omega_{k}^{2}-\omega^{2}}. \tag{177}\]
After some brief algebra, the total conductivity is given by
\[\sigma(q,\omega)= (J_{1}\chi)^{2}\int_{k}\sin^{2}k(-\frac{\Delta n_{k}}{\Delta \omega_{k}})[\pi\delta(\frac{\omega^{2}-\Delta\omega_{k}^{2}}{2\omega})\] \[+i\frac{2\omega}{\omega^{2}-\Delta\omega_{k}^{2}}] \tag{178}\]
When \(q\to 0\),
\[\sigma(0,\omega)= 2(J_{1}\chi)^{2}\int_{k}\sin^{2}k(-\frac{\partial n(\omega_{k}) }{\partial\omega_{k}})(\pi\delta(\omega)+\frac{i}{\omega})\] \[= C(\pi\delta(\omega)+\frac{i}{\omega}). \tag{179}\]
Here,
\[C=2(J_{1}\chi)^{2}\int_{k}\sin^{2}k(-\frac{\partial n(\omega_{k})}{\partial \omega_{k}})>0. \tag{180}\]
Thus, the inductance is given by
\[L=\frac{1}{C}, \tag{181}\]
which is constant in frequency. Since \(C\) is proportional to \(\chi^{2}\), \(L\) reaches its highest peak near \(T_{\chi}\), where \(\chi\) becomes small. It is worth noting that even if we restore a finite \(\eta\), so that
\[\sigma(0,\omega)=C(\frac{1}{-i\omega+\eta}), \tag{182}\]
the inductance is still \(L=1/C\).
### Dirty system
When the system is sufficiently dirty due to impurities and disorder, it is well described by the Mattis-Bardeen scheme.
\[\Pi_{d}^{R}(0,\omega)=\sum_{q}\frac{2\alpha}{q^{2}+\alpha^{2}}\Pi^{R}(q,\omega) \tag{183}\]
The correlation function as a function of temperature and \(\omega\) is computed by setting \(\eta=1\times 10^{-2}\) and \(\alpha=1/200\). The associated inductance near \(T_{\chi}\) is shown in the left panel of Fig. S2. Notably, unlike the Drude type, two distinct dip structures appear, at \(\omega=0\) and at \(\omega\neq 0\). Both dips are due to the intraband transition \(\Delta\omega_{k}\) in Eq. 16. The dip at \(\omega=0\) is attributed to \(q\sim 0\), and the dip at \(\omega\neq 0\) to \(q=q_{T}\), where \(q_{T}\) is the momentum separation between the lowest- and highest-energy \(k\)-points. When we define the dispersion of the energy band as the difference between its highest and lowest energies, one notices that the dip position and the band dispersion coincide, as shown in the right panel of Fig. S2.
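As a minimal sketch of the weighted momentum average above (the array shapes and parameter values are placeholders, not those used for Fig. S2):

```python
import numpy as np

def dirty_correlation(Pi_qw, q, alpha):
    """Mattis-Bardeen-type average: Pi_d(0, w) = sum_q [2*alpha / (q**2 + alpha**2)] * Pi(q, w).
    Pi_qw has shape (len(q), n_omega); q is the momentum grid on which Pi was computed."""
    weight = 2.0 * alpha / (q**2 + alpha**2)
    return np.tensordot(weight, Pi_qw, axes=(0, 0))

# usage sketch: with Pi_qw the clean correlation function on a (q, omega) grid,
# dirty = dirty_correlation(Pi_qw, q_grid, alpha=1/200)
```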
### Generalization to 2D square lattice
The results obtained here are readily generalized to higher dimensions with a single band, for instance, a ferromagnetic 2D square lattice. The current-current correlation function for the ferromagnetic 2D square lattice has the same form as Eq. 16, but with \(\sin^{2}k\rightarrow\sin^{2}k_{i}\) (\(i=x,y\)) and \(\omega_{k}=\lambda-J_{1}\chi(\cos k_{x}+\cos k_{y})\). Therefore, by the same procedure as in the previous part, the inductance of the spinons in a 2D square lattice is constant in frequency as well.
### The unit of inductance
Here we use units in which \(J_{1}=1\) corresponds to \(1\) meV, \(\hbar=e=1\), and the lattice constant \(a=1\) corresponds to \(1\)\(\AA\). Thus, \(k_{B}T=1\) corresponds to \(11.6\) K, \(\omega=1\) to \(242\) MHz, \(\tau=1\) to \(4.13\times 10^{-9}\) s, \(\rho=1\) to \(25.8\)\(k\Omega\cdot\AA=258\)\(\mu\Omega\cdot cm\), and \(\mathcal{L}=1\) to \(1.07\)\(pH\cdot cm\). At \(T=T_{\chi}\), \(\mathcal{L}\sim 10^{3}\approx 1\)\(nH\cdot cm\). If the system is micrometer-sized, \(L\sim 10^{3}\approx 10\)\(\mu H\), which is observable in experiments.
We suppose that the holons have the average inductance of a metal. Typical values of the resistivity and relaxation time are given by \(\rho_{0}\sim 10\)\(\mu\Omega\cdot cm\) and \(\tau\sim 1\times 10^{-9}\)\(s\), so that \(\mathcal{L}=\rho_{0}\tau=10\)\(fH\cdot cm\), which is negligible compared with that of the spinons.
## Appendix C 2D Honeycomb lattice
Let us consider the Hamiltonian with only nearest-neighbor ferromagnetic exchange.
\[H= -J\sum_{\langle ab\rangle}\mathbf{S}_{a}\cdot\mathbf{S}_{b}\] \[= -J\sum_{ij}\mathbf{S}_{ijA}\cdot\mathbf{S}_{ijB}+\mathbf{S}_{ijA} \cdot\mathbf{S}_{i+1jB}\] \[+\mathbf{S}_{ijA}\cdot\mathbf{S}_{ij+1B}. \tag{10}\]
The unit cell is in Fig. S3. The red dots denote A sublattice, and the blue dots denote B sublattice.
### Mean-Field theory
The Slave-fermion (Schwinger-boson) theory for ferromagnetic honeycomb lattice is
\[H= -J\sum_{ij}[\chi^{\dagger}_{ijA,ijB}\chi_{ijA,ijB}+\chi^{\dagger} _{i+1jA,ijB}\chi_{i+1jA,ijB}\] \[+\chi^{\dagger}_{ij+1A,ijB}\chi_{ij+1A,ijB}]\] \[+\sum_{ij}\sum_{\xi=A}^{B}\lambda_{ij\xi}(n_{ij\xi}-2S), \tag{11}\]
where \(\xi\) is the sublattice index and
\[\chi_{ij\xi,i^{\prime}j^{\prime}\xi^{\prime}}=a^{\dagger}_{i^{ \prime}j^{\prime}\xi^{\prime}}a_{ij\xi}+b^{\dagger}_{i^{\prime}j^{\prime}\xi^{ \prime}}b_{ij\xi},\] \[\chi^{\dagger}_{ij\xi,i^{\prime}j^{\prime}\xi^{\prime}}=a^{ \dagger}_{ij\xi}a_{i^{\prime}j^{\prime}\xi^{\prime}}+b^{\dagger}_{ij\xi}b_{i^{ \prime}j^{\prime}\xi^{\prime}}. \tag{12}\]
Let us apply the mean field theory
\[\chi=\langle\chi_{ij\xi,i^{\prime}j^{\prime}\xi^{\prime}}\rangle=\langle\chi^ {\dagger}_{ij\xi,i^{\prime}j^{\prime}\xi^{\prime}}\rangle,\lambda=\lambda_{ij\xi}. \tag{13}\]
Then,
\[H_{MF}= -J\chi\sum_{ij}[\chi^{\dagger}_{ijA,ijB}+\chi_{ijA,ijB}+\chi^{\dagger}_{i+1jA,ijB}\] \[+\chi_{i+1jA,ijB}+\chi^{\dagger}_{ij+1A,ijB}+\chi_{ij+1A,ijB}]\] \[+\lambda\sum_{ij}\sum_{\xi=A}^{B}(n_{ij\xi}-2S). \tag{14}\]
The Fourier transform gives
\[H_{MF}= -J\chi\sum_{k}[(a^{\dagger}_{kA}a_{kB}+b^{\dagger}_{kA}b_{kB})(1+ e^{-ik_{1}}+e^{-ik_{2}})\] \[+(a^{\dagger}_{kB}a_{kA}+b^{\dagger}_{kB}b_{kA})(1+e^{ik_{1}}+e^{ ik_{2}})]\] \[+\lambda\sum_{k,\xi}(a^{\dagger}_{k\xi}a_{k\xi}+b^{\dagger}_{k\xi }b_{k\xi}-2S). \tag{15}\]
In matrix form
\[H_{MF}=V_{\alpha}(k)^{\dagger}H_{\alpha\beta}(k)V_{\beta}(k), \tag{16}\]
where
\[V=[a_{kA},a_{kB},b_{kA},b_{kB}]^{T},\] \[V^{\dagger}=[a^{\dagger}_{kA},a^{\dagger}_{kB},b^{\dagger}_{kA},b^{\dagger}_{kB}], \tag{17}\]
and
\[H(k)= \lambda\tau_{0}\sigma_{0}-J\chi((1+\cos k_{1}+\cos k_{2})\tau_{0 }\sigma_{1}\] \[+(\sin k_{1}+\sin k_{2})\tau_{0}\sigma_{2}). \tag{18}\]
Here, \(\tau_{i}\) are the Pauli matrices representing the spin degrees of freedom, and \(\sigma_{i}\) are those representing the sublattice degrees of freedom. The energy is given by
\[E=\lambda\pm\omega_{k}, \tag{19}\]
where \(\omega_{k}^{2}=(J\chi)^{2}(3+2\cos k_{1}+2\cos k_{2}+2\cos(k_{1}-k_{2}))\). The energy bands are doubly degenerate. Also, the system has the band crossings at \(K\) points in the Brillouin zone.
Since the Hamiltonian is block-diagonal and all blocks are identical, we work with only one block.
\[H(k)= \lambda\sigma_{0}-J\chi((1+\cos k_{1}+\cos k_{2})\sigma_{1}\] \[+(\sin k_{1}+\sin k_{2})\sigma_{2}). \tag{20}\]
### Green function and Self-consistent Equations
Here the Green function is simply given by
\[G(i\omega_{n},k)= (i\omega_{n}-H)^{-1}\] \[= \frac{1}{(i\omega_{n}-\lambda)^{2}-\omega_{k}^{2}}[(i\omega_{n}-\lambda)\sigma_{0}\] \[-J\chi[(1+\cos k_{1}+\cos k_{2})\sigma_{1}\] \[+(\sin k_{1}+\sin k_{2})\sigma_{2}]]. \tag{21}\]
We should find
\[G_{\alpha\beta}(0^{-},k)=-\langle V_{\beta}^{\dagger}(k)V_{\alpha}(k)\rangle \tag{101}\]
The inverse Fourier transform is
\[G(0^{-},k)= \frac{1}{\beta}\sum_{i\omega_{n}}e^{i\omega_{n}0^{+}}G(i\omega_{n},k). \tag{102}\]
When \(n_{tot}=n(\lambda+\omega_{k})+n(\lambda-\omega_{k})\), \(\Delta n=n(\lambda-\omega_{k})-n(\lambda+\omega_{k})\), the Green function becomes
\[G(0^{-},k)= -\frac{1}{2}\{n_{tot}\sigma_{0}+\frac{-\Delta n}{\omega_{k}}(-J \chi)\] \[\times[(1+\cos k_{1}+\cos k_{2})\sigma_{1}\] \[+(\sin k_{1}+\sin k_{2})\sigma_{2}]\}. \tag{103}\]
We can find the self-consistent equations from here.
\[\chi= -J\chi\int\frac{d^{2}k}{(2\pi)^{2}}[\frac{-\Delta n}{\omega_{k}}](1+ \cos k_{1}+\cos k_{2}). \tag{104}\]
Also, \(\lambda\) is determined by
\[2S= \int\frac{d^{2}k}{(2\pi)^{2}}n_{tot}. \tag{105}\]
The self-consistently obtained ground state is shown in Fig. S4 for \(J=1\). The transition temperature is \(T_{\chi}\sim 1.45\).
### The current-current correlation function
The current operator is given by
\[J_{1ij} =i(-J\chi)[\chi_{ijA,i-1jB}^{\dagger}-\chi_{ijA,i-1jB}],\] \[J_{2ij} =i(-J\chi)[\chi_{ijA,ij-1B}^{\dagger}-\chi_{ijA,ij-1B}]. \tag{106}\]
The Fourier transform gives
\[J_{1}(\mathbf{q})= \frac{i(-J\chi)}{\sqrt{N}}\sum_{k}a_{k_{-}A}^{\dagger}a_{k_{+}B} e^{-ik_{1}}-a_{k_{-}B}^{\dagger}a_{k_{+}A}e^{ik_{1}}\] \[(a\to b),\] \[J_{2}(\mathbf{q})= \frac{i(-J\chi)}{\sqrt{N}}\sum_{k}a_{k_{-}A}^{\dagger}a_{k_{+}B} e^{-ik_{2}}-a_{k_{-}B}^{\dagger}a_{k_{+}A}e^{ik_{2}}\] \[(a\to b). \tag{107}\]
From the current operator, we can acquire the current-current correlation function. Defining \(\int_{k}=\int\frac{d^{2}k}{(2\pi)^{2}}\),
\[\Pi_{11}(\tau,q)= -\langle T_{\tau}J_{1}(\tau,q)J_{1}(-q)\rangle, \tag{108}\]
When we define
\[P_{1}(k)=\sin k_{1}\tau_{1}-\cos k_{1}\tau_{2}, \tag{109}\]
the Fourier transform gives
\[\Pi_{11}(i\omega_{n},q)= -\frac{(J\chi)^{2}}{\beta}\int_{k}\sum_{i\omega_{l}}\operatorname {Tr}G(i\omega_{l},k_{-})P_{1}(k)\] \[\times G(i\omega_{l}+i\omega_{n},k_{+})P_{1}(k). \tag{110}\]
We denote the function
\[F_{1}(\mathbf{k},\mathbf{q})= 1+\cos 2k_{1}+\cos 2(k_{1}-k_{2})+\cos k_{-1}+\cos k_{+1}\] \[+\cos(k_{+1}-k_{+2})+\cos(k_{-1}-k_{-2})\] \[+\cos(2k_{1}-k_{+2})+\cos(2k_{1}-k_{-2}). \tag{111}\]
Then, the long algebraic process ends at
\[\Pi_{11}(i\omega_{n},\mathbf{q})= (J\chi)^{2}\int_{k}\frac{1}{[(\omega_{-}-\omega_{+})^{2}+\omega_{n} ^{2}][(\omega_{-}+\omega_{+})^{2}+\omega_{n}^{2}]}\] \[\times\{\Delta n_{-}[\omega_{-}(\omega_{+}^{2}-\omega_{-}^{2}- \omega_{n}^{2})\] \[+\frac{(J\chi)^{2}F_{1}}{\omega_{-}}(\omega_{-}^{2}-\omega_{+}^{ 2}-\omega_{n}^{2})]\] \[+\Delta n_{+}[\omega_{+}(\omega_{-}^{2}-\omega_{+}^{2}-\omega_{n }^{2})\] \[+\frac{(J\chi)^{2}F_{1}}{\omega_{+}}(\omega_{+}^{2}-\omega_{-}^{ 2}-\omega_{n}^{2})]\}. \tag{101}\]
where \(\Delta n_{\pm}\equiv n(\lambda-\omega_{\pm})-n(\lambda+\omega_{\pm})\). The retarded correlation function is obtained by analytic continuation \(i\omega_{n}\to\omega+i\eta\).
When \(\mathbf{q}\to 0\) with \(\Delta n_{k}=n(\lambda-\omega_{k})-n(\lambda+\omega_{k})\), the correlation function is now
\[\Pi(\omega,0)=2(J\chi)^{2}\int_{k}\frac{\Delta n_{k}}{\omega_{k}}\frac{(J\chi )^{2}F_{1}(\mathbf{k},0)+\omega_{k}^{2}}{(\omega+i\eta)^{2}-4\omega_{k}^{2}}. \tag{102}\]
We show the numerically computed inductance obtained from the above correlation function in Fig. S5 and Fig. S6. The dip structure appears near \(\omega=0\) in every case. In Fig. S5, we change the \(k\)-mesh size from \(100\times 100\) to \(200\times 200\) and show that the result is independent of the \(k\)-mesh size. In Fig. S6, we change \(\eta\), which is related to the lifetime of the spinons, from \(1dk\) to \(6dk\), where \(dk\) is the length of a side of a \(k\)-mesh plaquette. The dip structure depends on \(\eta\). Remarkably, when \(\eta\) increases, the width of the dip is enlarged, the relative depth of the dip is decreased, and the saturated value of the inductance is increased.
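For reference, a bare-bones version of this \(k\)-mesh evaluation of the \(\mathbf{q}=0\) correlation function is sketched below. It uses the equivalent form \((J\chi)^{2}F_{1}(\mathbf{k},0)+\omega_{k}^{2}=2(J\chi)^{2}g_{1}(\mathbf{k})^{2}\) with \(g_{1}=1+\cos k_{1}+\cos(k_{1}-k_{2})\) (as in the expressions that follow); \(\lambda\) and \(\chi\) are assumed to come from the self-consistent solution, and the mesh average plays the role of the \(d^{2}k/(2\pi)^{2}\) integral in these coordinates.

```python
import numpy as np

def bose(w, T):
    return 1.0 / np.expm1(w / T)

def honeycomb_Pi0(omega, T, lam, chi, J=1.0, eta=0.1, nk=160):
    """q = 0 correlation function on an nk x nk mesh (illustrative sketch).
    lam and chi are assumed to come from the self-consistent solution at temperature T,
    with lam > 3*|J*chi| so that both bands lam +/- omega_k stay positive."""
    grid = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    k1, k2 = np.meshgrid(grid, grid)
    wk = np.abs(J * chi) * np.sqrt(3 + 2*np.cos(k1) + 2*np.cos(k2) + 2*np.cos(k1 - k2))
    wk = np.maximum(wk, 1e-12)            # guard against exact band-crossing points on the mesh
    g1 = 1 + np.cos(k1) + np.cos(k1 - k2)
    dn = bose(lam - wk, T) - bose(lam + wk, T)
    Pi = np.array([
        (4 * (J * chi)**4 * (dn / wk) * g1**2 / ((w + 1j*eta)**2 - 4*wk**2)).mean()
        for w in omega
    ])
    return Pi   # Re(sigma) = -Im(Pi)/omega then feeds the Kramers-Kronig pipeline sketched earlier
```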
To explain the behavior of the inductance, let us take \(\eta\to 0\); then we can write this as
\[\Pi(\omega,0)= 4(J\chi)^{4}\int_{k}\frac{\Delta n_{k}}{\omega_{k}}\frac{g_{1}^{ 2}}{2\omega_{k}^{2}}[\frac{2\omega}{\omega^{2}-4\omega_{k}^{2}}\] \[-i\pi\delta(\frac{\omega^{2}-4\omega_{k}^{2}}{2\omega})] \tag{103}\]
where \(g_{1}(\mathbf{k})=1+\cos k_{1}+\cos(k_{1}-k_{2})\). The real conductivity is
\[\operatorname{Re}\sigma(\omega,0)= \pi(J\chi)^{4}\int_{k}\frac{\Delta n_{k}}{\omega_{k}}\frac{g_{1}^{ 2}}{2\omega_{k}^{2}}[\delta(\omega-2\omega_{k})\] \[+\delta(\omega+2\omega_{k})]. \tag{104}\]
The imaginary conductivity is obtained by the Kramers-Kronig relation.
\[\operatorname{Im}\sigma(\omega,0)= (J\chi)^{4}\int_{k}\frac{\Delta n_{k}}{\omega_{k}}\frac{g_{1}^{ 2}}{2\omega_{k}^{2}}[\frac{2\omega}{\omega^{2}-4\omega_{k}^{2}}]. \tag{105}\]
Thus, the conductivity is
\[\sigma(\omega,0)= 2(J\chi)^{4}\int_{k}\frac{\Delta n_{k}}{\omega_{k}}\frac{g_{1}^{ 2}}{2\omega_{k}^{2}}[\pi\delta(\frac{\omega^{2}-4\omega_{k}^{2}}{2\omega})+i \frac{2\omega}{\omega^{2}-4\omega_{k}^{2}}]\] \[= \lim_{\eta\to 0}2(J\chi)^{4}\int_{k}\frac{\Delta n_{k}}{\omega_{k}} \frac{g_{1}^{2}}{\omega_{k}^{2}}\] \[\times[\frac{i(\omega+i\eta)}{(\omega+i\eta)^{2}-(2\omega_{k})^{2 }}]. \tag{106}\]
The factor \(2\) comes from the spin degeneracy. Notably, the conductivity arises from interband transitions, since the energy difference between the two bands at each \(k\)-point is \(2\omega_{k}\). It is worth noting that the integrand shows a resonance at \(\omega=0\) if there is a band crossing point, i.e., \(\omega_{k}=0\). Since the honeycomb lattice has band crossings at the \(K\) points \((\omega_{k}=0)\), the conductivity is greatly enhanced near \(\omega=0\), and the resistivity and inductance are reduced near \(\omega=0\). Furthermore, according to the structure of the integrand, the dip structure is related to \(\eta\) as well. Hence, this explains the sharp dip structure appearing in the inductance at \(\omega=0\).
From this result, we can also infer that when the system is bipartite and the energy bands are gapped throughout the whole Brillouin zone (\(\omega_{k}\neq 0\)), the system would not show the dip structure at \(\omega=0\), because the integrand in the above equation no longer exhibits a resonance near \(\omega=0\).
### Dirty system
For the dirty system, we compute
\[\Pi^{d}(\omega)= \int\frac{d^{2}q}{(2\pi)^{2}}\frac{4\alpha}{q(q^{2}+\alpha^{2})} \Pi^{R}(\omega,\mathbf{q}). \tag{107}\]
We set the \(k\)-mesh size to \(160\times 160\), \(\eta=10^{-1}\), and \(\alpha=1/50\), and numerically compute the inductance near \(T_{\chi}\), shown in Fig. S7. It shares the same structural properties as Figs. S5 and S6.
## Appendix D 2D Kagome lattice
Let us consider the Hamiltonian with only nearest-neighbor ferromagnetic exchange.
\[H= -J\sum_{\langle ab\rangle}\mathbf{S}_{a}\cdot\mathbf{S}_{b}\] \[= -J\sum_{ij}[\mathbf{S}_{ijA}\cdot\mathbf{S}_{ijB}+\mathbf{S}_{ijB} \cdot\mathbf{S}_{ijC}+\mathbf{S}_{ijC}\cdot\mathbf{S}_{ijA}\] \[+\mathbf{S}_{ijA}\cdot\mathbf{S}_{i+1jB}+\mathbf{S}_{ijB}\cdot \mathbf{S}_{ij+1C}\] \[+\mathbf{S}_{ijC}\cdot\mathbf{S}_{i-1j-1A}]. \tag{108}\]
The upper line contains the intra-cell contributions and the lower line the inter-cell contributions. The unit cell is shown in Fig. S8.
We use the following notations. \(k_{i}=\mathbf{k}\cdot\vec{a}_{i}\), \(k_{i}^{\prime}=\mathbf{k}^{\prime}\cdot\vec{a}_{i}\), \(\vec{a}_{1}=(1,0),\vec{a}_{2}=(-1/2,\sqrt{3}/2)\), \(\vec{b}_{1}=2\pi(1,1/\sqrt{3})\), \(\vec{b}_{2}=2\pi(0,2/\sqrt{3})\). \(\mathbf{k}=(k_{1}\vec{b}_{1}+k_{2}\vec{b}_{2})/(2\pi)\). Within the Brillouin zone we take \(k_{1},k_{2}\in[-\pi,\pi]\). The Brillouin zone area is \(8\pi^{2}/\sqrt{3}\).
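As a quick sanity check of these conventions (a trivial verification script, nothing more), the lattice and reciprocal vectors satisfy \(\vec{a}_{i}\cdot\vec{b}_{j}=2\pi\delta_{ij}\):

```python
import numpy as np

a1, a2 = np.array([1.0, 0.0]), np.array([-0.5, np.sqrt(3) / 2])
b1 = 2 * np.pi * np.array([1.0, 1 / np.sqrt(3)])
b2 = 2 * np.pi * np.array([0.0, 2 / np.sqrt(3)])

for ai in (a1, a2):
    print([round(float(ai @ bj) / (2 * np.pi), 12) for bj in (b1, b2)])
# expected output: [1.0, 0.0] and [0.0, 1.0]
```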
### Mean-Field theory
The Slave-fermion (Schwinger-boson) theory gives
\[H= -J\sum_{ij}[\chi_{ijA,ijB}^{\dagger}\chi_{ijA,ijB}+\chi_{ijB,ijC}^{\dagger}\chi_{ijB,ijC}\] \[+\chi_{ijC,ijA}^{\dagger}\chi_{ijC,ijA}+\chi_{ijA,i+1jB}^{\dagger}\chi_{ijA,i+1jB}\] \[+\chi_{ijB,ij+1C}^{\dagger}\chi_{ijB,ij+1C}+\chi_{ijC,i-1j-1A}^{\dagger}\chi_{ijC,i-1j-1A}]\] \[+\sum_{ij}\sum_{\xi=A}^{C}\lambda_{ij\xi}(n_{ij\xi}-2S), \tag{100}\]
where \(J/2\) has been redefined as \(J\), \(\xi\) is the sublattice index, and
\[\chi_{ij\xi,i^{\prime}j^{\prime}\xi^{\prime}}= a_{i^{\prime}j^{\prime}\xi^{\prime}}^{\dagger}a_{ij\xi}+b_{i^{ \prime}j^{\prime}\xi^{\prime}}^{\dagger}b_{ij\xi},\] \[\chi_{ij\xi,i^{\prime}j^{\prime}\xi^{\prime}}^{\dagger}= a_{ij\xi}^{\dagger}a_{i^{\prime}j^{\prime}\xi^{\prime}}+b_{ij\xi}^{ \dagger}b_{i^{\prime}j^{\prime}\xi^{\prime}}. \tag{101}\]
Let us apply the mean field theory
\[\chi=\langle\chi_{ij\xi,i^{\prime}j^{\prime}\xi^{\prime}}\rangle=\langle\chi_ {ij\xi,i^{\prime}j^{\prime}\xi^{\prime}}^{\dagger}\rangle,\lambda=\lambda_{ij\xi}. \tag{102}\]
Then,
\[H_{MF}= -J\chi\sum_{ij}[\chi_{ijA,ijB}^{\dagger}+\chi_{ijA,ijB}+\chi_{ijB,ijC}^{\dagger}+\chi_{ijB,ijC}\] \[+\chi_{ijC,ijA}^{\dagger}+\chi_{ijC,ijA}+\chi_{ijA,i+1jB}^{\dagger}\] \[+\chi_{ijA,i+1jB}+\chi_{ijB,ij+1C}^{\dagger}+\chi_{ijB,ij+1C}\] \[+\chi_{ijC,i-1j-1A}^{\dagger}+\chi_{ijC,i-1j-1A}]\] \[+\lambda\sum_{ij}\sum_{\xi=A}^{C}(n_{ij\xi}-2S), \tag{103}\]
The Fourier transform gives
\[H_{MF}= -J\chi\sum_{k}[(a_{kB}^{\dagger}a_{kA}+b_{kB}^{\dagger}b_{kA})(1+e^{-ik_{1}})\] \[+(a_{kC}^{\dagger}a_{kB}+b_{kC}^{\dagger}b_{kB})(1+e^{-ik_{2}})\] \[+(a_{kA}^{\dagger}a_{kC}+b_{kA}^{\dagger}b_{kC})(1+e^{i(k_{1}+k_{2})})+h.c.]\] \[+\lambda\sum_{k,\xi}(a_{k\xi}^{\dagger}a_{k\xi}+b_{k\xi}^{\dagger}b_{k\xi}-2S). \tag{104}\]
In matrix form
\[H_{MF}=V_{\alpha}(k)^{\dagger}H_{\alpha\beta}(k)V_{\beta}(k), \tag{105}\]
where
\[V(k)=[a_{kA},a_{kB},a_{kC},b_{kA},b_{kB},b_{kC}]^{T},\] \[V^{\dagger}(k)=[a_{kA}^{\dagger},a_{kB}^{\dagger},a_{kC}^{ \dagger},b_{kA}^{\dagger},b_{kB}^{\dagger},b_{kC}^{\dagger}]. \tag{106}\]
and
\[H(k)=[\lambda I_{3\times 3}-J\chi P]\otimes\sigma_{0}, \tag{107}\]
where
\[P=\begin{pmatrix}0&1+e^{ik_{1}}&1+e^{i(k_{1}+k_{2})}\\ 1+e^{-ik_{1}}&0&1+e^{ik_{2}}\\ 1+e^{-i(k_{1}+k_{2})}&1+e^{-ik_{2}}&0\end{pmatrix}. \tag{108}\]
Here, \(\sigma_{i}\) are that representing the sublattice degrees of freedom. The energy is given by
\[E=\lambda+\omega_{i,k}, \tag{109}\]
where
\[\omega_{1,k}= -J\chi(1+\epsilon_{k}),\] \[\omega_{2,k}= -J\chi(1-\epsilon_{k}),\] \[\omega_{3,k}= 2J\chi. \tag{110}\]
Here, \(\epsilon_{k}^{2}=(3+2\cos k_{1}+2\cos k_{2}+2\cos(k_{1}+k_{2}))=(1+\cos k_{1}+ \cos k_{2})^{2}+(\sin k_{1}-\sin k_{2})^{2}\). The energy bands are doubly degenerate. There are fourfold band crossings between \(\omega_{1,k}\) and \(\omega_{2,k}\) at \((k_{1},k_{2})=\pm 2\pi(-1/3,-1/3)\), \(\pm 2\pi(-2/3,1/3)\), \(\pm 2\pi(1/3,-2/3)\) which are \(K\)-points in the Brillouin zone (i.e., \(\epsilon_{k}=0\)). Also, there are band crossings between \(\omega_{2,k}\) and \(\omega_{3,k}\) at \(\mathbf{k}=0(\Gamma)\). In the calculation, we consider one block since spin up and down are degenerate.
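The closed-form bands quoted above are easy to verify by numerically diagonalizing \(\lambda I_{3\times 3}-J\chi P(\mathbf{k})\); the short sketch below does this at a generic point and at a \(K\) point (the values of \(\lambda\) and \(\chi\) are placeholders, since only the eigenvalue structure is being checked).

```python
import numpy as np

def kagome_bands(k1, k2, J=1.0, chi=1.0, lam=0.0):
    """Compare numerical eigenvalues of lam*I - J*chi*P(k) with lam + omega_{i,k} above."""
    P = np.array([
        [0, 1 + np.exp(1j * k1), 1 + np.exp(1j * (k1 + k2))],
        [1 + np.exp(-1j * k1), 0, 1 + np.exp(1j * k2)],
        [1 + np.exp(-1j * (k1 + k2)), 1 + np.exp(-1j * k2), 0],
    ])
    eps = np.sqrt(max(3 + 2*np.cos(k1) + 2*np.cos(k2) + 2*np.cos(k1 + k2), 0.0))
    numeric = np.linalg.eigvalsh(lam * np.eye(3) - J * chi * P)
    analytic = np.sort([lam - J*chi*(1 + eps), lam - J*chi*(1 - eps), lam + 2*J*chi])
    return numeric, analytic

print(kagome_bands(0.3, -1.1))                       # generic k: the two lists agree
print(kagome_bands(2*np.pi/3, -4*np.pi/3)[0])        # K point: omega_1 and omega_2 touch
```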
### Green function and self-consistent equation
The Green function is
\[G(i\omega_{n},k)=(i\omega_{n}-H)^{-1}. \tag{111}\]
and what we should find here is
\[G(0^{-},k)= \frac{1}{\beta}\sum_{i\omega_{n}}e^{i\omega_{n}0^{+}}G(i\omega_{n},k). \tag{112}\]
Because of the complexity, we here express the Green function in the Lehmann representation. Using the identity \(\sum_{a=1}^{3}|a,k\rangle\langle a,k|=I\), where \(|a,k\rangle\) is the eigenstate of \(H(k)\) with eigenvalue \(\lambda+\omega_{a,k}\), one finds
\[G(i\omega_{n},k)= \sum_{a,b}|a,k\rangle\langle a,k|(i\omega_{n}-H)^{-1}|b,k\rangle \langle b,k|,\] \[= \sum_{a}\frac{1}{i\omega_{n}-\lambda-\omega_{a,k}}|a,k\rangle \langle a,k|. \tag{101}\]
Then,
\[G(0^{-},k)= -\sum_{a}|a,k\rangle\langle a,k|[\int\frac{dz}{2\pi i}\frac{n(z)} {z-\lambda-\omega_{a,k}}]\] \[= -\sum_{a}|a,k\rangle\langle a,k|(n(\lambda+\omega_{a,k})). \tag{102}\]
We can find the self-consistent equations from here. (\(\int_{k}=\int d^{2}k/(2\pi)^{2}\).)
\[\chi= -\frac{1}{3}\int_{k}[G_{12}(0^{-},k)+G_{21}(0^{-},k)+G_{23}(0^{-},k)\] \[+G_{32}(0^{-},k)+G_{31}(0^{-},k)+G_{13}(0^{-},k)]. \tag{103}\]
Also,
\[2S= -\frac{2}{3}\int_{k}\mathrm{Tr}\big{[}G(0^{-},k)\big{]}. \tag{104}\]
The transition occurs at \(T_{\chi}=1.43\).
### The current-current correlation function
The current operator in this system is
\[J_{1}(i,j)= -iJ\chi(\chi^{\dagger}_{i-1jA,ijB}-\chi_{i-1jA,ijB}),\] \[J_{2}(i,j)= -iJ\chi(\chi^{\dagger}_{ij-1B,ijC}-\chi_{ij-1B,ijC}),\] \[J_{3}(i,j)= -iJ\chi(\chi^{\dagger}_{i+1j+1C,ijA}-\chi_{i+1j+1C,ijA}), \tag{105}\]
The Fourier transform gives
\[J_{1}(\mathbf{q})= -i\frac{J\chi}{\sqrt{N}}\sum_{k}[a^{\dagger}_{k_{-}A}a_{k_{+}B}e ^{ik_{1}}-a^{\dagger}_{k_{-}B}a_{k_{+}A}e^{-ik_{1}}].\] \[J_{2}(\mathbf{q})= -i\frac{J\chi}{\sqrt{N}}\sum_{k}[a^{\dagger}_{k_{-}B}a_{k_{+}C}e ^{ik_{2}}-a^{\dagger}_{k_{-}C}a_{k_{+}B}e^{-ik_{2}}].\] \[J_{3}(\mathbf{q})= -i\frac{J\chi}{\sqrt{N}}\sum_{k}[a^{\dagger}_{k_{-}C}a_{k_{+}A}e ^{-i(k_{1}+k_{2})}\] \[-a^{\dagger}_{k_{-}A}a_{k_{+}C}e^{i(k_{1}+k_{2})}]. \tag{106}\]
Here, \(\mathbf{k}_{\pm}=\mathbf{k}\pm\mathbf{q}/2\).
Let us find the following correlation function.
\[\Pi_{11}(\tau,\mathbf{q})=-\langle T_{\tau}J_{1}(\tau,\mathbf{q})J_{1}(- \mathbf{q})\rangle. \tag{107}\]
Then,
\[\Pi_{11}(\tau,\mathbf{q})= -(J\chi)^{2}\int_{k}\mathrm{Tr}\,G(-\tau,k_{-})P_{1}(k)G(\tau,k_{ +})\] \[\times P_{1}(k), \tag{108}\]
where
\[P_{1}(k)=-i\begin{pmatrix}0&e^{ik_{1}}&0\\ -e^{-ik_{1}}&0&0\\ 0&0&0\end{pmatrix}. \tag{109}\]
The Fourier Transform gives
\[\Pi_{11}(i\omega_{n},\mathbf{q})= -\frac{(J\chi)^{2}}{\beta}\int_{k}\sum_{i\omega_{l}}\mathrm{Tr}\,G (i\omega_{l},k_{-})P_{1}(k)\] \[\times G(i(\omega_{l}+\omega_{n}),k_{+})P_{1}(k), \tag{110}\]
We denote by \(U\) and \(U^{\prime}\) the unitary matrices diagonalizing \(H(k)\) at \(k=k_{-}\) and \(k=k_{+}\), respectively, and define \(\Delta n_{ab}=n(\lambda+\omega_{b,k_{+}})-n(\lambda+\omega_{a,k_{-}})\) and \(\Delta w_{ab}=\omega_{b,k_{+}}-\omega_{a,k_{-}}\). Thus,
\[\Pi_{11}(i\omega_{n},\mathbf{q})= (J\chi)^{2}\int_{k}\sum_{ab}\frac{-\Delta n_{ab}\Delta w_{ab}}{(i \omega_{n}-\Delta w_{ab})(i\omega_{n}+\Delta w_{ab})}\] \[\times[U_{2,a}(U^{\dagger})_{a,2}U^{\prime}_{1,b}(U^{\dagger})^{ \prime}_{b,1}\] \[+U_{1,a}(U^{\dagger})_{a,1}U^{\prime}_{2,b}(U^{\dagger})^{\prime }_{b,2}\] \[-U_{1,a}(U^{\dagger})_{a,2}U^{\prime}_{1,b}(U^{\dagger})^{\prime }_{b,2}e^{-2ik_{1}}\] \[-U_{2,a}(U^{\dagger})_{a,1}U^{\prime}_{2,b}(U^{\dagger})^{\prime }_{b,1}e^{2ik_{1}}]. \tag{101}\]
The retarded form is obtained from the analytic continuation \(i\omega_{n}\rightarrow\omega+i\eta\). When \(q=0\), \(\Delta w_{ab}\) represents the interband transitions of the spinons. It is noteworthy that the correlation function and the conductivity show a resonance at \(\omega=0\), since there are band-crossing points with \(\Delta w_{ab}=0\). Thus, a dip structure near \(\omega=0\) is expected, as in the 2D honeycomb lattice. This expectation is confirmed by the numerical results shown in Fig. S10 (\(J=1,\eta=10^{-1},T\sim T_{\chi}\)). As in the case of the 2D honeycomb lattice, the computation is independent of the \(k\)-mesh size, and the dip width and depth depend strongly on the parameter \(\eta\). Furthermore, the inductance in the dirty system also shows the same structure.
This result can be generalized to an arbitrary number of energy bands. Although the current-current correlation function changes its detailed form, the denominator is always represented as
\[(\omega+i\eta-\Delta w_{ab})(\omega+i\eta+\Delta w_{ab}). \tag{102}\]
in the Lehmann representation. Since \(\Delta w_{ab}\) represents the interband transitions, a band-crossing point (\(\Delta w_{ab}=0\)) gives a resonance near \(\omega=0\), leading to the sharp dip near \(\omega=0\). However, if there is no band crossing, there is no resonance near \(\omega=0\), and the sharp dip structure vanishes. When the system is dirty, \(\Delta w_{ab}\) also contains intraband transitions, and the sharp dip structure at \(\omega=0\) appears. We illustrate this statement with an example in the next section.
## Appendix E 1D spin ladder chain
To illustrate the inductance in a gapped multiband system, we take the 1D spin-ladder chain model.
\[H= -J\sum_{\langle ab\rangle}\mathbf{S}_{a}\cdot\mathbf{S}_{b}\] \[= -J\sum_{i}[\mathbf{S}_{iA}\cdot\mathbf{S}_{iB}+\mathbf{S}_{iA} \cdot\mathbf{S}_{i+1A}+\mathbf{S}_{iB}\cdot\mathbf{S}_{i+1B}]. \tag{103}\]
The structure of 1D spin ladder chain is given in Fig. S11.
### Mean-Field theory
The Slave-fermion (Schwinger-boson) theory for ferromagnetic ladder model is
\[H= -J\sum_{i}[\chi^{\dagger}_{iA,iB}\chi_{iA,iB}+\chi^{\dagger}_{iA, i+1A}\chi_{iA,i+1A}\] \[+\chi^{\dagger}_{iB,i+1B}\chi_{iB,i+1B}]+\sum_{i,\alpha}\lambda_{i \alpha}[n_{i\alpha}-1]\] \[-h\sum_{i\alpha}[n_{iA}-n_{iB}]. \tag{104}\]
Here, \(\alpha=A,B\), and \(h\) gives additional imbalance between \(A\) and \(B\) and enlarges the energy gap. The mean-field theory gives,
\[H= -J\chi\sum_{i}[\chi^{\dagger}_{iA,iB}+\chi_{iA,iB}+\chi^{\dagger}_{iA,i+1A}+\chi_{iA,i+1A}\] \[+\chi^{\dagger}_{iB,i+1B}+\chi_{iB,i+1B}]+\lambda\sum_{i,\alpha}[n_{i\alpha}-1]\] \[-h\sum_{i}[n_{iA}-n_{iB}]. \tag{105}\]
In \(k\)-space, there are two copies of
\[H(k)=(\lambda-2J\chi\cos k)\tau_{0}-J\chi\tau_{1}-h\tau_{3}. \tag{106}\]
We consider only the \(\uparrow\) spin and double the result. The energy is
\[E_{\pm}(k)=\lambda-2J\chi\cos k\pm b, \tag{107}\]
where \(b=\sqrt{(J\chi)^{2}+h^{2}}\). The two bands are separated by a gap of \(2b\) throughout the whole Brillouin zone. We express the Green function in the Lehmann representation,
\[G(k,i\omega_{n})= (i\omega_{n}-H)^{-1}\] \[= \sum_{a}\frac{|a\rangle\langle a|}{i\omega_{n}-E_{a}(k)} \tag{108}\]
where \(a=\pm\).
### Current-current correlation function
The current operator is
\[J(i)= (-iJ\chi)[\chi^{\dagger}_{iA,i+1A}-\chi_{iA,i+1A}\] \[+\chi^{\dagger}_{iB,i+1B}-\chi_{iB,i+1B}.] \tag{109}\]
By Fourier transform, one could get
\[J(q)= \frac{J\chi}{N}\sum_{k}\sin k[u^{\dagger}_{k_{-,A}}u_{k_{+}A}+u^{ \dagger}_{k_{-,B}}u_{k_{+}B}] \tag{100}\]
where \(u=f_{\uparrow}\), \(k_{\pm}=k\pm q/2\).
The current-current correlation function is given by
\[\Pi(\mathbf{q},i\omega_{n})= -\frac{(J\chi)^{2}}{\beta}\int_{k}\sin^{2}k\sum_{i\omega_{1}}\] \[\times\mathrm{Tr}[G(i\omega_{n}+i\omega_{l},k_{-})G(i\omega_{l}, k_{+})] \tag{101}\]
After some algebra with the Green function above and analytic continuation, one would find out that
\[\Pi(q,\omega)=2(J\chi)^{2}\int_{k}\sin^{2}k\sum_{a}\frac{-\Delta n _{a}\Delta E_{a}}{(\omega+i\eta)^{2}-\Delta E_{a}^{2}}, \tag{102}\]
where \(\Delta n_{a}=n_{B}(E_{a}(k_{-}))-n_{B}(E_{a}(k_{+}))\) and \(\Delta E_{a}=E_{a}(k_{-})-E_{a}(k_{+})\). Here the factor \(2\) comes from inclusion of the \(\downarrow\) spin part. When \(\eta\to 0\),
\[\Pi(q,\omega)= 2(J\chi)^{2}\int_{k}\sin^{2}k\sum_{a}(-\frac{\Delta n_{a}}{ \Delta E_{a}})\frac{\Delta E_{a}^{2}}{2\omega}\] \[\times[\frac{2\omega}{\omega^{2}-\Delta E_{a}^{2}}-i\pi\delta( \frac{\omega^{2}-\Delta E_{a}^{2}}{2\omega})]. \tag{103}\]
The conductivity is given by
\[\sigma(q,\omega)= 2(J\chi)^{2}\int_{k}\sin^{2}k\sum_{a}(-\frac{\Delta n_{a}}{ \Delta E_{a}})[\pi\delta(\frac{\omega^{2}-\Delta E_{a}^{2}}{\omega})\] \[+i\frac{\omega}{\omega^{2}-\Delta E_{a}^{2}}]. \tag{104}\]
It is noteworthy that the correlation function and conductivity have the same form as those of the 1D spin chain, so we anticipate a constant inductance. When we let \(q\to 0\) and restore \(\eta\),
\[\sigma(\omega)= \lim_{\eta\to 0}2(J\chi)^{2}i\int_{k}\sum_{a}(-\frac{\Delta n_{a}}{ \Delta E_{a}})\frac{1}{\omega+i\eta}\] \[= \lim_{\eta\to 0}\frac{iC}{\omega+i\eta}. \tag{105}\]
Here, \(C>0\) depends only on the temperature. Thus, the inductance \(L=1/C\) is constant in frequency, just as anticipated.
|
2310.03341 | Fermionic Quantum Turbulence: Pushing the Limits of High-Performance
Computing | Ultracold atoms provide a platform for analog quantum computer capable of
simulating the quantum turbulence that underlies puzzling phenomena like pulsar
glitches in rapidly spinning neutron stars. Unlike other platforms like liquid
helium, ultracold atoms have a viable theoretical framework for dynamics, but
simulations push the edge of current classical computers. We present the
largest simulations of fermionic quantum turbulence to date and explain the
computing technology needed, especially improvements in the ELPA library that
enable us to diagonalize matrices of record size (millions by millions). We
quantify how dissipation and thermalization proceed in fermionic quantum
turbulence by using the internal structure of vortices as a new probe of the
local effective temperature. All simulation data and source codes are made
available to facilitate rapid scientific progress in the field of ultracold
Fermi gases. | Gabriel Wlazlowski, Michael McNeil Forbes, Saptarshi Rajan Sarkar, Andreas Marek, Maciej Szpindler | 2023-10-05T06:48:31Z | http://arxiv.org/abs/2310.03341v2 | Characterizing the Cascade of Energy in Fermionic Quantum Turbulence: Pushing the Limits of High-Performance Computing
###### Abstract
Ultracold atoms provide a form of analog quantum computer capable of simulating the quantum turbulence that underlies mysterious phenomena like pulsar glitches in rapidly spinning neutron stars. Unlike other systems (e.g., liquid helium), ultracold atoms have a viable theoretical framework for dynamics, but simulations push the edge of current classical computers. We present the largest simulations of fermionic quantum turbulence to date and explain the computing technology needed, especially improvements in the Eigenvalue SoLvers for Petaflop-Applications (ELPA) library that enable us to diagonalize matrices of record size (millions by millions). We quantify how dissipation and thermalization proceed in fermionic quantum turbulence, and provide evidence that the temperature-dependence of quantum vortices alters the correlation between the turbulent cascades of flow energy and total vortex length. All simulation data and source codes are made available to facilitate rapid scientific progress in the field of quantum turbulence.
## Introduction
Computation is regarded as the third pillar of physical science, complementing theoretical and experimental physics. Each pillar has its unique methodology: theoretical physics relies on mathematical analysis, measurements are the central interest of experimental physics, and numerical modeling is the heart of computational physics. Many recent breakthroughs, like observing the Higgs boson (_1_, _2_) or detecting gravitational waves (_3_), would not have been possible without advanced numerical analysis capabilities that adapt algorithmic breakthroughs to evolving hardware. Here we demonstrate the synergy between theory and computation: advances in linear algebra libraries enable Europe's fastest supercomputer (LUMI) to diagonalize matrices of record size, allowing us to simulate turbulent dynamics in quantum systems (superfluids). We use these simulations to explain how vortices dissipate energy, driving quantum turbulence in neutron stars and ultracold-atom experiments.
As Moore's law bottoms out, using high-performance computing (HPC) effectively becomes a significant challenge. Current HPC systems consist of thousands of interconnected nodes, each comprising dozens of computing cores or multiple hardware accelerators. Specifically, accelerators like graphics processing units (GPUs) account for most of the computing power on modern platforms. Leadership supercomputers can compute from \(10^{17}\) floating point operations per second (FLOPS) for pre-exascale systems, to \(10^{18}\) FLOPS (\(=\)1 EFLOPS or exaflops) for exascale systems. According to the Top500 list (June 2023), the top three supercomputers are: Frontier (Oak Ridge National Laboratory, USA) with 1.19 EFLOPS, Supercomputer Fugaku (RIKEN Center for Computational Science, Japan) with 0.44 EFLOPS, and LUMI (EuroHPC/CSC, Finland) with 0.31 EFLOPS. Here we use LUMI (Fig. 1), the fastest European system, to demonstrate some of its capabilities to advance computational physics.
While this computational potential is enormous, using these HPC capabilities requires a highly-tuned software stack capable of dealing with massively parallel and heterogeneous architectures, and core scientific libraries are constantly being adjusted to maximize performance on new hardware. These include Fast Fourier Transforms, linear algebra routines, libraries for matrix decomposition, random number generators, and solvers for algebraic and differential equations. These core libraries form the building blocks for the efficient domain-specific scientific packages that enable us to make physics breakthroughs in the domain of quantum mechanics.
Simulating quantum dynamics is one of the hardest challenges for classical computers due to the exponentially large size of a many-body wavefunction. Even storing the wavefunction for a modest nucleus like tin with \(\sim\) 100 nucleons would require more bytes
Figure 1: **LUMI pre-exascale system with AMD MI250X GPU-accelerated nodes.** Each GPU node consists of four AMD MI250X GPUs, each of which has two GCDs that are individually programmable devices. Photo copyright Fade Creative.
2309.01435 | Middle-obstacle approach of mapping phase-field model unto its sharp
interface counterpart | A new diffuse interface model has been proposed in this study for simulating
binary alloy solidification under universal cooling conditions, involving both
equilibrium and non-equilibrium solute partitioning. Starting from the
Gibbs-Thomson equation, which is the classical theory that describes the
dynamics of a sharp interface, the phase-field equation is derived using a
traveling wave solution that represents a diffuse interface. To tackle the
spurious effects caused by the variation of liquid concentration within the
diffuse interface with artificial width, a middle obstacle is introduced to
sharpen the diffuse interface and an invariant liquid concentration can be
found for determining a constant undercooling in the interface normal
direction. For slow solidification under equilibrium conditions, the
convergence performance of the dendrite tip shows superior invulnerability to
the width effect of the diffuse interface. For rapid solidification under
non-equilibrium conditions, the output partition coefficients obtained from the
steady-state concentration profiles agree with the input velocity-dependent
function. The proposed model is promising to be an indispensable tool for the
development of advanced alloy materials through the microstructure control of
solidification under a wide range of cooling conditions. | Chuanqi Zhu, Yusuke Seguchi, Masayuki Okugawa, Yuichiro Koizumi | 2023-09-04T08:33:35Z | http://arxiv.org/abs/2309.01435v3 | # Middle-obstacle approach of mapping phase-field model onto its sharp interface counterpart
###### Abstract
A new diffuse interface model has been proposed in this study for simulating binary alloy solidification under universal cooling conditions, involving both equilibrium and non-equilibrium solute partitioning. Starting from the Gibbs-Thomson equation, which is the classical theory that describes the dynamics of a sharp interface, the phase-field equation is derived using a traveling wave solution that represents a diffuse interface. To tackle the spurious effects caused by the variation of liquid concentration within the diffuse interface with artificial width, a middle obstacle is introduced to "sharpen" the diffuse interface and an invariant liquid concentration can be found for determining a constant undercooling in the interface normal direction. For slow solidification under equilibrium conditions, the convergence performance of the dendrite tip shows superior invulnerability to the width effect of the diffuse interface. For rapid solidification under non-equilibrium conditions, the output partition coefficients obtained from the steady-state concentration profiles agree with the input velocity-dependent function. The proposed model is promising to be an indispensable tool for the development of advanced alloy materials through the microstructure control of solidification under a wide range of cooling conditions.
## I Introduction
In alloy solidification, solute partitioning can be either equilibrium or non-equilibrium, depending on the interface velocity [1, 2]. Local equilibrium is usually assumed within the interface moving at a slow or moderate speed and the solute with limited solubility is partially rejected from the newly-formed solid. The concentrations on the two sides of the steady-state interface lie on the solidus and liquidus of the equilibrium phase diagram. When the interface velocity becomes high, the amount of captured solute exceeds the solubility and the liquid concentration ahead of the interface deviates from the equilibrium liquidus. Consequently, the morphology and concentration distribution in the microstructures resulting from slow and fast solidification processes are distinct. Thus, it is vital to fully describe the process of non-equilibrium solute partitioning, which is dependent on interface velocity, for optimizing the processing conditions during rapid solidification and developing advanced alloy materials.
The continuous growth (CG) model [3, 4] and the local non-equilibrium (LN) model [5, 6] are two established models for describing non-equilibrium solute partitioning. The interface is assumed to be a sharp plane in these analytical models. Correspondingly, numerical models including diffuse interfaces were developed in accordance with the analytical ones. They are the so-called parabolic [7, 8, 9] and hyperbolic [10] phase-field models, respectively. Despite the physical correctness of these phase-field models, they can hardly be implemented in two-dimensional space at scales relevant to the characteristic length of the microstructure, because the width of the diffuse interface in these models needs to have a physical size on the nano-scale. If the interface width is artificially enlarged to reduce the computational cost, the numerical models no longer match the analytical ones. It is required to develop numerical schemes that can improve the computational efficiency of the diffuse interface models without losing the connection to the analytical sharp interface models.
The so-called quantitative phase-field model [11, 12, 13, 14] is the parabolic model with an artificially wide interface. They can be mapped onto the analytical sharp interface model through thin-interface analysis and additional flux incorporated into the diffusion equation. The simulation results of alloy solidification can possibly be independent of the interface width once a thin interface limit is reached. It should be noted that the convergence of the results from these models is mostly limited to low-speed interfaces under equilibrium conditions. It is still challenging to extend them to simulate high-speed interfaces under non-equilibrium conditions.
Recently, attempts have been made to extend the quantitative scheme for simulating solidification under non-equilibrium conditions in accordance with the CG analytical models. In the work of Tatu et al. [15] and Kavousi et al. [16], the anti-trapping current is modified to regulate non-equilibrium solute partitioning. Results weakly dependent on the interface width can be obtained. Ji et al. [17] reproduced the banded structure consistent with experimental observation. In their model, the solute transport in the diffuse interface region is enhanced by introducing a diffusivity interpolated by a quadratic function, which should be adapted according to the interface width. The model proposed by Steinbach et al. [18] incorporates a third kinetic equation for controlling the exchange of solute atoms between solid and liquid phases within the interface. The results tend to be dependent on the interface width and the kinetic parameter (permeability) needs to be determined by fitting to the experimental or analytical data [19].
In contrast to the variational approach [20], in which the governing equations for concentration and phase field are derived from the variational derivative of the free energy functional, the present work chooses a non-variational way to map the diffuse interface model onto its sharp interface counterpart. Rather than mapping the parabolic or hyperbolic phase-field models onto their analytical counterparts, the velocity-dependent partition coefficients obtained from CG and LN models can be straightforwardly incorporated into the proposed diffuse interface model. The simulations of dendrite growth under equilibrium conditions and excessive solute trapping under non-equilibrium conditions will be demonstrated to verify the applicability of the proposed model.
## II Method
### 2.1 Phase-field equation
The kinetics of the sharp interface is described by the Gibbs-Thomson equation,
\[v=\mu(\Delta T-\Gamma\kappa). \tag{1}\]
The motion of the interface is driven by the contributions from interface undercooling \(\Delta T\) and the surface pressure related to curvature \(\kappa\) and Gibbs-Thomson coefficient \(\Gamma\), which is the ratio of surface energy to fusion entropy. The proportional relationship between the net contribution and interface velocity \(v\) is suggested by the kinetic coefficient \(\mu\).
In the diffuse interface model, a continuous field parameter is used to represent the bulk phases as well as the interface. The so-called phase field remains constant in the bulk regions and varies smoothly within the interface region. Conventionally, the value of the phase field changes from 1 in the solid to 0 in the liquid, showing a diffuse profile at the interface. For pure substances, the equation that governs the kinetics of the diffuse interface can be expressed by referring to the Gibbs-Thomson equation. The resulting phase-field equation [21; 22] is
\[\frac{1}{\mid\nabla\phi\mid}\frac{\partial\phi}{\partial t}=\mu\left[\Delta T -\Gamma\,\nabla\left(\,-\,\frac{\nabla\phi}{\mid\nabla\phi\mid}\,\right)\right], \tag{2}\]
in which interface velocity \(v\) and curvature \(\kappa\) are replaced by the gradient and local change of the phase field \(\phi\). For isothermal conditions when \(\Delta T\) is constant throughout the domain, Eqs. (1) and (2) are identical if a proper solution for the phase field with a diffuse interface profile can be found. Compared to the Gibbs-Thomson equation, the phase-field equation facilitates the numerical simulation of pattern formation because the position, curvature, and morphology of the interface can be implicitly and easily obtained from the spatially continuous phase field.
The solution of the phase field can be found by minimizing the total free energy of the whole domain with bulk phases and diffuse interface. Following the derivation in [23], the solution of the steady-state diffuse interface in 1D is specified as,
\[\phi=\begin{cases}1&x<-\eta/2+vt\\ \frac{1}{2}-\frac{1}{2}\sin\frac{\pi}{\eta}(x-vt)&-\eta/2+vt\leq x\leq\eta/2+vt\\ 0&x>\eta/2+vt\end{cases} \tag{3}\]
which is the so-called traveling wave solution. The interface width is \(\eta\) and the velocity is \(v\). The values of the phase field in the solid and liquid regions are 1 and 0, respectively. For position within the interface (\(-\eta/2+vt\leq x\leq\eta/2+vt\)), the solution shows a diffuse contour constructed by the sinusoidal function, which has singularities at the boundaries between the interface and bulk regions. Despite this, the traveling wave solution has its advantage in numerical implementation and is related to the double obstacle function used in the free energy functional. With this solution, Eq. (2) can be reformulated to be,
\[\frac{\partial\phi}{\partial t}=\mu\bigg{\{}\Gamma\big{[}\nabla^{2}\phi+\frac{ \pi^{2}}{\eta^{2}}(\phi-\frac{1}{2})\big{]}-\frac{\pi\sqrt{\phi(1-\phi)}}{ \eta}\Delta T\bigg{\}} \tag{4}\]
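For illustration, the following minimal Python sketch advances Eq. (4) with an explicit Euler step on a uniform 1D grid, starting from the traveling-wave profile of Eq. (3). The grid spacing, time step, interface width, undercooling, and the update loop are illustrative assumptions and do not reproduce the numerical scheme or parameters used in this work.

```python
# Minimal 1D sketch of Eq. (4) with an explicit Euler update; parameter values
# are illustrative placeholders, not the simulation settings of this paper.
import numpy as np

nx, dx, dt = 400, 1e-8, 1e-10       # grid points, spacing [m], time step [s]
eta = 8 * dx                        # diffuse interface width
mu, gamma = 0.1, 2.4e-7             # kinetic coeff. [m/(s K)], Gibbs-Thomson coeff. [K m]
dT = 2.0                            # assumed constant undercooling [K], sign convention of Eq. (4)

x = np.arange(nx) * dx
x0 = 0.5 * nx * dx                  # initial interface position (solid on the left)
phi = np.where(x < x0 - eta / 2, 1.0,
      np.where(x > x0 + eta / 2, 0.0,
               0.5 - 0.5 * np.sin(np.pi * (x - x0) / eta)))   # Eq. (3) at t = 0

def rhs(phi):
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2   # discrete Laplacian
    curv = lap + (np.pi**2 / eta**2) * (phi - 0.5)             # curvature term of Eq. (4)
    drive = np.pi * np.sqrt(np.clip(phi * (1 - phi), 0, None)) / eta * dT
    return mu * (gamma * curv - drive)                         # right-hand side of Eq. (4)

for _ in range(1000):
    phi = np.clip(phi + dt * rhs(phi), 0.0, 1.0)               # keep phi within [0, 1]
```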
### Diffusion equation
For a moving interface of a dilute binary alloy with a negative liquidus slope, the solute atoms are rejected from the growing solid into the liquid. The concentration on the solid side is lower than that of the liquid side. Thus, the concentration profile shows a jump at the interface and a diffusion layer in the bulk liquid. This jump can be expressed as
\[c^{i}_{\text{S}}<c^{i}_{\text{L}}, \tag{5}\]
in which the superscript \(i\) denotes the position at the interface. In the diffuse interface model, this jump is manifested by a continuous transition across the interface (Fig.1). By assuming the interface is a mixture of the solid and liquid phases, the concentrations of solid and liquid are individual field variables and the overall concentration is weighted by the local phase fractions, which is indicated by the local phase field. The relation of the three distinct concentration fields is expressed by,
\[c=c_{\text{S}}\phi+c_{\text{L}}(1-\phi). \tag{6}\]
Similarly, the change of the local overall concentration is the sum of changes in individual solid and liquid phases. The fluxes according to Fick's law in solid and liquid phases are also weighted by the local phase fractions,
\[J_{\text{S}} =-\phi D_{\text{S}}\nabla c_{\text{S}}, \tag{7}\] \[J_{\text{L}} =-(1-\phi)D_{\text{L}}\nabla c_{\text{L}}, \tag{8}\]
in which \(D_{\text{S}}\) and \(D_{\text{L}}\) are diffusivities in solid and liquid phases. The diffusion equation [21; 22] can then be written by complying with the law of mass conservation,
\[\frac{\partial c}{\partial t}=-\nabla\left(J_{\text{S}}+J_{\text{L}}\right), \tag{9}\]
which can be expanded by the chain rule of derivative,
\[\begin{split}\frac{\partial c}{\partial t}&=\nabla\cdot\left[\phi D_{\rm S}\nabla c_{\rm S}+(1-\phi)D_{\rm L}\nabla c_{\rm L}\right]\\ &=D_{\rm S}\left(\nabla\phi\cdot\nabla c_{\rm S}+\phi\,\nabla^{2}c_{\rm S}\right)+D_{\rm L}\left[-\nabla\phi\cdot\nabla c_{\rm L}+(1-\phi)\,\nabla^{2}c_{\rm L}\right]\end{split} \tag{10}\]
Special attention needs to be paid to the boundary conditions of the liquid and solid concentrations at their limits within the interface. During the numerical implementation of the diffusion equation, the fluxes in the liquid and solid should be blocked where the phase fractions become zero:
\[J_{\rm L}^{b}=J_{\rm S}^{f}=0. \tag{11}\]
Here, we designate these places as obstacles for solute transport. The limit of the liquid phase is called the back obstacle, while the one of the solid phase is called the front obstacle. The physical meaning of these obstacles is that the solute in the liquid phase cannot be transported to the solid phase by diffusion, and vice versa.
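As a concrete illustration, the sketch below discretizes the weighted fluxes of Eqs. (7)-(9) on a uniform 1D grid with an explicit update, zeroing the face fluxes at the back and front obstacles as in Eq. (11). The arithmetic face averaging and the helper name are assumptions; this is not the scheme detailed in the appendix, and the one-sided case of this work is recovered with d_s = 0.

```python
# Illustrative 1D finite-volume step for the phase-weighted diffusion fluxes.
import numpy as np

def diffusion_step(c_s, c_l, phi, d_s, d_l, dx, dt):
    """Return the overall concentration c = phi*c_s + (1-phi)*c_l after one step."""
    phi_f = 0.5 * (phi[:-1] + phi[1:])                    # phase fraction at cell faces
    j_s = -phi_f * d_s * np.diff(c_s) / dx                # Eq. (7), Fick flux in the solid
    j_l = -(1.0 - phi_f) * d_l * np.diff(c_l) / dx        # Eq. (8), Fick flux in the liquid
    j_s[phi_f <= 0.0] = 0.0                               # front obstacle: no solid flux
    j_l[phi_f >= 1.0] = 0.0                               # back obstacle: no liquid flux
    c = phi * c_s + (1.0 - phi) * c_l                     # Eq. (6)
    c[1:-1] -= dt * np.diff(j_s + j_l) / dx               # Eq. (9), mass conservation
    return c
```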
### Coupling the phase and concentration fields
In the sharp interface model, the propagation of the interface is accompanied by the inter-diffusion of solute between solid and liquid within the interface as well as the long-range transport of solute away from the interface toward the two sides of bulk phases. The former results in solute partitioning (rejection), while the latter lowers the concentration at the interface. The steady state is realized when the two processes are balanced. Correspondingly, the same processes happen in the diffuse interface model. In the following paragraphs, the details of mapping the diffuse interface model onto its sharp interface counterpart will be explained.
#### 2.3.1 Concentration changes within interface
Fig.2 demonstrates the coupled phase and concentration fields in two consecutive steps. The local change of phase field is denoted by \(\delta\phi\), which is the amount of liquid that transforms into solid. This change is accompanied by the change of local concentration \(\delta c\). If a steady state is realized, the two concentration
Figure 1: Schematic image of the liquid, solid, and overall concentration fields of a moving interface in the diffuse interface model. The change of overall concentration comes from the sum of changes in the solid and liquid phases, which are implemented separately in computation through the diffusion equation. The fluxes at the limits (back and front obstacles) of liquid and solid phases are set to zero.
profiles should keep steady except for the shifted position. To determine the amount of \(\delta c\), the diffuse interface is divided into several representative volumes (RVs) to show the specific processes in each individual numerical grid. In Fig.3, the RV is a rectangle with two separate areas representing the mixture of solid and liquid phases with the denoted fractions and concentrations. As the local phase field changes, fractions and concentrations in the current RV on the left-hand side are reconfigured in the next RV on the right-hand side. The fraction of the newly formed solid is \(\delta\phi\). Theoretically, for the dilute alloy with a negative liquidus slope, the newly formed solid can not capture all the solute of its previous liquid, which has a concentration of \(c_{\rm L}\). A capture coefficient is then defined to quantify the percentage of the trapped solute in the newly formed solid. Consequently, the amount of rejected solute can be determined to be,
\[\delta c=(1-\lambda)c_{\rm L}\delta\phi \tag{12}\]
which is the local descending amplitude of the concentration profile in Fig.2.
The physical ground of solute capturing lies in the process of solute redistribution for reaching the equality of chemical potential. However, because the interface is in motion, the extent of solute redistribution may depend on the velocity of the interface. For most cases when interface velocity is low and moderate, the time is sufficient for the exchange of solute between the local solid and liquid. The capture coefficient \(\lambda\) is smaller than one. If the interface velocity is very large, the time for phase transformation and solute redistribution is so limited that the capture coefficient \(\lambda\) may approximate unity and no solute is rejected from the solid.
The capture coefficient is not yet known. Intuitively, it has a relation to the partition coefficient, as both of them approach unity at high interface velocity. The latter is the result of the equality of chemical potential and is related to the inter-diffusion between the solid and liquid phases. The capture process represents the solute flow from the liquid to the solid. A relation between these two coefficients can be found by considering mass conservation and the equality of chemical potential. In the present work, for the sake of simplicity at the beginning of model development, it has been assumed that the capture coefficient equals the partition coefficient and that all rejected solute atoms are released to the local liquid. This assumption brings about two consequences: one is that the present model is a one-sided model, in which the diffusion in
the solid has been neglected; another one is that the capture coefficient can be a function of the interface velocity, which can be obtained from the analytical CG or LN models. Thus, the function of the CG model can be straightforwardly input into the phase field model through the capture coefficient. It can be expressed as
\[\lambda=k(v)=\frac{k_{e}+v/v_{\rm D}}{1+v/v_{\rm D}}. \tag{13}\]
In future work, the exact relation between the capture and partition coefficients needs to be derived to incorporate the effect of solute diffusion in the solid phase. Also, the partition coefficient function of the LN model will be used to more fully describe the non-equilibrium solute trapping using the proposed phase-field model.
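A short sketch of how the velocity-dependent capture coefficient of Eq. (13) and the rejected solute of Eq. (12) can be evaluated is given below. The equilibrium partition coefficient 0.14 is the Al-Cu value quoted in Sec. III, while the diffusive speed v_d is a placeholder value, not a fitted parameter.

```python
# Continuous-growth partition function used here as the capture coefficient, Eq. (13).
def capture_coefficient(v, k_e=0.14, v_d=1.0):
    """k(v) for interface velocity v [m/s]; v_d is the diffusive speed [m/s]."""
    return (k_e + v / v_d) / (1.0 + v / v_d)

# Solute rejected to the local liquid when a phase-field increment d_phi solidifies, Eq. (12).
def rejected_solute(c_l, d_phi, v):
    return (1.0 - capture_coefficient(v)) * c_l * d_phi

print(capture_coefficient(0.0))    # 0.14: equilibrium limit
print(capture_coefficient(10.0))   # ~0.92: approaching the complete-trapping limit
```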
#### 2.3.2 Collect-and-cast operation
One may ask how to deal with the rejected solute \(\delta c\) associated with the change \(\delta\phi\) in each RV. In the sharp interface model, the rejected solute should be transported by long-range diffusion into both liquid and solid phases. In the diffuse interface model, it should be released to the local solid and liquid phases. However, problems should arise when the interface width is artificially enlarged. It takes a long distance to transport the solute out of the interface region and the solute concentration in the interface depends on the interface width. In addition, the rejected solute in the tail area (Fig.2) can not be released into the liquid because there is no liquid in that area after the phase transformation. As a result, they should be trapped in the solid. The solute trapping in the diffuse interface model inherently exists in all alloy phase-field models. If the interface width has a physical size on the nano-scale, the parabolic model resembles the sharp interface CG model in modeling solute trapping [7; 8]. However, in the model with an enlarged diffuse interface, the trapping becomes excessive and undesired. An operation on the rejected solute is needed to enhance the solute transport and eliminate this spurious effect caused by the artificially wide diffuse interface.
In the present work, the anti-trapping operation has been accomplished by collecting the rejected solute atoms in each RV and then casting them in the normal direction of the interface. This can be realized by iteratively searching the next grid in the normal direction of the interface during the numerical implementation (See appendix). This anti-trapping operation on the rejected solute \(\delta c\) facilitates the solute redistribution within the artificially wide diffuse interface. It is crucial and should be implemented with care to avoid spurious effects that may easily arise. For example, if all the rejected solute atoms are collected and cast into the bulk liquid just in front of the interface, the solute transport is excessively enhanced by this operation in the normal direction of the interface. This enhancement is proportional to the interface width and affects the simulation results. Therefore, a natural and sophisticated way is required to emulate the solute redistribution process in the sharp interface model.
A middle plane of the interface is used to "sharpen" the diffuse interface model. As illustrated in Fig.4, the diffuse interface is divided into the front and back parts by the middle plane, which is called middle obstacle
Figure 3: The processes of phase transformation and solute redistribution in a representative volume (RV) of the diffuse interface. For simplicity, the capture coefficient is assumed to be the partition coefficient, which can be the function obtained from either CG or LN model.
(MO) here. As the dashed arrow in Fig.4 suggests, the rejected solute atoms in the back part should be collected and cast just in front of the middle obstacle, whereas the ones in the front part should be transferred to the local liquid and transported away by diffusion. As a result, the diffuse interface, which has been assumed to be a mixture of solid and liquid, can then be regarded as the extensions from the bulk solid and liquid phases separated by the imaginary "sharp" interface. The tangential diffusion along the arc direction in the diffuse interface turns out to be the diffusion process in the bulk phases.
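The following deliberately simplified 1D sketch conveys the collect-and-cast idea: the rejected solute of the back part of the diffuse interface is gathered and deposited just in front of the middle obstacle, while the front-part solute stays with the local liquid. It assumes the solid lies to the left of the liquid and that the rejected solute of each cell is still held locally; the grid-by-grid search along the interface normal used in higher dimensions (see appendix) and the split into phase concentrations are not reproduced here.

```python
# Simplified 1D collect-and-cast of the rejected solute dc (one value per cell);
# total solute is conserved because the amount removed equals the amount deposited.
import numpy as np

def collect_and_cast(c, phi, dc):
    ahead = np.where(phi < 0.5)[0]         # cells in front of the middle obstacle
    if ahead.size == 0:
        return c
    mo_front = ahead[0]                    # first cell past the middle obstacle
    back = phi >= 0.5                      # back part of the diffuse interface (and bulk solid)
    c = c.copy()
    c[back] -= dc[back]                    # collect the back-part rejected solute ...
    c[mo_front] += dc[back].sum()          # ... and cast it just ahead of the MO
    return c                               # front-part dc remains with the local liquid
```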
#### 2.3.3 Undercooling at the interface
As the rejected solute atoms \(\delta c\) in the two parts separated by the middle obstacle are transported in distinct manners, the profiles of the liquid concentration on the two sides should be quite different. In the front part of the diffuse interface, the profile joins with the middle obstacle and extends its gradient to the diffusion layer in the bulk liquid; in the back part of the diffuse interface, because the liquid concentration is not affected by the collect-and-cast operation, the concentration inside the back part should be quickly flattened by the diffusion process. The liquid concentration just behind the middle obstacle can be regarded as the liquid concentration at the sharp interface. Because it might be independent of the interface width, it is called invariant concentration \(c_{\rm L}^{inv}\), which can be used to determine the interface undercooling.
\[\Delta T=T_{\rm m}-T_{i}+c_{\rm L}^{inv}m_{\rm L} \tag{14}\]
in which \(T_{\rm m}\), \(T_{i}\), and \(m_{\rm L}\) are the melting point, interface temperature, and liquidus slope. The concentration of each grid within the interface should be determined by searching the nearest invariant concentration in the normal direction through the numerical technique applied in the collect-and-cast operation (See appendix).
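Eq. (14) amounts to reading the liquidus depression off the phase diagram at the invariant concentration, as the small sketch below illustrates with the Al-Cu parameters quoted in Sec. III; the invariant concentration and the interface temperature used in the example call are merely illustrative inputs.

```python
# Interface undercooling from Eq. (14); m_l is the liquidus slope in K per atomic fraction.
def interface_undercooling(c_l_inv, t_i, t_m=931.0, m_l=-600.0):
    return t_m - t_i + c_l_inv * m_l

# Example: composition 0.013 (1.3 at.% Cu) at an interface temperature of 917 K.
print(interface_undercooling(c_l_inv=0.013, t_i=917.0))   # about 6.2 K
```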
## III Results & Discussions
The dilute binary alloy system of Al-1.3 at.% (3 wt.%) Cu is chosen for the simulation of solidification under both equilibrium and non-equilibrium conditions. The physical parameters [14, 24] are as follows: melting point of pure aluminum, 931 K; diffusivity of copper in the liquid, \(2.4\times 10^{-9}\) m\({}^{2}\)/s; liquidus slope, -600 K/at.; equilibrium partition coefficient, 0.14; Gibbs-Thomson coefficient, \(2.4\times 10^{-7}\) K \(\cdot\) m. Other physical and numerical parameters will be mentioned together with the following simulation results.
### Equilibrium conditions
The isothermal temperature is set to be 917 K for low-rate solidification under equilibrium conditions. In the 1D simulation (Fig.5), the diffuse interface is resolved by 8 grids. The total solute is conserved in the whole domain, suggesting the anti-trapping (collect-and-cast) operation does not change the nominal concentration. As expected, the liquid concentration profile shows a discontinuity at the position of the middle obstacle. In the 2D simulation (Fig.6 inset), the dendrite grows in a moving frame until its tip velocity reaches a steady state. The computational domain has a physical size of \(32~\mu\text{m}\times 64~\mu\text{m}\). The resolution of the domain depends on the interface width, which should be adjusted by comparing it with the tip velocity and the resulting diffusion length. Three interface widths (3.0, 1.5, and 0.75 \(\mu\text{m}\)) are used and each of them is resolved by 6 grids. To maintain numerical stability, the kinetic coefficient is chosen to be \(0.5\times 10^{-3}\) m/(sK), which is 200 times smaller than the physical value of 0.1 m/(sK) for an atomically rough surface [26].
The convergence of the tip velocity is of great importance in the dendrite growth simulation because it indicates the dependence of the tip dynamics on the artificial interface width. If the tip growth converges with the interface width on the micrometer scale, the tip shape and velocity remain the same even as the interface width is reduced to the nanometer scale. Thus it can then be ensured that the numerical simulation
Figure 5: 1D simulation result with concentration and phase field profiles. The concentration in the liquid phase shows a discontinuity at the middle obstacle, where \(\phi=0.5\). In the solid region, the liquid concentration field takes its equilibrium value; the same holds for the solid concentration field in the liquid region.
produces the dendrite with the "true" shape, which can hardly be obtained from theoretical analysis [25]. As plotted in Fig.6, three cases of dendrite simulation reach steady states after the physical time of about 0.15s. As the interface width decreases, the tip velocity converges at a value of around 500 \(\mu\) m/s. The diffusion length \(l_{D}\)[24] ahead of the tip is the ratio of diffusivity to the tip velocity and has a value of about 4.8 \(\mu\) m. In principle, \(l_{D}\) should be larger than the interface width \(\eta\) in phase-field simulation. The ratio of \(l_{D}\) to \(\eta\) is a common measure of numerical performance. The smaller the ratio, the higher the efficiency. In the present simulation, the convergence happens with the interface width \(\eta_{b}\) = 1.5 \(\mu\) m considering the difference between the tip velocities in simulations with \(\eta_{b}\) and \(\eta_{c}\) is less than 1.5%. The ratio \(l_{D}\)/\(\eta_{b}\) is about 3.2, which suggests an excellent performance compared to the established quantitative model in the literature [11; 13; 14]. Even with the interface width \(\eta_{a}\), the difference in tip velocity from the converged ones is around 10%. This suggests that the tip velocity is highly invulnerable to the width effect of the diffuse interface owing to the middle obstacle approach. The model is efficient in simulating solidification under equilibrium conditions. The above results suggest that the proposed model has achieved the goal of quantitative and efficient simulation of dendrite growth, which is ubiquitously observed in low-speed alloy solidification.
### Non-equilibrium conditions
As the isothermal temperature becomes low and the interface velocity increases, the solute partitioning process becomes non-equilibrium and dependent on the interface velocity. As the interface velocity reaches about 1 m/s, the diffusion length is shortened to a few nanometers. Thus, the width of the interface in simulation should also be on the nano-scale. The kinetic coefficient is chosen to have a physical value of 0.1 m/(sK) in the following simulation.
Fig.7 shows the concentration profiles of two high-speed moving interfaces. The diffusion length varies according to the interface velocity. The solid concentrations are equal to the far field liquid concentration, exhibiting the complete trapping phenomenon, which is the typical feature of the steady state during non-equilibrium solidification [18]. The liquid concentration profiles overlap with overall concentrations in the bulk liquid and show a step in the interface, where the invariant concentration lies. The large difference between the invariant and equilibrium concentration suggests the large deviation from the equilibrium and high undercooling at the interface. The undercooling is calculated by referring to the vertical distance to the liquidus in the equilibrium phase diagram. It should be noted that the velocity-dependent liquidus slope [25] and the effect of solute drag [4] have not been considered in the present work.
The partition coefficients obtained from concentration profiles under a range of temperatures are plotted against the interface velocities. The partition coefficient is defined to be the ratio of the solid concentration near the interface to the invariant concentration. When the temperature is high, the complete
trapping steady state may not happen, but the instant velocity and partition coefficient can still be measured. These point data from simulations are plotted in Fig.8 along with the function of the CG model, which has been input into the PF model according to Eq.(13). A good agreement between them can be discerned. As shown in Fig.9, when the interface width in the PF model increases, the steady-state velocity and partition coefficient deviate from the results of the small interface width. For the interface width of 5 nm, the deviation (\(<\)10%) is obvious but still acceptable and the points stay near the curve of the CG model.
As the PF model is capable of simulating velocity-dependent solute partitioning, it is expected to reproduce experimentally relevant phenomena. For example, the banded structure [27] formed during rapid solidification features in the cyclic concentration distribution with segregated and non-segregated areas, corresponding to equilibrium and non-equilibrium solute partitioning. In the simulation for reproducing the cyclic growth in rapid solidification, a temperature gradient (10\({}^{7}\) K/m) is imposed on the domain and the constant cooling rate (\(6\times 10^{6}\) K/s) gives rise to the pulling velocity of 0.6 m/s. The initial temperature at the interface is 905 K. As shown in Fig.10a, c, the interface grows at a low velocity in the beginning and the
Figure 8: Partition coefficients from continuous growth (CG) model and phase-field (PF) simulation. The diffuse interface width in the PF model is 1.25 nm.
Figure 9: Partition coefficients obtained from PF simulation with different interface widths are compared. All interfaces are in complete trapping steady states. Interface velocity appears to deviate as the interface width increases.
concentration jump at the interface suggests diffusion-controlled growth under the equilibrium condition. The interface velocity does not adapt to the pulling velocity immediately because of the accumulating solute ahead of the interface. It only increases gradually along with the decreasing interface temperature. When a critical state is reached at \(\sim\)4.3 \(\mu\)s, the interface velocity surges to over 3 m/s and the growth is no longer controlled by the solute diffusion but by the atomic attachment (Fig.10b). Because of the high propagation speed, the interface moves far ahead along the temperature gradient, and the decreased undercooling relaxes the motion of the interface. The interface slows down and returns to diffusion-controlled growth. As Fig.10c shows, the cycle of fast and slow growth modes continues with a steady amplitude of oscillation, which can be discerned from 10 to 16 \(\mu\)s. Though the resulting concentration distribution (Fig.11) is in one dimension, it is remarkably similar to the banded structure observed in the literature (Fig.8 in [27] and Fig.5b in [28]). The simulation results in 2D will be disclosed in a future publication.
A new type of phase-field model that can effectively model alloy solidification under both equilibrium and non-equilibrium conditions is explained and demonstrated. Under equilibrium conditions, the convergence behavior of the dendrite tip is invulnerable to the width effect of the diffuse interface. An acceptable convergence happens when the ratio of diffusion length \(l_{D}\) to interface width \(\eta\) approaches 3.5, which is obviously smaller than those in previous quantitative models [11, 13]. Under non-equilibrium conditions, the velocity-dependent partition coefficient from the CG model can be simply input into the PF model. The output one-dimensional results agree with the input function. The simulation results are weakly affected even when the \(l_{D}/\eta\) ratio is less than 1. The middle obstacle approach has been proven to be effective in "sharpening" the diffuse interface in phase-field modeling. Future work will relate the capture coefficient to the partition coefficient so that solute diffusion in the solid is included. The velocity-dependent liquidus slope and the effect of solute drag on the interface undercooling are also required to achieve a more accurate simulation. 2D and 3D simulations are to be conducted and compared with analytical theories and experimental observations.
## Acknowledgments
This work was supported by the Kakenhi Grant-in-Aid for Scientific Research (No. 22J11558) from the Japan Society for Promotion of Science (JSPS).
|
2305.01938 | Doc2SoarGraph: Discrete Reasoning over Visually-Rich Table-Text
Documents via Semantic-Oriented Hierarchical Graphs | Discrete reasoning over table-text documents (e.g., financial reports) gains
increasing attention in recent two years. Existing works mostly simplify this
challenge by manually selecting and transforming document pages to structured
tables and paragraphs, hindering their practical application. In this work, we
explore a more realistic problem setting in the form of TAT-DQA, i.e. to answer
the question over a visually-rich table-text document. Specifically, we propose
a novel Doc2SoarGraph framework with enhanced discrete reasoning capability by
harnessing the differences and correlations among different elements (e.g.,
quantities, dates) of the given question and document with Semantic-oriented
hierarchical Graph structures. We conduct extensive experiments on TAT-DQA
dataset, and the results show that our proposed framework outperforms the best
baseline model by 17.73% and 16.91% in terms of Exact Match (EM) and F1 score
respectively on the test set, achieving the new state-of-the-art. | Fengbin Zhu, Chao Wang, Fuli Feng, Zifeng Ren, Moxin Li, Tat-Seng Chua | 2023-05-03T07:30:32Z | http://arxiv.org/abs/2305.01938v3 | Doc2SoarGraph: Discrete Reasoning over Visually-Rich Table-Text Documents with Semantic-Oriented Hierarchical Graphs
###### Abstract
Discrete reasoning over table-text documents (e.g., financial reports) gains increasing attention in recent two years. Existing works mostly simplify this challenge by manually selecting and transforming document pages to structured tables and paragraphs, hindering their practical application. In this work, we explore a more realistic problem setting in the form of TAT-DQA, i.e. to answer the question over a visually-rich table-text document. Specifically, we propose a novel **Doc2SoarGraph** framework with enhanced discrete reasoning capability by harnessing the differences and correlations among different elements (e.g., quantities, dates) of the given question and document with **S**emantic-**o**riented hier**ar**chical **Graph** structures. We conduct extensive experiments on TAT-DQA dataset, and the results show that our proposed framework outperforms the best baseline model by **17.73%** and **16.91%** in terms of Exact Match (EM) and F\({}_{1}\) score respectively on the test set, achieving the new state-of-the-art.
## 1 Introduction
Discrete reasoning is a fundamental part of human intelligence [10], and therefore also an indispensable capability we pursue in artificial intelligence that enables machines to reason over multiple parts of natural language content for comprehension and inference. In the past two years, there has been a surge of works addressing discrete reasoning over a hybrid of table-text content like TAT-QA [23], FinQA [3] and MultiHiertt [24]. However, these works focus on the well-maintained tables and manually selected paragraphs from the original documents, which is not in line with reality.
Recently, research on discrete reasoning over real-world table-text documents directly has been activated with the release of TAT-DQA [23], a Document Visual Question Answering (DocVQA) dataset over financial documents (See Figure 1). To address this challenge, Zhu et al. (2022) also proposed an MHST model, which adopts the multi-modal pre-trained model LayoutLMv2 [22] to encode the question and one document page, and then applies sequence tagging on each token to select the relevant tokens to the question, followed by answer inference over the selected tokens. Though effective, the performance of MHST is still not optimal. One reason is that the tokens only carry part of the semantics of the original data, resulting in sub-optimal evidence extracted from the document. For example, as shown in Figure 1, the spectrum license fee in \(2019\) is \(1,731\) million. The quantity \(1,731\) corresponds to four tokens, i.e., "\(\underline{1}\)", ",", "\(\underline{73}\)", and
Figure 1: An example from TAT-DQA. We leverage four types of semantic elements from the question and document to facilitate discrete reasoning, i.e., _Date_ in red rectangle, _Quantity_ in purple rectangle, _Question_ in yellow rectangle and _Block_ in blue rectangle. The quantities with yellow background are the supporting evidence to the question. The “million” with green background is the scale of the answer.
"##1" after tokenization. The model can hardly infer the meaning of the original number from every single token unless they are all combined.
Instead of only relying on token-level semantics, we propose to exploit the element-level semantics to facilitate discrete reasoning. In particular, we consider four types of elements in the input, including _Question_, _Block_, _Quantity_ and _Date_, as shown in Figure 1. Specifically, _Question_ refers to the question; _Block_ refers to each document block generated by the OCR/PDF converter; _Quantity_, _Date_ respectively refer to each quantity and each date in the question and the document block. Each of these elements carries more complete semantics than single tokens that can be leveraged by the model. The differences and correlations among them can provide rich and crucial clues for the model to conduct reasoning to derive the answer. For example, though \(2019\) and \(1,731\) in Figure 1 are both numerical values, actually the former refers to "year 2019" (date), while the latter is "the spectrum license fee" (quantity), so they cannot be compared. Obviously, it would be more appropriate to model the different types of elements separately. On the other hand, to understand the numerical value \(1,731\) in the document, the model needs to consider the text information of the corresponding document block. Thus, the correlations of different elements should also be leveraged to facilitate the model reasoning.
In this work, we extend SoarGraph Zhu et al. (2023) to tackle the challenge over visually-rich table-text documents in the form of TAT-DQA Zhu et al. (2022). Specifically, we propose a novel **Doc2SoarGraph** framework for question answering over these **documents** with **s**emantic-**o**riented hier**ar**chical **g**raphs. Doc2SoarGraph models the differences and correlations of the elements (i.e., quantities, dates, question and document blocks) in the input data with graph structures taking each element as one node, gaining a more powerful discrete reasoning capability. In TAT-DQA, about \(20\%\) of the documents in the test set have multiple pages. Hence, for a multi-page document, our Doc2SoarGraph first transforms it to a single image of the model-preferred dimension with a simple yet effective method. Then, given a question and a document, Doc2SoarGraph adopts LayoutLMv2 to take in the question, document text and the corresponding layout and document image, and initializes the representations of all semantic elements with the output. After that, we build a Quantity Comparison (QC) graph to model the magnitude and comparison among all the _Quantity_ nodes. Similarly, we build a Date Comparison (DC) graph to model the time sequence among all the _Date_ nodes. We also build a Text Relation (TR) graph with the _Question_ node and _Block_ nodes as these nodes usually contain rich text information. On top of these three graphs, a Semantic Dependency (SD) graph is built with all types of nodes to model the semantic relationships and dependencies among them. Then, the framework selects the most question-relevant nodes from the SD graph and applies different reasoning strategies over the selected nodes to derive the final answer based on the answer type.
To sum up, we make three-fold contributions.
**1)** We propose to exploit element-level semantics to facilitate discrete reasoning over visually-rich table-text documents. **2)** We develop a novel Doc2SoarGraph framework to model the differences and correlations among various elements with semantic-oriented hierarchical graph structures, which owns greatly enhanced evidence extraction and discrete reasoning capability. **3)** We conduct extensive experiments on TAT-DQA dataset, and the results show that our Doc2SoarGraph framework outperforms the MHST model by **17.73%** and **16.91%** respectively in terms of Exact Match and F\({}_{1}\) score on the test set, demonstrating remarkable effectiveness.
## 2 Doc2SoarGraph Framework
We first formally define our problem. Consider a natural language question denoted as \(Q\), and a visually-rich table-text document denoted as \(D\) with several pages \(P=(P_{1},P_{2},...,P_{|P|})\), where \(|P|\) is the number of pages. In the document \(D\), the page \(p\) has a list of blocks \(B^{p}=(B^{p}_{1},B^{p}_{2},...,B^{p}_{|B|})\) that are generated by the OCR/PDF converter, where \(|B|\) is the number of blocks on the page \(p\). Our goal is to generate the answer to the question \(Q\) that usually requires discrete reasoning based on the document \(D\). To solve the problem, we develop a Doc2SoarGraph framework. An overall architecture is illustrated in Figure 2. We now elaborate the details of each component below.
### Multi-page Document Transformation
In TAT-DQA, about 20% of the documents in the test set consist of multiple pages. We first pre-process these multi-page documents into one-page ones with a simple yet effective method. In particular,
we first transform each page to a single image with the same dimension, and then combine the corresponding multiple images of the pages vertically following the original page order. After that, we resize the combined image to the model preferred dimension as the final document image, and obtain its initial visual embeddings by applying the same CNN-based encoder as LayoutLMv2 Xu et al. (2021). Since the document text and layout information are available in TAT-DQA, we further adjust the layout information according to the dimension of the final document image.
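A minimal sketch of this transformation is given below, assuming PIL page images are already available. The page size, the 224 × 224 target size (a common LayoutLMv2 preprocessing choice), and the helper names are assumptions for illustration, not the exact pre-processing code of this work.

```python
# Stack page images vertically and resize to the encoder input size; rescale
# layout boxes with the same ratios so they stay aligned with the final image.
from PIL import Image

def combine_pages(pages, page_size=(1000, 1414), target_size=(224, 224)):
    pages = [p.resize(page_size) for p in pages]
    w, h = page_size
    combined = Image.new("RGB", (w, h * len(pages)), "white")
    for i, p in enumerate(pages):
        combined.paste(p, (0, i * h))
    return combined.resize(target_size)

def rescale_box(box, page_index, n_pages, page_size=(1000, 1414), target_size=(224, 224)):
    """Map an (x0, y0, x1, y1) box on one page onto the combined, resized image."""
    w, h = page_size
    tw, th = target_size
    sx, sy = tw / w, th / (h * n_pages)
    x0, y0, x1, y1 = box
    return (x0 * sx, (y0 + page_index * h) * sy, x1 * sx, (y1 + page_index * h) * sy)
```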
### Node Initialization
Instead of only relying on token-level semantics, our method also exploits element-level semantics to facilitate discrete reasoning with graph structures. In particular, we harness four types of elements, namely, the question, each document block generated by the OCR/PDF converter, each quantity and each date in the question and the document block, which are named _Question_, _Block_, _Quantity_ and _Date_, respectively. We take each type of element as a kind of node, and get four types of nodes to build the graphs, i.e., _Question_ node, _Block_ node, _Quantity_ node and _Date_ node. We then employ LayoutLMv2\({}_{\text{LARGE}}\)Xu et al. (2021) to take as input the question, the document text and layout information, and the final document image, and output the token-level hidden representations. Then, we compute the mean of the corresponding tokens for each node as its initial representation.
### Node Selection
Based on the four types of nodes as explained above, we construct hierarchical graphs to model their relationships so as to select those most relevant nodes as the supporting evidence to the question and facilitate discrete reasoning of the model.
#### 2.3.1 Hierarchical Graphs Construction
We construct four graphs, which form a two-level hierarchy, to model the element-level semantics. Formally, a graph G is represented by an adjacency matrix \(\text{A}\in R^{N\times N}\), where \(N\) is the number of nodes. If there is an edge connecting the \(i^{\text{th}}\) and \(j^{\text{th}}\) nodes, we assign value 1 to the corresponding position \((i,j)\) in the matrix \(A\), and otherwise 0.
**Quantity Comparison (QC) Graph** (denoted as \(G_{QC}\)): It is dedicated to retaining the numerical magnitude and comparison between every two
Figure 2: An overview of proposed Doc2SoarGraph framework. Take the sample in Figure 1 as an example.
quantities. For two _Quantity_ nodes \(q_{i}\), \(q_{j}\), if \(q_{i}\geq q_{j}\), a directed edge \(e_{ij}\) = (\(q_{i}\), \(q_{j}\)) pointing from \(q_{i}\) to \(q_{j}\) is added following NumNet [10].
**Date Comparison (DC) Graph** (denoted as \(G_{DC}\)): It is dedicated to retaining the time sequence and comparison between every two dates. For two _Date_ nodes \(d_{i}\), \(d_{j}\), a directed edge \(e_{ij}\) = (\(d_{i}\), \(d_{j}\)) pointing from \(d_{i}\) to \(d_{j}\) is added if \(d_{i}\geq d_{j}\) (\(d_{i}\) later than \(d_{j}\)).
**Text Relation (TR) Graph** (denoted as \(G_{TR}\)): It is dedicated to associating the informative descriptions among the question and the document blocks. The _Question_ node and a _Block_ node or every two _Block_ nodes will have an undirected edge between them, forming a fully-connected graph.
**Semantic Dependency (SD) Graph** (denoted as \(G_{SD}\)): It is built with all the four types of nodes to model the semantic dependencies of the _Quantity_ or _Date_ node upon the _Question_ or _Block_ node, besides attaining all the correlations in the above three graphs. 1) Edges for two _Quantity_ nodes, two _Date_ nodes, a _Question_ node and a block node, or two _Block_ nodes will be added in \(G_{SD}\) following the construction rules for \(G_{QC}\), \(G_{DC}\), and \(G_{TR}\), respectively. 2) Between one _Quantity_ node and one _Question_ or _Block_ node, a directed edge pointing from the _Quantity_ node to the _Question_ node or the _Block_ node will be added to the graph \(G_{SD}\) if the quantity is part of the question or block; edges between one _Date_ node and one _Question_ or _Block_ node are added in the same way.
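The comparison-based edges of the QC and DC graphs translate directly into adjacency matrices, as the small numpy sketch below illustrates; comparison_adjacency is a hypothetical helper and the example values are made up. The TR graph is simply a fully connected matrix without self-loops.

```python
import numpy as np

def comparison_adjacency(values):
    """Directed edge (i, j) whenever values[i] >= values[j] (numeric for QC, dates for DC)."""
    n = len(values)
    a = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(n):
            if i != j and values[i] >= values[j]:
                a[i, j] = 1.0
    return a

a_qc = comparison_adjacency([1731.0, 1731.0, 8.0])      # Quantity nodes (illustrative)
a_dc = comparison_adjacency([2019.0, 2018.0])           # Date nodes, later >= earlier
m = 4                                                   # e.g. Question node + 3 Block nodes
a_tr = np.ones((m, m), dtype=np.float32) - np.eye(m)    # TR graph: fully connected
```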
#### 2.3.2 Node Classifier
After constructing the hierarchical graphs, a dedicated graph convolution network (GCN) [11] is applied for each graph to learn node representations respectively. As illustrated in Figure 2, the GCN (QC), GCN (DC) and GCN (TR) are applied respectively on the QC graph, DC graph and TR graph to learn corresponding node representations, which are then used to initialize the node representations of the SD graph. The GCN (SD) is applied on the SD graph to learn the final representation of each node \(h_{node}\). A binary node classifier is then applied on each node in the SD graph to predict whether the node is relevant to the question or not. The probability on node classification is computed as
\[\text{P}_{\text{node}}=\text{softmax}(\text{FFN}(h_{\text{node}})) \tag{1}\]
where FFN is a feed-forward network with two layers. All the nodes that are classified as relevant to the question are collected. Note that the graph representation of the SD graph \(h_{SD}\) is obtained by computing the mean of all the node representations in the SD graph.
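The sketch below illustrates Eq. (1) in PyTorch: a two-layer feed-forward classifier applied to node representations, together with the mean pooling that yields the SD graph representation. The hidden size of 1024 matches LayoutLMv2-large; the number of nodes and the plain nn.Sequential FFN are illustrative choices rather than the exact modules of this work.

```python
import torch
import torch.nn as nn

class FFN(nn.Module):
    """Two-layer feed-forward network used by the classifiers."""
    def __init__(self, d_in, d_hidden, d_out):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                 nn.Linear(d_hidden, d_out))
    def forward(self, x):
        return self.net(x)

d = 1024                                    # LayoutLMv2-large hidden size
node_classifier = FFN(d, d, 2)              # relevant vs. irrelevant
h_node = torch.randn(12, d)                 # 12 SD-graph node representations (illustrative)
p_node = torch.softmax(node_classifier(h_node), dim=-1)   # Eq. (1)
h_sd = h_node.mean(dim=0)                   # SD graph representation (mean over nodes)
```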
### Answer Generation
We generate the final answer with the selected nodes. We first mask the tokens not included in the selected _Block_ nodes, and then apply corresponding reasoning strategies for different answer types.
#### 2.4.1 Token Masking
Based on the selected nodes, we mask the tokens that are not included in the selected _Block_ nodes to reduce the search space for answer prediction and update the token representations with their corresponding block node representations. Particularly, we obtain the token-level representations from the output of the LayoutLMv2 encoder first. Then, we mask the tokens not covered by any selected block nodes. For the tokens that are included in the selected block nodes, we update the representation of each token by concatenating its token representation with the corresponding block representation,
\[h^{{}^{\prime}}_{token}=\text{concat}(h_{token},h_{node}) \tag{2}\]
where \(h_{token}\) is the token representation output from the encoder; \(h_{node}\) is the representation of the token's corresponding block node obtained from the SD graph; \(concat\) denotes concatenation; \(h^{{}^{\prime}}_{token}\) is the updated token representation. For tokens that are masked in the sequence, we pad their representations with zero. Finally, we obtain a sequence of updated token representations \(h^{{}^{\prime}}_{[t_{1},t_{2},\ldots,t_{s}]}\) and \(s\) is the maximum sequence length.
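A small PyTorch sketch of this masking step (Eq. (2)) is shown below; the tensor shapes, the helper name, and the token-to-block index map are illustrative assumptions.

```python
import torch

def mask_and_update(h_tokens, h_blocks, token2block, keep):
    """h_tokens: (s, d); h_blocks: (n_blocks, d); token2block: (s,) long tensor of
    block indices per token; keep: (s,) bool mask of tokens in selected blocks."""
    s, d = h_tokens.shape
    out = torch.zeros(s, 2 * d)                                   # masked tokens stay zero-padded
    out[keep] = torch.cat([h_tokens[keep],                        # token representation ...
                           h_blocks[token2block[keep]]], dim=-1)  # ... concatenated with its block
    return out
```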
#### 2.4.2 Answer Type Classifier
TAT-DQA offers four different answer types, i.e., _Span_, _Spans_, _Counting_, _Arithmetic_. The Answer Type Classifier is adopted to predict the answer type of a question, which is essentially a multi-class classifier taking the SD graph representation \(h_{SD}\) as input. The probability of each answer type is computed as
\[\text{P}_{\text{type}}=\text{softmax}(\text{FFN}(h_{\text{SD}})) \tag{3}\]
where FFN is a feed-forward network with two layers.
#### 2.4.3 Span Classifier
For the _Span_ question, the answer is a sub-sequence of the input sequence, which is achieved by the
Span Classifier. It takes the token representations obtained in Section 2.4.1 as the input and predicts the start and end indices of the sub-sequence. Formally, the probability distribution of the start position over the sequence is obtained by
\[\text{P}_{\text{start}}=\text{softmax}(\text{FFN}(h^{{}^{\prime}}_{[t_{1},t_{2},...,t_{s}]})) \tag{4}\]
where FFN is a feed-forward network with two layers. Similarly, we can obtain the probability of the end position \(\text{P}_{\text{end}}\).
#### 2.4.4 Token Classifier
For the _Spans_ and _Counting_ questions, Token Classifier is employed to infer the final answer. In particular, for each valid token obtained in Section 2.4.1, Token Classifier assigns a B, I or O label and takes those tagged with B and I to generate the final answer. Formally, it takes in the updated representation \(h^{{}^{\prime}}_{token}\) of each valid token and computes the probability of the label as
\[\text{P}_{\text{token}}=\text{softmax}(\text{FFN}(h^{{}^{\prime}}_{token})) \tag{5}\]
where FFN is a feed-forward network with two layers. After obtaining the tokens, the final answer for _Spans_ and _Counting_ questions is generated heuristically following MHST Zhu et al. (2022).
#### 2.4.5 Tree Generator
For the _Arithmetic_ question, Tree Generator is adopted to generate an expression tree with the selected _Quantity_ and _Date_ nodes, which can be executed to infer the answer. Following MHST Zhu et al. (2022), the Tree Generator is implemented with GTS Xie and Sun (2019), which generates expression trees in a goal-driven manner. To adapt GTS in our framework, we make two major modifications. First, instead of feeding all the numbers and dates in the input into GTS, we only feed the selected most relevant _Quantity_ and _Date_ nodes, which significantly reduces the number of candidates for GTS to predict each leaf node and alleviates the difficulties. Second, when GTS predicts each node in the expression tree, we revise the generation of the context vector by attending to all the nodes in the SD graph instead of the tokens in the sequence, which can offer enhanced comprehensive semantic representations of the document.
The expression tree generated by the Tree Generator includes three kinds of nodes: the arithmetical operators \(V_{op}\) (i.e., +,-,*/), the constant numbers \(V_{con}\) (i.e., \(1,2\),\(3\),.., \(100\) ), and the quantity and date nodes \(V_{node}\) that are selected in Section 2.3. The target vocabulary for tree generation is denoted as \(V=V_{op}\cup V_{con}\cup V_{node}\) and its length is denoted as \(L\). Following the typical construction method of GTS Xie and Sun (2019), the expression tree is constructed starting from producing the topmost operator and then the left and right child nodes.
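Since the tree is produced top-down (operator first, then its left and right children), the resulting node sequence can be executed as a prefix expression. The evaluator below is an illustrative stand-in for this execution step, not the GTS decoding code, and the numbers in the example are made up.

```python
# Evaluate a pre-order (prefix) expression over operators and operand values.
def eval_prefix(tokens):
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    def helper(pos):
        tok = tokens[pos]
        if tok in ops:
            left, nxt = helper(pos + 1)       # left subtree
            right, nxt = helper(nxt)          # right subtree
            return ops[tok](left, right), nxt
        return float(tok), pos + 1            # leaf: a constant or a selected node value
    return helper(0)[0]

print(eval_prefix(['/', '-', 1731.0, 1500.0, 1500.0]))   # (1731 - 1500) / 1500 = 0.154
```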
#### 2.4.6 Scale Classifier
Scale is vital for a numerical answer in TAT-DQA, including five possible values: _Thousand_, _Million_, _Billion_, _Percent_ and _None_. Scale Classifier is developed to predict the scale of the final answer. In particular, it takes as input the SD graph representation \(h_{SD}\) and computes the probability of each scale as
\[\text{P}_{\text{scale}}=\text{softmax}(\text{FFN}(h_{\text{SD}})) \tag{6}\]
where FFN is a feed-forward network with two layers. After obtaining the scale, we generate the final answer by multiplying or concatenating the answer value with the scale following the practice in MHST Zhu et al. (2022).
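The post-processing of the scale can be sketched as follows; the scale strings and the numeric-versus-string handling are assumptions that follow the description above rather than the exact formatting used by MHST.

```python
# Combine a predicted answer value with the predicted scale: multiply for
# thousand/million/billion, concatenate for percent, leave unchanged for none.
def apply_scale(value, scale):
    factor = {"thousand": 1e3, "million": 1e6, "billion": 1e9}
    if scale in factor:
        return value * factor[scale]
    if scale == "percent":
        return f"{value}%"
    return value

print(apply_scale(1731, "million"))   # 1731000000.0
print(apply_scale(6.8, "percent"))    # '6.8%'
```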
## 3 Experiments
We conduct extensive experiments to validate the effectiveness of our proposed framework and present comprehensive analyses.
### Dataset, Baselines and Evaluation Metrics
We conduct all the experiments on TAT-DQA Zhu et al. (2022), which is built with visually-rich table-text documents in the finance domain. It contains totally \(16,558\) QA pairs and most questions require discrete reasoning to generate the answers while the answers to other questions can be extracted directly from the documents.
We compare our framework with three baseline models. 1) **NumNet+ V2**Ran et al. (2019) is a text QA model with impressive capability of discrete reasoning over textual data. It constructs a numerically-aware graph neural network, which takes all numbers in the given question and document as nodes and builds edges via numerical comparison, and then performs discrete reasoning over the graph to infer the final answer. 2) **TagOp**Zhu et al. (2021) is a table-text QA model which first applies sequence tagging on each token to extract question-relevant ones and then applies a set of pre-defined aggregation operators (e.g. addition,
subtraction, counting) over extracted tokens to generate the final answer. 3) **MHST**Zhu et al. (2022) is a multi-modal QA model which employs LayoutLMv2 Xu et al. (2021) as the encoder to take the question, document text, layout and visual information as input, extracts supporting evidence using sequence tagging, and applies a tree-based decoder Xie and Sun (2019) to generate an expression tree with the evidence.
Following Zhu et al. (2022), Exact Match (EM) and numeracy-focused (macro-averaged) F\({}_{1}\) are used to measure the performance of all models, taking into account the scale of the answer.
### Implementation Details
We implement our framework in PyTorch and train it on one NVIDIA DGX-1 with eight V100 GPUs. We adopt LayoutLMv2\({}_{large}\) as the encoder. We use Adam optimizer with a learning rate of \(5e-4\) and warmup over the first \(6\%\) steps to train. The maximum number of epochs is set to \(50\), while the batch size is set to \(8\) and the gradient accumulation steps is \(8\). The dropout probabilities for the GCNs and GTS are \(0.6\) and \(0.5\) respectively while \(0.1\) for others. Beam search is applied during inference to select the best expression tree and the beam size is \(5\) by default. We set \(12\) as the maximum number of selected nodes in the node selection.
### Comparison with Baselines
We first compare our Doc2SoarGraph framework with all baseline models. The experimental results are summarized in Table 1. We can observe that our framework significantly outperforms all the baseline models, achieving new state-of-the-art. In particular, our framework reaches \(59.23\%\) and \(67.61\%\) on the test set in terms of Exact Match and F\({}_{1}\) metrics respectively, i.e., an increase of \(17.73\%\) and \(16.91\%\) over the previous state-of-the-art model MHST Zhu et al. (2022), demonstrating the great effectiveness of our method.
### Analysis on Single- and Multi-page Documents
We compare our model with three baseline models over single-page documents and multi-page documents respectively. Figure 3 shows the comparison results in terms of F\({}_{1}\) score on TAT-DQA test set. We can observe that our Doc2SoarGraph significantly outperforms all baselines on both single-page and multi-page documents, up by \(25.41\)% and \(7.06\)% respectively compared to previous best models. Moreover, the superiority of our framework is much more obvious on single-page documents (\(70.81\)%) than multi-page documents (\(45.06\)%). This implies that addressing multi-page documents is still more challenging than single-page ones.
| **Model** | Dev EM | Dev F\({}_{1}\) | Test EM | Test F\({}_{1}\) |
| --- | --- | --- | --- | --- |
| NumNet+ V2 | 28.10 | 36.60 | 30.60 | 40.10 |
| TagOp | 32.30 | 39.60 | 33.70 | 42.50 |
| MHST | 39.10 | 47.40 | 41.50 | 50.70 |
| Doc2SoarGraph (ours) | **57.97** | **65.38** | **59.23** | **67.61** |
| Improvement over MHST | 18.87\(\uparrow\) | 17.98\(\uparrow\) | 17.73\(\uparrow\) | 16.91\(\uparrow\) |

Table 1: Performance of our model and baselines on TAT-DQA dev and test. Best results are marked in bold.
| Answer Type | EM (MHST) | EM (Ours) | F\({}_{1}\) (MHST) | F\({}_{1}\) (Ours) |
| --- | --- | --- | --- | --- |
| Span | 41.10 | **50.00** | 58.30 | **62.88** |
| Spans | 25.70 | **41.43** | 43.30 | **71.19** |
| Counting | **43.20** | 40.00 | **43.20** | 40.00 |
| Arithmetic | 42.70 | **73.96** | 42.70 | **73.96** |

Table 2: Performance comparison between our model and MHST for different answer types on TAT-DQA test set. Best results are marked in bold.
Figure 3: Performance comparison in F\({}_{1}\) score on one-page and multi-page documents on TAT-DQA test set.
[Table 3 — parameter-sharing strategies among GCN (QC), GCN (DC), GCN (TR) and GCN (SD), with EM and F1 columns; the table body is not recoverable from the converted source.]
### Analysis on Answer Types
TAT-DQA provides four different answer types, and here we analyze the performance of our framework on each answer type compared to the MHST model. The results are summarized in Table 2. Compared with MHST, our framework gains the largest increase (i.e., \(31.26\)% in terms of the EM metric) on Arithmetic questions, demonstrating impressive discrete reasoning capability. This enhancement is possibly due to the effective modeling of the differences and correlations among the quantities, dates and blocks from the documents. Spans and Counting questions share almost all techniques in the proposed framework; comparably, the framework gains a \(15.72\)% increase on Spans questions but has a \(3.2\)% decrease on Counting (i.e., failing one more case). This is probably due to the data bias on Counting questions, as the number of Counting questions (<\(2.0\)%) is much smaller than that of Spans (>\(12.0\)%) on the test set. The framework obtains an increase of \(8.9\)% in terms of EM on Span questions, indicating that our design also benefits the capability of answer extraction from the document.
### Analysis on Parameter Sharing
Each of the four graphs in our framework adopts a dedicated graph convolution network (GCN) to learn node representations, i.e., GCN (QC), GCN (DC), GCN (TR), and GCN (SD). Here we analyze the performance by adjusting the parameter-sharing strategies among the four GCNs, as shown in Table 3. We observe that the framework achieves the best performance when the four GCNs have independent parameters. This may be because the four graphs are different and each plays its own role.
### Ablation Study
Our Doc2SoarGraph contains four steps, i.e., Multi-page Document Transformation, Node Initialization according to various semantic elements, Node Selection via hierarchical graphs, and Answer Generation powered by token masking. Here we investigate the contribution of each component to the final performance. As shown in Table 4, we add the four steps in sequence to MHST and find that every added component benefits the model performance. Furthermore, Node Initialization makes a surprisingly large contribution to the framework, indicating the importance of modeling the differences and correlations among the various elements in table-text documents.
Also, we achieve node selection with hierarchical graphs, i.e. the QC graph, DC graph, TR graph and SD graph. To test the necessity of each graph, we remove them one by one from the full model and measure the performance. The results are summarized in Table 5. A consistent performance drop is observed as each graph is removed, indicating that every graph contributes to the good performance of our framework.
### Error Analysis
We randomly sample \(100\) error instances of our method on the dev set and analyze the reasons. We find that the errors occur across all six modules (Col. 1 in Table 6), i.e. Span Classifier (SPC), Node Classifier (NC), Token Classifier (TC), Tree Generator (TG), Scale Classifier (SC) and Answer Type Classifier (ATC), listed in descending order of error percentage. These errors fall into nine categories (Col. 2 in Table 6); see a more comprehensive table with examples in the Appendix. We can see that 1) \(32\)% of errors are caused by the SPC module producing inaccurate predictions of the starting and ending positions for Span questions, i.e., \(21\)% of predictions overlap but do not exactly match the ground truth, and \(11\)% have zero overlap with the ground truth; 2) \(24\)% of errors are caused by the NC module failing to select the relevant nodes; 3) \(19\)% of errors are due to the TC module predicting fewer or more tokens than needed to derive the answer; 4) \(16\)% of errors are caused by the TG module generating a wrong
\begin{table}
\begin{tabular}{l r r} \hline \hline Model & EM (\(\uparrow\)) & F\({}_{1}\) (\(\uparrow\)) \\ \hline MHST & 41.50 (-) & 50.70 (-) \\ + Node Initialization & 54.30 (12.80) & 61.59 (10.89) \\ + Doc Transformation & 56.58 (2.28) & 64.06 (2.47) \\ + Hierarchical Graphs & 58.80 (2.22) & 66.60 (2.54) \\ + Token Masking (Full) & **59.23** (0.43) & **67.61** (1.06) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Analysis on effects of the components in Doc2SoarGraph on TAT-DQA test set. Best results are marked in bold.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Dev**} & \multicolumn{2}{c}{**Test**} \\ \cline{2-5} & EM & F\({}_{1}\) & EM & F\({}_{1}\) \\ \hline Full Graphs & **57.97** & **65.38** & **59.23** & **67.61** \\ - QC Graph & 56.69 & 65.18 & 57.73 & 66.89 \\ - DC Graph & 56.14 & 64.83 & 57.73 & 66.82 \\ - TR Graph & 55.23 & 63.57 & 55.86 & 65.09 \\ - SD Graph & 56.27 & 64.77 & 56.95 & 66.54 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study of the hierarchical graphs in our framework on the dev and test sets.
expression tree, among which \(4\)% are wrong number signs (i.e., positive/negative) and \(12\)% are other wrong expressions; 5) \(6\)% and \(3\)% of errors are caused by the SC module and the ATC module predicting the wrong scale and answer type, respectively.
## 4 Related Work
We review related work on discrete reasoning, document visual question answering (VQA) and graph-based document representation.
### Discrete Reasoning
Discrete reasoning has been explored in many natural language processing (NLP) tasks since the 1960s Feigenbaum et al. (1963); Dua et al. (2019); Zhu et al. (2021). Recent works focus on a hybrid of an annotated (semi-)structured table and a list of associated paragraphs Chen et al. (2021); Zhao et al. (2022); Zhu et al. (2021); Li et al. (2022), retrieving or extracting evidence from the given table and paragraphs and then reasoning over this evidence to generate the answer Lei et al. (2022); Zhou et al. (2022); Li et al. (2022); Nararatwong et al. (2022); Yarullin and Isaev (2023). Most recently, the document VQA dataset TAT-DQA Zhu et al. (2022) was released, which has triggered increasing interest in discrete reasoning over real-world complex financial documents with both tables and text. To tackle this challenging task, Zhu et al. (2022) proposed the MHST model, which extracts relevant tokens from the document using sequence tagging and applies a heuristic or "seq2tree" method to generate the answer according to the answer type. In this work, we also address the TAT-DQA challenge but with a more powerful framework, where the differences and correlations of various semantic elements in the question and document are leveraged with semantic-oriented hierarchical graph structures.
### Document VQA
Document VQA is a high-order document understanding task that answers a question in natural language based on a visually-rich document Cui et al. (2021); Mathew et al. (2020); Tanaka et al. (2021); Zhu et al. (2022). This task is mostly tackled by pre-trained language models, like LAMBERT Garncarek et al. (2020) and StructuralLM Li et al. (2021), which exploit both textual and layout information of the documents. Recently, some works have developed multi-modal pre-trained language models, e.g., LayoutLMv2 Xu et al. (2021) and DocFormer Appalaraju et al. (2021). Additionally, some DocVQA models are developed by fine-tuning pre-trained language models, e.g. TILT Powalski et al. (2021) and MHST Zhu et al. (2022). In this work, we build our Doc2SoarGraph framework based on the pre-trained LayoutLMv2 Xu et al. (2021).
### Graph-based Document Representation
Early works use grid-based methods to represent visually-rich documents, such as representing each document page as a grid of characters Katti et al. (2018) or a grid of contextualized word piece embedding vectors Denk and Reisswig (2019). Later, many works Riba et al. (2019); Hwang et al. (2021); Wei et al. (2020); Yu et al. (2020); Cheng et al. (2020) represent documents with more powerful graphs to facilitate information extraction from visually-rich documents. For example, Riba et al. (2019) adopts a GNN-based model to extract structured tables from invoices; Hwang et al. (2021) constructs a directed graph to model the spatial dependency among text tokens in the documents. In this work, we build hierarchical graphs with different semantic elements in the document (i.e., quantities, dates, and document blocks) to facilitate evidence extraction and discrete reasoning.
## 5 Conclusion
In this work, we propose a novel Doc2SoarGraph framework with a strong discrete reasoning capability to tackle the QA challenge over visually-rich table-text documents in the form of TAT-DQA. We model the differences and correlations of various elements (i.e., quantities, dates, question, and document blocks) in the input with hierarchical graphs. We validate its
\begin{table}
\begin{tabular}{l|l|c} \hline Module & Error & \% \\ \hline \multirow{2}{*}{Span Classifier} & Offset Error & 21\% \\ \cline{2-3} & No Overlap & 11\% \\ \hline \multirow{2}{*}{Node Classifier} & Missing Nodes & 24\% \\ \hline \multirow{2}{*}{Token Classifier} & Missing Tokens & 11\% \\ \cline{2-3} & Redundant Tokens & 8\% \\ \hline \multirow{2}{*}{Tree Generator} & Wrong Expression & 12\% \\ \cline{2-3} & Wrong Sign & 4\% \\ \hline Scale Classifier & Wrong Scale & 6\% \\ \hline Answer Type Classifier & Wrong Answer Type & 3\% \\ \hline \end{tabular}
\end{table}
Table 6: Types and statistics of errors in each module.
effectiveness with extensive experiments and analyses, which show that our framework beats the previous state-of-the-art model by large margins. Our work significantly pushes the frontier of research on intelligent document understanding towards real-world applications.
## Limitations
Despite the impressive performance on TAT-DQA Zhu et al. (2022), our framework still has much room for future improvement, as shown in the error analysis in Section 3.8. Also, the Multi-page Document Transformation technique we have developed for pre-processing multi-page documents is simple, and more effective methods are desired. For example, our model may not be applicable to documents with a large number of pages (e.g., >50 pages). In addition, our framework is designed for documents that contain different kinds of elements, such as numerical values and dates. This means it may have limited advantages on documents with only one kind of element, such as pure-text documents. Furthermore, it may take 3-4 days to train our framework.
## Ethics Statement
In this work, we present a new framework to boost the performance of discrete reasoning over visually-rich table-text documents. Our framework is developed on open-source tools and datasets to assist human beings in document processing. Thus, we do not anticipate any negative social impacts.
|
2304.09095 | Chirality-Induced Magnetization of Magnetite by an RNA Precursor | Life is homochiral and homochirality is a fundamental feature of living
systems on Earth. While the exact mechanism that led to homochirality is still
not fully understood, any realistic scenario on the origins of life needs to
address the emergence of homochirality. In order to impose and maintain
chirality in a prebiotic network, an environmental factor functioning as a
chiral agent is demanded. Magnetized surfaces are prebiotically plausible
chiral agents, shown to be effective in enantioseparation of
ribose-aminooxazoline (RAO), a ribonucleic acid (RNA) precursor, due to the
chiral-induced spin selectivity (CISS) effect. As such, mechanisms for breaking
the magnetic symmetry of magnetic minerals are of the utmost importance. Here
we report the avalanche magnetization of magnetite $(Fe_{3}O_{4})$ by the
crystallization of enantiopure RAO. The observed breaking of the magnetic
symmetry is induced by the chiral molecules due to the CISS effect and spreads
out across the magnetic surface like an avalanche, providing a way to uniformly
magnetize a magnetic surface without fully covering it. Considered together
with our previous results on enantioseparation by crystallization on a magnetic
surface, chirality-induced avalanche magnetization paves the way for a
cooperative feedback between chiral molecules and magnetic surfaces. With this
feedback, a weak natural bias in the net magnetization can be amplified and
spin-selective processes can be accommodated on magnetic minerals on a
persistent basis. | S. Furkan Ozturk, Deb Kumar Bhowmick, Yael Kapon, Yutao Sang, Anil Kumar, Yossi Paltiel, Ron Naaman, Dimitar D. Sasselov | 2023-04-13T22:01:24Z | http://arxiv.org/abs/2304.09095v1 | # Chirality-Induced Magnetization of Magnetite by an RNA Precursor
###### Abstract
Life is homochiral and homochirality is a fundamental feature of living systems on Earth. While the exact mechanism that led to homochirality is still not fully understood, any realistic scenario on the origins of life needs to address the emergence of homochirality. In order to impose and maintain chirality in a prebiotic network, an environmental factor functioning as a chiral agent is demanded. Magnetized surfaces are prebiotically plausible chiral agents, shown to be effective in enantioseparation of ribose-aminooxazoline (RAO), a ribonucleic acid (RNA) precursor, due to the chiral-induced spin selectivity (CISS) effect. As such, mechanisms for breaking the magnetic symmetry of magnetic minerals are of the utmost importance. Here we report the avalanche magnetization of magnetite (Fe\({}_{3}\)O\({}_{4}\)) by the crystallization of enantiopure RAO. The observed breaking of the magnetic symmetry is induced by the chiral molecules due to the CISS effect and spreads out across the magnetic surface like an avalanche, providing a way to uniformly magnetize a magnetic surface without fully covering it. Considered together with our previous results on enantioseparation by crystallization on a magnetic surface, chirality-induced avalanche magnetization paves the way for a cooperative feedback between chiral molecules and magnetic surfaces. With this feedback, a weak natural bias in the net magnetization can be amplified and spin-selective processes can be accommodated on magnetic minerals on a persistent basis.
**Keywords:** Homochirality, CISS effect, origins of life, magnetite, ribose-aminooxazoline
The emergence of life can only be understood within the context of its planetary environment. As such, prebiotic chemistry is constrained by environmental conditions [1]. These constraints also dictate the features of life. One such feature is biomolecular homochirality--single-handedness of the molecules of life.
Reaching and maintaining homochirality is crucial for a robust prebiotic network producing functional polymers with high yields. Despite its importance, the origin of homochirality has remained elusive to date, and understanding the ways by which the environment can break the chiral symmetry of life is paramount in elucidating this mystery. In a recent work, we addressed this problem, proposed that magnetic mineral surfaces can facilitate enantioselective processes, and highlighted the possible role of authigenic magnetite (Fe\({}_{3}\)O\({}_{4}\)) sediments in the origin of homochirality [2, 3].
Authigenic iron minerals are ubiquitous in ancient lacustrine environments and play various roles in geochemical processes, as inferred from the Curiosity rover data on Gale Crater, Mars [4, 5]. Magnetite is one of the most abundant iron minerals present in the sediments, and magnetite-silica rocks are hypothesized to cover the bottom of a redox-stratified freshwater lake in Gale crater [6]. Authigenic magnetite formation has also been suggested as a mechanism to stabilize liquid water, accompanied by the production of H\({}_{2}\), which can function as a feedstock in the prebiotic synthesis of biomolecules [7]. In addition, due to their ferrimagnetic nature, Fe\({}_{3}\)O\({}_{4}\) sediments align their magnetic domains while they form as small superparamagnetic particles under the planetary field and carry a statistically uniform chemical remanent magnetization (CRM) [8, 9]. With this distinct net magnetization direction, magnetic sediments break the chiral symmetry on a hemisphere scale and can accommodate asymmetric processes due to a phenomenon called chiral-induced spin selectivity (CISS).
The CISS effect establishes a strong coupling mechanism between electron spin and molecular chirality [10, 11, 12]. This coupling forces chiral molecules to interact with electrons in a spin-selective manner [13, 14]. Likewise, electrons with a well-defined spin alignment to the molecular frame interact with chiral molecules based on their molecular handedness [15, 16, 17, 18, 19]. In our latest work, we reached homochirality utilizing this phenomenon and showed the importance of magnetic surfaces to induce enantioselective processes in a prebiotic network [20].
To reach a homochiral state in a chemical network, a robust mechanism to break the chiral symmetry, inducing an imbalance between the two enantiomers, is required. In addition, a persistent amplification of this imbalance has to accompany it [21]. In our latest work, we demonstrated that, due to the CISS effect, magnetic surfaces can act as templates for the enantioselective crystallization of a ribonucleic acid (RNA) precursor. Moreover, we showed that conglomerate crystallization can accompany the chiral symmetry breaking by the magnetic surface as a simultaneous and well-matched amplification mechanism. By combining these two necessary features to reach homochirality, we obtained enantiopure crystals of ribose-aminooxazoline (RAO) on magnetite surfaces from a fully racemic solution of RAO [20]. RAO is a ribonucleotide precursor that can be synthesized by the reaction of glyceraldehyde and 2-aminooxazole with high yields under prebiotically plausible conditions [22, 23]. It is a poorly water-soluble compound that is stable against racemization and forms homochiral crystals in the non-centrosymmetric P2\({}_{1}\)2\({}_{1}\)2\({}_{1}\) space group [24, 22, 25]. Last but not least, the emergence of homochirality at the stage of RAO allows for the propagation of homochirality through RNA to peptides and therefore to the entire prebiotic network [26, 27, 20]. With these features, RAO is an important prebiotic compound which can play a central role in the origin of homochirality.
Our previous results on spin-selective crystallization of RAO on Fe3O4 surfaces demonstrate that it is possible to obtain enantiopure RAO from a racemic mixture by a process controlled only by the environ
Figure 1: **A cooperative feedback between the magnetic surface and RAO can amplify the natural magnetization.** Authigenic magnetic minerals get magnetized while they form under a geomagnetic field. Sedimentary rock surfaces with these magnetic inclusions carry a net remanent magnetization. This natural net magnetization is not uniform, albeit statistically significant, and can allow for spin-selective asymmetric processes due to the CISS effect. In our previous work, we have shown that an essential RNA precursor, ribose-aminooxazoline (RAO), can selectively crystallize from a racemic mixture of pentose aminooxazoline on a magnetized magnetite (Fe3O4) surface [20]. Although we obtained nearly enantiopure crystals of RAO with a subsequent re-crystallization, in a natural environment, on a surface with non-uniform magnetization, this process will inevitably be less selective. However, the interaction between the magnetic surface and chiral molecules is reciprocal: chiral molecules too can magnetize magnetic surfaces due to the spin-exchange and magnetic dipolar interactions. Hence, as enantiopure crystals of RAO form on the magnetic surface, the _chirality-induced magnetization_ allows for obtaining surfaces with uniform magnetization. This cooperative feedback between the magnetic surface and chiral molecules can enhance the natural magnetization and set the stage for highly selective asymmetric processes on early Earth.
ment [20]. However, the interaction behind this process is not one-sided: just as magnetic surfaces induce enantioselective processes among chiral molecules, chiral molecules can likewise induce spin polarization on magnetic surfaces. In our latest paper, we proposed a model in which the _chirality-induced magnetization_ phenomenon reinforces the statistically significant but non-uniform natural magnetization of the authigenic sediments. A small enantiomeric imbalance induced by the natural magnetization of the magnetic surface can be amplified by subsequent dissolution and re-crystallization cycles. During the crystallization process, nearly pure conglomerate crystals cover the magnetic surface and align the magnetic domains underneath them along their chiral molecular axis--enhancing the magnetization of the magnetic surface due to a cooperative feedback (Fig. 1). _In this work, we have experimentally verified this phenomenon (Fig. 2) and showed that previously non-magnetized Fe\({}_{3}\)O\({}_{4}\) surfaces can be magnetized by the crystallization of enantiopure RAO, with the magnetization direction determined by the handedness of RAO (Fig. 3)._ In addition, we showed that local magnetization by chiral molecules can trigger an avalanche magnetization process and eventually magnetize an area larger than the one covered by the crystals (Fig. 4). Moreover, the area magnetized by the RAO crystals showed an increase in magnetic coercivity of about 20 times the modern geomagnetic field, proving the persistence of the surface magnetization against possible geomagnetic reversals (Fig. 4C). When considered together with our previous results, the _chirality-induced magnetization_ phenomenon paves the way for a cooperative feedback between chiral molecules and magnetic surfaces (Fig. 1). With this feedback, a weaker natural magnetization can be amplified and subsequent surface processes can be made highly enantioselective and persistent under prebiotic conditions.
## Chirality-Induced Avalanche Magnetization
The strong coupling of electron spin to molecular chirality established by the CISS effect paves the way for a new chemistry controlled by electron spin. Due to the spin-selective interaction of chiral molecules with electrons, achiral magnetic surfaces with net spin-polarization can act as chiral agents and trigger highly enantiospecific processes.
The early CISS experiments have shown that electron flow through a chiral monolayer is spin-selective and the handedness of the monolayer dictates the spin state with high efficiency of transmission [13, 14]. Another manifestation of the same effect is observed when a chiral molecule approaches a surface and gets transiently polarized. Namely, an induced electric dipole is formed. This charge polarization is due to the dispersive forces between the molecule and the surface causing a transient flow of electron density. This conceptually simple and mundane phenomenon gets interesting in the case of a chiral potential of a chiral molecule. The transient splitting of charge leads to a partial spin polarization, so that one electric pole is associated with one spin direction and the other pole with the opposite spin--as the electron flow through a chiral potential is spin selective due to the CISS effect. Therefore, a transient spin dipole is realized along the molecular axis of a chiral molecule as it approaches a surface, as shown in Fig. 2A.
What if the surface is magnetic? If the surface is magnetic with a net spin alignment at its surface, then it couples with the transient unpaired spin of the chiral molecule via the spin-exchange interaction. This coupling is a result of the Pauli exclusion principle and it favors the singlet-like state (\(E(\uparrow\downarrow)\)) with opposite spins. The triplet-like state (\(E(\uparrow\uparrow)\)) is penalized, and the energy difference between these two states is called the exchange energy (or the exchange integral). The exchange interaction is a short-range interaction (a few Angstrom scale, relying on the wavefunction overlap), yet a strong one (\(\sim 0.01-1\) eV), typically much stronger than room temperature, \(k_{B}T\) [28]. Therefore, a magnetized surface favorably interacts with a certain handedness of a chiral molecule and breaks the chiral symmetry by doing so. This is the mechanism by which a magnetized surface acts as a chiral seed for selective adsorption and crystallization from a racemic solution, as we have previously observed (see Figure 1A in [20], [15, 19, 29]).
Now if the surface is magnetic yet without a net spin alignment, the same interaction can be utilized to magnetize the surface by adsorbing chiral molecules from their enantiopure solution. The fundamental physics underlying the process is identical to the previous case. However, now, instead of aligned surface spins selecting a molecular chirality, the molecular chirality is selectively aligning the surface spins along the molecular axis, as illustrated in Fig. 2A. Therefore, as an enantiopure layer of a chiral molecule is covering the magnetic domains of a surface, previously misaligned domains align their spins along the same direction, resulting in a net magnetization at the surface. This transition is shown in Fig. 2C, from sub-panels 1 to 2. Previous experiments have shown that chiral molecules can manipulate the magnetization of magnetic surfaces based on their molecular chirality, due to the spin-exchange interactions. Ben-Dor _et al._ showed that the formation of a chiral self-assembled monolayer can switch the magnetization direction of a previously magnetized
Figure 2: **Chiral molecules can spin polarize magnetic surfaces due to the CISS effect.****A.** Electron density of a molecule approaching a surface is asymmetrically distributed and a transient charge dipole is created. Charge transport through chiral molecules is spin selective due to the CISS effect and therefore a spin-dipole along the chiral molecular axis accompanies this charge dipole. This transient spin dipole can couple with surface spins due to the spin-exchange interaction (\(J\equiv[E(\uparrow\uparrow)-E(\uparrow\downarrow)]\)) and spin polarize the surface along the chiral molecular axis. **B.** Schematic of the setup used in crystallization experiments for CD detection. Homochiral crystals of RAO are formed on magnetite from their enantiopure solution. These crystals align the magnetic domains under them and interact with the surface spins due to magnetic dipolar coupling—a weaker but longer range coupling compared to the spin-exchange interaction **C.** A sequence showing the effect of chiral molecules on the magnetic domains. **1** Initially, there is no net magnetization of the magnetic domains. **2** As a layer of RAO is forming, the spins underneath the layer align due to the strong spin-exchange interaction. This is an avalanche magnetization process: molecules align the surface spins and the aligned regions attract more molecules and get larger. **3** The mono-layer grows and covers more surface due to the attraction of chiral molecules to the aligned regions and crystal seeds start forming. **4** Crystals get larger and couple with the magnetic domains due to magnetic dipole-dipole interaction, \(E_{dd}\). **D.** If an external magnetic field, \(\overrightarrow{B}\), is applied, the domains outside the area covered by crystals magnetize. Yet the crystals preserve the magnetization of the domains under them as long as the energy of the dipolar coupling is larger than the magnetic energy.
substrate [16]. Follow-up work by Meirzda _et al._ investigated the dynamics of this system with an _in-situ_ NV magnetometer and demonstrated that the angular alignment of the surface magnetic dipoles follows the molecular tilt angle over long time scales. The long-time-scale effect of the induced magnetization provided evidence that spin-exchange interactions are responsible for the magnetization switching [30].
Molecules approaching the surface couple with the surface electrons, aligning their magnetic moments due to spin-exchange interactions; however, this coupling is short-ranged, relying on the overlap of wave-functions. Therefore, as the physio-adsorbed chiral molecular layer grows, the strength of the spin-exchange interaction with the surface electrons decreases rapidly. At longer length-scales, magnetic dipole-dipole interactions (also known as dipolar coupling) take over. Magnetic moments localized in the chiral molecular layers generate magnetic dipole fields around them (see Fig. 2B, right panel), and this dipole field can interact with the magnetic dipoles of the surface. This interaction is always present in a system with local magnetic moments (e.g. ferromagnets); however, it is typically ignored due to its weakness compared to the exchange interactions [31]. The thermally averaged magnitude of the dipolar coupling, \(E_{dd}\), scales as \(\mu^{2}/d^{3}\), where \(\mu\) is the average magnetic moment and \(d\) is the distance between two magnetic moments. Due to this slowly decaying inverse cubic (\(\sim 1/d^{3}\)) scaling, dipolar coupling is the dominant energy scale at distances longer than a few nanometers into the chiral layers. It is important to reiterate that the local magnetic moment of the chiral molecular layers is due to their charge polarization being accompanied by a spin polarization. This charge polarization cannot penetrate into the crystal indefinitely, as the organic chiral crystals under consideration are not conductive, so it is misleading to think of the whole crystal as a small magnet. Rather, the magnetic behavior is localized to the multi-layer extending away from the surface, and its strength gradually decreases as the charge polarization vanishes away from the surface. To summarize, there are two different interactions dominant at different length scales: short-ranged and strong spin-exchange interactions dominate directly at the surface (monolayer scale, microscopic); weaker and long-ranged dipolar coupling dominates farther into the crystal (multilayer scale, mesoscopic).
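To make the two length scales concrete, the crossover can be sketched numerically. The snippet below is only an illustrative estimate: the contact exchange energy, its decay length, and the per-site moment are assumed values chosen to be of the right order of magnitude, not measured parameters; for a mesoscopic multi-layer the aggregate dipolar moment would be correspondingly larger.

```python
import numpy as np

eV = 1.602e-19            # J
mu_B = 9.274e-24          # Bohr magneton, J/T
mu_0 = 4e-7 * np.pi       # vacuum permeability, T*m/A

J_contact = 0.1 * eV      # assumed contact exchange energy (~0.01-1 eV range)
lam = 2e-10               # assumed exchange decay length (~2 Angstrom)
moment = 1.0 * mu_B       # assumed local moment per site in the chiral layer

for d_nm in [0.3, 0.5, 1.0, 2.0, 5.0, 10.0]:
    d = d_nm * 1e-9
    E_exch = J_contact * np.exp(-d / lam)          # exponential (wavefunction-overlap) decay
    E_dd = mu_0 * moment**2 / (4 * np.pi * d**3)   # point-dipole estimate ~ mu^2 / d^3
    print(f"d = {d_nm:4.1f} nm  exchange ~ {E_exch/eV:.2e} eV  dipolar ~ {E_dd/eV:.2e} eV")
```

With these assumed numbers the exchange term dominates below a few nanometers and the dipolar term beyond, consistent with the qualitative picture described above.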
Both of these interactions are present for a chiral crystal forming on the surface, as shown in Fig. 2C. Initially, as a monolayer of molecules is forming on a magnetic surface, the molecules align the surface magnetic moments underneath them due to exchange interactions. This alignment initiated by the chiral molecules spreads like an avalanche process over the magnetic domains. As molecules align the regions underneath them, nearby regions also align themselves, similar to Ising-type ferromagnetic ordering. For this process, chiral molecules can be modeled as a spatially localized external field (\(h_{0}\)) in the two-dimensional Ising model [32]. The aligned regions attract chiral molecules faster compared to the randomly oriented domains, due to the associated lower spin-exchange energy. Then, the chiral monolayer spreads around and covers more area on the magnetic surface. Simultaneously, the molecular layers get thicker and macroscopic crystal seeds form. As the layers get thicker, exchange interactions of the higher layers with the surface get weaker and dipolar coupling takes over in this region. The chiral multi-layer, with a mesoscopic number of local magnetic moments, generates a long-ranged magnetic dipole field and couples with the magnetic domains of the surface. This dipolar coupling, together with the exchange interactions dominant at short distances, increases the magnetic coercivity of the magnetic surface. Hence, the regions around the chiral crystals preserve their magnetization under external magnetic fields, as they require stronger fields to be aligned compared to the free domains without chiral molecules, as shown in Fig. 2D.
## Circular Dichroism (CD) Experiments
We crystallized enantiopure RAO on previously non-magnetized Fe\({}_{3}\)O\({}_{4}\) surfaces and measured their chirality-induced magnetization by circular dichroism (CD) spectroscopy (see Supplementary Information Section 11 for the X-ray crystallographic data of RAO crystals). For these experiments we formed 80-nm films of Fe\({}_{3}\)O\({}_{4}\) on 1-mm-thick quartz substrates by electron-beam deposition of iron and its subsequent oxidation to Fe\({}_{3}\)O\({}_{4}\), as detailed in the Methods section. We then formed D- and L-RAO crystals, separately, from their 75 mM enantiopure aqueous solutions on magnetite surfaces placed horizontally in a Petri dish, as shown in Fig. 2B. Additionally, we repeated the same crystallization experiment on bare quartz substrates, without any magnetic sample, as a control experiment to identify the spectral features due to RAO crystals. After an overnight crystallization, we observed an almost uniform coverage of the surfaces by small RAO crystals of various sizes in the 10-100 micron range.
Afterwards, magnetite surfaces were probed by CD spectroscopy in the UV-visible range (230-600 nm). During these measurements, the physio-adsorbed crystals were kept on the surface and the samples were placed such that the beam propagation direction is parallel to the surface normal, as shown in Fig. 3A. The beam aperture was measured to be around 1 cm in diameter and the aperture was fully covered with the magnetic surface.
Starting with the control measurements we first measured the CD spectrum of enantiopure RAO crystals on quartz such that their contribution to the overall spectrum was isolated (Fig. 3A, top row). The thick red and blue curves in the spectra shown in the top panel of Fig. 3B were obtained for D-RAO and L-RAO crystals respectively. As seen, RAO crystals do not give a CD signal in the visible range and they have a narrow feature around 250 nm. For D-RAO the sign of the CD signal is positive and for L-RAO it is negative.
Next, we carried out the second control experiment and identified the spectral features of magnetized magnetite for different magnetization directions. For this experiment, a magnetite surface on quartz without any chiral crystals was used. An external magnet was then placed near the magnetic sample at an angle (about 45 degrees with respect to the surface normal) such that the surface was both in-plane and out-of-plane magnetized (Fig. 3A, middle row). The CD spectra of magnetite magnetized by the north and south poles of the magnet were then measured. The thin red and blue curves in the spectra shown in the top panel of Fig. 3B were obtained for the north and south poles, respectively. A broad spectral feature in the UV-visible range with no zero-crossing was observed. For magnetite films magnetized by the north (south) pole, a negative (positive) CD signal was measured. The magnitude of the signal is variable with
Figure 3: **Magnetization of magnetite by RAO crystals is measured by CD spectroscopy.****A.** We measured the CD spectra of chiral RAO crystals and magnetized magnetite (Fe\({}_{3}\)O\({}_{4}\)) separately, as control experiments. We crystallized L- or D-RAO on quartz and obtained their CD spectra (top row). We also measured the CD spectra of Fe\({}_{3}\)O\({}_{4}\) on quartz, magnetized by an external magnet (middle row). Finally, we formed enantiopure RAO crystals on Fe\({}_{3}\)O\({}_{4}\) with no net initial magnetization and measured the magnetization induced by the chiral crystals as our main experiment (bottom row). **B.** As seen in the top spectra, RAO crystals do not give a CD signal in the visible range and they have a narrow peak around 250 nm. L-RAO crystals (thick blue curve) give a negative and D-RAO crystals (thick red curve) a positive CD signal. However, magnetized Fe\({}_{3}\)O\({}_{4}\) has a broad CD spectrum over the UV-visible range, and Fe\({}_{3}\)O\({}_{4}\) magnetized by the north pole (south pole) gives a negative (positive) CD signal. Therefore, in the visible range, the CD signal of magnetized Fe\({}_{3}\)O\({}_{4}\) is separated from that of RAO and can be easily distinguished above 300 nm. In the bottom spectra corresponding to the main experiment, we can see that initially non-magnetized Fe\({}_{3}\)O\({}_{4}\) (black curve) gets magnetized by the adsorption of chiral RAO crystals. D-RAO induces a north-pole-like, negative magnetization (red curve), whereas L-RAO induces a south-pole-like, positive magnetization on Fe\({}_{3}\)O\({}_{4}\) (blue curve). Note the zero-crossing in the CD signal, around 270 nm, due to the overlap of RAO and Fe\({}_{3}\)O\({}_{4}\) signals of opposite signs.
changing distance of the magnet to the surface and with the magnet strength. For the reported experiment, the out-of-plane magnetic field strength at the sample location was measured to be around 16 mT.
Finally, as our main experiment, we probed the surface magnetization induced by chiral crystals and measured the CD spectra of magnetite films with adsorbed enantiopure RAO crystals (Fig. 3A, bottom row). We made sure that the magnetite films used in the experiments were not magnetized prior to the crystal adsorption. As seen in the CD spectra in the bottom panel of Fig. 3B, the black line corresponding to the bare magnetite surface is flat, confirming the absence of net magnetization. Yet, as RAO crystals form on magnetite surfaces, they induce a net magnetization on the magnetite, where the magnetization direction is dictated by the molecular chirality of RAO (see Fig. S17 for repeated CD measurements). The fact that the magnetization direction of Fe\({}_{3}\)O\({}_{4}\) changes with molecular handedness is a smoking gun proving that the observed phenomenon is due to the CISS effect. As seen in the CD spectra of L-RAO (blue line) and D-RAO (red line) on Fe\({}_{3}\)O\({}_{4}\), there are two overlapping features: a narrow feature in the deep-UV range due to enantiopure RAO crystals and a broad feature covering the UV-visible range due to magnetized Fe\({}_{3}\)O\({}_{4}\). Hence, the spectrum of D-RAO (L-RAO) on Fe\({}_{3}\)O\({}_{4}\) is an overlay of two spectra: D-RAO (L-RAO) on quartz and Fe\({}_{3}\)O\({}_{4}\) magnetized by the north (south) pole of the magnet. To our delight, this is consistent with our latest work on spin-selective crystallization of RAO on magnetized Fe\({}_{3}\)O\({}_{4}\), in which we showed that Fe\({}_{3}\)O\({}_{4}\) magnetized by the north (south) pole promotes the crystallization of D-RAO (L-RAO) [20]. Thanks to this consistency, a _cooperative_ feedback between the selective crystallization of RAO and the chirality-induced magnetization of Fe\({}_{3}\)O\({}_{4}\) can be conceived.
Although it is hard to measure the strength of the magnetization induced by RAO precisely, comparing the CD signal to magnetite magnetized by an external magnetic field of known strength gives an estimate. With that, we estimate an effective magnetic field of about 2.5 mT (\(\sim 50\) times the modern geomagnetic field) that would have induced the same magnetization on magnetite as the chirality-induced one by RAO. For this estimate we used the CD angles at 600 nm and the external magnetic field value of 16 mT measured at the sample location.
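Written out, this estimate is just a linear rescaling of the known calibration field by the ratio of the two CD responses at 600 nm. The sketch below assumes such linear scaling and uses placeholder ellipticity values (the measured CD angles are not reproduced here), chosen only so that the ratio returns the \(\sim 2.5\) mT figure quoted above.

```python
def effective_field_mT(theta_rao, theta_calibration, b_calibration_mT=16.0):
    """Effective-field estimate, assuming the magnetite CD signal at 600 nm
    scales linearly with the net surface magnetization."""
    return b_calibration_mT * theta_rao / theta_calibration

# Placeholder ellipticities (arbitrary units); a ratio of ~0.16 gives ~2.5 mT,
# i.e. roughly 50 times the modern geomagnetic field (~0.05 mT).
print(effective_field_mT(theta_rao=0.5, theta_calibration=3.2))
```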
We used solid-state CD spectroscopy to probe the chirality-induced magnetization on magnetite and for these measurements we took advantage of the fact that chiral RAO crystals and magnetized Fe\({}_{3}\)O\({}_{4}\) give CD signal in different regions of the spectrum with individually recognizable features. Due to the absorptive nature of the measurement we did not exclusively measure the chirality-induced magnetization at the surface, but we also measured the bulk magnetization of the sample. Therefore, the obtained signal is diluted by the bulk Fe\({}_{3}\)O\({}_{4}\) with no net magnetization. However, using magneto-optical Kerr effect spectroscopy, we exclusively probed the surface magnetization by a reflective measurement and also obtained the magnetic coercivity of the surface upon the adsorption of chiral crystals.
The magneto-optical Kerr effect (MOKE) describes the change in the polarization of light reflected from a magnetized surface. MOKE microscopy is an optical imaging technique that relies on this effect. In MOKE microscopy, an image is created by the interference of the polarized reference light with the light reflected off a magnetized surface. Therefore, a MOKE microscope image is a polarization-contrast image where bright and dark colors indicate constructive and destructive interference, respectively. A detailed explanation of the technique can be found in [33, 34].
We used a commercial (Evico-Magnetics GmbH) MOKE microscope to image the magnetic domains of a nickel surface and probed the effect of chiral crystals on the surface magnetic properties. We first imaged the magnetic domains of the substrate in its demagnetized state and made sure that we could resolve the domains. For the MOKE measurements we could not use the polycrystalline Fe\({}_{3}\)O\({}_{4}\) surfaces as we used for the CD measurements, due to their small domain size below the optical resolution of the microscope (\(\sim 1\mu m\)). We fabricated a 30-nm-thick nickel (Ni) surface covered by a 5 nm gold (Au) layer by electron
Figure 4: **With a magneto-optical Kerr effect (MOKE) microscope, surface magnetic domains are imaged.****A.** We imaged the magnetic domains of a Nickel/Gold substrate with a MOKE microscope in the longitudinal configuration. We formed the RAO crystals on the surface as a thin line and imaged the domains surrounding the crystals. **B.** Kerr microscope images of the magnetic domains with the D-RAO crystals are shown. White dashed lines surround the area where the crystals are formed, as deduced from the optical images. Each image here shows a snapshot of the domains as an external, in-plane magnetic field is swept from -3 mT to +3 mT and then back to -3 mT. The top left image shows that all of the domains are aligned by the strong, negative magnetic field of -3 mT. As the magnetic field is swept from -3 mT to +2 mT, the free domains outside of the regions affected by the crystals flip their magnetization (top right panel). However, the regions nearby the chiral crystals do not flip, and this creates the image contrast. These regions require a more positive field to flip their magnetization due to the extra effective magnetic field locally induced by the chiral crystals. The bottom right image shows that a more positive magnetic field of +3 mT is able to flip the domains around the crystals and aligns all of the magnetic domains. Lastly, as the magnetic field is swept back from +3 mT to -2 mT, the free domains align themselves under the magnetic field while the domains nearby the crystals do not flip until a more negative field is applied (bottom left panel). **C.** The Kerr hysteresis loop of the free domains (blue box), sufficiently far from the chiral crystals, has a coercive field of around 1.5 mT (blue line). However, the domains nearby the RAO crystals (red box) are affected by an effective magnetic field due to the spin exchange interaction and magnetic dipolar coupling. This local effective field provided by the chiral crystals translates into a higher coercive field of around 2.5 mT for the nearby magnetic domains (red line).
beam evaporation for the MOKE measurements. The Au coating is applied to prevent the oxidation of Ni under ambient conditions. After demagnetizing the sample, we measured its in-plane domain size to be around 10 microns (Fig. S19), suitable for the MOKE measurements. We did not optimize the thickness of the ferromagnetic layer or test a multi-layer sample topology to reduce the out-of-plane magnetic anisotropy [35]. We obtained the highest MOKE contrast by imaging the in-plane magnetic domains and we observed a sharp phase transition from a demagnetized state to a saturated one as we gradually increased the external longitudinal magnetic field.
Because the MOKE measurements rely on an optical image of the magnetic surface, scattering from the crystals affect the image and it is not possible to image directly underneath the crystals with high accuracy. Therefore, we formed small crystals to minimize the scattering and analyzed the magnetic domains near the crystals instead of directly under them. For every MOKE measurement we first took a magnetically saturated reference image at a given polarization and kept the light polarization and sample position stable during the magnetic field scan. We used active feedback with a piezo controller to mechanically stabilize the sample. Each MOKE image is then subtracted from the reference measurement and an image contrast due to the surface magnetization is obtained. It is important to note that the sign and absolute value of the MOKE contrast is not a physical magnetic moment value as it corresponds to a photon count change with respect to a reference image. However, for two MOKE images with a common background, the difference in the image contrast is physical and it reflects a difference in the magnetization. The assignment of darker and brighter image colors to negative and positive magnetization values is also arbitrary and we assigned brighter (darker) colors to more negatively (positively) magnetized domains without loss of generality.
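The contrast computation itself reduces to a reference subtraction; a minimal sketch of that step is given below (the function name, the normalization by the reference frame, and the clipping are our own illustrative choices, not part of the microscope software).

```python
import numpy as np

def kerr_contrast(image, reference):
    """Polarization-contrast image: difference with respect to a magnetically
    saturated reference frame taken at fixed polarization and sample position.
    The sign and scale are arbitrary photon-count differences, not moments."""
    img = image.astype(float)
    ref = reference.astype(float)
    # Normalizing by the reference intensity suppresses slow illumination drifts.
    return (img - ref) / np.clip(ref, 1.0, None)
```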
Having studied the bare sample under the MOKE microscope, we formed enantiopure RAO crystals on the Ni/Au surface and observed a drastic change in the magnetic properties. We formed the crystals either by spin-coating or by drop-casting a 20 mM aqueous solution of enantiopure RAO. The crystals were about 1-10 microns in size and formed a dense layer on the magnetic surface. For the spin-coating experiments, we covered one side of the magnetic sample with tape such that, at the intersection of the covered and uncovered sites, a thin strip of RAO crystals forms upon spinning the solution, as seen in Fig. 4. We also formed small crystals by rapidly cooling the drop-casted solutions of L- and D-RAO in the fridge (\(-18^{\circ}\)C) and confirmed that the results were similar to those obtained with the spin-coating method (Extended Data Fig. 1).
After forming the crystals on the Ni/Au surface, we imaged the magnetic domains at room temperature, under an external magnetic field in the longitudinal configuration, for which the in-plane magnetization is parallel to the reflection surface, as seen in Fig. 4A. We applied the in-plane magnetic field with switchable polarity using a pair of electromagnets built around the imaging plane of the microscope. For the MOKE measurements, we first saturated the magnetic surface by applying a large magnetic field (-3 mT) and took the reference image (top-left image in Fig. 4B). We then started to sweep the magnetic field in the opposite direction, from -3 mT to 3 mT. During this sweep, free domains far away from the crystals flipped their magnetization at about 1.5 mT, yet the domains around the crystals kept their negative magnetization until the field was raised to 2.5 mT. This gives rise to a MOKE contrast between the free domains and the ones near the crystals, as seen in the top-right image in Fig. 4B corresponding to a 2 mT field. Then, by applying a 3 mT field we saturated the domains again but now in the opposite direction (bottom-right image in Fig. 4B). Afterwards, we swept the magnetic field back in the more negative direction and observed a similar behavior. At around -1.5 mT the free domains flipped their magnetization, while the domains around the crystals did not until -2.5 mT, resulting in a similar MOKE contrast seen in the bottom-left image in Fig. 4B.
As a control, we fabricated a sample surface with magnetic (Ni/Au) and non-magnetic (Si) areas adjacent to each other and formed the D-RAO crystals on this surface. We then imaged the intersection under the MOKE microscope and observed the effect of the chiral crystals only on the magnetic side of the surface (Fig. S23). This measurement confirms that the observed effect is not an imaging artifact and is due to a magnetization contrast. In addition, we repeated the MOKE measurements with achiral compounds, sodium chloride (NaCl) and glycine, and confirmed that the achiral crystals do not interact with the magnetic surface (Extended Data Fig. 1).
We also measured the Kerr hysteresis loop of the magnetic domains far from and near the RAO crystals and observed a significant change in the coercivity of the magnetic domains due to the chiral crystals. This can be explained by an effective magnetic field influencing the domains nearby the chiral crystals due to the
spin-exchange and magnetic dipolar interactions. As shown in Fig. 4C, we recorded the MOKE contrast change during an in-plane magnetic field sweep for two regions: a region far from the area covered by the crystals (blue box) and a region nearby the crystals (red box). Hysteresis measurements (Fig. 4C and Extended Data Fig. 1) show that chiral crystals significantly increase the coercivity (magnetic resistance) of the magnetic surface, by about 20 times the modern geomagnetic field, although the magnetic domains we could analyze were not directly under the crystals. The prebiotic importance of this is two-fold. First, the presence of chiral crystals makes the surface magnetically harder, preserving the surface magnetization under external demagnetizing fields higher than Earth's geomagnetic field. Second, the magnetization is obtained and retained on an area larger than the one covered by the crystals. Therefore, the magnetization triggered by the chiral crystals is non-linear and spreads like an avalanche. This avalanche magnetization allows for a feedback between the magnetic surface and RAO crystals, upon which higher surface magnetization can accompany higher enantiomeric excess (Fig. 1).
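The coercive fields quoted here are read off as the fields at which each normalized branch of the Kerr hysteresis loop crosses zero; a small interpolation helper of the kind we have in mind (our own sketch, not part of the microscope software) is shown below.

```python
import numpy as np

def coercive_field(field_mT, contrast):
    """Estimate the coercive field from one branch of a Kerr hysteresis loop
    by locating the zero crossing of the mean-centred contrast signal."""
    m = np.asarray(contrast, dtype=float)
    m = m - m.mean()                                 # centre the branch around zero
    idx = np.where(np.diff(np.sign(m)) != 0)[0][0]   # first sign change
    f0, f1 = field_mT[idx], field_mT[idx + 1]
    m0, m1 = m[idx], m[idx + 1]
    return f0 - m0 * (f1 - f0) / (m1 - m0)           # linear interpolation to the crossing
```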
## Spin-Polarization Properties of RAO
Lastly, the spin-polarization properties of L- and D-RAO were measured with a magnetic conductive atomic force microscope (mc-AFM). The mc-AFM setup consists of a non-magnetic AFM tip measuring the current through the adsorbed molecules on the magnetic substrate as a function of the applied electric potential and the magnetization direction of the substrate, as shown in Fig. 5A. The magnetization of the substrate is switched by an external magnetic field.
With the mc-AFM setup, we measured the spin-polarized current passing through a chiral layer of RAO molecules as a function of the applied voltage across the chiral layer. We measured the current (\(I\)) for an up (\(I_{\uparrow}\)) and down (\(I_{\downarrow}\)) magnetized surface for each enantiomer, where up (\(\uparrow\)) and down (\(\downarrow\)) refer to the direction of the north magnetic pole relative to the adsorbed layer. Thereby, we measured the difference
Figure 5: **Spin filtering effect of RAO is measured by magnetic-conductive atomic force microscopy (mc-AFM).****A.** Schematic of the mc-AFM setup. We formed a multilayer of enantiopure L- or D-RAO on a magnetized Nickel/Gold substrate by spin-coating. **B.** Averaged current-voltage (I-V) curves of L- and D-RAO displayed on the top and bottom panels, respectively. The red (blue) curves correspond to the current measured while the magnet was pointing upwards/north (downwards/south). **C.** Percent spin-polarization of L- and D-RAO are calculated from the non-linear region of the averaged I-V curves by \(\left(\frac{I_{\uparrow}-I_{\downarrow}}{I_{\uparrow}+I_{\downarrow}}\right) \times 100\). Black lines are linear fits to the curves whose ordinates are used to extract the spin-polarization. L-RAO has a positive spin-polarization of 25%, while D-RAO has a negative spin-polarization of -28%.
between the transport efficiency of up and down spin polarized electrons through L- and D- molecules as a direct measurement of the CISS properties of RAO.
As the substrate, we used the same Ni/Au surfaces used in the MOKE measurements due to the high conductivity of gold; magnetite surfaces are not ideal for the mc-AFM measurements, as magnetite is not a good conductor at room temperature. We then prepared RAO multi-layers on the Ni/Au surfaces by spin-coating a 20 mM aqueous solution of enantiopure RAO.
For the mc-AFM measurements, we used the contact mode and kept the AFM tip grounded while scanning the substrate voltage from -2 V to +2 V. We simultaneously measured the current flowing perpendicular to the substrate through the chiral multi-layer and obtained the averaged current-voltage (I-V) curves in Fig. 5B. During these measurements the substrate was kept magnetized with an external, out-of-plane magnetic field of 0.5 T.
As seen in the averaged I-V curves in Fig. 5B, electron transfer through RAO layers is spin-selective. In the positive voltage regime, up-magnetized current (\(I_{\uparrow}\), north) is more efficiently transferred through the L and down-magnetized (\(I_{\downarrow}\), south) current is more efficiently transferred through the D enantiomer. Therefore, chiral molecules introduce a spin-selective resistivity to the system.
We can calculate the relative spin-polarization of RAO multi-layers by subtracting the up and down-spin currents in the non-linear region (voltage above the charge injection threshold) of the I-V curves and dividing by the total current: \(\left(\frac{I_{\uparrow}-I_{\downarrow}}{I_{\uparrow}+I_{\downarrow}}\right) \times 100\). As shown in Fig. 5C, L-RAO (blue) and D-RAO (red) have approximately equal and opposite spin-polarization of about \(\pm 25\%\), extracted from the linear fits (black lines) to the data in the non-linear region of the I-V curves.
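In practice, this extraction amounts to fitting each averaged I-V branch linearly above the charge-injection threshold and evaluating the ratio above. A minimal sketch of such a calculation is given below; the threshold value and the use of a simple least-squares line are our own assumptions rather than the exact fitting procedure used for Fig. 5C.

```python
import numpy as np

def spin_polarization(voltage, i_up, i_down, v_threshold=1.0):
    """Percent spin polarization (I_up - I_down) / (I_up + I_down) * 100,
    averaged over the non-linear region above the injection threshold,
    using linear fits to each magnetization branch."""
    mask = np.asarray(voltage) > v_threshold
    v = np.asarray(voltage)[mask]
    fit_up = np.polyval(np.polyfit(v, np.asarray(i_up)[mask], 1), v)
    fit_down = np.polyval(np.polyfit(v, np.asarray(i_down)[mask], 1), v)
    return 100.0 * np.mean((fit_up - fit_down) / (fit_up + fit_down))
```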
We should mention that this is a lower bound on the intrinsic spin-polarization of RAO, as the area probed by the AFM tip does not contain only an orderly arrangement of RAO molecules but likely an ensemble of quasi-orderly packed layers. Therefore, the measured spin-polarization is averaged over this quasi-orderly arrangement of RAO. In principle, I-V curves could be measured over a crystal of RAO and a much higher spin-polarization could be achieved. However, because RAO crystals are not conductive, the thickness of the crystal has to be small enough that a current can be measured.
## Discussion
A prebiotic proto-metabolism relies on the prebiotic synthesis of homochiral nucleotides until a full transition to nascent biology is achieved. Therefore, achieving homochirality once may not be sufficient for a prebiotic network, and the persistence of the attained homochirality can be required. A transient and/or spontaneous chiral bias may induce a fugitive enantiomeric excess at some point in prebiotic chemistry. However, this does not guarantee the maintenance of homochirality on a persistent basis. A solution to this problem is to maintain the same chiral bias in the environment and keep reinforcing the previously attained chiral excess in a deterministic fashion. The average magnetization state of the magnetic surface must be preserved in the environment to ensure the persistence of homochirality. The chirality-induced avalanche magnetization phenomenon helps to achieve this in multiple ways. First, by amplifying the natural, authigenic magnetization it sets the stage for spin-selective processes (e.g. crystallization, reduction) with higher degrees of enantioselectivity. Second, it spreads the uniform surface magnetization in a non-linear manner due to the avalanche effect and helps to realize spin-selective phenomena on a larger scale. Finally, it increases the magnetic resistance (coercivity) of the surface by about 20 times the modern geomagnetic field. Although the Hadean geodynamic record is controversial, the combination of early Archean measurements and modeling indicates that early Earth's magnetic field strength was weaker than the modern one by about a factor of 4 [36]. This further ensures that the same chiral bias due to the uniform magnetization can be present in the environment despite possible geomagnetic reversals.
We proposed that magnetic surfaces could serve as a chiral bias, based on the enantiospecific interaction between chiral molecules and magnetic substrates--as presented when the CISS effect was studied. Chirality-induced avalanche magnetization is a robust effect. Like other CISS-based phenomena, it relies on (\(\sim 0.01-1\)eV) spin-exchange interactions instead of much weaker (a few nano-eV per Gauss) magnetic interactions. Therefore, the effect can strongly manifest itself under ambient conditions and in natural environments.
The chirality-induced avalanche magnetization phenomenon is a mechanism to enhance the natural magnetization of magnetic rocks. By crystallizing chiral molecules on magnetic minerals, surface spins can be aligned, and these minerals can then accommodate efficient spin-selective asymmetric processes on early Earth. In our previous work, we used magnetic surfaces to break the chiral symmetry and obtained homochiral crystals of RAO from racemic starting materials. To fully manifest the effect on experimental timescales, we used strong external magnetic fields and uniformly magnetized the substrates. However, applying strong magnetic fields does not mimic early Earth conditions. As demonstrated here, chirality-induced avalanche magnetization is a prebiotically plausible and robust mechanism to obtain a uniformly magnetized surface. Therefore, we do not rely on the presence of an unnatural external field to facilitate spin-selective, asymmetric processes. However, a small natural magnetic field is still helpful as a source of hemisphere-scale symmetry-breaking. Authigenic magnetization of magnetic minerals under the geomagnetic field can initiate the spin-selective processes and guarantee a common direction on a large scale. Our suggested mechanism utilizes a small natural asymmetry in the distribution of magnetic domains and enhances it to large-scale uniformity. Without this common trigger, different mesoscopic locations in a lake could end up enriching different enantiomers in a stochastic way.
Magnetite is the most commonly found magnetic mineral on Earth. Moreover, it is the most strongly magnetic of the naturally occurring magnetic minerals [8]. Authigenic magnetite sediments form and become magnetized under the geomagnetic field in a wide range of depositional environments, such as the Gale crater [7, 37, 6]. In these environments, magnetite is typically found as inclusions in silica-based rocks and can be highly polarized as single-domain particles [6, 38]. For these reasons, magnetite is likely the most suitable natural magnetic mineral to accommodate the mechanism we consider.
As is well established, the standard Ising model is a simplified model of a ferromagnet, and it accounts for the formation of magnetic domains--distinct regions with uniform magnetization [39, 40]. The formation and collective alignment of magnetic domains is the primary reason why the magnetization induced by the chiral molecules spreads like an avalanche, in a non-linear fashion. In addition, the size of the magnetic domains compared to the size of the chiral crystals is an important parameter: there must be enough chiral molecules to flip a sufficient number of spins within a domain to induce the whole domain to flip.
In any ferromagnetic material, the spin-exchange interaction aligns nearby dipoles by forcing them to point in the same direction, resulting in a magnetic domain. However, as the system contains more aligned dipoles in a larger domain, it generates a large magnetic field around itself and stores a high magnetostatic energy. In order to minimize its internal energy, the ferromagnet splits into smaller magnetic domains with opposite dipolar alignment--reducing the surrounding magnetic field. Therefore, the balance between spin-exchange and magnetostatic energies at a given temperature dictates the size of the magnetic domains. In a realistic ferromagnet with a crystal lattice, the system is more complicated, as it takes less energy to magnetize the material along certain directions (the easy axes) than along others. Therefore, the formation and size of domains are further balanced by this factor, the magnetocrystalline anisotropy energy.
In our case, we suggest adding another exchange force to the standard ferromagnetic exchange interaction. This exchange force between the chiral molecules and the ferromagnetic material is strong and localized [16]. Therefore, although the real ferromagnetic materials and chiral molecular systems are complicated, avalanche magnetization of a magnetic surface by the chiral molecules can be simply modeled by a two-dimensional Ising-like Hamiltonian, \(H\):
\[H=-h_{0}\sum_{i}S_{i}-J\sum_{\langle ij\rangle}S_{i}S_{j}-J_{c}\sum_{\langle ik \rangle}S_{i}\sigma_{k}\]
where \(S_{i}\) is the spin of the magnetic surface at a given site \(i\), \(\sigma\) is the transient spin vector localized on an electric pole of a chiral molecule, \(h_{0}\) is an external magnetic field, \(J\) is the spin-exchange energy of the magnetic surface, and \(J_{c}\) is the spin-exchange interaction energy between the surface and chiral molecular spins. The first two terms of this Hamiltonian are the standard Ising model and the third term accounts for the interaction between the magnetic surface and chiral molecules. In the limit of frozen chiral molecular spins, the third term of the Hamiltonian simplifies to a sum over only the surface spins: \(J_{c}N_{nb}\sigma\sum_{i}S_{i}\), where \(N_{nb}\) is the number of neighboring chiral molecules interacting with a given surface spin. Therefore,
the external magnetic field affecting the surface spins is replaced by an effective field in the presence of chiral molecules: \(h_{0}\to h_{0}+(N_{nb}\sigma)J_{c}\). This field can be considered an effective magnetic field due to the chiral molecules. However, owing to its spin-exchange origins, it is much stronger (\(\sim 0.01-1\) eV) and shorter-ranged (a few nanometers) than a regular magnetic field. The two-dimensional square-lattice Ising model is a simple but accurate approximation of our system, and it predicts a second-order phase transition at the critical temperature (\(T_{c}/J\approx 2.269\)) [41]. As can be seen in Fig. S33, the formation of magnetic domains at a temperature above the critical temperature results in a surface with no net magnetization. However, upon the addition of chiral molecules, the domains under and near the chiral molecules start flipping to a common direction and the surface carries a net magnetization whose direction is dictated by the molecular chirality. At lower temperatures, the spread of magnetization should be more drastic due to the formation of larger magnetic domains. However, the CISS-induced spin-exchange interaction energy (\(J_{c}\)) of the chiral molecules also decreases at higher temperatures, resulting in less efficient domain flipping. The trade-off between these two effects should be explored with further experimentation.
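To illustrate how the chiral term drives avalanche-like domain flipping, the following minimal Metropolis Monte Carlo sketch (Python/NumPy) simulates a square-lattice Ising surface in which a small patch of sites is covered by chiral molecules that shift the local field according to \(h_{0}\to h_{0}+(N_{nb}\sigma)J_{c}\), as derived above. It is a schematic toy model rather than the simulation behind Fig. S33; the lattice size, temperature, coupling strengths, and coverage pattern are all illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (not the values used in this work)
L = 64          # lattice size
J = 1.0         # surface spin-exchange energy
T = 2.0         # temperature in units of J (below T_c/J ~ 2.269)
h0 = 0.0        # external magnetic field
Jc = 0.5        # chiral-molecule/surface spin-exchange energy
sigma = 1.0     # transient molecular spin; its sign encodes the handedness
n_nb = 1        # neighboring chiral molecules per covered surface site

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(L, L))      # random surface: no initial net magnetization

# Chiral molecules cover a small square patch at the center of the surface
covered = np.zeros((L, L), dtype=bool)
covered[L // 2 - 4:L // 2 + 4, L // 2 - 4:L // 2 + 4] = True

def local_field(s, i, j):
    """Field acting on spin (i, j): nearest-neighbor exchange plus h0,
    plus the chiral-molecule term on covered sites."""
    nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    h = h0 + J * nb
    if covered[i, j]:
        h += Jc * n_nb * sigma                 # effective field shift under the molecules
    return h

def metropolis_sweep(s):
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        dE = 2.0 * s[i, j] * local_field(s, i, j)   # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1
    return s

for sweep in range(200):
    spins = metropolis_sweep(spins)

print("net magnetization per site:", spins.mean())
print("magnetization under the molecules:", spins[covered].mean())
```

Run below the critical temperature, a sketch of this kind should show the covered patch acquiring a definite magnetization whose sign follows \(\sigma\), with neighboring domains progressively aligning with it--a toy version of the non-linear, avalanche-like spread discussed in the text.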
## Acknowledgments
The authors thank John Sutherland for helpful discussions, suggestions, and feedback. We thank Ziwei Liu for the synthesis of RAO samples. We also acknowledge other members of the Simons Collaboration on the Origins of Life and the Harvard Origins of Life Initiative for fruitful discussions that shaped the ideas behind this work. This work was supported by a grant from the Simons Foundation 290360 to D.D.S.
## Author contributions
S.F.O., Y.P., R.N., and D.D.S. designed the research; S.F.O. and D.K.B. made the magnetic surfaces and took the CD measurements; S.F.O. and Y.K. did the MOKE experiments; S.F.O. and Y.S. did the SQUID and spin-coating experiments, A.K. took the mc-AFM measurements; S.F.O. analyzed the data; S.F.O. wrote the paper; all authors contributed to the editing of the manuscript and the Supplementary Information.
## Competing interests
The authors declare no competing interests.
## Data availability
The data that support the findings of this study are available within the paper and its Supplementary Information. Additional materials are available from the corresponding author (S.F.O.), upon reasonable request. |
2308.00402 | Metrics to Quantify Global Consistency in Synthetic Medical Images | Image synthesis is increasingly being adopted in medical image processing,
for example for data augmentation or inter-modality image translation. In these
critical applications, the generated images must fulfill a high standard of
biological correctness. A particular requirement for these images is global
consistency, i.e an image being overall coherent and structured so that all
parts of the image fit together in a realistic and meaningful way. Yet,
established image quality metrics do not explicitly quantify this property of
synthetic images. In this work, we introduce two metrics that can measure the
global consistency of synthetic images on a per-image basis. To measure the
global consistency, we presume that a realistic image exhibits consistent
properties, e.g., a person's body fat in a whole-body MRI, throughout the
depicted object or scene. Hence, we quantify global consistency by predicting
and comparing explicit attributes of images on patches using supervised trained
neural networks. Next, we adapt this strategy to an unlabeled setting by
measuring the similarity of implicit image features predicted by a
self-supervised trained network. Our results demonstrate that predicting
explicit attributes of synthetic images on patches can distinguish globally
consistent from inconsistent images. Implicit representations of images are
less sensitive to assess global consistency but are still serviceable when
labeled data is unavailable. Compared to established metrics, such as the FID,
our method can explicitly measure global consistency on a per-image basis,
enabling a dedicated analysis of the biological plausibility of single
synthetic images. | Daniel Scholz, Benedikt Wiestler, Daniel Rueckert, Martin J. Menten | 2023-08-01T09:29:39Z | http://arxiv.org/abs/2308.00402v1 | # Metrics to Quantify Global Consistency in Synthetic Medical Images
###### Abstract
Image synthesis is increasingly being adopted in medical image processing, for example for data augmentation or inter-modality image translation. In these critical applications, the generated images must fulfill a high standard of biological correctness. A particular requirement for these images is global consistency, i.e. an image being overall coherent and structured so that all parts of the image fit together in a realistic and meaningful way. Yet, established image quality metrics do not explicitly quantify this property of synthetic images. In this work, we introduce two metrics that can measure the global consistency of synthetic images on a per-image basis. To measure the global consistency, we presume that a realistic image exhibits consistent properties, e.g., a person's body fat in a whole-body MRI, throughout the depicted object or scene. Hence, we quantify global consistency by predicting and comparing explicit attributes of images on patches using supervised trained neural networks. Next, we adapt this strategy to an unlabeled setting by measuring the similarity of implicit image features predicted by a self-supervised trained network. Our results demonstrate that predicting explicit attributes of synthetic images on patches can distinguish globally consistent from inconsistent images. Implicit representations of images are less sensitive to assess global consistency but are still serviceable when labeled data is unavailable. Compared to established metrics, such as the FID, our method can explicitly measure global consistency on a per-image basis, enabling a dedicated analysis of the biological plausibility of single synthetic images.
Keywords:Generative Modeling Synthetic Images Image Quality Metrics Global Consistency
## 1 Introduction
With recent improvements in deep learning-based generative modeling [9, 25, 4], image synthesis is increasingly utilized in medical image processing. It has seen application in inter-modality transfer [18], counterfactual image generation [20],
anomaly detection [6], data augmentation [28], and synthetic dataset generation [22]. When using synthetic images in critical medical systems, it is vital to ensure the biological correctness of the images. One crucial aspect of image realism is its global consistency [5, 12, 31]. Global consistency refers to an image's overall coherence and structure so that all parts of the image fit together in a realistic and plausible way. While several others have researched methods to improve the global consistency of synthetic images [11, 16], these works do not quantitatively assess the global consistency of these images in a standardized fashion. This is because existing metrics, such as Inception Score [24], Frechet Inception Distance (FID) [8], and Precision and Recall [23, 15], only measure image quality in terms of fidelity and variety.
In this work, we introduce solutions to measure the global consistency of synthetic images. To this end, we make the following contributions.
* We propose an approach to quantify global consistency by determining attributes on different image regions. We call this method _explicit_ quantification of global consistency.
* Next, we adapt this approach to a setting in which explicit labels are not available. To this end, we utilize the cosine similarity between feature vectors of patches in the image as a global consistency measure. These _implicit_ features are predicted by neural networks trained in a self-supervised fashion.
* In extensive experiments, we compare our proposed metrics with FID, one of the most established image quality metrics, with regard to its ability to measure global consistency in synthetic images. We perform our experiments
Figure 1: We present a novel method to quantify global consistency in generated images. Most established image quality metrics, like FID [8], are not designed to measure the biological correctness of medical images. Conversely, our approach measures the global consistency of synthetic medical images, like whole-body MRIs, based on their explicit and implicit features.
on the challenging task of whole-body magnetic resonance image (MRI) synthesis, in which it is crucial that the various depicted body parts match.
## 2 Related Works
### Global Consistency in Image Synthesis
The notion of global consistency in image synthesis has been researched in computer vision. Multiple important works [12, 31] describe synthesizing complex images with multiple objects as challenging and lacking global coherence. Integrating the attention mechanism [30] into the GAN architecture [11, 5] facilitates generating more globally consistent images. To evaluate their adherence to the properties in the real data, Hudson _et al._[11] statistically compare property co-occurrences in the generated images, similar to [28]. The use of large auto-regressive models advances the generation of ultra-high-resolution images while maintaining global consistency [16]. They use a block-wise FID to assess the quality of individual blocks in the image, which only evaluates the realism of individual patches but does not measure the global consistency within a single image. In summary, none of these works have dedicated quantitative metrics for global consistency.
### Metrics Measuring Quality of Generated Images
Several metrics, such as Inception Score [24], Frechet Inception Distance (FID) [8], and Precision and Recall [15, 23], have been proposed in the literature to assess the quality of synthetic images. The most established metric, the FID [8], measures image quality and variation in a single value by comparing the distribution over features from sets of real and synthetic images. Multiple variants have been proposed in the literature to address the limitations of FID. These variants focus on overcoming the bias towards a large number of samples [1, 3], the lack of spatial features [29] or standardization of its calculation [21]. However, the global consistency remains, at most, only validated as part of general image fidelity.
Zhang _et al._[32] measure a learned perceptual image patch similarity (LPIPS) between patches of two separate images. While this metric is conceptually similar to ours, their work focuses on evaluating different kinds of representations and similarity measures between two images for their correspondence to human judgment. However, they do not assess global consistency within a single image. Sun _et al._[28] evaluate their hierarchical amortized GAN by quantifying the accuracy of clinical predictions on synthetic images. Their evaluation strategy only compares statistics over the clinical predictions between real and synthetic data but does not incorporate per-image analysis. In general, existing metrics do not explicitly address the quantification of global consistency.
### GANs for Whole-body MRI Synthesis
Only a few works have researched the challenging task of generating synthetic whole-body MRIs. Mensing _et al._[19] adapt a FastGAN [17] and a StyleGAN2
[10, 14] to generate whole-body MRIs. They primarily evaluate their generated images using the Frechet Inception Distance (FID) [8]. However, they do not focus on assessing global consistency of the synthetic images.
## 3 Method
We propose two novel metrics to measure the global consistency of synthetic images. We distinguish between _implicit_ and _explicit_ quantification of global consistency, which are described in the following (see Figure 2).
### Explicit Quantification
Our method for explicitly quantifying global consistency is based on the notion that biological properties should be consistent in different parts of a person's body. For example, a person's body mass index (BMI) should be similar when viewing the superior part of a whole-body MRI depicting the torso and the inferior part containing the legs. To assess its global consistency, we compare various biological attributes, such as age, body fat percentage, or BMI, in two parts of the synthetic images. While individual organs might age at different rates [26], our method assumes that the overall age of the superior part and inferior part of a person's body still contains consistent age-related information. In addition, the body fat mass correlates between the limbs and the trunk and can hence serve as a marker for consistency in a synthetic image [13]. We generate two views of the whole-body MRI by simply cropping the superior and inferior half
Figure 2: Two strategies to assess the global consistency of an image based on the feature representations of the superior and inferior half of the body. _Explicit_: Absolute error between an explicit attribute predicted from the feature representation using some regression head \(f\). _Implicit_: Cosine similarity \(S_{C}\) between the feature representations.
of the image. Other possible cropping modes include random cropping, cropping based on semantic regions, such as organs, and grid-structured cropping. The two views are each evaluated using dedicated _referee neural networks_. We train several neural networks in a supervised fashion to predict one of three different biological properties for either the superior or inferior image view.
By comparing the predicted attributes via the absolute error, we can obtain a proxy measure for the global consistency of a synthetic image. For a more holistic analysis, we simultaneously compare an average error of all biological attributes.
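A minimal sketch of the explicit score is given below (Python/PyTorch-style, written for illustration; the referee models, attribute names, and normalization ranges are placeholders rather than the exact networks and constants used in our experiments).

```python
import torch

def explicit_consistency_error(image, referees_sup, referees_inf, norm_ranges):
    """Explicit global-consistency score for a single image (lower = more consistent).

    image:        tensor of shape (1, C, H, W), one whole-body MRI slice
    referees_sup: dict attribute -> regression network trained on superior crops
    referees_inf: dict attribute -> regression network trained on inferior crops
    norm_ranges:  dict attribute -> (min, max) used to scale each error to [0, 1]
    """
    h = image.shape[-2]
    superior = image[..., : h // 2, :]       # top half of the body
    inferior = image[..., h // 2 :, :]       # bottom half of the body

    errors = []
    with torch.no_grad():
        for attr in referees_sup:
            pred_sup = referees_sup[attr](superior).item()
            pred_inf = referees_inf[attr](inferior).item()
            lo, hi = norm_ranges[attr]
            errors.append(abs(pred_sup - pred_inf) / (hi - lo))   # normalized absolute error

    # a globally consistent image yields similar attribute predictions on both halves
    return sum(errors) / len(errors)
```

For instance, `norm_ranges` could hold the attribute ranges observed in the training set, so that errors in age, BMI, and body fat percentage become comparable before averaging.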
### Implicit Quantification
Detailed annotations of the data are not always available, rendering supervised training of referee networks infeasible. Therefore, we propose the use of implicit features extracted via a network that has been trained via self-supervision as an alternative measure for global consistency.
As before, we crop two views from the synthetic image and extract one feature vector for each view by applying an encoder network. Here, the encoder network is trained using SimCLR [2], a self-supervised contrastive learning framework alleviating the need for labels during training. SimCLR is trained to return similar representations for two views of the same image and diverging representations for two views of different images. The similarity between the embedding of the two views is obtained by calculating their cosine similarity. To calculate a global consistency measure for a given image, we obtain the cosine similarity between the embeddings of the superior and inferior views.
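The implicit variant requires only a few lines once a self-supervised encoder is available; in the sketch below, `encoder` stands for a network trained with SimCLR on image crops, and the simple half-image cropping mirrors the explicit sketch above (both are illustrative simplifications).

```python
import torch
import torch.nn.functional as F

def implicit_consistency(image, encoder):
    """Cosine similarity between self-supervised embeddings of the superior and
    inferior halves of one image (higher = more globally consistent)."""
    h = image.shape[-2]
    superior = image[..., : h // 2, :]
    inferior = image[..., h // 2 :, :]

    with torch.no_grad():
        z_sup = encoder(superior)            # feature vector of shape (1, d)
        z_inf = encoder(inferior)

    return F.cosine_similarity(z_sup, z_inf, dim=-1).item()
```

Unlike the explicit score, no attribute labels are needed here; the encoder only has to produce embeddings in which views of the same subject are similar.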
### Experimental Setup
We conduct experiments using 44205 whole-body MRIs from the UK Biobank population study [27], which we split into 36013 training images, 4096 validation images, and 4096 test images. We extract the slice along the coronal plane in the intensity center of mass of the 3d volumes and normalize them to the range of \([0,1]\). We train one ResNet50 [7] network per attribute on the training set as a referee network for the explicit quantification experiments. We also fit a ResNet50 using SimCLR [2] to our training images to extract features for the implicit quantification strategy.
The validation images are used to evaluate the accuracy of the referee networks for the explicit quantification strategy. We find that the networks achieve good performance on the attribute estimation. The mean absolute error (MAE) for age estimation is 3.9 years \(\pm\) 2.98 years on the superior half and 4.4 years \(\pm\) 3.35 years on the inferior half. Similarly, we achieve an MAE of 0.97 \(\pm\) 0.83 on the superior and 1.11 \(\pm\) 0.93 on the inferior half for BMI estimation and 2.10% \(\pm\) 1.70% on the superior and 2.36% \(\pm\) 1.89% on the inferior half for body fat percentage prediction. Ultimately, we compare the variation in biological properties of the explicit metric, the cosine similarity of the implicit metric, and the FID on all test set images.
## 4 Results
### Distinguishing Consistent From Inconsistent Images
Initially, we analyze the two proposed metrics on a dataset of consistent and inconsistent images. We construct the inconsistent images by stitching the superior part and inferior part of two different whole-body MRIs from the test set together (see Figure 3). The sharp edge at the seam of the inconsistent images is a very distinctive feature. In order to avoid the metrics being influenced by it, we remove the central 10% of both the consistent and inconsistent images.
We compare our two metrics with the FID [8], which is calculated using two distinct sets of images. One half of the consistent images serves as the reference dataset for calculating the FID of either the other half of the consistent images or the inconsistent images, respectively.
Our metrics differentiate well between consistent and inconsistent images (see Table 1, top). For the explicit strategy, we report the mean over the superior-inferior errors of age, BMI, and body fat percentage prediction after normalizing them to a range between 0 and 1. While the FID is also influenced by global consistency, our metrics distinguish more clearly between consistent and inconsistent images.
We present a detailed analysis of the explicit attribute errors in Figure 4. The experiment shows that body fat percentage and BMI are more distinctive biological attributes than age.
Additionally, we investigate the correlation between our implicit and explicit metrics to verify the utility of the implicit strategy in the absence of labels (see Figure 5). These findings suggest the potential utility of the implicit quantification strategy as a weaker alternative to explicit quantification.
### Global Consistency in Synthetic Whole-Body MRIs
We conduct an exemplary assessment of global consistency on 1000 synthetic images using our implicit and explicit metrics and the FID. The synthetic whole
Figure 3: A comparison of an original whole-body MRI (left) with the modified versions used in our experiments, i.e, consistent (middle) and inconsistent (right) superior-inferior combinations with the central 10% removed.
body MRIs were generated using a StyleGAN2 [14] that we trained on images of the UK Biobank [27]. The results suggest an overall high global consistency and a low error in biological attributes in the synthetic images (Table 1, bottom). The synthetic images also show high fidelity to the real images, as indicated by an FID comparable to that of the real data.
Our metrics differ only slightly between real and synthetic in the per-attribute analysis (see Figure 4, bottom). The high values in our metrics indicate a high degree of global consistency in the synthetic images.
## 5 Discussion and Conclusion
In this work, we have proposed two strategies to quantify the global consistency in synthetic medical images. We found that global consistency influences established metrics for synthetic image quality, such as the FID, yet the differences between consistent and inconsistent images are more pronounced in our novel metrics. Our first metric explicitly quantifies the error between predicted biological attributes in the superior and inferior half of a single whole-body MR image. However, this approach relies on labels to train neural networks that determine the biological attributes. As a solution, we also presented a second metric based on implicit representations that can be obtained via a self-supervised trained network. Both strategies have proven suitable for assessing synthetic medical images in terms of their biological plausibility.
We envision that our work will complement the existing landscape of image quality metrics - especially in medical imaging - and that it will be used to develop and benchmark generative models that synthesize globally consistent medical images. An extension of our work to the 3D domain is theoretically simple but may be practically challenging due to the additional complexity when training SimCLR for the implicit and the referee networks for the explicit metric. Moreover, global consistency analysis for other image modalities and use cases can be enabled through retraining the feature extraction networks on domain specific data with corresponding augmentations. Ultimately, we believe our work
\begin{table}
\begin{tabular}{l c c c} \hline \hline Dataset & FID (\(\downarrow\)) & Explicit (Ours, \(\downarrow\)) & Implicit (Ours, \(\uparrow\)) \\ \hline Consistent & **14.10** & **0.09 \(\pm\) 0.05** & **0.59 \(\pm\) 0.12** \\ Inconsistent & 16.10 & 0.24 \(\pm\) 0.11 & 0.37 \(\pm\) 0.17 \\ \hline Real & **14.10** & 0.09 \(\pm\) 0.050 & **0.59 \(\pm\) 0.12** \\ Synthetic & 17.13 & 0.09 \(\pm\) 0.049 & 0.55 \(\pm\) 0.14 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of our explicit global consistency metrics, implicit global consistency metric, and FID in two different experiments. In the first one, we calculate all metrics on constructed consistent and inconsistent images. In the second experiment, the metrics are compared for real and synthetic datasets, akin to the envisioned use case of our proposed method.
Figure 4: The per-attribute results of the explicit absolute errors between the superior and inferior part of the consistent and inconsistent images (top) and real and synthetic images (bottom). The rightmost column: an average over the 0-1-normalized per-attribute errors.
Figure 5: The correlations between our implicit and explicit metrics verifying the utility of the implicit strategy in the absence of labels on real images.
can potentially increase the trust in using synthetic data for critical medical applications in the future.
#### 5.0.1 Acknowledgments
This research has been conducted using the UK Biobank Resource under Application Number 87802. This work is funded by the Munich Center for Machine Learning.
|
2301.03400 | GPDs in asymmetric frames | It is often taken for granted that Generalized Parton Distributions (GPDs)
are defined in the "symmetric" frame, where the transferred momentum is
symmetrically distributed between the incoming/outgoing hadrons. However, such
frames pose computational challenges for the lattice QCD practitioners. In
these proceedings, we lay the foundation for lattice QCD calculations of GPDs
in "asymmetric" frames, where the transferred momentum is not symmetrically
distributed between the incoming/outgoing hadrons. The novelty of our work
relies on the parameterization of the matrix elements in terms of
Lorentz-invariant amplitudes, which not only helps in establishing relations
between the said frames but also helps in isolating higher-twist
contaminations. As an example, we focus on the unpolarized GPDs for spin-1/2
particles. | Shohini Bhattacharya, Krzysztof Cichy, Martha Constantinou, Jack Dodson, Xiang Gao, Andreas Metz, Swagato Mukherjee, Aurora Scapellato, Fernanda Steffens, Yong Zhao | 2023-01-09T14:55:23Z | http://arxiv.org/abs/2301.03400v1 | # GPDs in asymmetric frames
###### Abstract:
It is often taken for granted that Generalized Parton Distributions (GPDs) are defined in the "symmetric" frame, where the transferred momentum is symmetrically distributed between the incoming/outgoing hadrons. However, such frames pose computational challenges for the lattice QCD practitioners. In these proceedings, we lay the foundation for lattice QCD calculations of GPDs in "asymmetric" frames, where the transferred momentum is not symmetrically distributed between the incoming/outgoing hadrons. The novelty of our work relies on the parameterization of the matrix elements in terms of Lorentz-invariant amplitudes, which not only helps in establishing relations between the said frames but also helps in isolating higher-twist contaminations. As an example, we focus on the unpolarized GPDs for spin-1/2 particles.
## 1 Introduction
Generalized Parton Distributions (GPDs) are the 3D generalizations of the collinear Parton Distribution Functions (PDFs) [1, 2]. There are several motivations to study GPDs:
* For \(\xi=0\) the Fourier transforms of the GPDs are related to the impact-parameter distributions which provide information about the three-dimensional distribution of partons -- (one-dimensional) longitudinal momentum distribution; (two-dimensional) transverse spatial distribution, see for example Ref. [3].
* Twist-2 GPDs are related to the total angular momentum of partons [1].
* One should look for other ways to access GPDs because of the challenges involved in their extraction through the processes of Deep Virtual Compton Scattering (DVCS) [2] and meson production [4]. These challenges are caused by the sensitivity of differential cross-sections to only \(x\)-integrals of GPDs, and not to the GPDs themselves [1, 2]. Therefore, it is desirable to extract the \(x\)-dependence of the GPDs from first principles within Lattice QCD. However, for a very long time this was not possible because of the time dependence of these quantities. As a result, lattice calculations were limited to the lowest Mellin moments of the GPDs, see Ref. [5]. In 2013, there was a path-breaking proposal by X. Ji to calculate instead auxiliary quantities called "quasi-GPDs" [6, 7, 8]. This approach relies on the extraction of matrix elements for boosted hadrons involving spatially-separated fields. Ever since this proposal, enormous progress has taken place; see the reviews [9, 10, 11]. In fact, Ref. [12] provides the first-ever lattice-QCD results for the unpolarized and helicity GPDs of the nucleon from the quasi-distribution approach. Lattice QCD calculations have the potential not only to provide insight into the experimentally-inaccessible features of GPDs, but also to help in extracting the "full" GPDs from the existing experimental data.
## 2 Formalisms to calculate GPDs in asymmetric frames
### Frames: Symmetric and asymmetric
The most widely used frame of reference to calculate GPDs is the symmetric frame. For this frame, the momentum transfer is symmetrically distributed between the incoming (\(p_{i}\)) and the outgoing hadrons (\(p_{f}\)) (see left plot of Fig. 1). However, one can also think of a frame where the
Figure 1: Graphical representation of the two frames employed in this work. Left plot: Symmetric frame. Right plot: Asymmetric frame.
momentum transfer is not equally shared between the incoming and outgoing hadrons, but is rather exclusively applied to the incoming hadron (see right plot of Fig. 1). Such a frame is known as an asymmetric frame.
Lattice calculations of GPDs have primarily been confined to symmetric frames. However, such frames pose serious computational challenges because they require a separate calculation for each value of the momentum transfer (\(\Delta\)), resulting in increased computational costs. So the question that we strive to address in this work is: can we lay down a formalism to systematically perform lattice calculations of GPDs in asymmetric frames (which is expected to be computationally less expensive)? In this work, we argue that there are two approaches to solving this question. In the first approach, we will show that it is possible to relate the two frames via an appropriate Lorentz transformation. In the second approach, we will propose a Lorentz-covariant decomposition of the lattice matrix elements in terms of Lorentz-invariant (frame-independent) amplitudes. These amplitudes will then be used to make connections between the two frames. As a byproduct, we will show that this approach helps in identifying higher-twist contaminations which may be present in quasi-GPDs at finite values of momentum.
### Lorentz transformation approach
In this section, we explain the Lorentz transformation (LT) approach. First, it is straightforward to realize that a LT along the \(z\)-direction is not optimal for lattice calculations because this requires a spatial operator distance (say \(z=(0,0_{\perp},z^{3}\neq 0)\)) to pick up a temporal component (that is \(z\xrightarrow{\rm LT}(z^{0}\neq 0,0_{\perp},z^{3})\)). However, a LT applied to any direction transverse to the \(z\)-axis does not change the spatial nature of operator distances. This transformation is called the "transverse boost". We explain this by considering a transverse boost in the \(x\)-direction and for the simplest case of zero skewness. The logic can be generalized for any general transverse boost and for arbitrary values of skewness.
We begin by relating the incoming state in the two frames, \(p_{i}^{s}=(E_{i}^{s},-\Delta^{1,s}/2,0,P^{3})\) and \(p_{i}^{a}=(E_{i}^{a},-\Delta^{1,a},0,P^{3})\). LT provides \(p^{s}=\Lambda_{\rm LT}\,p^{a}\),
\[\begin{pmatrix}E_{i}^{s}\\ p_{i}^{1,s}\\ p_{i}^{2,s}\\ p_{i}^{3,s}\end{pmatrix}=\begin{pmatrix}\gamma&-\gamma\beta&0&0\\ -\gamma\beta&\gamma&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\times\begin{pmatrix}E_{i}^{a}\\ -\Delta^{1,a}\\ 0\\ p^{3}\end{pmatrix}. \tag{1}\]
This gives,
\[E_{i}^{s}=\gamma(E_{i}^{a}+\beta\Delta^{1,a})\, \tag{2}\]
and,
\[p_{i}^{1,s}=-\gamma(\beta E_{i}^{a}+\Delta^{1,a})\quad\rightarrow\quad\Delta^ {1,s}=2\gamma(\beta E_{i}^{a}+\Delta^{1,a}). \tag{3}\]
Similarly, the outgoing state in the two frames, \(p_{f}^{s}=(E_{f}^{s},\Delta^{1,s}/2,0,P^{3})\) and \(p_{f}^{a}=(E_{f}^{a},0,0,P^{3})\) can also be related. (Keep in mind that the energies of the incoming and outgoing states are different in the asymmetric frame.) We then find,
\[E_{i}^{s}=\gamma E_{f}^{a}\, \tag{4}\]
and,
\[p_{f}^{1,s}=-\gamma\beta E_{f}^{a}\quad\to\quad\Delta^{1,s}=-2\gamma\beta E_{f}^{ a}. \tag{5}\]
From Eqs. (2) and (4), we find,
\[\beta=-\left(\frac{E_{i}^{a}-E_{f}^{a}}{\Delta^{1,a}}\right). \tag{6}\]
From Eqs. (3) and (5), we find,
\[\beta=-\frac{\Delta^{1,a}}{E_{i}^{a}+E_{f}^{a}}. \tag{7}\]
Then, Eqs. (6) and (7) imply,
\[\Delta^{1,a}=\sqrt{(E_{i}^{a})^{2}-(E_{f}^{a})^{2}}\,. \tag{8}\]
Hence, \(\beta\) can be written as,
\[\beta=-\sqrt{\frac{E_{i}^{a}-E_{f}^{a}}{E_{i}^{a}+E_{f}^{a}}}<0. \tag{9}\]
This implies \(\Delta^{0,a}<0\), and
\[\gamma=\frac{1}{\sqrt{1-\beta^{2}}}=\sqrt{\frac{E_{i}^{a}+E_{f}^{a}}{2E_{f}^{ a}}}. \tag{10}\]
Therefore, by using the expressions for \((\beta,\gamma)\), we can write down uniquely the symmetric frame variables (\(E_{i}^{s},\Delta^{1,s}\)) in terms of the asymmetric frame variables (\(E_{i}^{a},E_{f}^{a},\Delta^{1,a}\)): The energy should be,
\[E_{i}^{s}=\gamma E_{f}^{a}=\sqrt{\frac{E_{f}^{a}(E_{i}^{a}+E_{f}^{a})}{2}}\, \tag{11}\]
and the transverse-momentum transfer,
\[\Delta^{1,s} =-2\gamma\beta E_{f}^{a}\,\] \[\text{or,} \Delta^{1,s} =2\sqrt{\frac{E_{f}^{a}(E_{i}^{a}-E_{f}^{a})}{2}}=2\sqrt{\frac{E_ {f}^{a}}{2(E_{i}^{a}+E_{f}^{a})}}\,\Delta^{1,a}. \tag{12}\]
We repeat that the above method can be generalized for \(\widetilde{\Delta}_{\perp}=(\Delta^{1},\Delta^{2})\) and for arbitrary values of skewness.
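As a numerical sanity check of these relations, the short NumPy sketch below chooses illustrative asymmetric-frame kinematics, builds \((\beta,\gamma)\) from Eqs. (9)-(10), boosts the four-momenta with the matrix of Eq. (1), and verifies that the resulting symmetric-frame variables satisfy Eqs. (11)-(12) and that the Lorentz scalars \(z\cdot P\), \(z\cdot\Delta\) and \(\Delta^{2}\) agree between the two frames. The mass, momentum, and \(\Delta^{1,a}\) values are arbitrary choices made only for this check.

```python
import numpy as np

m, P3 = 0.938, 1.25        # illustrative mass and boost momentum (GeV)
D1a = 0.4                  # transverse momentum transfer in the asymmetric frame (GeV)

# Asymmetric-frame momenta: p_f^a = (E_f, 0, 0, P3), p_i^a = (E_i, -Delta^{1,a}, 0, P3)
Ef = np.sqrt(m**2 + P3**2)
Ei = np.sqrt(m**2 + D1a**2 + P3**2)
p_i_a = np.array([Ei, -D1a, 0.0, P3])
p_f_a = np.array([Ef, 0.0, 0.0, P3])

# Transverse-boost parameters, Eqs. (9)-(10), and the boost matrix of Eq. (1)
beta = -np.sqrt((Ei - Ef) / (Ei + Ef))
gamma = np.sqrt((Ei + Ef) / (2 * Ef))
boost = np.array([[gamma, -gamma * beta, 0, 0],
                  [-gamma * beta, gamma, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])

p_i_s = boost @ p_i_a      # should equal (E^s, -Delta^{1,s}/2, 0, P3)
p_f_s = boost @ p_f_a      # should equal (E^s, +Delta^{1,s}/2, 0, P3)

E_s = np.sqrt(Ef * (Ei + Ef) / 2)            # Eq. (11)
D1s = 2 * np.sqrt(Ef * (Ei - Ef) / 2)        # Eq. (12)
print(p_i_s, p_f_s, E_s, D1s)

# Frame independence of the Lorentz scalars entering the amplitudes
g = np.diag([1.0, -1.0, -1.0, -1.0])
z = np.array([0.0, 0.0, 0.0, 1.0])           # spatial operator separation z = (0, 0_perp, z^3)
P_a, D_a = (p_i_a + p_f_a) / 2, p_f_a - p_i_a
P_s, D_s = (p_i_s + p_f_s) / 2, p_f_s - p_i_s
print(z @ g @ P_a, z @ g @ P_s)              # z.P is the same in both frames
print(z @ g @ D_a, z @ g @ D_s)              # z.Delta vanishes in both frames (zero skewness)
print(D_a @ g @ D_a, D_s @ g @ D_s)          # Delta^2 is the same in both frames
```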
Now that we have sketched the idea of how to relate the kinematical variables between the two frames, we proceed to understand how the matrix elements defining quasi-GPDs transform between
the two frames. For this purpose, we focus on spin-0 particles such as the pion. (The method can be generalized for spin-1/2 particles.) The (unpolarized) pion GPD is defined as,
\[F^{\mu}(z,P,\Delta)=\langle p_{f}\,|\bar{q}(-\tfrac{z}{2})\gamma^{\mu}\,\mathcal{ W}(-\tfrac{z}{2},\tfrac{z}{2})q(\tfrac{z}{2})|p_{i}\rangle\,. \tag{13}\]
Here, \(\mathcal{W}\) is a straight Wilson line required to make the correlator gauge invariant. Historically, (unpolarized) quasi-GPDs have been defined through matrix elements of the operator \(\gamma^{0}\), see for instance Refs. [12, 13]. By applying the transverse boost Eq. (1), we find that the matrix element \(\langle..\gamma^{0}..\rangle\) in the symmetric frame can be expressed in terms of matrix elements of different operators \(\langle..(\gamma^{0}+\gamma^{1})..\rangle\) in the asymmetric frame,
\[\langle p_{f}\,|\bar{q}(-\tfrac{z}{2})\gamma^{0}\,\mathcal{W}(- \tfrac{z^{3}}{2},\tfrac{z^{3}}{2})\,q(\tfrac{z}{2})|p_{i}\rangle^{s} =\gamma\langle p_{f}\,|\bar{q}(-\tfrac{z}{2})\gamma^{0}\, \mathcal{W}(-\tfrac{z^{3}}{2},\tfrac{z^{3}}{2})\,q(\tfrac{z}{2})|p_{i}\rangle ^{a}\] \[-\gamma\beta\langle p_{f}\,|\bar{q}(-\tfrac{z}{2})\gamma^{1}\, \mathcal{W}(-\tfrac{z^{3}}{2},\tfrac{z^{3}}{2})\,q(\tfrac{z}{2})|p_{i}\rangle ^{a}. \tag{14}\]
This equation simply reflects how the 0th component of a 4-vector changes under the Lorentz transformation Eq. (1). Therefore, this implies that a transverse boost that fixes \((\beta,\gamma)\) (Eqs. (9) and (10)) allows for an exact calculation of quasi-GPDs in the symmetric frame through matrix elements of the asymmetric frame. However, Eq. (14) also shows that a quasi-GPD defined through the operator \(\gamma^{0}\) is not Lorentz invariant. In the limit of a large momentum, we recover,
\[\lim_{P^{3}\to\infty}\langle..\gamma^{0}..\rangle^{s}\;\approx\;\langle.. \gamma^{0}..\rangle^{a}+\mathcal{O}\left(\frac{1}{P^{3}}\right)\langle.. \gamma^{1}..\rangle^{a}\;\to\;\langle..\gamma^{0}..\rangle^{a}\,, \tag{15}\]
which means that the contribution from the matrix element \(\langle..\gamma^{1}..\rangle\) may be viewed as a power correction at finite values of momentum \(P^{3}\).
### Amplitude approach: Spin-1/2 particles
In this section, we explain the amplitude approach through the example of spin-1/2 particles, such as the proton. (We refer to Ref. [14] for details on spin-0 particles.) As a first step, we build a Lorentz-covariant decomposition of the vector matrix element in terms of the available vectors \((P^{\mu},z^{\mu},\Delta^{\mu})\). By considering constraints from parity, we find that the general structure of the vector matrix element involves eight linearly-independent Dirac structures multiplied by eight Lorentz-invariant (frame-independent) amplitudes,
\[F^{\mu}(z,P,\Delta) =\bar{u}(p_{f}\,,\lambda^{\prime})\left[\frac{P^{\mu}}{m}A_{1}+mz ^{\mu}A_{2}+\frac{\Delta^{\mu}}{m}A_{3}+im\sigma^{\mu z}A_{4}+\frac{i\sigma^{ \mu\Delta}}{m}A_{5}\right.\] \[+\left.\frac{P^{\mu}i\sigma^{z\Delta}}{m}A_{6}+mz^{\mu}i\sigma^{ z\Delta}A_{7}+\frac{\Delta^{\mu}i\sigma^{z\Delta}}{m}A_{8}\right]u(p_{i}, \lambda)\,. \tag{16}\]
Here \(\sigma^{\mu\nu}\equiv\tfrac{i}{2}(\gamma^{\mu}\gamma^{\nu}-\gamma^{\nu}\gamma ^{\mu})\), \(\sigma^{\mu z}\equiv\sigma^{\mu\rho}z_{\rho}\), \(\sigma^{\mu\Delta}\equiv\sigma^{\mu\rho}\Delta_{\rho}\), \(\sigma^{z\Delta}\equiv\sigma^{\rho\tau}z_{\rho}\Delta_{\tau}\), \(z\equiv(z^{0}=0,z_{\perp}=0_{\perp},z^{3}\neq 0)\). (For a derivation of Eq. (16), we refer to Ref. [15]. See also Ref. [16] where the vector matrix element has been parameterized in the momentum space for a straight Wilson line.) For brevity, we use the compact notation \(A_{i}\equiv A_{i}(z\cdot P,z\cdot\Delta,\Delta^{2},z^{2})\), with \(A_{i}\)'s being the Lorentz-invariant amplitudes whose arguments are functions of Lorentz scalars1.
For spin-\(1/2\) particles, the vector matrix element can be parameterized in terms of two light-cone GPDs \(H\) and \(E\)[17],
\[F^{+}(z,P^{s/a},\Delta^{s/a})=\bar{u}^{s/a}(p_{f}^{s/a},\lambda^{ \prime})\Bigg{[}\gamma^{+}H(z,P^{s/a},\Delta^{s/a})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{i\sigma^{+ \mu}\Delta_{\mu}^{s/a}}{2m}E(z,P^{s/a},\Delta^{s/a})\Bigg{]}u^{s/a}(p_{i}^{s/a },\lambda)\,. \tag{17}\]
By using \(\mu=+\) in Eq. (16), followed by a subsequent change of basis, it is possible to map the \(A_{i}\)'s onto the \(H\) and \(E\) GPDs in Eq. (17). The results are,
\[H(z,P^{s/a},\Delta^{s/a}) =A_{1}+\frac{\Delta^{+,s/a}}{P^{+,s/a}}A_{3}\,, \tag{18}\] \[E(z,P^{s/a},\Delta^{s/a}) =-A_{1}-\frac{\Delta^{+,s/a}}{P^{+,s/a}}A_{3}+2A_{5}+2P^{+,s/a}z^ {-}A_{6}+2\Delta^{+,s/a}z^{-}A_{8}\,. \tag{19}\]
Keep in mind that the arguments of the \(A_{i}\)'s for light-cone GPDs have no dependence on \(z^{2}\). Also, \(z^{\mu}=(0,z^{-},0_{\perp})\) and \(\Delta^{+}/P^{+}=z\cdot\Delta/z\cdot P\), etc. Thus, it is possible to write the above expressions in a Lorentz invariant way as,
\[H(z\cdot P^{s/a},z\cdot\Delta^{s/a},(\Delta^{s/a})^{2}) =A_{1}+\frac{\Delta^{s/a}\cdot z}{p^{s/a}\cdot z}A_{3}\,, \tag{20}\] \[E(z\cdot P^{s/a},z\cdot\Delta^{s/a},(\Delta^{s/a})^{2}) =-A_{1}-\frac{\Delta^{s/a}\cdot z}{p^{s/a}\cdot z}A_{3}+2A_{5}+2P ^{s/a}\cdot zA_{6}+2\Delta^{s/a}\cdot zA_{8}\,. \tag{21}\]
This means the light-cone GPDs are frame-independent as long as the Lorentz scalars \((z\cdot P^{s/a},z\cdot\Delta^{s/a},(\Delta^{s/a})^{2})\) are the same in the two frames.
Next, we turn to the quasi-GPDs \(\mathcal{H}\) and \(\mathcal{E}\), which historically have been defined in terms of matrix elements of \(\gamma^{0}\) operator as [18, 19],
\[F^{0}(z,P^{s/a},\Delta^{s/a}) =\langle p_{f}^{s/a},\lambda^{\prime}|\bar{q}(-\tfrac{z}{2}) \gamma^{0}q(\tfrac{z}{2})|p_{i}^{s/a},\lambda\rangle\] \[=\bar{u}^{s/a}(p_{f}^{s/a},\lambda^{\prime})\Bigg{[}\gamma^{0} \mathcal{H}_{0}^{s/a}(z,P^{s/a},\Delta^{s/a})\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\frac{i\sigma^{0\mu}\Delta_ {\mu}^{s/a}}{2m}\mathcal{E}_{0}^{s/a}(z,P^{s/a},\Delta^{s/a})\Bigg{]}u^{s/a}( p_{i}^{s/a},\lambda)\,. \tag{22}\]
If we use \(\mu=0\) in Eq. (16), then after performing a change of basis it is possible to map the \(A_{i}\)'s onto the quasi-GPDs in Eq. (22). The relations in the symmetric frame read,
\[\mathcal{H}_{0}^{s}(z,P^{s},\Delta^{s})=A_{1}+\frac{\Delta^{0,s}}{P^{0,s}}A_{3}-\frac{m^{2}\Delta^{0,s}z^{3}}{2P^{0,s}P^{3,s}}A_{4}+\Bigg{[}\frac{(\Delta^{0,s})^{2}z^{3}}{2P^{3,s}}-\frac{\Delta^{0,s}\Delta^{3,s}z^{3}P^{0,s}}{2(P^{3,s})^{2}}-\frac{z^{3}(\Delta^{s}_{\perp})^{2}}{2P^{3,s}}\Bigg{]}A_{6}\] \[+\Bigg{[}\frac{(\Delta^{0,s})^{3}z^{3}}{2P^{0,s}P^{3,s}}-\frac{(\Delta^{0,s})^{2}\Delta^{3,s}z^{3}}{2(P^{3,s})^{2}}-\frac{\Delta^{0,s}z^{3}(\Delta^{s}_{\perp})^{2}}{2P^{0,s}P^{3,s}}\Bigg{]}A_{8}\,, \tag{23}\] \[\mathcal{E}_{0}^{s}(z,P^{s},\Delta^{s})=-A_{1}-\frac{\Delta^{0,s}}{P^{0,s}}A_{3}+\frac{m^{2}\Delta^{0,s}z^{3}}{2P^{0,s}P^{3,s}}A_{4}+2A_{5}+\Bigg{[}-\frac{(\Delta^{0,s})^{2}z^{3}}{2P^{3,s}}+\frac{P^{0,s}\Delta^{0,s}\Delta^{3,s}z^{3}}{2(P^{3,s})^{2}}+\frac{z^{3}(\Delta^{s}_{\perp})^{2}}{2P^{3,s}}\] \[-\frac{2z^{3}(P^{0,s})^{2}}{P^{3,s}}\Bigg{]}A_{6}+\Bigg{[}-\frac{(\Delta^{0,s})^{3}z^{3}}{2P^{0,s}P^{3,s}}+\frac{(\Delta^{0,s})^{2}\Delta^{3,s}z^{3}}{2(P^{3,s})^{2}}+\frac{\Delta^{0,s}z^{3}(\Delta^{s}_{\perp})^{2}}{2P^{0,s}P^{3,s}}-\frac{2z^{3}P^{0,s}\Delta^{0,s}}{P^{3,s}}\Bigg{]}A_{8}\,. \tag{24}\]
On the other hand, the relations in the asymmetric frame read,
\[\mathcal{H}_{0}^{a}(z,P^{a},\Delta^{a})=A_{1}+\frac{\Delta^{0,a}}{P^ {0,a}}A_{3}-\left[\frac{m^{2}\Delta^{0,a}z^{3}}{2P^{0,a}P^{3,a}}-\frac{1}{(1+ \frac{\Delta^{3,a}}{2P^{0,a}})}\frac{m^{2}\Delta^{0,a}\Delta^{3,a}z^{3}}{4P^{0,a}(P^{3,a})^{2}}\right]A_{4}\] \[+\left[\frac{(\Delta^{0,a})^{2}z^{3}}{2P^{3,a}}-\frac{1}{(1+\frac {\Delta^{3,a}}{2P^{3,a}})}\frac{(\Delta^{0,a})^{2}\Delta^{3,a}z^{3}}{4(P^{3,a} )^{2}}-\frac{1}{(1+\frac{\Delta^{3,a}}{2P^{3,a}})}\frac{P^{0,a}\Delta^{0,a} \Delta^{3,a}z^{3}}{2(P^{3,a})^{2}}-\frac{z^{3}(\Delta_{\perp}^{a})^{2}}{2P^{3,a}}\right]A_{6}\] \[+\left[\frac{(\Delta^{0,a})^{3}z^{3}}{2P^{0,a}P^{3,a}}-\frac{1}{( 1+\frac{\Delta^{3,a}}{2P^{3,a}})}\frac{(\Delta^{0,a})^{3}\Delta^{3,a}z^{3}}{4 P^{0,a}(P^{3,a})^{2}}-\frac{1}{(1+\frac{\Delta^{3,a}}{2P^{3,a}})}\frac{( \Delta^{0,a})^{2}\Delta^{3,a}z^{3}}{2(P^{3,a})^{2}}-\frac{z^{3}(\Delta_{\perp }^{a})^{2}\Delta^{0,a}}{2P^{0,a}P^{3,a}}\right]A_{8}\,, \tag{25}\]
\[\mathcal{E}_{0}^{a}(z,P^{a},\Delta^{a})=-A_{1}-\frac{\Delta^{0,a} }{P^{0,a}}A_{3}-\left[-\frac{m^{2}\Delta^{0,a}z^{3}}{2P^{0,a}P^{3,a}}-\frac{1} {(1+\frac{\Delta^{3,a}}{2P^{3,a}})}\left(\frac{m^{2}z^{3}}{P^{3,a}}-\frac{m^{ 2}\Delta^{0,a}\Delta^{3,a}z^{3}}{4P^{0,a}(P^{3,a})^{2}}\right)\right]A_{4}+2A_ {5}\] \[+\left[-\frac{(\Delta^{0,a})^{2}z^{3}}{2P^{3,a}}-\frac{1}{(1+ \frac{\Delta^{3,a}}{2P^{3,a}})}\left(\frac{P^{0,a}\Delta^{0,a}z^{3}}{P^{3,a}} -\frac{(\Delta^{0,a})^{2}\Delta^{3,a}z^{3}}{4(P^{3,a})^{2}}\right)-\frac{1}{(1+ \frac{\Delta^{3,a}}{2P^{3,a}})}\left(\frac{2z^{3}(P^{0,a})^{2}}{P^{3,a}}\right.\] \[-\frac{P^{0,a}\Delta^{0,a}\Delta^{3,a}z^{3}}{2(P^{3,a})^{2}} \right)+\frac{z^{3}(\Delta_{\perp}^{a})^{2}}{2P^{3,a}}\left]A_{6}+\left[- \frac{(\Delta^{0,a})^{3}z^{3}}{2P^{0,a}P^{3,a}}-\frac{1}{(1+\frac{\Delta^{3,a }}{2P^{3,a}})}\left(\frac{(\Delta^{0,a})^{2}z^{3}}{P^{3,a}}-\frac{(\Delta^{0,a })^{3}\Delta^{3,a}z^{3}}{4\overline{P}^{0,a}(P^{3,a})^{2}}\right)\right.\] \[-\frac{1}{(1+\frac{\Delta^{3,a}}{2P^{3,a}})}\left(\frac{2z^{3}P^ {0,a}\Delta^{0,a}}{P^{3,a}}-\frac{(\Delta^{0,a})^{2}\Delta^{3,a}z^{3}}{2(P^{3,a})^{2}}\right)+\frac{z^{3}(\Delta_{\perp}^{a})^{2}\Delta^{0,a}}{2P^{0,a}P^{3,a}}\left]A_{8}\,. \tag{26}\]
However, one can think of other definitions of quasi-GPDs. For this purpose, we recall the position-space matching relation between, for instance, light-cone GPD \(H\) and quasi-GPD \(\mathcal{H}\)[13]:
\[\mathcal{H}\big{(}z\cdot P,-2\xi(z\cdot P),\Delta^{2},z^{2},\mu^{2}\big{)}= \int_{-1}^{1}du\ \bar{C}\ (u,z\cdot P,\xi,z^{2},\mu^{2})\ H\big{(}u(z\cdot P),-2u\xi(z\cdot P),\Delta^{2},\mu^{2}\big{)}\,. \tag{27}\]
Here, \(\bar{C}\) is the perturbatively-calculable matching coefficient [13] and \(\mu\) is the renormalization scale in the \(\overline{\rm MS}\) scheme. At leading order in \(\alpha_{s}\), the above formula indicates that \(\mathcal{H}\) collapses to \(H\) in the light-cone limit \(z^{2}\to 0\),
\[\lim_{z^{2}\to 0}\mathcal{H}(z\cdot P,z\cdot\Delta,\Delta^{2},z^{2})=H(z \cdot P,z\cdot\Delta,\Delta^{2},0)+\mathcal{O}(\alpha_{s}). \tag{28}\]
Therefore, a natural way to define the quasi-GPDs \(\mathcal{H}\) and \(\mathcal{E}\) is through a Lorentz-invariant generalization of the light-cone definitions in Eqs. (20) and (21) to \(z^{2}\neq 0\), i.e.,
\[\mathcal{H}(z\cdot P^{s/a},z\cdot\Delta^{s/a},(\Delta^{s/a})^{2},z^{2})=A_{1}+\frac{\Delta^{s/a}\cdot z}{P^{s/a}\cdot z}A_{3}\,, \tag{29}\]
\[\mathcal{E}(z\cdot P^{s/a},z\cdot\Delta^{s/a},(\Delta^{s/a})^{2},z^{2})=-A_{1}-\frac{\Delta^{s/a}\cdot z}{P^{s/a}\cdot z}A_{3}+2A_{5}+2P^{s/a}\cdot z\,A_{6}+2\Delta^{s/a}\cdot z\,A_{8}\,, \tag{30}\]
where now the arguments of the \(A_{i}\)'s have a non-zero dependence on \(z^{2}\). We expect the definitions in Eqs. (29-30) to have two advantages: First, these definitions may converge faster to the light-cone
GPDs because of the similarities in their functional forms with their (respective) light-cone GPDs. (Such a statement is inspired by Ref. [20], where similar arguments were made for the quasi-PDFs. See also the next paragraph for explicit explanations.) Second, these definitions differ from their light-cone GPDs by frame-independent power corrections, in contrast with the historic definitions, which are frame-dependent.
We now discuss in detail the various definitions of quasi-GPDs: We notice that for finite values of the momentum, the historic definitions of quasi-GPDs (\({\cal H}_{0}^{s/a}(A_{i};z)\,,{\cal E}_{0}^{s/a}(A_{i};z)\)) in Eqs. (23)-(26) involve additional amplitudes that are not present in the light-cone GPDs, Eqs. (20)-(21). This is not the case for the Lorentz-invariant definitions of quasi-GPDs (\({\cal H}(A_{i};z)\,,{\cal E}(A_{i};z)\)) in Eqs. (29)-(30). (Note that this is different from the (unpolarized) quasi-PDF case where arguments were made in favor of \(\gamma^{0}\) (against \(\gamma^{3}\)) because of the absence of such additional amplitudes relative to the (unpolarized) light-cone PDF case [20].) Therefore, the additional amplitudes in (\({\cal H}_{0}^{s/a}(A_{i};z)\,,{\cal E}_{0}^{s/a}(A_{i};z)\)) may be viewed as contaminations from explicit power corrections, which one would have to suppress by going to larger and larger values of momentum. Hence, we believe that (\({\cal H}(A_{i};z)\,,{\cal E}(A_{i};z)\)) may converge relatively faster to their (respective) light-cone GPDs, simply because of the absence of such additional amplitudes. (Of course, \(({\cal H}(A_{i};z)\,,{\cal E}(A_{i};z))\) also have power corrections, but they are implicit within the amplitudes themselves. Our argument above is for the power corrections that are explicit.) Our reasoning is perhaps too simple and for sure needs further substantiation. In fact, it may be that the actual convergence of the various definitions of quasi-GPDs is determined by the underlying dynamics. Note that the Lorentz non-invariance of the historical definitions of quasi-GPDs implies that the basis vectors \((\gamma^{0},i\sigma^{0\,\kappa^{s/a}})\) do not form a complete set for spatially-separated bi-local operators for finite values of momentum. Therefore, we can argue that the Lorentz-invariant definitions are in fact just a redefinition of quasi-GPDs in terms of a suitable linear combination of operators (which turns out to be \(\gamma_{\perp}\)) that make them functions of Lorentz scalars [14].
In Ref. [14] and [21], we compare numerically the different definitions of quasi-GPDs for \(\xi=0\) to get an idea about the relative size of power corrections. Finally, we remark on the matching coefficient for the different definitions of quasi-GPDs: It is known that the GPD matching coefficient for the operator \(\gamma^{0}\) reduces to that for the corresponding PDF when \(\xi=0\), even if \(t\neq 0\)[13]. The PDF matching coefficient for \(\gamma^{0}\) is for the amplitude \(A_{1}\), which is also the only contributing amplitude to the LI definition of the GPD when \(\xi=0\). Therefore, the matching coefficients for the \(\gamma^{0}\) and the LI definitions of the GPDs are equal. We will elaborate this point more, including the general case of \(\xi\neq 0\), in a forthcoming publication.
## 3 Summary
In these proceedings, we have laid down the theoretical tools to perform lattice QCD calculations of GPDs in asymmetric frames. We have highlighted two approaches to performing such calculations:
* Lorentz transformation (LT) approach (Sec. 2.2): We have shown that there exists a LT called the "transverse boost" (transverse with respect to the Wilson Line) that allows one to uniquely relate the kinematical variables as well as the matrix elements in the two frames.
* Amplitude approach (Sec. 2.3): We have proposed a Lorentz-covariant decomposition of the vector matrix element in terms of Lorentz-invariant/frame-independent amplitudes. The amplitudes can be used as tools to relate the two frames. This approach also shows that at finite values of the boost momentum the historic definitions of quasi-GPDs (defined through \(\gamma^{0}\)) involve additional amplitudes that are not present in the light-cone limit. This motivates us to come up with alternative definitions of quasi-GPDs that may potentially converge faster. One such candidate is obtained by keeping the same functional form as the light-cone GPDs while allowing \(z^{2}\neq 0\). Naively, because of the similarity in the functional forms (or because of the absence of additional amplitudes), one may expect such a definition of quasi-GPD to converge faster to the light-cone GPD. Such a definition is also frame-independent, in contrast to the historic definitions.
## Acknowledgements
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics through Contract No. DE-SC0012704, No. DE-AC02-06CH11357 and within the framework of Scientific Discovery through Advance Computing (SciDAC) award Fundamental Nuclear Physics at the Exascale and Beyond (S. B. and S. M.). K. C. is supported by the National Science Centre (Poland) grants SONATA BIS no. 2016/22/E/ST2/00013 and OPUS no. 2021/43/B/ST2/00497. M. C., J. D. and A. S. acknowledge financial support by the U.S. Department of Energy, Office of Nuclear Physics, Early Career Award under Grant No. DE-SC0020405. J. D. also received support by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, within the framework of the TMD Topical Collaboration. The work of A. M. has been supported by the National Science Foundation under grant number PHY-2110472, and also by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, within the framework of the TMD Topical Collaboration. F. S. was funded by by the NSFC and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the funds provided to the Sino-German Collaborative Research Center TRR110 "Symmetries and the Emergence of Structure in QCD" (NSFC Grant No. 12070131001, DFG Project-ID 196253076 - TRR 110). YZ was partially supported by an LDRD initiative at Argonne National Laboratory under Project No. 2020-0020. Computations for this work were carried out in part on facilities of the USQCD Collaboration, which are funded by the Office of Science of the U.S. Department of Energy. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. This research was supported in part by PLGrid Infrastructure (Prometheus supercomputer at AGH Cyfronet in Cracow). Computations were also partially performed at the Poznan Supercomputing and Networking Center (Eagle supercomputer), the Interdisciplinary Centre for Mathematical and Computational Modelling of the Warsaw University (Okeanos supercomputer), and at the Academic Computer Centre in Gdansk (Tryton supercomputer). The gauge configurations have been generated by the Extended Twisted Mass Collaboration on the KNL (A2) Partition of Marconi at CINECA, through the Prace project Pra13_3304 "SIMPHYS". Inversions were performed using the DD-\(\alpha\)AMG solver [22] with twisted mass support [23]. |
2310.15523 | Generative and Contrastive Paradigms Are Complementary for Graph
Self-Supervised Learning | For graph self-supervised learning (GSSL), masked autoencoder (MAE) follows
the generative paradigm and learns to reconstruct masked graph edges or node
features. Contrastive Learning (CL) maximizes the similarity between augmented
views of the same graph and is widely used for GSSL. However, MAE and CL are
considered separately in existing works for GSSL. We observe that the MAE and
CL paradigms are complementary and propose the graph contrastive masked
autoencoder (GCMAE) framework to unify them. Specifically, by focusing on local
edges or node features, MAE cannot capture global information of the graph and
is sensitive to particular edges and features. On the contrary, CL excels in
extracting global information because it considers the relation between graphs.
As such, we equip GCMAE with an MAE branch and a CL branch, and the two
branches share a common encoder, which allows the MAE branch to exploit the
global information extracted by the CL branch. To force GCMAE to capture global
graph structures, we train it to reconstruct the entire adjacency matrix
instead of only the masked edges as in existing works. Moreover, a
discrimination loss is proposed for feature reconstruction, which improves the
disparity between node embeddings rather than reducing the reconstruction error
to tackle the feature smoothing problem of MAE. We evaluate GCMAE on four
popular graph tasks (i.e., node classification, node clustering, link
prediction, and graph classification) and compare with 14 state-of-the-art
baselines. The results show that GCMAE consistently provides good accuracy
across these tasks, and the maximum accuracy improvement is up to 3.2% compared
with the best-performing baseline. | Yuxiang Wang, Xiao Yan, Chuang Hu, Fangcheng Fu, Wentao Zhang, Hao Wang, Shuo Shang, Jiawei Jiang | 2023-10-24T05:06:06Z | http://arxiv.org/abs/2310.15523v1 | # Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning
###### Abstract.
For graph self-supervised learning (GSSL), _masked autoencoder_ (MAE) follows the generative paradigm and learns to reconstruct masked graph edges or node features. _Contrastive Learning_ (CL) maximizes the similarity between augmented views of the same graph and is widely used for GSSL. However, MAE and CL are considered separately in existing works for GSSL. We observe that the MAE and CL paradigms are complementary and propose the _graph contrastive masked autoencoder_ (GCMAE) framework to unify them. Specifically, by focusing on local edges or node features, MAE cannot capture global information of the graph and is sensitive to particular edges and features. On the contrary, CL excels in extracting global information because it considers the relation between graphs. As such, we equip GCMAE with an MAE branch and a CL branch, and the two branches share a common encoder, which allows the MAE branch to exploit the global information extracted by the CL branch. To force GCMAE to capture global graph structures, we train it to reconstruct the entire adjacency matrix instead of only the masked edges as in existing works. Moreover, a discrimination loss is proposed for feature reconstruction, which improves the disparity between node embeddings rather than reducing the reconstruction error to tackle the feature smoothing problem of MAE. We evaluate GCMAE on four popular graph tasks (i.e., node classification, node clustering, link prediction, and graph classification) and compare it with 14 state-of-the-art baselines. The results show that GCMAE consistently provides good accuracy across these tasks, and the maximum accuracy improvement is up to 3.2% compared with the best-performing baseline.
## 1. Introduction
Graphs model entities as nodes and the relations among the entities as edges and prevail in many domains such as social networks (Goh et al., 2017; Wang et al., 2018), finance (Zhu et al., 2018), biology (Goh et al., 2017), and medicine (Zhu et al., 2018). By conducting message passing on the edges and utilizing neural networks to aggregate messages on the nodes, graph neural network (GNN) models perform well for various graph tasks, e.g., node classification, node clustering, link prediction, and graph classification (Goh et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). These tasks facilitate many important applications including recommendation, fraud detection, community detection, and pharmacy (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). To train GNN models, _graph self-supervised learning_ (GSSL) is becoming increasingly popular because it does not require label information (Wang et al., 2018; Wang et al., 2018), which can be rare or expensive to obtain in practice. Existing works handle GSSL with two main paradigms (Zhu et al., 2018; Wang et al., 2018), i.e., _masked autoencoder_ (MAE) and _contrastive learning_ (CL).
Graph MAE methods usually come with an encoder and a decoder, which are both GNN models (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). The encoder computes node embeddings from a corrupted view of the graph, where some of the edges or node features are masked; the decoder is trained to reconstruct the masked edges or node features using these node embeddings. Graph MAE methods achieve high accuracy for graph tasks but we observe that they still have two limitations.
Graph MAE _misses global information_ of the graph, which results in sub-optimal performance for tasks that require a view of the entire graph (e.g., graph classification) (Goh et al., 2017; Wang et al., 2018). This is because both the encoder and decoder GNN models use a small number of layers (e.g., 2 or 3), and a \(k\)-layer GNN can only consider the \(k\)-hop neighbors of each node (Goh et al., 2017). Thus, both the encoder and decoder consider the local neighborhood.
Graph MAE is prone to _feature smoothing_, which means that neighboring nodes tend to have similar reconstructed features and harms performance for tasks that rely on node features (e.g., node classification). This is because the encoder and decoder GNN models aggregate the neighbors to compute the embedding for each node, and due to this local aggregation, GNN models are widely known to suffer from the over-smoothing problem (Goh et al., 2017; Wang et al., 2018), which also explains why GNN cannot use many layers.
Graph CL methods usually generate multiple perturbed views of a graph via data augmentation and train the model to maximize the similarity between positive pairs (e.g., an anchor node and its corresponding node in another view) and minimize the similarity between negative pairs (e.g., all other nodes except the positive pairs) (Wang et al., 2018; Wang et al., 2018). As graph CL can contrast nodes that are far apart (e.g., \(>\) 3 hops) in multiple views, it captures global information of the entire graph. However, without reconstruction loss for the node features and edges, CL is inferior to MAE in learning local information for the nodes or edges, and thus may have sub-optimal performance for tasks that require such information (e.g., link prediction and clustering) (Wang et al., 2018; Wang et al., 2018).
The above analysis shows that graph MAE and CL methods potentially complement each other in capturing local and global
information of the graph. Thus, combining MAE and CL may yield better performance than using them individually. Figure 1 shows such an example with node clustering. As a representative graph CL method, CCA-SSG (Zhou et al., 2017) has poor clustering for the nodes due to the lack of the local information. For GraphMAE (Kang et al., 2018), a state-of-the-art graph MAE method, the node clusters are more noticeable by capturing local information. However, by combining MAE and CL, our GCMAE yields the best node clustering and hence the highest accuracy among the three methods. Although the idea sounds straightforward, a framework that combines MAE and CL needs to tackle two key technical challenges.
First, MAE and CL have different graph augmentation logic and learning objectives (i.e., reconstruction and discrimination), and thus a unified view is required to combine them.
Second, MAE and CL focus on local and global information respectively, so the model architecture should ensure that the two kinds of information complement each other to reap the benefits.
To tackle these challenges, we design the _graph contrastive masked autoencoder_ (i.e., GCMAE ) framework. We first show that MAE and CL can be unified in one mathematical framework despite their apparent differences. In particular, we express MAE as a special kind of CL, which uses masking for data augmentation and maximizes the similarity between the original and masked graphs. This analysis inspires us to use _an MAE branch_ and _a CL branch_ in GCMAE and share _the encoder_ for the two branches. The MAE branch is trained to reconstruct the original graph while the CL branch is trained to contrast two augmented views; and the shared encoder is responsible for mapping the graphs (both original and augmented) to the embedding spaces to serve as inputs for the two branches. By design, the shared encoder conducts input mapping and transfers information between both branches. As such, local information and global information are condensed in the shared encoder and benefit both MAE and CL. Furthermore, to tackle the feature smoothing problem of MAE that makes the reconstructed features of neighboring nodes similar, we introduce a _discrimination loss_ for model training, which enlarges the variance of the node embeddings to avoid smoothing. Instead of training MAE to reconstruct only the masked edges as in existing works, we reconstruct the entire adjacency matrix such that MAE can learn to capture global information of the graph structures.
We evaluate our GCMAE on four popular graph tasks, i.e., node classification, node clustering, link prediction, and graph classification. These tasks have essentially different properties and require different information in the graph. We compare GCMAE with state-of-the-art baselines, including graph MAE methods such as GraphMAE (Kang et al., 2018), SeeGera (Zhou et al., 2017), S2GAE (Zhou et al., 2017), and MaskGAE (Zhou et al., 2017), and graph CL methods such as GDI (Zhou et al., 2017), MVGRL (Kang et al., 2018), GRACE (Zhou et al., 2018), and CCA-SSG (Zhou et al., 2017). The results show that GCMAE yields consistently good performance on the four graph tasks and is the most accurate method in almost all cases (i.e., a case means a dataset plus a task). As shown in Table 1, GCMAE outperforms both contrastive and MAE methods, and the improvements over the best-performing baselines are significant. We also conduct an ablation study for our model designs (e.g., shared encoder and the loss terms), and the results suggest that the designs are effective in improving accuracy.
To summarize, we make the following contributions:
* We observe the limitations of existing graph MAE and CL methods for graph self-supervised learning (GSSL) and propose to combine MAE and CL for enhanced performance.
* We design the GCMAE framework, which is the first to jointly utilize MAE and CL for GSSL to our knowledge.
* We equip GCMAE with tailored model designs, e.g., the _shared encoder_ for information sharing between MAE and CL, the _discrimination loss_ to combat feature smoothing, and training supervision with _adjacency matrix reconstruction_.
* We conduct extensive experiments to evaluate GCMAE and compare it with state-of-the-art, demonstrating that GCMAE is general across graph tasks and effective in accuracy.
## 2. Preliminaries
In this part, we introduce the basics of MAE and CL for GSSL to facilitate our formulation. The discussions on specific MAE and CL methods are provided in Section 6. We use \(G=(\mathbf{V},\mathbf{E})\) to denote a graph, where \(\mathbf{V}=\{v_{1},v_{2},\cdots,v_{N}\}\) is the set of nodes with \(|\mathbf{V}|=N\) and \(\mathbf{E}\) is the set of edges. The adjacency matrix of the graph is \(\mathbf{A}\in\{0,1\}^{N\times N}\), and the node feature matrix is \(\mathbf{X}\in\mathbb{R}^{N\times d}\), where \(x_{i}\in\mathbb{R}^{d}\) is the feature vector of \(v_{i}\) with dimension \(d\) and \(\mathbf{A}_{ij}=1\) if edge \((v_{i},v_{j})\in\mathbf{E}\).
### Contrastive Learning (CL) for GSSL
CL has been very popular for GSSL due to its good performance for various application scenarios, such as social network analysis (Zhou et al., 2017), molecular detection (Zhou et al., 2017), and financial deception (Zhou et al., 2018). Generally, graph CL follows an "augmenting-contrasting" pattern, where the
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Graph Task & vs. Contrastive & vs. MAE & Others \\ \hline Node classification & 4.8\% & 2.2\% & 12.0\% \\ Link prediction & 4.4\% & 1.5\% & - \\ Node clustering & 8.8\% & 3.2\% & 14.7\% \\ Graph classification & 2.5\% & 4.2\% & - \\ \hline \hline \end{tabular}
\end{table}
Table 1. Performance improvements of our GCMAE over the best-performing baselines in each category for different tasks. _Others_ are specialized methods for each task.
Figure 1. We conduct node clustering on the Cora and visualize the node embeddings learned by our GCMAE, GraphMAE, and CCA-SSG with t-SNE. GraphMAE and CCA-SSG represent MAE and CL methods, respectively. Nodes of the same category are plotted in the same color. We use Normalized Mutual Information (NMI) to evaluate the clustering effect, with a larger value indicating better accuracy.
graph views are corrupted via the data augmentation strategies \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) (e.g., node feature masking and node dropping), and then a GNN encoder \(f_{\theta}\) is used to encode the corrupted views into node embeddings. The goal of CL is to maximize the mutual information between the two corrupted views, which is achieved by maximizing the similarity of node embeddings as a surrogate. Thus, CL does not require label information and conducts learning by distinguishing between node embeddings in different views. Figure 2 (a) shows the framework of graph contrastive learning. The objective of CL is essentially to maximize the _similarity function_ between two node embeddings encoded through a GNN encoder \(f_{\theta}\), which can be formulated as:
\[\max_{\theta}\mathop{\mathbb{E}}_{x\sim\mathcal{D}}S\left(f_{\theta}\left( \mathcal{T}_{1}\left(\mathbf{A},\mathbf{X}\right)\right),f_{\theta}\left(\mathcal{T}_{ 2}\left(\mathbf{A},\mathbf{X}\right)\right)\right), \tag{1}\]
where \(\theta\) represents the network weights, node feature vector \(x\) follows the input data distribution \(\mathcal{D}\), and \(S\left(\cdot,\cdot\right)\) is the similarity function to measure the similarity between node embeddings.
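To make Equation 1 concrete, the following minimal PyTorch-style sketch (our illustration, not the implementation of any cited method) scores the agreement of two augmented views with cosine similarity; the `encoder` and the two `augment` callables are placeholders for a GNN encoder \(f_{\theta}\) and the augmentation strategies \(\mathcal{T}_{1},\mathcal{T}_{2}\).

```python
import torch.nn.functional as F

def cl_objective(adj, x, encoder, augment1, augment2):
    """Eq. (1): encode two augmented views of the same graph with a shared
    encoder f_theta and maximize a similarity score S between the embeddings."""
    a1, x1 = augment1(adj, x)              # e.g., node feature masking
    a2, x2 = augment2(adj, x)              # e.g., node dropping
    z1, z2 = encoder(a1, x1), encoder(a2, x2)
    return F.cosine_similarity(z1, z2, dim=-1).mean()  # to be maximized
```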
### Masked Autoencoder (MAE) for GSSL
Different from graph CL methods, graph MAE follows a "masking-reconstructing" pattern. The overview of graph MAE methods is shown in Figure 2 (b). In general, graph MAE methods first randomly mask node features or edges with mask patches \(\mathbf{M}\), where
Figure 3. Structure of our GCMAE framework, which consists of an MAE branch and a contrastive branch with a shared encoder.
Figure 2. Architectures of the generative paradigm, contrastive paradigm, and our GCMAE for graph self-supervised learning. _Contrast_ means the contrastive loss while _Reconstruct_ means the reconstruction loss.
\(x\odot M\) represents visible tokens and \(x\odot(1-M)\) means masked tokens. Graph MAE leverages the encoder \(f_{\theta}\) to encode the visible tokens into node embeddings, and then the decoder \(d_{\phi}\) attempts to reconstruct the masked tokens by decoding from the node embeddings. Thus, we wish to maximize the similarity between the masked tokens and the reconstructed tokens:
\[\max_{\theta,\phi}\mathop{\mathbb{E}}_{x\sim D}S\left(d_{\phi}\left(h\right) \odot(1-M),x\odot(1-M)\right),h=f_{\theta}\left(x\odot M\right), \tag{2}\]
where \(\odot\) means element-wise product, \(h\) is the node embedding learned by the encoder, and \(S\left(\cdot,\cdot\right)\) is the _similarity function_ for MAE modeling.
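As a concrete (but purely illustrative) counterpart to Equation 2, the sketch below masks a random subset of node features, encodes the visible view, decodes it, and scores the reconstruction of the masked tokens with cosine similarity; the linear encoder and decoder are stand-ins for the GNN models used in practice.

```python
import torch
import torch.nn.functional as F

def mae_objective(x, encoder, decoder, mask_rate=0.5):
    """Eq. (2): encode the visible tokens x * M, decode, and measure the
    similarity of the reconstruction on the masked tokens x * (1 - M)."""
    n = x.size(0)
    m = (torch.rand(n, 1) > mask_rate).float()   # 1 = visible, 0 = masked
    h = encoder(x * m)                           # embeddings of the visible view
    x_rec = decoder(h)                           # reconstructed node features
    sim = F.cosine_similarity(x_rec * (1 - m), x * (1 - m), dim=-1)
    return sim.mean()                            # to be maximized

# toy usage with linear stand-ins for the GNN encoder and decoder
enc, dec = torch.nn.Linear(16, 8), torch.nn.Linear(8, 16)
score = mae_objective(torch.randn(32, 16), enc, dec)
```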
## 3. The GCMAE Framework
Since graph MAE cannot benefit from global information, it can only learn from a limited set of neighboring nodes, which may eventually lead to overly similar representations. We are inspired by the successful application of CL, where global information can be learned by contrasting the anchor node with distant nodes. The overall framework is shown in Figure 3.
### Unifying CL and MAE for GSSL
Even though contrastive and generative approaches have achieved individual success, there is a lack of systematic analysis regarding their correlation and compatibility in one single framework. Motivated by this, we aim to explore a _unified_ Contrastive MAE paradigm that combines the contrastive and generative paradigms. From Equation 2, we can conclude that graph MAE essentially maximizes the similarity between the node embeddings reconstructed via the decoder and the masked tokens, which is formally analogous to Equation 1. In other words, both the contrastive paradigm and the generative paradigm maximize the similarity between two elements within the function.
Let us still focus on Equation 2: graph MAE wishes to find a theoretically optimal decoder that can reconstruct the masked tokens losslessly. Suppose we can obtain a decoder \(d^{\prime}_{\phi^{\prime}}\), parameterized by \(\phi^{\prime}\), that satisfies \(d^{\prime}_{\phi^{\prime}}(f_{\theta}(x\odot(1-M)))\odot(1-M)\approx x\odot(1-M)\) as closely as possible. Then, we transform Equation 2 into the following form:
\[\max_{\theta,\phi^{\prime}}S\left(d_{\phi}(h)\odot(1-M),\,d^{\prime}_{\phi^{\prime}}(h)\odot(1-M)\right), \tag{3}\]
where \(\phi^{\prime}\) is optimized in the following form:
\[\phi^{\prime}=\arg\max_{\phi^{\prime}}\mathop{\mathbb{E}}_{x^{\prime}\sim D}S\left(d^{\prime}_{\phi^{\prime}}(f_{\theta}(x^{\prime}\odot(1-M)))\odot(1-M),\,x^{\prime}\odot(1-M)\right), \tag{4}\]
where \(x^{\prime}\) is the feature token reconstructed by the optimal decoder \(d^{\prime}_{\phi^{\prime}}\). Notice that since \(d\) and \(d^{\prime}\) have the same architecture, we let \(d=d^{\prime}\). Inspired by (Zhou et al., 2017), we further simplify Equation 3, and define the loss function for MAE:
\[\mathcal{L}(h_{1},h_{2},\phi,\phi^{\prime})=\max_{\phi,\phi^{\prime}}S(d_{ \phi}(h_{1}),d_{\phi^{\prime}}(h_{2}))\odot(1-M), \tag{5}\]
where \(h_{1},h_{2}\) are the hidden embeddings derived from two augmentation strategies:
\[\begin{cases}&h_{1}=f_{\theta}(\mathcal{T}_{1}(x))=f_{\theta}(x\odot M),\\ &h_{2}=f_{\theta}(\mathcal{T}_{2}(x))=f_{\theta}(x\odot(1-M)).\end{cases} \tag{6}\]
In this way, we rewrite the generative paradigm into a form similar to the contrastive paradigm, both with data augmentation and optimizing a similarity function.
To integrate MAE and CL into one optimization framework, we propose a novel self-supervised paradigm, the _Contrastive MAE_ paradigm. The MAE branch is trained to reconstruct the masked tokens using similarity function \(S_{1}\) while the CL branch is trained to contrast two augmented views using similarity function \(S_{2}\):
\[\mathcal{L}(x,M,h_{1},h_{2},\theta,\phi)\!=\!\max_{\theta,\phi}S_{1}(d_{\phi} (h_{1}),x\odot(1-M))\!+\!S_{2}(h_{1},h_{2}). \tag{7}\]
**Summary.** Despite the recent attempts by contrastive MAE methods (Zhou et al., 2017; Wang et al., 2018), understanding their underlying rationale remains an open question. Therefore, we provide a theoretical analysis showing that the nature of the generative paradigm is essentially similar to that of CL. Rather than treating them separately, these two paradigms can potentially complement each other, compensating for their respective shortcomings and obtaining better performance. Moreover, both the generative and contrastive paradigms share similar optimization objectives, which enables us to optimize them within a unified framework.
### Structure of the GCMAE Framework
_Objective_. In this paper, we aim to unify graph MAE and CL into a single framework and further reveal the intrinsic relation between graph MAE and CL, where CL can help graph MAE to acquire global information; through this we hope to further improve the performance of graph MAE on various downstream tasks.
_Overview:_ We propose a novel graph SSL framework to incorporate CL into graph MAE. Overall, our framework consists of two core modules: the MAE Module and the Contrastive Module, which are used to reconstruct masked nodes and contrast two corrupted views, respectively. To achieve our design goal, we introduce three main components:
* **Shared encoder.** We take the masked graph in Graph MAE as an augmented view, and then generate another view by randomly dropping nodes. We leverage a shared encoder to connect both modules and encode the corrupted views into hidden node embeddings. In this way, the contrastive module can transfer the global information to the MAE module. The node embeddings generated by the MAE module are used to calculate the feature reconstruction loss using the Scaled Cosine Error (SCE) \(\mathcal{L}_{SCE}\), and the node embeddings of the other view are leveraged to compute the contrastive loss \(\mathcal{L}_{C}\).
* **Discrimination loss.** Input node features with low discriminability cause graph MAE to generate similar node representations. Thus, we propose a novel discrimination loss \(\mathcal{L}_{Var}\), which improves feature discriminability by increasing the variance between the hidden node embeddings (i.e., the output of the shared encoder).
* **Adjacency matrix reconstruction.** In order to further improve performance on link prediction and node clustering, we reconstruct the entire graph and calculate the adjacency matrix reconstruction error \(\mathcal{L}_{E}\). This is because reconstructing the limited masked edges is not enough to learn the entire graph structures.
Therefore, the overall loss function for GCMAE is defined as follows:
\[\mathcal{J}=\mathcal{L}_{SCE}+\alpha\mathcal{L}_{C}+\lambda\mathcal{L}_{E}+\mu \mathcal{L}_{Var}, \tag{8}\]
where \(\alpha,\lambda\) and \(\mu\) are hyper-parameters used to adjust the weights. In summary, the contrastive module learns global information by contrasting two corrupted views and transfers it to the MAE module through the shared encoder. Before decoding, the discrimination loss is leveraged to improve the discrimination between node embeddings and avoid yielding similar node representations. We reconstruct the node features and the adjacency matrix in the MAE module.
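The following sketch summarizes the two-branch structure described above and indicates how the four loss terms of Equation 8 are combined; it is a schematic under simplifying assumptions (plain linear layers instead of GNNs, adjacency information omitted from the encoder input), and the loss functions it references are spelled out in Section 4.

```python
import torch
import torch.nn as nn

class GCMAESketch(nn.Module):
    """Schematic of GCMAE: one shared encoder feeds an MAE branch (decoder for
    feature/adjacency reconstruction) and a CL branch (two projection heads)."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)       # shared encoder (a GNN in practice)
        self.decoder = nn.Linear(hid_dim, in_dim)       # MAE decoder
        self.proj1 = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                                   nn.Linear(hid_dim, hid_dim))
        self.proj2 = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                                   nn.Linear(hid_dim, hid_dim))

    def forward(self, x_masked, x_dropped):
        h1 = self.encoder(x_masked)     # view 1: feature-masked graph (MAE branch)
        h2 = self.encoder(x_dropped)    # view 2: node-dropped graph (CL branch)
        z = self.decoder(h1)            # reconstructed node features
        u, v = self.proj1(h1), self.proj2(h2)   # projected views for the contrastive loss
        return h1, h2, z, u, v

# the four losses of Eq. (8) are then combined as
# J = L_SCE(x, z) + alpha * L_C(u, v) + lambda * L_E(z, A) + mu * L_Var(h1)
```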
## 4. Model Designs
In this section, we present a detailed description of the proposed GCMAE. We introduce each core component individually with the following three questions in our mind:
* **Q1:** How to learn global information for graph MAE?
* **Q2:** How to train a decent encoder to learn the entire graph structures?
* **Q3:** How to enhance the feature discrimination?
Recently, GraphMAE (Zhou et al., 2017) has attracted great attention in graph SSL due to its simple but effective framework. Therefore, we choose GraphMAE as the backbone in this paper. For a node set \(\mathbf{V}\), we randomly select a subset of nodes \(\widetilde{\mathbf{V}}\subseteq\mathbf{V}\) and mask their corresponding features in \(\mathbf{X}\), i.e., setting their feature values to 0. We sample \(\widetilde{\mathbf{V}}\) following a specific distribution, i.e., the Bernoulli distribution. This masking strategy is a common but effective strategy in a wide range of previous applications (Zhou et al., 2018; Wang et al., 2019). We consider this node masking operation as a random data augmentation in graph CL methods (Zhou et al., 2018; Wang et al., 2019). Then the node feature \(\hat{x}_{i}\) for node \(v_{i}\in\mathbf{V}\) in the masked feature matrix \(\hat{\mathbf{X}}\) can be defined as follows:
\[\hat{x}_{i}=\begin{cases}0,&v_{i}\in\widetilde{\mathbf{V}}\\ x_{i},&v_{i}\notin\widetilde{\mathbf{V}}\end{cases} \tag{9}\]
Further, given an encoder \(f_{E}\) and a GNN decoder \(f_{D}\), we define the embeddings:
\[\mathbf{H_{1}}=f_{E}(\mathbf{A},\hat{\mathbf{X}})\quad\mathbf{Z}=f_{D}(\mathbf{A},\mathbf{H_{1}}), \tag{10}\]
where \(\mathbf{H_{1}}\in\mathbb{R}^{N\times d^{n}}\) is the hidden embedding of the feature-masking view encoded by \(f_{E}\), which is then used as the input of the GNN decoder \(f_{D}\) to obtain the node representations \(\mathbf{Z}\in\mathbb{R}^{N\times d^{n}}\). \(d^{n}\) is the dimension of the hidden embedding. Then we calculate the SCE loss:
\[\mathcal{L}_{\text{SCE}}=\frac{1}{|\widetilde{V}|}\sum_{v_{i}\in\widetilde{V}}\left(1-\frac{x_{i}^{T}z_{i}}{\|x_{i}\|\cdot\|z_{i}\|}\right)^{\gamma},\gamma>1, \tag{11}\]
where \(\gamma\) is a hyperparameter that adjusts the convergence speed of the loss, \(h_{i}\) is the hidden embedding vector of node \(v_{i}\), and \(z_{i}\) is the node representation of node \(v_{i}\).
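A minimal sketch of the SCE loss in Equation 11, assuming (as the surrounding text suggests) that the original feature \(x_{i}\) is compared against the decoder output \(z_{i}\) for each masked node:

```python
import torch
import torch.nn.functional as F

def sce_loss(x, z, masked_idx, gamma=2.0):
    """Scaled Cosine Error (Eq. 11), averaged over the masked nodes only.
    x: original features, z: decoder outputs, masked_idx: indices of masked nodes."""
    cos = F.cosine_similarity(x[masked_idx], z[masked_idx], dim=-1)
    return ((1.0 - cos) ** gamma).mean()
```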
### Shared Encoder
We reveal the intrinsic correlation between CL and Graph MAE and mathematically analyze the feasibility of jointly optimizing them in Section 3.1. However, how to efficiently transfer global information without complicated structural design is still an unsolved problem. Therefore, we ask here:
_how to transfer global information from the contrastive module to the MAE module?_
An intuitive solution is to train two encoders to encode the CL branch and the MAE branch separately, and then fuse the learned node embeddings. However, this inevitably faces two issues:
First, training two encoders independently cannot realize information transfer between the branches.
Second, fusing the two embeddings introduces an unnecessary design burden. Moreover, the resulting accuracy depends on the weaker of the two embeddings, so low-quality node embeddings will directly affect the performance.
To tackle the problem, we introduce a shared encoder that simultaneously encodes two augmented views to learn local and global information. The contrastive module leverages the shared encoder to transfer the global information from the entire graph to assist Graph MAE in learning meaningful representations and fine-tuning local network parameters. We do not need an additional embedding fusion strategy, since the shared encoder can directly yield the unified node embeddings. Thus, we first use the shared encoder \(f_{E}\) to encode the corrupted graph \(\tilde{\mathbf{A}}\) augmented by random node dropping into hidden embeddings:
\[H_{2}=f_{E}(\tilde{\mathbf{A}},\mathbf{X}), \tag{12}\]
where \(\mathbf{H_{2}}\in\mathbb{R}^{N\times d^{n}}\). Then, we utilize two projectors \(g_{1}\), \(g_{2}\) to map \(\mathbf{H_{1}}\),\(\mathbf{H_{2}}\) into different vector spaces:
\[\mathbf{U}=\sigma(g_{1}(\mathbf{H_{1}},\psi_{1})),\quad\mathbf{V}=\sigma(g_{2}(\mathbf{H_{2}}, \psi_{2})), \tag{13}\]
where \(\sigma(\cdot)\) is a nonlinear activation function. The projector consists of a simple two-layer perceptron model with bias \(\psi_{1},\psi_{2}\). We use the classical InfoNCE as the contrastive loss function, defined for each positive sample pair \((u_{i},v_{i})\) as:
\[\ell_{\text{c}}(u_{i},v_{i})=-\log\frac{\exp(s(u_{i}^{a},v_{i}^{b})/\tau)}{\sum_{j=1}^{N}\left[\exp(s(u_{i}^{a},u_{j}^{a})/\tau)+\exp(s(u_{i}^{a},v_{j}^{b})/\tau)\right]}, \tag{14}\]
where \(\tau\) is the temperature parameter and \(s(\cdot)\) is the cosine similarity between \(u_{i}\) and \(v_{i}\), while \(u_{i}^{a}\in\mathbf{U}\) and \(v_{j}^{b}\in\mathbf{V}\) denote the projected vectors of \(v_{i}\) and \(v_{j}\) in the two views, respectively. Since the two projectors are symmetric, the loss for the node pair \((v_{i},u_{i})\) is defined similarly. The overall objective of the contrastive module is to maximize the average mutual information of all positive sample pairs in the two views. Formally, it is defined as:
\[\mathcal{L}_{C}=\frac{1}{2N}\sum_{i=1}^{N}[\ell_{\text{c}}(u_{i},v_{i})+\ell_{ \text{c}}(v_{i},u_{i})]. \tag{15}\]
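A sketch of the contrastive module's loss (Equations 14-15) in the GRACE-style formulation the paper builds on; whether the anchor's self-similarity is excluded from the denominator, and the temperature value, are assumptions of this illustration.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(u, v, tau=0.5):
    """Symmetric InfoNCE over the projected views U and V (Eqs. 14-15): for an
    anchor u_i the positive is v_i; other nodes in both views act as negatives."""
    u, v = F.normalize(u, dim=-1), F.normalize(v, dim=-1)

    def one_side(a, b):
        inter = torch.exp(a @ b.t() / tau)   # cross-view similarities s(a_i, b_j)
        intra = torch.exp(a @ a.t() / tau)   # within-view similarities s(a_i, a_j)
        pos = inter.diag()
        denom = inter.sum(dim=1) + intra.sum(dim=1) - intra.diag()  # drop self term
        return -torch.log(pos / denom)

    return 0.5 * (one_side(u, v) + one_side(v, u)).mean()
```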
### Adjacency Matrix Reconstruction
Compared to images and text, graph data exhibits a unique property in that a graph contains both complex topology and node features. Several studies have focused on adjacency matrix reconstruction (Zhou et al., 2017; Wang et al., 2019), since it not only helps models pay more attention to link relationships but also frees the encoder from the constraints brought by fitting extreme feature values (Wang et al., 2019). By incorporating the adjacency matrix, the overall objective lightens the weight of feature reconstruction, thereby eliminating the incomplete knowledge learned from a single objective. Different from MaskGAE (Wang et al., 2019), which reconstructs a limited set of edges, we reconstruct node features and the adjacency matrix at the same time, which can help the model capture the global graph structures rather than particular edges or paths. In this work, we employ the output representation of the decoder to directly reconstruct the adjacency matrix:
\[\ell_{MSE}=\frac{1}{N^{2}}\sum_{i,j}(\hat{\mathbf{A}}_{ij}-\mathbf{A}_{ij})^{2},\quad \hat{\mathbf{A}}=\left\langle Z,Z^{T}\right\rangle=ZZ^{T}, \tag{16}\]
where \(\hat{\mathbf{A}}_{ij}\) is the probability of an edge existing between nodes \(v_{i}\) and \(v_{j}\). The MSE function measures the distance between the reconstructed adjacency matrix and the original adjacency matrix, ensuring consistency between the latent embedding and the input graph structure. However, due to the discretization and sparsity of the adjacency matrix, solely using MSE would make the model overfit to zero values. Therefore, we adopt Cross-Entropy (CE) to determine the existence of an edge between two nodes:
\[\ell_{BCE}=-\frac{1}{N^{2}}\sum_{i,j}\big{[}\mathbf{A}_{ij}\cdot\log\hat{\mathbf{A}}_{ ij}+(1-\mathbf{A}_{ij})\cdot\log(1-\hat{\mathbf{A}}_{ij})\big{]}. \tag{17}\]
In addition to the above issues, graph data is low-density and the degree distribution follows a power-law distribution, which cannot be adequately estimated by simple MSE and CE. The coexistence of both low-degree and high-degree nodes further renders Euclidean spaces invalid and hampers the training speed. To address this problem, we put forth the Relative Distance (RD) loss function:
\[\ell_{DIST}=-\log\frac{\sum_{(i,j)\in\mathbf{E}}D(z_{i},z_{j})}{\sum_{(i,j)\notin \mathbf{E}}D(z_{i},z_{j})}, \tag{18}\]
where \(D(\cdot,\cdot)\) defines the distance between node \(v_{i}\) and node \(v_{j}\), and \(z\) is the node representation of the decoder in the MAE module. The numerator and denominator in the RD loss function correspond to the sum of distances between adjacent nodes and non-adjacent nodes, respectively. Reconstructing the original degree distribution is an NP-hard problem. Inspired by CL, we are no longer obsessed with learning the original target distribution, but shift to a proxy task of evaluating node similarity. The final adjacency matrix reconstruction error is a combination of the three loss functions mentioned above:
\[\mathcal{L}_{E}=\ell_{MSE}+\ell_{BCE}+\ell_{DIST}. \tag{19}\]
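A sketch of the combined adjacency-matrix reconstruction error of Equation 19; the sigmoid on \(ZZ^{T}\) is an assumption added so that the cross-entropy term of Equation 17 is well defined, and `adj` is assumed to be a dense 0/1 float matrix.

```python
import torch
import torch.nn.functional as F

def adjacency_reconstruction_loss(z, adj, eps=1e-8):
    """L_E = MSE (Eq. 16) + BCE (Eq. 17) + relative-distance term (Eq. 18),
    computed on the full reconstructed adjacency matrix A_hat."""
    a_hat = torch.sigmoid(z @ z.t())                 # reconstructed adjacency matrix
    l_mse = F.mse_loss(a_hat, adj)
    l_bce = F.binary_cross_entropy(a_hat, adj)
    dist = torch.cdist(z, z)                         # pairwise distances D(z_i, z_j)
    pos = dist[adj > 0].sum()                        # distances between adjacent nodes
    neg = dist[adj == 0].sum() + eps                 # distances between non-adjacent nodes
    l_dist = -torch.log(pos / neg + eps)
    return l_mse + l_bce + l_dist
```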
### Discrimination Loss
Graph MAE approaches have achieved good classification performance; however, the presence of difficult-to-distinguish dimensions in the features can easily lead to deteriorated training. In comparison to the sparsity and discretization characteristics of graph data, pixel and text feature vectors possess a higher information density and more significant discrimination between each other. The low discrimination in graph data is primarily because node features are vector representations of text, which are compressed representations of keywords obtained through feature extractors.
Previous research has demonstrated a strong correlation between feature discrimination and the performance of Graph MAE (GCMAE, 2017). When reconstructing node features, graph MAE may mislead the model and propagate erroneous information, as much of that information may be related to erroneous evaluations. In other words, graph MAE attempts to reconstruct noisy node features, which results in model collapse and drives the learned representations to be similar. This observation further inspires us to design a novel loss function to narrow the gap between Graph MAE and feature discrimination.
To address the above challenges, we introduce a variance-based discrimination loss, which aims to assist the encoder in learning discriminative embeddings and to cushion the impact of erroneous information on the network. More importantly, this variance term enforces that different node representations within the same embedding matrix are diverse, compensating for the lack of discrimination in the original features. The regularized standard deviation is defined as follows:
\[\mathcal{L}_{Var}(h,\epsilon)=\sqrt{\text{var}(h)+\epsilon}, \tag{20}\]
where \(\epsilon\) is a small scalar that prevents numerical instability when the variance approaches 0, and \(\text{var}(h)\) is the variance of the hidden embeddings (i.e., the output of the shared encoder). This regularization encourages the encoder to map inputs to a different space within a specific range of variances, thereby preventing the model from collapsing to the same vector.
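Equation 20 only defines the regularized standard deviation; one way to turn it into a term that can be minimized together with the other losses in Equation 8 is a hinge that penalizes dimensions whose standard deviation falls below a target, in the spirit of variance-preserving regularizers. The hinge and the target value below are therefore assumptions of this sketch, not the paper's exact formulation.

```python
import torch

def discrimination_loss(h, eps=1e-4, target_std=1.0):
    """Variance-based discrimination term built on Eq. (20): keep the per-dimension
    standard deviation of the hidden embeddings from collapsing toward zero."""
    std = torch.sqrt(h.var(dim=0) + eps)            # regularized std of each dimension
    return torch.relu(target_std - std).mean()      # penalize low-variance dimensions
```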
Algorithm 1 summarizes the overall training process of GCMAE with all the loss terms.
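Since Algorithm 1 itself is not reproduced here, the following hedged sketch wires the pieces above into one training step: it builds the feature-masked and node-dropped views, runs the shared-encoder model, and minimizes the combined objective of Equation 8. It reuses the sketch functions defined in the earlier examples, and the augmentation details and loss weights are illustrative.

```python
import torch

def train_step(model, x, adj, optimizer, mask_rate=0.5, drop_rate=0.2,
               alpha=1.0, lam=1.0, mu=1.0):
    """One optimization step on J = L_SCE + alpha*L_C + lambda*L_E + mu*L_Var (Eq. 8)."""
    n = x.size(0)
    mask = torch.rand(n) < mask_rate                 # nodes whose features are masked
    x_masked = x.clone()
    x_masked[mask] = 0.0
    keep = (torch.rand(n) >= drop_rate).float()      # node dropping for the CL view
    x_dropped = x * keep.unsqueeze(1)

    h1, h2, z, u, v = model(x_masked, x_dropped)
    loss = (sce_loss(x, z, mask.nonzero(as_tuple=True)[0])
            + alpha * contrastive_loss(u, v)
            + lam * adjacency_reconstruction_loss(z, adj)
            + mu * discrimination_loss(h1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```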
### Limitations
Although GCMAE enjoys the advantages of both CL and MAE and thus provides strong performance, as we will show in Section 5, it still has some limitations. One drawback of GCMAE is that its training time may be relatively long because it uses two branches for CL and MAE and learns to reconstruct the entire adjacency matrix. As the adjacency matrix contains many edges for large graphs, the time consumption could be high. To alleviate this problem, we sample multiple sub-graphs from the original graph for reconstruction. As we will report in Section 5.4, the training time of GCMAE is comparable to that of the baseline methods.
## 5. Experimental Evaluation
In this part, we extensively evaluate our GCMAE along with state-of-the-art baselines on 4 graph tasks, i.e., node classification, link prediction, node clustering, and graph classification. We aim to answer the following research questions:
* RQ1: How does GCMAE compare with the baselines in terms of the _accuracy for the graph tasks_?
* RQ2: Does GCMAE _achieve its design goals_, i.e., improving MAE with CL?
* RQ3: How _efficient_ is it to train GCMAE?
* RQ4: How do _our designs and the hyper-parameters_ affect the performance of GCMAE?
### Experiment Settings
**Datasets.** We conduct the experiments on 10 public graph datasets, which are widely used to evaluate graph self-supervised learning methods [22, 50, 65]. In particular, the 4 citation networks in Table 2 are used for node classification, link prediction, and node clustering, and the 6 graphs in Table 3 are used for graph classification. Note that each dataset in Table 2 is a single large graph while each dataset in Table 3 contains many small graphs for graph classification. We intentionally choose these graphs for diversity, i.e., their node counts range from thousands to millions, and they have varying numbers of classes.
**Baselines.** We compare GCMAE with 14 state-of-the-art methods on the 4 graph tasks, which span the following 4 categories:
* **Supervised methods** directly train their models with labeled data, and we choose 2 classical GNN models, i.e., GCN [27] and GAT [49], for node classification.
* **Contrastive methods** learn to generate the node embeddings by discriminating positive and negative node pairs. Then, the embeddings serve as input for a separate model (e.g., SVM), which is tuned for the downstream task with labeled data. Following GraphMAE [22], we use DGI [50], MVGRL [18], GRACE [65], and CCA-SSG [60] for node classification, link prediction, and node clustering. For graph classification, we choose 4 graph-level contrastive methods, i.e., Infograph [46], GraphCL [59], JOAO [58], and InfoGCL [56].
* **Masked autoencoder (MAE) methods** adopt a "mask-reconstruct" structure to learn the node embeddings, and like the contrastive methods, a separate model is tuned for each downstream task. Following SeeGera [33], we use 4 graph MAE models, i.e., GraphMAE [22], SeeGera [33], S2GAE [47], and MaskGAE [30] for node classification, link prediction, clustering, and graph classification.
* **Deep clustering methods** are specially designed for node clustering and design objectives tailored for clustering to guide model training. We include 3 such methods, i.e., GC-VGE [16], SCGC [36], and GCC [12].
Note that some baselines may not apply to a task, e.g., the supervised methods only work for node classification, and thus we apply the baselines to the tasks when appropriate. For the contrastive and MAE methods, we use LIBSVM [4] to train SVM classifiers for node classification and graph classification following GraphMAE [22] and SeeGera [33]. 5-fold cross-validation is used to evaluate performance for these tasks. For link prediction, we fine-tune the final layer of the model using cross-entropy following MaskGAE [30]. For node clustering, we apply _K-means_ [1] on the node embeddings. We use the Adam optimizer with a weight decay of 0.0001 for our method, and set the initial learning rate as 0.001. We conduct training on one NVIDIA GeForce RTX 4090 GPU with 24GB memory.
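As a hedged sketch of the evaluation protocol described above (frozen embeddings, an SVM classifier with 5-fold cross-validation, and K-means for clustering), the scikit-learn estimators below are stand-ins for the LIBSVM and K-means implementations used in the cited works, and `emb` is assumed to be a NumPy array of node embeddings.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def evaluate_embeddings(emb, labels, n_clusters):
    """Evaluate frozen node embeddings: SVM accuracy via 5-fold cross-validation
    for classification, and K-means + NMI for clustering."""
    clf_acc = cross_val_score(SVC(), emb, labels, cv=5).mean()
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
    nmi = normalized_mutual_info_score(labels, clusters)
    return clf_acc, nmi
```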
**Performance metrics.** We are mainly interested in the accuracy of the methods and use well-established accuracy measures for each task [22]. In particular, we adopt the Accuracy score (ACC) for node classification, the Area Under the Curve (AUC) and Average Precision (AP) for link prediction, Normalized Mutual Information (NMI) and Adjusted Rand Index (ARI) for node clustering, and Accuracy score (ACC) for graph classification. For all these measures, larger values indicate better performance. For each case (i.e., dataset plus task), we report the average accuracy and standard deviation for each method over 5 runs with different seeds.
### Accuracy for the Graph Tasks (RQ1)
**Node classification.** Table 4 reports the accuracy scores of our GCMAE and the baselines for node classification. We observe that GCMAE is the most accurate method on all datasets, and compared with the best baseline, the accuracy improvements of GCMAE are 1.7% on Cora, 2.2% on Citeseer, 2.0% on PubMed, and 2.1% on Reddit. Considering the baselines, the supervised methods (i.e., GCN and
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline
**Dataset** & **\# Nodes** & **\# Edges** & **\# Features** & **\# Classes** \\ \hline Cora & 2,708 & 10,556 & 1,433 & 7 \\ Citeseer & 3,327 & 9,228 & 3,703 & 6 \\ PubMed & 19,717 & 88,651 & 500 & 3 \\ Reddit & 232,965 & 11,606,919 & 602 & 41 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Statistics of the datasets used for node classification, link prediction, and node clustering.
Figure 4. The embedding similarity between a node and its 5-hop neighbors w.r.t. the number of training epochs.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline
**Dataset** & **\# Graphs** & **\# Classes** & **Avg. \# Nodes** \\ \hline IMDB-B & 1,000 & 2 & 19.8 \\ IMDB-M & 1,500 & 3 & 13 \\ COLLAB & 5,000 & 3 & 74.5 \\ MUTAG & 188 & 2 & 17.9 \\ REDDIT-B & 2,000 & 2 & 429.7 \\ NCI1 & 4,110 & 2 & 29.8 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Statistics of the datasets used for graph classification.
\begin{table}
\begin{tabular}{c|c|c c|c c|c c|c c} \hline \hline & \multirow{2}{*}{Method} & \multicolumn{2}{c}{Cora} & \multicolumn{2}{c}{Citeseer} & \multicolumn{2}{c}{PubMed} & \multicolumn{2}{c}{Reddit} \\ \cline{3-10} & & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline \multirow{4}{*}{Contrastive} & DGI & 93.88\(\pm\)1.00 & 93.60\(\pm\)1.14 & 95.98\(\pm\)0.72 & 96.18\(\pm\)0.68 & 96.30\(\pm\)0.20 & 95.65\(\pm\)0.26 & 97.05\(\pm\)0.42 & 96.74\(\pm\)0.16 \\ & MVGRL & 93.33\(\pm\)0.68 & 92.95\(\pm\)0.82 & 88.66\(\pm\)5.27 & 89.37\(\pm\)4.55 & 95.89\(\pm\)0.22 & 95.53\(\pm\)0.30 & OOM & OOM \\ & GRACE & 93.46\(\pm\)0.71 & 92.74\(\pm\)0.48 & 92.07\(\pm\)0.51 & 90.32\(\pm\)0.57 & 96.11\(\pm\)0.13 & 95.37\(\pm\)0.25 & 95.82\(\pm\)0.24 & 95.74\(\pm\)0.46 \\ & CCA-SSG & 93.88\(\pm\)0.95 & 93.74\(\pm\)1.15 & 94.69\(\pm\)0.95 & 95.06\(\pm\)0.91 & 96.63\(\pm\)0.15 & 95.97\(\pm\)0.23 & 97.74\(\pm\)0.20 & 97.58\(\pm\)0.12 \\ \hline \multirow{4}{*}{MAE} & GraphMAE & 90.70\(\pm\)0.01 & 89.52\(\pm\)0.01 & 70.55\(\pm\)0.05 & 74.50\(\pm\)0.04 & 69.12\(\pm\)0.01 & 87.92\(\pm\)0.01 & 96.85\(\pm\)0.24 & 96.77\(\pm\)0.35 \\ & SeeGera & 95.50\(\pm\)0.71 & 95.92\(\pm\)0.68 & 97.04\(\pm\)0.47 & 97.33\(\pm\)0.46 & 97.87\(\pm\)0.20 & 97.88\(\pm\)0.21 & - & - \\ & S2GAE & 95.05\(\pm\)0.76 & 95.01\(\pm\)0.62 & 94.85\(\pm\)0.49 & 94.84\(\pm\)0.23 & 98.45\(\pm\)0.03 & 98.22\(\pm\)0.05 & 97.02\(\pm\)0.31 & 97.10\(\pm\)0.27 \\ & MaskGAE & 96.66\(\pm\)0.17 & 96.29\(\pm\)0.23 & 98.00\(\pm\)0.23 & 98.25\(\pm\)0.16 & 99.06\(\pm\)0.05 & **98.99\(\pm\)0.06** & 97.75\(\pm\)0.20 & 97.67\(\pm\)0.14 \\ \hline ConMAE & GCMAE & **98.00\(\pm\)0.03** & **97.74\(\pm\)0.37** & **99.48\(\pm\)0.18** & **99.46\(\pm\)0.23** & **99.14\(\pm\)0.27** & 98.82\(\pm\)0.13 & **98.87\(\pm\)0.18** & **98.62\(\pm\)0.26** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Node clustering accuracy for the experimented methods. For each graph, we mark the most accurate method in boldface and the runner-up with underline.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline & Method & Cora & Citeseer & PubMed & Reddit \\ \hline \multirow{4}{*}{Supervised} & GCN & 81.48\(\pm\)0.58 & 70.34\(\pm\)0.62 & 79.00\(\pm\)0.50 & 95.30\(\pm\)0.10 \\ & GAT & 82.99\(\pm\)0.65 & 72.51\(\pm\)0.71 & 79.02\(\pm\)0.32 & 96.00\(\pm\)0.10 \\ \hline \multirow{4}{*}{Contrastive} & DGI & 82.36\(\pm\)0.62 & 71.82\(\pm\)0.76 & 76.82\(\pm\)0.62 & 94.03\(\pm\)0.10 \\ & MVGRL & 83.48\(\pm\)0.53 & 73.27\(\pm\)0.56 & 80.11\(\pm\)0.77 & OOM \\ & GRACE & 81.86\(\pm\)0.42 & 71.21\(\pm\)0.53 & 80.62\(\pm\)0.43 & 94.72\(\pm\)0.04 \\ & CCA-SSG & 84.03\(\pm\)0.47 & 72.99\(\pm\)0.39 & 81.04\(\pm\)0.48 & 95.07\(\pm\)0.02 \\ \hline \multirow{4}{*}{MAE} & GraphMAE & 85.45\(\pm\)0.40 & 72.48\(\pm\)0.77 & 82.53\(\pm\)0.14 & 96.01\(\pm\)0.08 \\ & SeeGera & 85.56\(\pm\)0.25 & 72.81\(\pm\)0.13 & 83.01\(\pm\)0.32 & 95.66\(\pm\)0.30 \\ & S2GAE & 86.15\(\pm\)0.25 & 74.54\(\pm\)0.06 & 86.79\(\pm\)0.22 & 95.27\(\pm\)0.21 \\ & MaskGAE & 87.31\(\pm\)0.05 & 75.10\(\pm\)0.07 & 86.33\(\pm\)0.26 & 95.17\(\pm\)0.21 \\ \hline \hline ConMAE & GCMAE & **88.82\(\pm\)0.11** & **76.77\(\pm\)0.02** & **88.51\(\pm\)0.18** & **97.13\(\pm\)0.17** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Node classification accuracy scores for the experimented methods. For each graph, we mark the most accurate method in boldface and the runner-up with underline.
\begin{table}
\begin{tabular}{c|c|c c|c c|c c|c c} \hline \hline & \multirow{2}{*}{Method} & \multicolumn{2}{c}{Cora} & \multicolumn{2}{c}{Citeseer} & \multicolumn{2}{c}{PubMed} & \multicolumn{2}{c}{Reddit} \\ \cline{3-10} & & AUC & AP & AUC & AP & AUC & AP & AUC & AP \\ \hline \multirow{4}{*}{Contrastive} & DGI & 93.88\(\pm\)1.00 & 93.60\(\pm\)1.14 & 95.98\(\pm\)0.72 & 96.18\(\pm\)0.68 & 96.30\(\pm\)0.20 & 95.65\(\pm\)0.26 & 97.05\(\pm\)0.42 & 96.74\(\pm\)0.16 \\ & MVGRL & 93.33\(\pm\)0.68 & 92.95\(\pm\)0.82 & 88.66\(\pm\)5.27 & 89.37\(\pm\)4.55 & 95.89\(\pm\)0.22 & 95.53\(\pm\)0.30 & OOM & OOM \\ & GRACE & 93.46\(\pm\)0.71 & 92.74\(\pm\)0.48 & 92.07\(\pm\)0.51 & 90.32\(\pm\)0.57 & 96.11\(\pm\)0.13 & 95.37\(\pm\)0.25 & 95.82\(\pm\)0.24 & 95.74\(\pm\)0.46 \\ & CCA-SSG & 93.88\(\pm\)0.95 & 93.74\(\pm\)1.15 & 94.69\(\pm\)0.95 & 95.06\(\pm\)0.91 & 96.63\(\pm\)0.15 & 95.97\(\pm\)0.23 & 97.74\(\pm\)0.20 & 97.58\(\pm\)0.12 \\ \hline \multirow{4}{*}{MAE} & GraphMAE & 90.70\(\pm\)0.01 & 89.52\(\pm\)0.01 & 70.55\(\pm\)0.05 & 74.50\(\pm\)0.04 & 6
GAT) perform the worst because they can only utilize label information while the other methods use self-supervised learning to introduce more supervision signals. The graph MAE methods generally perform better than the contrastive methods because node classification relies on the local information of each node (i.e., it is a local task), and graph MAE is better at capturing local information by learning to reconstruct individual node features and masked edges. A similar pattern is also observed in the results of the other local tasks, i.e., link prediction and node clustering. The fact that GCMAE outperforms both contrastive and MAE methods suggests that our model designs allow us to enjoy the benefits of both paradigms.
**Link prediction.** Following SeeGera [33] and S2GAE [47], we do not report the results of supervised methods for link prediction. Since DGI [50], MVGRL [18], and GraphMAE [22] do not report relevant experimental results, we train a network for each of them and report its prediction results.
The results of link prediction are shown in Table 5. GCMAE achieves the best prediction results (except AP on PubMed), with an average improvement of 1.1% on AUC and 0.8% on AP when compared to the runner-up MaskGAE. The performance of GCMAE exceeds all contrastive methods by a large margin, with an average increase of 5.9% in AUC and 5.6% in AP. Compared to the contrastive methods, we use adjacency matrix reconstruction as part of the total objective, forcing the model to pay more attention to graph structures. GraphMAE [22] performs poorly on link prediction, which indicates that only reconstructing node features can lead to performance degradation on link-level tasks. In contrast, MaskGAE [30] takes edges as reconstruction objectives, which is consistent with the downstream task, and unsurprisingly becomes the strongest method among all baselines. Based on this observation, adjacency matrix reconstruction brings more performance improvement to link prediction than edge reconstruction, because the model can capture more meaningful global structures of the graph.
**Node clustering.** As shown in Table 6, GCMAE achieves the best results among all baselines on the node clustering task. In particular, GCMAE improves by 2.2% on Reddit w.r.t. ARI when compared to the runner-up approach MaskGAE [30]. Compared with the MAE methods, the improvements range from 0.2% to 3.2% regarding NMI and from 0.5% to 3.4% regarding ARI. This is because GCMAE can learn the feature and structural differences between nodes of different clusters from global information, which helps the model clarify the boundaries of clusters. We can observe that the performance gap between the contrastive methods and the MAE methods on node clustering is not large, unlike node classification and link prediction. This is because the goal of node clustering is to divide the data set into different clusters, in other words, maximizing the similarity of intra-cluster nodes and expanding the difference of inter-cluster nodes. This goal is similar to the intrinsic mechanism of CL. Moreover, we choose 3 deep node clustering methods as baselines, and GCMAE can still achieve an average improvement of 10.5% in NMI and 11.4% in ARI. This means that we can obtain high-quality node embeddings for the node clustering task without deliberately tailoring a clustering loss to guide the training process.
**Graph classification.** SeeGera [33] and MaskGAE [30] are not chosen as baselines due to their unavailable source code. Table 7 reports all the experimental results for graph classification. We can see that the contrastive methods and the graph MAE methods have comparable performance in graph classification, each achieving three runner-up results. In contrast, GCMAE achieves the highest accuracy on all datasets when compared to all baselines, with an average accuracy improvement of 1.3%. Our method benefits from both MAE and CL, and therefore can effectively distinguish the differences between graph-level embeddings by comparing multiple corrupted views. Compared to GraphMAE [22], by enhancing the feature discrimination through the discrimination loss, our proposed method is able to learn meaningful representations even from features with limited information content, such as one-hot vectors based on labels or degrees. Overall, GCMAE not only excels on node-level (i.e., node classification and node clustering) and link-level (i.e., link prediction) tasks, but also generalizes well to graph-level downstream tasks. The results clearly demonstrate
\begin{table}
\begin{tabular}{c|c|c c c c c c} \hline \hline & Method & IMDB-B & IMDB-M & COLLAB & MUTAG & REDDIT-B & NCI1 \\ \hline \multirow{4}{*}{Contrastive} & Infograph & 73.03\(\pm\)0.87 & 49.69\(\pm\)0.53 & 70.65\(\pm\)1.13 & 89.01\(\pm\)1.13 & 82.50\(\pm\)1.42 & 76.20\(\pm\)1.06 \\ & GraphCL & 71.14\(\pm\)0.44 & 48.58\(\pm\)0.67 & 71.36\(\pm\)1.15 & 86.80\(\pm\)1.34 & 89.53\(\pm\)0.84 & 77.87\(\pm\)0.41 \\ & JOAO & 70.21\(\pm\)3.08 & 49.20\(\pm\)0.77 & 69.50\(\pm\)0.36 & 87.35\(\pm\)1.02 & 85.29\(\pm\)1.35 & 78.07\(\pm\)0.47 \\ & MVGRL & 74.20\(\pm\)0.70 & 51.20\(\pm\)0.50 & OOM & 89.70\(\pm\)1.10 & 84.50\(\pm\)0.60 & OOM \\ & InfoGCL & 75.10\(\pm\)0.90 & 51.40\(\pm\)0.80 & 80.00\(\pm\)1.30 & 91.20\(\pm\)1.30 & OOM & 80.20\(\pm\)0.60 \\ \hline \multirow{2}{*}{MAE} & GraphMAE & 75.52\(\pm\)0.66 & 51.63\(\pm\)0.52 & 80.32\(\pm\)0.46 & 88.19\(\pm\)1.26 & 88.01\(\pm\)0.19 & 80.40\(\pm\)0.30 \\ & S2GAE & 75.76\(\pm\)0.62 & 51.79\(\pm\)0.36 & 81.02\(\pm\)0.53 & 88.26\(\pm\)0.76 & 87.83\(\pm\)0.27 & 80.80\(\pm\)0.24 \\ \hline \multirow{2}{*}{ConMAE} & GCMAE & **75.78\(\pm\)0.23** & **52.49\(\pm\)0.45** & **81.32\(\pm\)0.32** & **91.28\(\pm\)0.55** & **91.75\(\pm\)0.22** & **81.42\(\pm\)0.30** \\ \hline \hline \end{tabular}
\end{table}
Table 7. Graph classification accuracy for the experimented methods. For each dataset, we mark the most accurate method in boldface and the runner-up with underline.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline & Cora & Citeseer & PubMed \\ \hline MAE Encoder & 84.14 & 73.17 & 81.83 \\ Con. Encoder & 68.46 & 60.46 & 57.61 \\ Fusion Encoder & 85.61 & 71.71 & 78.63 \\ Shared Encoder & **88.82** & **76.77** & **88.51** \\ \hline \hline \end{tabular}
\end{table}
Table 8. Node classification accuracy when using different designs for the encoder of GCMAE.
the effectiveness of the proposed GCMAE framework and validate our claims.
### Anatomy of the Design Goals (RQ2)
**GCMAE learns global information.** To verify whether CL can help graph MAE obtain global information, we visualize the similarity between nodes in GraphMAE and in our GCMAE, respectively. Specifically, we randomly select distant nodes 5 hops away from the target node and then calculate the similarity between them. According to the result shown in Figure 4 (a), we can observe that as the training epoch increases, the similarity of target nodes to distant nodes remains at a low level, which means that MAE is prone to learning node embeddings from local information. When we combine CL and graph MAE, GCMAE gradually improves the similarity between target nodes and distant nodes. In other words, CL can make up for the shortcoming of shallow GNN layers and potentially help graph MAE surpass the constraints of the local receptive field and acquire global information, letting graph nodes gain useful knowledge from nodes or edges that are out of the GNN's aggregation scope.
Note that as the number of training epochs increases, the similarity between the target node and the distant nodes does not continue to increase. As shown in Figure 4, after a certain number of epochs (i.e., epoch=90), the similarity tends to stabilize in the range (0.4, 0.6). Therefore, the model will not face the over-smoothing issue.
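A small sketch of the probe behind Figure 4 (our reconstruction of the described procedure, using NetworkX for hop computation; the node indices of the graph are assumed to match the rows of the embedding matrix):

```python
import random

import networkx as nx
import torch
import torch.nn.functional as F

def five_hop_similarity(graph, emb, target, num_samples=10):
    """Average cosine similarity between a target node's embedding and a random
    sample of nodes exactly 5 hops away."""
    lengths = nx.single_source_shortest_path_length(graph, target, cutoff=5)
    distant = [n for n, d in lengths.items() if d == 5]
    if not distant:
        return float("nan")
    sample = random.sample(distant, min(num_samples, len(distant)))
    sims = F.cosine_similarity(emb[target].unsqueeze(0), emb[sample], dim=-1)
    return sims.mean().item()
```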
**The shared encoder effectively transfers global information.** We study the impact of a shared versus an unshared encoder on our model to evaluate whether the shared encoder can pass global information. Therefore, we conduct node classification on Cora, Citeseer, and PubMed with different types of encoders; Table 8 presents the accuracy results. We can find that "MAE Encoder" outperforms the other two independently parameterized encoders by a significant margin, but does not surpass the shared-parameter encoder. Kindly note that only using "MAE Encoder" means our method degenerates to GraphMAE (Zhou et al., 2018). The "Con. Encoder" does not perform as well as expected, which may be due to the excessive corruption of the input graph caused by a high mask ratio, leading to the failure of the contrastive encoder. "Fusion Encoder" represents the average sum of the embeddings generated by the "MAE Encoder" and "Con. Encoder". Naturally, "Fusion Encoder" may suffer from a collapsed contrastive encoder, which results in suboptimal results. It is particularly important that the "Shared Encoder" achieves the best classification performance, which means CL can convey global information to the MAE module through the shared-parameter encoder, aiding MAE in perceiving long-range node semantics.
### The Training Efficiency of GCMAE (RQ3)
In order to study whether unifying CL and graph MAE will increase the training time, we conduct node classification on 4 datasets: Cora, Citeseer, PubMed, and Reddit, and report the total time consumption. As comparison methods, we choose CCA-SSG (Zhou et al., 2018), which has the best performance on node classification among the contrastive methods, our backbone GraphMAE (Zhou et al., 2018), and MaskGAE (Zhou et al., 2018), which has the best performance among the MAE methods. Table 9 shows the time consumption of all methods under the parameter setting with the highest node classification accuracy, i.e., the sum of the pre-training time and the fine-tuning time for downstream tasks.
We can observe that CCA-SSG has the least time consumption, which is due to the use of _canonical correlation analysis_ to optimize the calculation between the embeddings of the two views, which greatly reduces the time consumption caused by large matrix operations. Our GCMAE is on average 2\(\times\) faster than GraphMAE. This is because GraphMAE uses GAT (Zhou et al., 2018) as an encoder, and GAT needs to take the entire adjacency matrix as input when encoding the input graph. Even if we try to reduce the dimension of the hidden embeddings, it still introduces unacceptable time consumption when encountering large-scale graphs (e.g., Reddit with millions of nodes). Unlike GraphMAE, the overall time consumption of our method is similar to that of MaskGAE, because we both use GraphSAGE (Zhou et al., 2018) to encode node embeddings, which can sample multiple subgraphs from a large-scale graph for mini-batch training without inputting the entire graph into the network. However, our method is still slower than MaskGAE, because we reconstruct the entire adjacency matrix instead of only reconstructing partial edges like MaskGAE. Overall, the efficiency performance of GCMAE is comparable to prior works, and there is not a significant increase in time consumption due to the combination of graph MAE and CL.
### Ablation Study and Parameters (RQ4)
In this part, we conduct an ablation study for the designs of GCMAE and explore the influence of the parameters. We experiment with the task of node classification and note that the observations are similar for the other graph tasks.
**The components of GCMAE.** To study the effectiveness of each component, we compare our complete framework with GraphMAE (Zhou et al., 2018) and three variants: "w/o Contrast.", "w/o Stru. Rec.", and "w/o Disc. Loss". The results are presented in Table 10, where "w/o Contrast." means the removal of the CL loss, "w/o Stru. Rec." is our method without adjacency matrix reconstruction, and "w/o Disc. Loss" is our method without the feature discrimination loss. All results correspond to the
\begin{table}
\begin{tabular}{c|c c c} \hline \hline & Cora & Citeseer & PubMed \\ \hline GCMAE & **88.8** & **76.7** & **88.5** \\ w/o Contrast. & 87.3 & 75.7 & 87.4 \\ w/o Stru. Rec. & 86.0 & 73.5 & 86.7 \\ w/o Disc. Loss & 87.0 & 74.1 & 86.9 \\ GraphMAE & 85.5 & 72.5 & 82.5 \\ \hline \hline \end{tabular}
\end{table}
Table 10. Node classification accuracy when removing the key components of GCMAE.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Method & Cora & Citeseer & PubMed & Reddit \\ \hline CCA-SSG & **2.2(s)** & **1.9(s)** & **4.6(s)** & **0.8(h)** \\ GraphMAE & 152.8(s) & 93.1(s) & 1270.1(s) & 18.2(h) \\ MaskGAE & 26.3(s) & 40.5(s) & 52.7(s) & 2.3(h) \\ GCMAE & 28.6(s) & 55.3(s) & 508.9(s) & 2.5(h) \\ \hline \hline \end{tabular}
\end{table}
Table 9. End-to-end training time of representative methods. “s” means seconds and “h” means hours.
accuracy values (%) of node classification tasks on 3 benchmark datasets: Cora, Citeseer, and PubMed. We can find that the removal of any of these components leads to a decrease in performance. In particular, the exclusion of adjacency matrix reconstruction has the most severe impact on the model, resulting in a decrease of 3.2% in Cora, 4.3% in Citeseer, and 2.4% in PubMed. Interestingly, even if we totally remove the contrastive module, our method still outperforms the GraphMAE (Krishnan et al., 2018). This is because the adjacency matrix reconstruction provides rich graph structure information and the discrimination loss improves the feature discrimination among nodes. In other words, adjacency matrix reconstruction and discrimination loss play an extremely crucial role in our framework. In summary, all the above three components contribute significantly to the final results.
**Effect of the hyper-parameters.** To study the influence of different hyper-parameter settings on the model, we conduct sensitivity experiments on the mask rate and drop rate in GCMAE. We report the performance in Figure 5, where the \(x\)-axis, \(y\)-axis, and \(z\)-axis represent the feature mask rate \(p_{mask}\), the node drop rate \(p_{drop}\), and the F1-Score value, respectively. We can find that the variation trends on all datasets are consistent. When \(p_{mask}\) is large (0.5-0.8), the model performance remains within a satisfactory range, which has also been observed for previous graph MAE models. A higher mask rate means lower redundancy, which can help the encoder recover missing node features from the few unmasked neighboring nodes. When \(p_{mask}\) is fixed, increasing \(p_{drop}\) can improve the classification accuracy. Overall, compared to \(p_{drop}\), \(p_{mask}\) plays a decisive role in model performance: the variation of \(p_{mask}\) directly affects the experimental performance, while changes in \(p_{drop}\) do not cause significant fluctuations in the final results.
The impact of network width and depth on performance has attracted significant attention in the CL methods (Zhu et al., 2017; Chen et al., 2018). Therefore, we investigate the effects of selecting multiple network scale parameters on our model. As shown in Figure 6, increasing the network width results in a performance improvement of approximately 5.9% in Cora, 3.1% in Citeseer, and 4.0% in PubMed, as long as the hidden size does not exceed 1024 (except for PubMed where the limit is 2048). In contrast, a smaller hidden size leads to a sharp drop in performance. GCMAE achieves the best performance on most benchmarks with a 512-dimensional embedding. This means a moderate hidden size is crucial for the model to learn informative and compact node embeddings for downstream tasks.
Meanwhile, we can observe that network depth has a relatively smaller impact on performance compared to network width. Figure 6 shows that when the network has 2 layers, the accuracy is highest across all datasets. As the depth increases, the performance gradually decreases, with a decline of 30.0% in Cora, 7.8% in Citeseer, and 3.5% in PubMed when the depth reaches 8. Surprisingly, GNNs like GIN and GAT achieve good results with shallow depths, contrary to empirical results in the field of computer vision (Krishnan et al., 2018). This may be due to the fact that deep GNNs are challenging to optimize. As the depth increases, the learned representations tend to become homogeneous, thus degrading the performance.
## 6. Related Work
In this part, we review representative works on contrastive and generative methods for graph self-supervised learning (GSSL).
### Contrastive Methods for Graph SSL
Inspired by the success of CL in computer vision and natural language processing (Dosovitskiy et al., 2016; Krizhevsky et al., 2014; Krizhevsky et al., 2014), many works develop contrastive methods for graph learning (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Chen et al., 2018; Chen et al., 2018). These methods generally produce multiple corrupted views of the graph via data augmentation and maximize the similarity between these views. For instance, GraphCL (Krizhevsky et al., 2014) adopts four types of graph augmentations (i.e., node dropping, edge perturbation, attribute masking, and subgraph sampling) to incorporate different priors and learns to predict whether two graphs are generated from the same graph. To improve GraphCL, GCA (Wang et al., 2017) determines the importance of each node via its centrality and decides whether to mask a node according to its importance. The node with higher centrality is less likely to be masked. BGRL (Wang et al., 2018) proposes to conduct CL without negative samples and thus reduces model training time. DGI (Wang et al., 2018) conducts CL using patches of a graph and uses a read-out function to compute
Figure 5. The influence of the mask rate \(p_{mask}\) and the node drop rate \(p_{drop}\) on the performance of node classification.
Figure 6. The influence of the hidden layer dimension and the number of layers in the GNN models on the accuracy of our GCMAE for node classification.
graph-level embedding from the node embeddings. GRACE (Golovolovolov et al., 2017) corrupts both the graph topology and the node features such that contrastive learning can capture more information. MVGRL (Levy et al., 2017) observes that simply increasing the number of views does not improve performance and proposes to maximize the mutual information among the node and graph representations in different views. CCA-SSG (Wang et al., 2017) leverages canonical correlation analysis to speed up the computation of the contrastive loss among multiple augmented views and reduce model training time. We observe that the contrastive methods are good at capturing global information of the graph but poor at learning the local information for particular edges and nodes. Thus, by augmenting MAE with CL, our GCMAE outperforms the existing GSSL methods.
### Generative Methods for Graph SSL
Different from contrastive methods, generative methods aim to reconstruct input data from hidden embeddings via a decoder and then minimize the distance between the input graph and the reconstructed graph.
**Graph Autoencoder** (GAE) is a classical self-supervised learning method, which encodes the graph structure information into a low-dimensional latent space and then reconstructs the adjacency matrix from hidden embeddings (Beng et al., 2015; Chen et al., 2016; Chen et al., 2017). For example, DNGR (Beng et al., 2015) uses a stacked denoising autoencoder (Wang et al., 2016) to encode and decode the PPMI matrix via multi-layer perceptrons. However, DNGR ignores the feature information when encoding node embeddings. Therefore, GAE (Eli et al., 2017) utilizes GCN (Wang et al., 2017) to encode node structural information and node feature information at the same time and then uses a dot-product operation for reconstructing the input graph. Variational GAE (VGAE) (Eli et al., 2017) learns the distribution of data, where KL divergence is used to measure the distance between the empirical distribution and the prior distribution. In order to further narrow the gap between the above two distributions, ARVGA (Wang et al., 2016) employs the training scheme of a generative adversarial network (Golovolovolov et al., 2017) to address the approximation problem. RGVAE (Wang et al., 2017) further imposes validity constraints on a graph variational autoencoder to regularize the output distribution of the decoder. Unlike the previous asymmetric decoder structure, GALA (Golovolovolov et al., 2017) builds a fully symmetric decoder, which facilitates the proposed autoencoder architecture to make use of the graph structure. Later studies focused on leveraging feature reconstruction or additional auxiliary information (Wang et al., 2016; Chen et al., 2017). Unfortunately, most of them perform well mainly on a single task such as node classification or link prediction, since they are limited by their reconstruction objective. However, our GCMAE surpasses these GAE methods on various downstream tasks because it reconstructs both node features and edges.
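To make the dot-product reconstruction step concrete, the following is a minimal PyTorch-style sketch (an illustration only, not code from any of the cited methods; the function names are hypothetical):

```python
import torch
import torch.nn.functional as F

def dot_product_decode(z: torch.Tensor) -> torch.Tensor:
    """Reconstruct a dense adjacency matrix from node embeddings z of shape (N, d)."""
    return torch.sigmoid(z @ z.t())

def gae_reconstruction_loss(z: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy between the reconstructed and the observed adjacency."""
    return F.binary_cross_entropy(dot_product_decode(z), adj)
```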
**Masked Autoencoders** learn graph representations by masking certain nodes or edges and then reconstructing the masked tokens (Wang et al., 2016; Wang et al., 2016; Wang et al., 2016). This strategy allows the graph to use its own structure and feature information in a self-supervised manner without expensive label annotations. Recently, GraphMAE (Golovolovolov et al., 2017) forces the model to reconstruct the original graph from redundant node features by masking node features and applying a re-masking strategy before a GNN decoder. Instead of masking node features, MaskGAE (Wang et al., 2016) selects edges as the masked tokens and then reconstructs the graph edges or a randomly masked path accordingly. MaskGAE achieves superior performance in link prediction tasks compared to other graph MAE methods, but its performance in classification tasks is not as satisfactory as that of feature-based MAEs because it does not reconstruct the node features. S2GAE (Wang et al., 2016) proposes a cross-correlation decoder to explicitly capture the similarity of the relationship between two connected nodes at different granularities. SeeGera (Wang et al., 2016) is a hierarchical variational framework that jointly embeds nodes and features in the encoder and reconstructs links and features in the decoder, where an additional structure/feature masking layer is added to improve the generalization ability of the model. Based on the above observations, these graph MAE methods all suffer from inaccessible global information, resulting in sub-optimal performance. However, GCMAE surpasses these graph MAE methods, as unifying CL and graph MAE enjoys the benefits of both paradigms and yields higher-quality node embeddings.
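As a rough illustration of the feature-masking idea (a simplified sketch under our own conventions, not the exact masking or loss used by GraphMAE, MaskGAE, or GCMAE):

```python
import torch

def mask_node_features(x: torch.Tensor, p_mask: float):
    """Zero out the feature rows of a random subset of nodes; x has shape (N, d)."""
    mask = torch.rand(x.size(0), device=x.device) < p_mask
    x_masked = x.clone()
    x_masked[mask] = 0.0  # a simple zero "mask token"
    return x_masked, mask

def masked_feature_loss(x_rec: torch.Tensor, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mean-squared reconstruction error restricted to the masked nodes."""
    return ((x_rec[mask] - x[mask]) ** 2).mean()
```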
## 7. Conclusion
In this paper, we observed that the two main paradigms for graph self-supervised learning, i.e., masked autoencoder and contrastive learning, have their own limitations but complement each other. Thus, we proposed the GCMAE framework to jointly utilize MAE and contrastive learning for enhanced performance. GCMAE comes with tailored model designs including a shared encoder for information exchange, discrimination loss to tackle feature smoothing, and adjacency matrix reconstruction to learn global information of the graph. We conducted extensive experiments to evaluate GCMAE on various graph tasks. The results show that GCMAE outperforms state-of-the-art GSSL methods by a large margin and is general across graph tasks. |
2302.05521 | The Conformal Laplacian and Positive Scalar Curvature Metrics on
Manifolds with Boundary | We give examples of spin $4$-manifolds with boundary that do not admit
metrics of positive scalar curvature and nonnegative mean curvature. These
manifolds in fact have the stronger property that the conformal Laplacian with
appropriate boundary conditions is never positive. The obstruction to the
positivity of the conformal Laplacian is given by a real-valued $\xi$-invariant
associated to the APS theorem for the twisted Dirac operator. We use analytic
techniques related to the prescribed scalar curvature problem in conformal
geometry to move beyond earlier work where the metric is a product near the
boundary. | Steven Rosenberg, Daniel Ruberman, Jie Xu | 2023-02-10T21:50:20Z | http://arxiv.org/abs/2302.05521v6 | # The conformal Laplacian and positive scalar curvature metrics on manifolds with boundary
###### Abstract.
We give examples of spin \(4\)-manifolds with boundary that do not admit metrics of positive scalar curvature and nonnegative mean curvature. These manifolds in fact have the stronger property that the conformal Laplacian with appropriate boundary conditions is never positive. The obstruction to the positivity of the conformal Laplacian is given by a real-valued \(\xi\)-invariant associated to the APS theorem for the twisted Dirac operator. We use analytic techniques related to the prescribed scalar curvature problem in conformal geometry to move beyond earlier work where the metric is a product near the boundary.
The second author was partially supported by NSF Grant DMS-1928930 while he was in residence at the Simons Laufer Mathematical Sciences Institute (formerly known as MSRI), as well as NSF FRG Grant DMS-1952790.
Math. Subj. Class. 2020: 53C21 (primary), 35J66, 53C27, 58J05, 58J20 (secondary).
## 1. Introduction
In this paper, we study the \(\eta\)-invariants of the \(\
## 2. Positivity of the Conformal Laplacian in the Interior and on the Boundary
Let \((M^{n},g)\) be a connected, compact, orientable smooth Riemannian manifold with smooth boundary \(\partial M,\)\(n=\dim M\geqslant 3\). In this section we prove that if the conformal Laplacian \(\Box_{g}\) with Robin boundary conditions is positive on \(M\), then the first eigenvalue \(\zeta_{1}\) of the conformal Laplacian \(\Box_{\mathfrak{i}^{*}g}\) on \((\partial M,\mathfrak{i}^{*}g)\) is positive (Thm. 2.2). This yields various geometric and analytic obstructions to extending a metric on \(\partial M\) to a metric on \(M\) with positive conformal Laplacian (Thm. 2.1). Thm. 2.2 will be used in SS4.
### Notation and Basic Setup
We have the inclusion map \(\imath:\partial M\hookrightarrow M.\) The unit outward normal vector field along \(\partial M\) is denoted by \(\nu.\)\(R_{g}\) is the scalar curvature of \(g\), \(\mathrm{Ric}_{g}(X,Y)\) is the Ricci curvature tensor, \(h_{g}=\mathrm{Tr}(A_{g})\) is the mean curvature of \(\partial M\), where \(A_{g}\) is the second fundamental form on \(\partial M\): \(A_{g}(X,Y)=\left(\nabla_{X}Y\right)^{\perp}\) for \(X,Y\in T(\partial M)\) where \(\nabla\) is the Levi-Civita connection of \(g\) and \(\perp\) is projection to the normal direction.
The positive definite Laplace-Beltrami operator is
\[-\Delta_{g}u=-\mathrm{Tr}(\mathrm{Hess}_{u})=-\frac{1}{\sqrt{\det(g)}}\partial_{i}\left(g^{ij}\sqrt{\det(g)}\,\partial_{j}u\right).\]
For
\[a=\frac{4(n-1)}{n-2},p=\frac{2n}{n-2},\]
the conformal Laplacian is
\[\Box_{g}u:=-a\Delta_{g}u+R_{g}u;\]
other sources define \(\Box_{g}\) to be \(a^{-1}\) times our \(\Box_{g}.\) We use a Robin boundary operator on \(\partial M\):
\[B_{g}u:=\frac{\partial u}{\partial\nu}+\frac{2}{p-2}h_{g}u.\]
\(\eta_{1}=\eta_{1,g}\) is the first eigenvalue of conformal Laplacian with respect to the Robin condition \(B_{g}u=0\):
\[\Box_{g}\varphi=\eta_{1}\varphi\text{ in }M,B_{g}\varphi=0\text{ on }\partial M. \tag{2}\]
\(\zeta_{1}\) is the first eigenvalue of the conformal Laplacian on the closed manifold \(\partial M:\Box_{\mathfrak{i}^{*}g}\psi=\zeta_{1}\psi\) on \(\partial M.\)
Let \(\lambda(\partial M,i^{*}g)\) be the Yamabe invariant on \(\partial M\). We have
\[\Box_{g}>0\Leftrightarrow\eta_{1}>0,\ \ \Box_{\mathfrak{i}^{*}g}>0 \Leftrightarrow\zeta_{1}>0\Leftrightarrow\lambda(\partial M,i^{*}g)>0.\]
Here \(\mathrm{sgn}(\zeta_{1})=\mathrm{sgn}(\lambda(\partial M,i^{*}g))\) by [18, Lemma 2.5], and both signs are conformal invariants.
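For later use we also recall the standard variational characterization of \(\eta_{1}\) (a known fact, recorded here for convenience): multiplying (2) by a test function, integrating by parts, and using the Robin condition together with \(a\cdot\frac{2}{p-2}=2(n-1)\) gives

\[\eta_{1}=\inf_{0\neq u\in H^{1}(M,g)}\frac{\int_{M}\left(a|\nabla u|_{g}^{2}+R_{g}u^{2}\right)d\mathrm{Vol}_{g}+2(n-1)\int_{\partial M}h_{g}u^{2}\,dS_{g}}{\int_{M}u^{2}\,d\mathrm{Vol}_{g}},\]

where \(dS_{g}\) denotes the volume form of \(\imath^{*}g\) on \(\partial M\). In particular, \(\eta_{1}>0\) exactly when this functional is positive for every nonzero \(u\).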
Let \(X\) be a smooth vector field and \(f\) a smooth function on \(M\). In a local orthonormal frame \(\{e_{1},\ldots,e_{n}\}\), the derivative \(\nabla_{X}f=X(f)=df(X)\) has the local expression \(X^{i}\nabla_{i}f\) for \(X=X^{i}e_{i}\). In particular, \(\nabla_{n}f=\frac{\partial f}{\partial\nu}\) on \(\partial M\).
The prescribed scalar curvature problem both in the interior \(M\) and on the boundary \(\partial M\) is treated in [27]. We recall that for a given function \(R\in\mathcal{C}^{\infty}(M)\), the existence of a metric \(\tilde{g}\) conformal to \(g\) with \(R_{\tilde{g}}=R\) reduces to the existence of a positive, smooth solution of the PDE
\[\Box_{g}u=Ru^{p-1}\text{ in }M,u=f\text{ on }\partial M. \tag{3}\]
for some appropriate choice of \(f\) on \(\partial M\). Here \(\tilde{g}=u^{p-2}g=u^{4/(n-2)}g\).
### Results on Ric and \(A_{g}\)
Motivated by the relationship between \(R_{g}|_{\partial M}\), \(R_{t^{*}g}\), \(A_{g}\), and \(\operatorname{Ric}(\nu,\nu)\) (see (16)), we give some results on Ricci curvature and the second fundamental form under conformal change. We will use two related notations for conformal changes on \(M\):
\[\tilde{g}=u^{p-2}g=e^{2\phi}g.\]
At \(p\in\partial M\), we have a local orthonormal frame \(\{e_{1},\ldots,e_{n-1},\nu\}:=\{e_{1},\ldots,e_{n-1},e_{n}\}\) of \(T_{p}M\). In terms of \(\phi\), the well-known transformation law for the Ricci curvature tensor is
\[\operatorname{Ric}_{\tilde{g},ij}=\operatorname{Ric}_{g,ij}-(n-2)\left(\nabla _{i}\nabla_{j}\phi-\nabla_{i}\phi\nabla_{j}\phi\right)-\left(\Delta_{g}\phi+(n -2)|d\phi|_{g}^{2}\right)g_{ij}. \tag{4}\]
We are interested in the sign of \(\operatorname{Ric}_{\tilde{g},nn}=\operatorname{Ric}_{\tilde{g}}(\nu,\nu)\); see Theorem 2.1. Our immediate goal is to show the existence of a smooth solution of the following PDE, which contains key terms in (4),
\[-(n-2)\nabla_{n}\nabla_{n}\phi-\Delta_{g}\phi=F\text{ in }M,\frac{\partial \phi}{\partial\nu}=0\text{ on }\partial M, \tag{5}\]
provided the smooth function \(F\) satisfies \(\int_{M}Fd\text{Vol}_{g}=0\).
We start with an easier result.
**Lemma 2.1**.: _For \(f\in\mathcal{C}^{\infty}(M)\), the PDE_
\[-(n-2)\nabla_{n}\nabla_{n}\phi-\Delta_{g}\phi+\phi=f\text{ in }M,\frac{\partial \phi}{\partial\nu}=0\text{ on }\partial M, \tag{6}\]
_admits a solution \(\phi\in\mathcal{C}^{\infty}(M)\)._
Proof.: The corresponding bilinear form with respect to the operator in (6) is
\[B[u,v]=(n-2)\int_{M}\nabla_{n}u\cdot\nabla_{n}v\ d\text{Vol}_{g}+\int_{M} \nabla_{g}u\cdot\nabla_{g}v\ d\text{Vol}_{g}+\int_{M}uv\ d\text{Vol}_{g}. \tag{7}\]
Applying the standard Lax-Milgram argument to the bilinear form \(B[u,v]\), we conclude that there exists \(u\in H^{1}(M,g)\) solving (6). The smoothness of \(u\) follows from standard elliptic regularity arguments.
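For completeness, we note that the Lax–Milgram hypotheses for the form (7) are immediate: since \(|\nabla_{n}u|\leqslant|\nabla_{g}u|\) pointwise,

\[B[u,u]\geqslant\int_{M}|\nabla_{g}u|^{2}\,d\mathrm{Vol}_{g}+\int_{M}u^{2}\,d\mathrm{Vol}_{g}=\|u\|_{H^{1}(M,g)}^{2},\qquad|B[u,v]|\leqslant(n-1)\|u\|_{H^{1}(M,g)}\|v\|_{H^{1}(M,g)},\]

so \(B\) is coercive and bounded on \(H^{1}(M,g)\), while \(v\mapsto\int_{M}fv\,d\mathrm{Vol}_{g}\) is a bounded linear functional.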
We can also treat nontrivial boundary conditions.
**Corollary 2.1**.: _For \(f\in\mathcal{C}^{\infty}(M)\) and \(f^{\prime}\in\mathcal{C}^{\infty}(\partial M)\), the PDE_
\[-(n-2)\nabla_{n}\nabla_{n}\phi-\Delta_{g}\phi+\phi=f\text{ in }M,\frac{ \partial\phi}{\partial\nu}=f^{\prime}\text{ on }\partial M, \tag{8}\]
_admits a unique solution \(\phi\in\mathcal{C}^{\infty}(M)\)._
Proof.: We can find a function \(S\in\mathcal{C}^{\infty}(M)\) such that \(\frac{\partial S}{\partial\nu}=f^{\prime}\) on \(\partial M\). For \(\Phi=\phi-S,\)\(\Phi\) satisfies
\[-(n-2)\nabla_{n}\nabla_{n}\Phi-\Delta_{g}\Phi+\Phi=f^{\prime\prime}\text{ in }M,\frac{\partial\Phi}{\partial\nu}=0\text{ on }\partial M\]
for some smooth function \(f^{\prime\prime}\). We then apply Lem. 2.1.
Now we can solve a slightly more general form of (5). Let \(dS_{g}\) denote the volume form of \(i^{*}g\) on \(\partial M\).
**Proposition 2.1**.: _For \(F,f\in\mathcal{C}^{\infty}(M)\) such that_
\[\int_{M}Fd\text{Vol}_{g}=\int_{\partial M}fdS_{g}, \tag{9}\]
_the PDE_
\[-(n-2)\nabla_{n}\nabla_{n}\phi-\Delta_{g}\phi=F\text{ in }M,\frac{\partial \phi}{\partial\nu}=f\text{ on }\partial M, \tag{10}\]
_admits a solution \(\phi\in\mathcal{C}^{\infty}(M)\)._
This will be used in (13) below.
Proof.: The proof is essentially due to Taylor [23, Ch. 5. SS6]. We will apply the Fredholm alternative to show the existence of \(\phi\) in the Sobolev space \(H^{k+2}(M,g),k\geqslant 0\), solving (10). By standard elliptic regularity, \(\phi\) satisfies
\[\left\|\phi\right\|_{H^{k+2}(M,g)}\leqslant C\left(\left\|F\right\|_{H^{k}(M,g)}+\left\|f\right\|_{H^{k+1}(M,g)}+\left\|\phi\right\|_{H^{k}(M,g)}\right),\]
so \(\phi\) will be smooth.
By [23, App. A, Prop. 6.7] the linear operator
\[\mathcal{T}:H^{k+2}(M,g)\to H^{k}(M,g)\oplus\left(H^{k+1}(M,g)\cap H^{k+\frac{ 1}{2}}(\partial M,\imath^{*}g)\right),\]
\[\mathcal{T}\phi=\left(-(n-2)\nabla_{n}\nabla_{n}\phi-\Delta_{g}\phi,\frac{ \partial\phi}{\partial\nu}\right),\]
has closed range. By (9), \((1,1)\in\mathcal{C}^{\infty}(M)\oplus\mathcal{C}^{\infty}(\partial M)\) is orthogonal to the range of \(\mathcal{T}\).
We now show that \((1,1)\) spans the orthogonal complement to the range of \(\mathcal{T}\). This is equivalent to showing that \(\mathcal{T}\) is a Fredholm operator of index zero. Define
\[\mathcal{T}^{{}^{\prime}}:H^{k+2}(M,g)\to H^{k}(M,g)\oplus\left(H^{k+1}(M,g) \cap H^{k+\frac{1}{2}}(\partial M,\imath^{*}g)\right),\]
\[\mathcal{T}^{{}^{\prime}}\phi=\left(-(n-2)\nabla_{n}\nabla_{n}\phi-\Delta_{g} \phi+\phi,\frac{\partial\phi}{\partial\nu}\right),\]
and define the compact operator \(\mathcal{K}\) by
\[\mathcal{K}:H^{k+2}(M,g)\to H^{k}(M,g)\oplus\left(H^{k+1}(M,g)\cap H^{k+ \frac{1}{2}}(\partial M,\imath^{*}g)\right),\mathcal{K}\phi=(\phi,0)\,.\]
By Cor. 2.1, \(\mathcal{T}^{\prime}\) is an isomorphism. Since \(\mathcal{T}^{\prime}=\mathcal{T}+\mathcal{K}\), the index of \(\mathcal{T}\) equals the index of the isomorphism \(\mathcal{T}^{\prime}\), which is zero.
We use these results to investigate the relationship between \(R_{\tilde{g}}\) on \(\partial M\) and \(R_{\imath^{*}\tilde{g}}\) for a conformal change \(\tilde{g}=u^{p-2}g\). We want a solution of (10) for \(f=0\) and an appropriate \(F\). We know we need \(\int_{M}F\ d\mathrm{Vol}_{g}=0.\) To construct \(F\), define
\[K:=\max_{x\in\partial M}\left\{\left(2\mathrm{Ric}_{g}(\nu,\nu)+|A_{g}|^{2}-h _{g}^{2}\right)|_{x}\right\},\ K_{1}:=\max_{x\in\partial M}\{\mathrm{Ric}_{g} (\nu,\nu)|_{x}\}. \tag{11}\]
We first find \(F\) on \(\partial M\) such that
\[2F+K<-1\ \text{and}\ F+K_{1}<-1\ \text{on}\ \partial M.\]
Set \(M_{0}=\{x\in M:\mathrm{dist}_{g}(x,\partial M)<\epsilon\}\) for some \(\epsilon>0\) to be determined. We then take \(F=c>0\) to be a positive constant on \(M-M_{0}\). We allow \(F\leqslant 0\) on \(M_{0}\). For \(\epsilon\) small enough, we can take \(c\) to also be small and still satisfy \(\int_{M}F\ d\mathrm{Vol}_{g}=0\), since the volume of \(M_{0}\) is small. It follows that we can construct \(F\) such that
\[\int_{M}Fd\mathrm{Vol}_{g}=0,\|F\|_{\mathcal{L}^{1}(M,g)}\ll 1,2F+K<-1\ \text{on}\ \partial M\ \text{and}\ F+K_{1}<-1\ \text{on}\ \partial M. \tag{12}\]
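To see that (12) can indeed be arranged, note the elementary bookkeeping (added here for clarity): if, for instance, \(F\leqslant 0\) on \(M_{0}\) and \(F\equiv c>0\) on \(M\setminus M_{0}\), then \(\int_{M}F\,d\mathrm{Vol}_{g}=0\) forces

\[c\,\mathrm{Vol}_{g}(M\setminus M_{0})=\int_{M_{0}}|F|\,d\mathrm{Vol}_{g},\qquad\|F\|_{\mathcal{L}^{1}(M,g)}=2\int_{M_{0}}|F|\,d\mathrm{Vol}_{g}\leqslant 2\left(\sup_{M_{0}}|F|\right)\mathrm{Vol}_{g}(M_{0}),\]

and the right-hand side is small once \(\epsilon\) is small, since \(F\) on \(M_{0}\) only needs to satisfy the two boundary inequalities and can therefore be chosen with a bound independent of \(\epsilon\).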
Fix one such \(F\). By Proposition 2.1, there is a smooth function \(\phi_{0}\) such that
\[-(n-2)\nabla_{n}\nabla_{n}\phi_{0}-\Delta_{g}\phi_{0}=F\ \text{in}\ M,\frac{ \partial\phi_{0}}{\partial\nu}=0\ \text{on}\ \partial M. \tag{13}\]
Although \(\tilde{g}=e^{2\phi_{0}}g\) for this \(\phi_{0}\) does not necessarily have \(R_{\tilde{g}}>0\), we can show that \(\mathrm{Ric}_{\tilde{g}}(\nu,\nu)=\mathrm{Ric}_{\tilde{g},nn}<-1\) everywhere on \(\partial M\). To see this, we first take local coordinates on a collar of the
boundary such that \(\partial_{x^{n}}=\nu\), \(g_{in}=0\) for \(i\neq n\), and \(g_{nn}=1.\) By (4) on \(\partial M\) with \(i=j=n\), we have
\[\operatorname{Ric}_{\tilde{g},nn}\biggr{|}_{\partial M} =\operatorname{Ric}_{g,nn}-(n-2)\left(\nabla_{n}\nabla_{n}\phi_{0} -\nabla_{n}\phi_{0}\nabla_{n}\phi_{0}\right)-\left(\Delta_{g}\phi_{0}+(n-2)|d \phi_{0}|_{g}^{2}\right)g_{nn}\] \[=\operatorname{Ric}_{g,nn}-(n-2)\nabla_{n}\nabla_{n}\phi_{0}- \Delta_{g}\phi_{0}-(n-2)|d\phi_{0}|_{g}^{2}\] \[\leqslant K_{1}+F-(n-2)|d\phi_{0}|_{g}^{2}<-1.\]
In addition, we can show that \(R_{\tilde{g}}<R_{t^{*}\tilde{g}}\) on \(\partial M\). We first recall that for \(\tilde{g}=e^{2\phi}g\), the second fundamental form transforms by
\[A_{\tilde{g}}(X,Y)=e^{\phi}A_{g}(X,Y)+\frac{\partial\phi}{\partial\nu}e^{\phi }g(X,Y),\]
where both sides are multiples of the outward unit normal vector for \(g\). For \(\phi=\phi_{0}\) in (13), it follows that
\[A_{\tilde{g}}(X,Y)=e^{\phi_{0}}A_{g}(X,Y). \tag{14}\]
The mean curvature under conformal change satisfies
\[h_{\tilde{g}}=e^{-\phi_{0}}h_{g}+\frac{\partial\phi_{0}}{\partial\nu}e^{-\phi_{0}}=e^{-\phi_{0}}h_{g}. \tag{15}\]
By taking the complete trace of the Gauss-Codazzi equation for \((M,\partial M)\), we obtain
\[R_{g}=2\operatorname{Ric}_{g}(\nu,\nu)+R_{t^{*}g}+|A_{g}|^{2}-h_{g}^{2}\text{ on }\partial M. \tag{16}\]
For any constant \(C\), the PDE (13) still holds if we replace \(\phi_{0}\) by \(\phi_{1}:=\phi_{0}+C\). We choose \(C\ll 0\) so that \(\phi_{1}<0\). For \(\tilde{g}=e^{2\phi_{1}}g\), by combining (14), (15), (16) and using \(\nabla_{n}\phi_{1}=0\) on \(\partial M\), we obtain
\[R_{\tilde{g}}\biggr{|}_{\partial M} =2\operatorname{Ric}_{\tilde{g},nn}+R_{t^{*}\tilde{g}}+|A_{ \tilde{g}}|^{2}-h_{\tilde{g}}^{2}\] \[=2\operatorname{Ric}_{g,nn}-2(n-2)\left(\nabla_{n}\nabla_{n}\phi _{1}-\nabla_{n}\phi_{1}\nabla_{n}\phi_{1}\right)-2\left(\Delta_{g}\phi_{0}+(n -2)|d\phi_{1}|_{g}^{2}\right)g_{nn}\] \[\quad\quad+R_{t^{*}\tilde{g}}+e^{2\phi_{1}}|A_{g}|^{2}-e^{-2\phi _{1}}h_{g}^{2}\] \[\leqslant R_{t^{*}\tilde{g}}+2\operatorname{Ric}_{g,nn}+|A_{g}|^{2}-h_ {g}^{2}+2F-2(n-2)|d\phi_{1}|_{g}^{2}\] \[\leqslant R_{t^{*}\tilde{g}}+K+2F-2(n-2)|d\phi_{1}|_{g}^{2}\] \[<R_{t^{*}\tilde{g}}.\]
We summarize this calculation.
**Theorem 2.1**.: _For any \((M,\partial M,g)\) of dimension at least three, there exists a conformal change \(\tilde{g}=e^{2\phi_{1}}g\) such that_
\[\operatorname{Ric}_{\tilde{g}}(\nu,\nu)<-1\text{ on }\partial M\text{ and }R_{\tilde{g}}<R_{t^{*}\tilde{g}}\text{ on }\partial M. \tag{17}\]
**Remark 2.1**.: This is a purely geometric result. In particular, if \(R_{\tilde{g}}>0\) on \(\partial M\), then \(R_{t^{*}\tilde{g}}>0\). As an analytic corollary, \(R_{\tilde{g}}>0\) implies the Yamabe invariant of \(\partial M\) is positive. We show in Thm. 2.2 that this Yamabe invariant is positive under the weaker hypothesis \(\eta_{1}>0\).
From now on we assume \(\eta_{1}>0\).
We can refine the last calculation to determine the sign of the Yamabe invariant for \((\partial M,i^{*}g)\), or equivalently the sign of the first eigenvalue \(\zeta_{1}\) of the conformal Laplacian \(\square_{i^{*}g}\).
Since \(\eta_{1}>0\), by the main theorem of [24], there exists a conformal change \(\tilde{g}=e^{2\phi_{2}}g\) such that \(R_{\tilde{g}}>0\) on \(M\) and \(h_{\tilde{g}}\equiv 0\), _i.e._,
\[R_{\tilde{g}}=e^{-2\phi_{2}}R_{g}-2(n-1)e^{-2\phi_{2}}\Delta_{g}\phi_{2}-(n-1) (n-2)e^{-2\phi_{2}}|d\phi_{2}|_{g}^{2}\geqslant B>0 \tag{18}\]
for some constant \(B\). This uses the standard equation relating \(R_{\tilde{g}}\) to \(R_{g}\). By (4), \(\operatorname{Ric}_{\tilde{g},nn}\) is unchanged if we replace \(\phi_{2}\) by
\[\phi_{3}=\phi_{2}+C,\]
for some negative constant \(C\) to be determined. By (16) for the metric \(\tilde{g}=\tilde{g}_{C}=e^{2\phi_{3}}g,\)
\[R_{\iota^{*}\tilde{g}} =R_{\tilde{g}}-2\operatorname{Ric}_{\tilde{g},nn}-|A_{\tilde{g}}| ^{2}+h_{\tilde{g}}^{2}\] \[=e^{-2C}\left(e^{-2\phi_{2}}R_{g}-2(n-1)e^{-2\phi_{2}}\Delta_{g} \phi_{2}-(n-1)(n-2)e^{-2\phi_{2}}|d\phi_{2}|_{g}^{2}\right)\] \[\qquad-2\operatorname{Ric}_{\tilde{g},nn}-e^{2C}e^{2\phi_{2}}|A_{ g}|^{2}+e^{-2C}\cdot 0\] \[\geqslant e^{-2C}\cdot B-2\operatorname{Ric}_{\tilde{g},nn}-e^{2C }e^{2\phi_{2}}|A_{g}|^{2}.\]
For \(C\ll 0,\) the first term on the right hand side is arbitrarily large, the second term remains constant, and the third term is close to zero. Thus for \(C\ll 0,\) we have
\[R_{\iota^{*}\tilde{g}}>1\;\text{on}\;\partial M. \tag{19}\]
This yields:
**Theorem 2.2**.: _Assume that \(\eta_{1}>0\) for \((M^{n},\partial M,g).\)_
1. _If_ \(n\geqslant 3\)_, then the first eigenvalue_ \(\zeta_{1}\) _of the conformal Laplacian_ \(\square_{\iota^{*}g}\) _on_ \((\partial M,\iota^{*}g)\) _is positive;_
2. _If_ \(n=3\)_, then_ \(\partial M\) _is diffeomorphic to a disjoint union of_ \(2\)_-spheres._
Proof.: (i) After a conformal change, we can assume that \(R_{i^{*}\tilde{g}}\) is positive, so \(\square_{\iota^{*}\tilde{g}}\) has positive first eigenvalue. The sign of the first eigenvalue of the conformal Laplacian is a conformal invariant, so the first eigenvalue of \(\square_{\iota^{*}g}\) is positive.
(ii) Since the scalar curvature is a multiple of the Gaussian curvature on surfaces, each boundary component admits a metric of positive Gaussian curvature. By the Gauss-Bonnet theorem, each component is diffeomorphic to \(S^{2}.\)
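Explicitly, the Gauss–Bonnet step in (ii) is the following: if \(K\) denotes the Gaussian curvature of such a positively curved metric on a boundary component \(\Sigma\), then

\[0<\int_{\Sigma}K\,dA=2\pi\chi(\Sigma)\ \Longrightarrow\ \chi(\Sigma)=2,\]

and a closed orientable surface with \(\chi=2\) is diffeomorphic to \(S^{2}\).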
**Remark 2.2**.: In [14], particularly Lemma 2.4, key assumptions are that \(\partial M\) is totally geodesic and has positive Yamabe invariant, both conditions on \(\partial M.\) By Theorem 2.2, the positivity of the Yamabe invariant follows from the assumption \(\eta_{1}>0,\) an interior condition. The totally geodesic condition is considered in Thm. 3.1.
This result gives a conformal geometric obstruction on \(\partial M\) to the positivity of \(\square_{g}\) on \(M\).
**Theorem 2.3**.: _Let \((X^{n},\bar{g})\) be a closed Riemannian manifold, with \(n\geqslant 2\). Let \(\zeta_{1}\) be the first eigenvalue of the conformal Laplacian \(\square_{\bar{g}}\) on \(X\)._
1. _If_ \(n\geqslant 3\) _and_ \(R_{\bar{g}}\leqslant 0\) _everywhere on_ \(X\)_, there exists no compact manifold_ \((M,g)\) _with_ \(\eta_{1}>0\)_, and with_ \(\partial M=X\) _and_ \(\iota^{*}g\) _conformal to_ \(\bar{g}\)_._
2. _If_ \(n\geqslant 3\) _and_ \(\zeta_{1}\leqslant 0\) _on_ \(X\)_, there exists no compact manifold_ \((M,g)\) _with_ \(\eta_{1}>0\)_, and with_ \(\partial M=X\) _and_ \(\iota^{*}g\) _conformal to_ \(\bar{g}\)_._
3. _If_ \(n=2\) _and_ \(\chi(X)\leqslant 0\)_, there is no compact_ \(3\)_-manifold_ \((M,g)\) _with_ \(\eta_{1}>0\)_, and with_ \(\partial M=X\) _and_ \(\iota^{*}g\) _conformal to_ \(\bar{g}\)_; similarly, there is no_ \(3\)_-manifold_ \((M,g)\) _with_ \(\eta_{1}>0\)_, and with_ \(\partial M=X\)_,_ \(i^{*}g\) _conformal to_ \(\bar{g}\)_, and positive Yamabe invariant for_ \(\bar{g}\)_._
For the proof, (i) follows from (ii), (ii) follows from Thm. 2.2(i), and (iii) follows from Thm. 2.2(ii).
**Example 2.1**.: To restate part (i) of the Theorem geometrically, if \(R_{\bar{g}}\leqslant 0\) on \(X,\) then there exists no compact manifold \((M,g)\) with \(R_{g}>0,h_{g}\geqslant 0,\) such that \(\partial M=X\) and \(\iota^{*}g\) conformal to \(\bar{g}\). This is easy if \(g\) is a product metric near \(\partial M\): \(R_{i^{*}g}=R_{g}>0\) on \(\partial M,\) so \(\zeta_{1,i^{*}g}\geqslant 0.\) On the other hand, \(R_{\bar{g}}\leqslant 0\) implies \(\zeta_{1,\bar{g}}\leqslant 0.\) Since the sign of \(\zeta_{1}\) is a conformal invariant, \(i^{*}g\) cannot be conformal to \(\bar{g}.\) The amount of analysis in SS2 indicates the large gap between the product and nonproduct cases.
## 3. Positivity of the conformal Laplacian: A boundary analytic obstruction
A psc metric on a closed manifold \((M,g)\) has \(\eta_{1}>0\), and the same result holds on a manifold with minimal boundary. In this section, we discuss situations where a partial converse holds. In Thm. 3.1, we prove that \(\eta_{1}>0\) and totally geodesic, resp. minimal, boundary imply the existence of a conformally equivalent psc metric. This result is independent of the rest of the paper, but relates to [14] (see Remarks 2.2, 3.1). In Cor. 3.1, if \(\eta_{1}>0\), we find a conformally equivalent metric with constant positive scalar curvature both on the interior and on the boundary. Finally, in Cor. 3.2, we find a boundary analytic obstruction to \(\eta_{1}>0\), assuming \(\partial M\) is spin: \(\eta_{1}>0\) implies \(\partial M\) has no harmonic spinors. Thus the positivity of the first eigenvalue of the conformal Laplacian on \(M\) strongly influences the geometry of both \(M\) and \(\partial M.\)
**Theorem 3.1**.: _If \((M,\partial M,g)\) has \(\eta_{1}>0\) and \(\partial M\) totally geodesic, resp. minimal, then there exists a metric \(\tilde{g}\) conformal to \(g\) such that \(R_{\tilde{g}}>0\) and \(\partial M\) remains totally geodesic, resp. minimal, for \(\tilde{g}\)._
Sketch.: We treat the case where \(\partial M\) is totally geodesic; the minimal boundary case is similar. If \(R_{g}>0\) everywhere, we set \(\tilde{g}=g\) and use Thm. 2.2. If \(R_{g}<0\) somewhere, it suffices to show that given a nonconstant positive function \(S\in\mathcal{C}^{\infty}(\bar{M})\) which is a positive constant on an open interior submanifold \(\Omega\) on which \(R_{g}<0\), there exists a conformal metric \(\tilde{g}=u^{p-2}g\) such that \(R_{\tilde{g}}=S\) and \(\partial M\) is totally geodesic with respect to \(\tilde{g}\). Since \(h_{g}=0\), the Robin boundary condition reduces to a Neumann condition. By (3), we need the existence of a positive, smooth solution of
\[-a\Delta_{g}u+R_{g}u=Su^{p-1}\text{ in }M,\frac{\partial u}{\partial\nu}=0 \text{ on }\partial M. \tag{20}\]
The following argument essentially recaps Lemma 3.1, Lemma 3.2 and Theorem 3.2 of [26]. Consider the eigenvalue problem
\[-a\Delta_{g}\varphi+R_{g}\varphi=\eta_{1}\varphi\text{ in }M,\frac{\partial \varphi}{\partial\nu}=0\text{ on }\partial M.\]
Note that any scaling \(\delta\varphi\) also solves the eigenvalue problem. Since \(\eta_{1}>0\), we can fix \(\delta\) such that
\[\eta_{1}\left(\delta\varphi\right)>2^{p-2}\cdot S\left(\delta\varphi\right)^{ p-1}\text{ on }M.\]
(We require strict inequality to allow some room for a gluing procedure.) Depending on whether \((\Omega,g)\) is locally conformally flat or not, we apply Lemma 3.1 or Lemma 3.2 of [26] to construct a pair of sub-solutions and super-solutions.
More precisely, we can obtain a positive, smooth solution of the local Yamabe-type equation
\[-a\Delta_{g}u_{1}+R_{g}u_{1}=\lambda u_{1}^{p-1}=Su_{1}^{p-1}\text{ in }\Omega,u_{1}=0\text{ on }\partial\Omega,\]
by shrinking \(\Omega\) as necessary. The sub-solution is defined to be
\[u_{-}:=\begin{cases}u_{1},&\text{ in }\Omega,\\ 0,&\text{ in }M\backslash\overline{\Omega}.\end{cases}.\]
Since \(u_{-}\in H^{1}(M,g)\cap\mathcal{C}^{0}(\overline{M})\), \(u_{-}\geqslant 0\), and \(u_{-}\not\equiv 0\), \(u_{-}\) is a sub-solution in the weak sense.
Using the gluing procedure in [26, Lemma 3.2] (which uses that \(S\) is nonconstant), we can construct a function \(u_{2}\) on \(\overline{\Omega}\), which is equal to \(\delta\varphi\) in \(O=\{x\in\Omega:d(x,\partial\Omega)<\epsilon\}\) for some \(\epsilon>0\), such that
\[-a\Delta_{g}u_{2}+R_{g}u_{2}\geqslant\lambda u_{2}^{p-1}=Su_{2}^{p-1}\text{ in }\overline{\Omega}.\]
Furthermore, the gluing procedure guarantees that \(u_{2}\geqslant u_{1}\) pointwise in \(\Omega\). Define
\[u_{+}:=\begin{cases}u_{2},&\text{ in }\Omega,\\ \delta\varphi,&\text{ in }M\backslash\overline{\Omega}.\end{cases}\]
Based on the construction of \(u_{2}\), we conclude that \(u_{+}\in\mathcal{C}^{\infty}(M)\), \(u_{+}\geqslant u_{-}\geqslant 0\) on \(M\), and \(u_{+}\) is a super-solution in the classical sense. Applying the monotone iteration scheme [25, Theorem 2.4], we conclude that there exists a smooth, positive solution of (20). Finally, since the solution \(u\) of (20) satisfies the Neumann boundary condition, \(\partial M\) remains totally geodesic by (14).
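For the reader's convenience, one standard form of the monotone iteration used above is the following sketch (the precise hypotheses are those of [25, Theorem 2.4]): choose a constant \(\Lambda>\sup_{M}|R_{g}|\), set \(u_{0}=u_{-}\), and let \(u_{k+1}\) solve the linear Neumann problem

\[-a\Delta_{g}u_{k+1}+(R_{g}+\Lambda)u_{k+1}=Su_{k}^{p-1}+\Lambda u_{k}\ \text{ in }M,\qquad\frac{\partial u_{k+1}}{\partial\nu}=0\ \text{ on }\partial M.\]

Since \(R_{g}+\Lambda>0\) and \(t\mapsto St^{p-1}+\Lambda t\) is nondecreasing for \(t\geqslant 0\), the comparison principle gives \(u_{-}\leqslant u_{k}\leqslant u_{k+1}\leqslant u_{+}\), and the \(u_{k}\) converge to a smooth, positive solution of (20).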
**Remark 3.1**.: (i) With more work, we can make \(R_{\tilde{g}}\) a positive constant [24, Thm. 5.8].
(ii) As mentioned above, [14, Lemma 2.4] assumes that \(\partial M\) is totally geodesic and has positive Yamabe invariant, both conditions on \(\partial M.\) By Theorem 3.1, the positivity of the Yamabe invariant follows from the assumption \(\eta_{1}>0\), an interior condition.
We now find an analytic obstruction on \(\partial M\), assumed to be spin, to the condition \(\eta_{1}>0\), using the following result:
**Theorem 3.2**.: _[_27_, Thm. 1.1]_ _Let \((M^{n},\partial M,g)\) have \(n\geqslant 3.\) There exists a metric \(\tilde{g}\) conformal to \(g\) with \(R_{\tilde{g}}\) constant on the interior of \(M\) and \(R_{i^{*}\tilde{g}}\) constant on \(\partial M.\)_
When \(\eta_{1}>0\), Theorem 3.2 implies that \(R_{\tilde{g}}\) is a positive constant. With the help of Theorem 2.2, we can determine that \(R_{i^{*}\tilde{g}}\) is also a positive constant.
**Corollary 3.1**.: _Let \((M^{n},\partial M,g)\) have \(n\geqslant 3.\) If \(\eta_{1}>0\), there exists a conformal metric \(\tilde{g}=u^{p-2}g\) with \(R_{\tilde{g}}\) a positive constant on the interior of \(M\) and \(R_{i^{*}\tilde{g}}\) a positive constant on \(\partial M.\)_
Proof.: When \(n>3\), Theorem 2.2 gives \(\zeta_{1}>0.\) We first find a conformal factor \(\psi\) such that \(\psi^{4/(n-3)}i^{*}g\) has positive constant scalar curvature on \(\partial M\). To find \(u\) such that \(u^{4/(n-2)}g\) is as desired, we must have \(i^{*}(u^{4/(n-2)}g)=\psi^{4/(n-3)}i^{*}g\). Thus we consider the Dirichlet problem
\[-a\Delta_{g}u+R_{g}u=\lambda u^{p-1}\text{ in }M,u=\psi^{\frac{n-2}{n-3}}\text{ on }\partial M.\]
By [27, Thms. 4.3, 4.4], this PDE admits a smooth, positive solution \(u\) such that \(u^{p-2}g=u^{4/(n-2)}g\) has positive constant scalar curvature \(\lambda\) on the interior, and positive constant scalar curvature on the boundary.
When \(n=3\), we apply the uniformization theorem and Theorem 2.2 to obtain a conformal factor with positive constant Gaussian curvature on \(\partial M\). The rest follows exactly as in [27, Thm. 4.3].
Corollary 3.1 gives the analytic obstruction on \(\partial M\) mentioned above.
**Corollary 3.2**.: _Let \((M,\partial M,g)\) have \(n\geqslant 3\) with boundary \(\partial M\) spin. If \(\eta_{1}>0\), then \((\partial M,i^{*}g)\) has no nontrivial harmonic spinors._
Of course, as a boundary \(\partial M\) has vanishing \(\hat{A}\)-genus, so this obstruction is not topological.
Proof.: For \(\tilde{g}\) as in Cor. 3.1, \(R_{i^{*}\tilde{g}}\) is a positive constant. Now we apply the usual vanishing argument: by the Lichnerowicz formula, the Dirac operator \(\not{\partial}_{\partial M}\) satisfies
\[\not{\partial}_{\partial M}^{*}\not{\partial}_{\partial M}=\nabla^{*}\nabla+ \frac{1}{4}R_{i^{*}\tilde{g}},\]
where \(\nabla\) is the spinor connection associated to \(i^{*}\tilde{g}\). This implies that there are no harmonic spinors for \(i^{*}\tilde{g}\). The conclusion for \(i^{*}g\) follows, since the dimension of the space of harmonic spinors is a conformal invariant.
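In detail, the vanishing argument is the usual integration by parts: if \(\not{\partial}_{\partial M}\psi=0\) for the metric \(i^{*}\tilde{g}\), then

\[0=\int_{\partial M}\langle\not{\partial}_{\partial M}^{*}\not{\partial}_{\partial M}\psi,\psi\rangle\,dS_{\tilde{g}}=\int_{\partial M}\left(|\nabla\psi|^{2}+\tfrac{1}{4}R_{i^{*}\tilde{g}}|\psi|^{2}\right)dS_{\tilde{g}}\geqslant\tfrac{1}{4}\left(\min_{\partial M}R_{i^{*}\tilde{g}}\right)\int_{\partial M}|\psi|^{2}\,dS_{\tilde{g}},\]

which forces \(\psi=0\) since \(R_{i^{*}\tilde{g}}>0\).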
**Example 3.1**.: By [4, Thm. A] and [17, Thm. 4.5], every closed spin manifold \(X\) of dimension \(n\equiv 0,1,3,7\) (mod 8) admits a metric \(\bar{g}\) with nontrivial harmonic spinors. It is well known (see SS5.1) that the spin bordism groups \(\Omega^{\text{spin}}_{2k-1}\) are torsion in odd dimensions, so outside the case \(n\equiv 0\) (mod 8), \(NX\) (\(N\) disjoint copies of \(X\)) is the boundary of a spin manifold \(M\) for some \(N\). Then \(M\) admits no metric \(g\) with \(\eta_{1}>0\) and \(i^{*}g\) conformal to \(\bar{g}\) on each component of \(NX.\) In particular, \(\bar{g}\) on \(NX\) cannot be extended to a psc metric on \(M\) with \(NX\) minimal.
## 4. Positivity of the conformal Laplacian: A quasi-topological obstruction
In this section, we find a "quasi-topological" obstruction to a spin manifold with boundary admitting a metric with \(\eta_{1}>0\), namely a \(\mathbb{R}\)-valued \(\xi\)-invariant whose reduction \(\mathrm{mod}\ \mathbb{Z}\) is topological. As usual, this is also an obstruction to psc metrics with minimal or constant positive mean curvature on the boundary.
Let \((M^{n},g),n=4k\), be a spin manifold with boundary \(\partial M\), which is automatically spin. Let \(S=S^{+}\oplus S^{-}\) be the spinor bundle on \(M\), and let \(\not{\partial}=\not{\partial}_{g}:\Gamma(S^{+})\to\Gamma(S^{-})\) be the Dirac operator on positive spinors on \(M\). Let \((E=E_{\alpha},\nabla)\) be a flat bundle with connection associated to a unitary representation \(\alpha\) of \(\pi_{1}(M)\). The coupled Dirac operator is \(\not{\partial}_{g}^{E}=\not{\partial}\otimes\mathrm{Id}+\mathrm{Id}\otimes \nabla:\Gamma(S^{+}\otimes E)\to\Gamma(S^{-}\otimes E)\).
If \(M\) has a metric \(g\) with \(\eta_{1}>0\), then we can conformally change \(g\) to \(\tilde{g}\) so that \(R_{\tilde{g}}>0\) and \(h_{\tilde{g}}=0\)[26, Thm. 1.2].
**Lemma 4.1**.: _Let \((M,g)\) have \(\eta_{1}>0\), and let \(\tilde{g}\) be a conformally equivalent metric with \(R_{\tilde{g}}>0\) and \(h_{\tilde{g}}=0.\) Then \(\mathrm{ind}(\not{\partial}_{\tilde{g}})=0\) for the Dirac operator with APS boundary conditions, and \(\mathrm{ind}(\not{\partial}_{\tilde{g}}^{E})=0\) for the coupled Dirac operator with APS boundary conditions._
Proof.: The index \(\mathrm{ind}(\not{\partial}_{\tilde{g}})\) of the Dirac operator with APS boundary conditions vanishes by [16, Thm. 4]: If \(h_{\tilde{g}}\geqslant 0\), then the smallest eigenvalue \(\lambda_{1}\) of \(\not{\partial}^{E}\) with APS boundary conditions satisfies
\[\lambda_{1}^{2}\geqslant\frac{n+1}{4n}\min_{M}R_{\tilde{g}}.\]
Since \(R_{\tilde{g}}>0\), we cannot have \(\lambda_{1}=0\).
For the coupled Dirac operator, we just follow the proof of this theorem, replacing spinors \(\psi\in\Gamma(S)\) by sections \(\psi\otimes s:=a^{ij}\psi_{j}\otimes s_{i}\in\Gamma(S\otimes E)\). We can assume \(\{\psi_{j}\},\{s_{i}\}\) are local orthonormal parallel sections of \(S,E\), resp., at a fixed center point \(x\) of a coordinate chart \(U\), with \(a^{ij}\in C^{\infty}(U)\), by taking synchronous coordinates: take orthonormal frames of \(S\) and \(E\) at \(x\), and parallel translate them out along (_e.g.,_ geodesic) rays starting at \(x\) and filling out \(U\). In addition, we assume that \(\{\psi_{j}\}\) is constructed in the standard way from a synchronous orthonormal frame \(\{e_{k}\}\). Specifically, we need equations (1) - (17) and (the \(\geqslant\) part of) Thm. 4 in [16] for \(\psi\otimes s\).
At \(x\), the key simplifications are (dropping the \(\tilde{g}\))
\[\not{\partial}^{E}(\psi\otimes s) =\not{\partial}(a^{ij}\psi_{j})\otimes s_{i}=e_{k}(a^{ij})e_{k} \cdot\psi_{j}\otimes s_{i},\] \[\nabla_{X}^{S\otimes E}(\psi\otimes s) :=(\nabla_{X}^{S}\otimes\mathrm{Id}+\mathrm{Id}\otimes\nabla_{X} ^{E})(a^{ij}\psi_{j}\otimes s_{i})=\nabla_{X}^{S}(a^{ij}\psi_{j})\otimes s_{ i}=X(a^{ij})\cdot\psi_{j}\otimes s_{i}.\]
Here the dot is Clifford multiplication, and we use \(\not{\partial}\psi_{j}=e_{k}\nabla_{e_{k}}^{S}\psi_{j}=0\) and \(\nabla^{E}s_{i}=0\) at \(x\). For example, for \(X\in TM\),
\[X\langle\psi\otimes s,\phi\otimes s^{\prime}\rangle =X\left(\langle a^{ij}\psi_{j},b^{kl}\psi_{l}\rangle_{S}\langle s _{i},s_{k}\rangle_{E}\right)=X\langle a^{ij}\psi_{j},b^{il}\psi_{l}\rangle_{S}\] \[=\sum_{i}\langle\nabla_{X}^{S}(a^{ij}\psi_{j}),b^{il}\psi_{l} \rangle+\sum_{i}\langle a^{ij}\psi_{j},\nabla_{X}^{S}(b^{il}\psi_{l})\rangle\] \[=\langle\nabla_{X}^{S\otimes E}(\psi\otimes s),\phi\otimes s^{ \prime}\rangle+\langle\psi\otimes s,\nabla_{X}^{S\otimes E}(\phi\otimes s^{ \prime})\rangle,\]
where the second line uses [16, (1)]: \(X\langle\psi,\phi\rangle_{S}=\langle\nabla_{X}^{S}\psi,\phi\rangle+\langle \psi,\nabla_{X}^{S}\phi\rangle.\) Thus we obtain the twisted version of [16, (1)] from the untwisted version.
Similarly, we get the twisted Lichnerowicz formula
\[(\not{\partial}^{E})^{*}\not{\partial}^{E}=(\nabla^{S\otimes E})^{*}\nabla^{S \otimes E}+\frac{R}{4}\otimes\mathrm{Id}=\left((\nabla^{S})^{*}\nabla^{S}+ \frac{R}{4}\right)\otimes\mathrm{Id} \tag{21}\]
from the Lichnerowicz formula \(\not{\partial}^{*}\!\!\not{\partial}=(\nabla^{S})^{*}\nabla^{S}+\frac{R}{4}.\) As a final example, in our notation [16, (9)] is
\[\not{\partial}^{S|_{\partial M}}\psi=\frac{n}{2}h\psi-\nu\cdot\not{\partial}^{S }\psi-\nabla^{S}_{\nu}\psi, \tag{22}\]
where \(\nu\) is the unit inward normal vector on \(\partial M\). Then by (22),
\[\not{\partial}^{(S\otimes E)|_{\partial M}}(\psi\otimes s) =\not{\partial}^{(S\otimes E)|_{\partial M}}(a^{ij}\psi_{j} \otimes s_{i})=\not{\partial}^{S|_{\partial M}}(a^{ij}\psi_{j})\otimes s_{i}\] \[=\left(\frac{n}{2}ha^{ij}\psi_{j}-\nu\cdot\not{\partial}^{S}(a^{ij }\psi_{j})-\nabla^{S}_{\nu}(a^{ij}\psi_{j})\right)\otimes s_{i}\] \[=\frac{n}{2}h(\psi\otimes s)-\nu\cdot\not{\partial}^{S\otimes E}( \psi\otimes s)-\nabla^{S\otimes E}_{\nu}(\psi\otimes s).\]
This is the twisted version of [16, (9)].
A straightforward but long check yields the twisted versions of (1)-(17), culminating in the twisted version of [16, Thm. 4]: we still have \(\lambda_{1}^{2}\geqslant[(n+1)/4n]\cdot\min_{M}R_{\tilde{g}}.\) Thus the index of \(\not{\partial}^{E}_{\tilde{g}}\) vanishes.
We next show that the index of the Dirac operator with APS boundary conditions is a conformal invariant, whether or not \(\eta_{1}>0\).
**Lemma 4.2**.: _For \(\tilde{g}\) conformal to \(g\), \(\operatorname{ind}(\not{\partial}_{g})=\operatorname{ind}(\not{\partial}_{ \tilde{g}}),\) and \(\operatorname{ind}(\not{\partial}^{E}_{g})=\operatorname{ind}(\not{\partial}^ {E}_{\tilde{g}}).\)_
Proof.: The APS theorem for manifolds with non-product metric near the boundary [11, SS3] is
\[\operatorname{ind}(\not{\partial}_{g})=\int_{M}\hat{A}(\Omega_{g})+\int_{ \partial M}P(g)-\frac{1}{2}(\operatorname{dim}\,\ker(\not{\partial}_{i^{*}g}) +\eta_{g}(0)), \tag{23}\]
where \(\hat{A}(\Omega_{g})\) is the \(\hat{A}\)-polynomial in the curvature of \(g\), and \(P(g)\) is the Chern-Simons form related to the \(\hat{A}\)-polynomial. As a polynomial in the Pontrjagin classes, \(\hat{A}(\Omega_{g})\) is a conformal invariant. Dim \(\ker(\not{\partial}_{i^{*}g})\) is a conformal invariant, since the Dirac operator is conformally covariant, _i.e._. for \(\tilde{g}=e^{2\phi}g\), we have \(\not{\partial}_{\tilde{g}}=e^{-(n+1)\phi/2}\not{\partial}_{g}e^{(n-1)\phi/2}\)[15, SS4], [17, SS1.4]. The eta invariant is also a conformal invariant [20]. Finally, for \(g_{1}\) a smoothing on \(N=\partial M\times[0,1]\) of the metric \(g|_{\partial M}\) at \(t=0\) to a product metric near \(t=1\), we have \(\int_{\partial M}P(g)=\int_{N}\hat{A}(\Omega_{g_{1}}).\) We can clearly construct \(\tilde{g}_{1}\) conformal to \(g_{1}\) on \(N\) in the obvious notation. As above, this implies that \(\int_{\partial M}P(g)\) is a conformal invariant. Thus \(\operatorname{ind}(\not{\partial}_{g})=\operatorname{ind}(\not{\partial}_{ \tilde{g}}).\)
For the twisted case, we check that
\[\not{\partial}^{E}_{\tilde{g}}(a^{ij}\psi_{j}\otimes s_{i}) =(e^{-(n-1)\phi/2}\not{\partial}_{g}e^{(n-1)\phi/2}\otimes \operatorname{Id}+\operatorname{Id}\otimes\nabla^{E})(a^{ij}\psi_{j}\otimes s _{i})\] \[=e^{-(n-1)\phi/2}e_{k}(e^{(n-1)\phi/2}a^{ij})e_{k}\cdot\psi_{j} \otimes s^{i};\] \[\left(e^{-(n-1)\phi/2}\not{\partial}^{E}_{g}e^{(n-1)\phi/2} \right)(a^{ij}\psi_{j}\otimes s_{i})=e^{-(n-1)\phi/2}e_{k}(e^{(n-1)\phi/2}a^{ij })e_{k}\cdot\psi_{j}\otimes s_{i}.\]
Thus the twisted Dirac operator is also conformally covariant. (This does not use that \(\nabla^{E}\) is flat.) This implies that the \(\eta\)-invariant for \(\not{\partial}^{E}\) is also a conformal invariant. As in the previous paragraph, \(\operatorname{ind}(\not{\partial}^{E})\) is therefore a conformal invariant.
Let \((Y,g)\) be a Riemannian spin manifold. The \(\xi\)-invariant associated to the unitary representation \(\alpha\) of \(\pi_{1}(Y)\) is
\[\xi_{\alpha}(Y,g)=\frac{1}{2}\left(\dim\ \ker(\not{\partial}^{E_{\alpha}}_{g})+\eta^{E_{\alpha}}_{g}(0)\right)-\frac{k}{2}\left(\dim\ \ker(\not{\partial}_{g})+\eta_{g}(0)\right), \tag{24}\]
where \(\eta^{E_{\alpha}}_{g}(0)\) is the \(\eta\)-invariant for \(\not{\partial}^{E}_{g}\), and \(k\) is the rank of \(\alpha.\) For a family of metrics \(\{g_{t}\},\)\(\xi_{\alpha}(Y,g_{t})\in\mathbb{R}\) is not independent of \(t,\) but can only jump by an integer amount when the dimension of the kernel of either \(\not{\partial}_{g_{t}}\) or \(\not{\partial}^{E}_{g_{t}}\) jumps. Thus the mod \(\mathbb{Z}\) reduction \(\tilde{\xi}_{\alpha}(Y)\) of \(\xi_{\alpha}(Y,g)\) is a smooth invariant of \((Y,\alpha)\). We call \(\xi_{\alpha}(Y,g)\) quasi-topological, and now prove that it is an obstruction to \(\eta_{1}>0\) on spin manifolds.
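To justify the mod \(\mathbb{Z}\) reduction, note (a standard observation, spelled out here) that the contribution of a single eigenvalue \(\lambda(t)\) to \(\frac{1}{2}\left(\dim\ker+\eta\right)\) is

\[\lambda<0\ \mapsto\ -\tfrac{1}{2},\qquad\lambda=0\ \mapsto\ +\tfrac{1}{2},\qquad\lambda>0\ \mapsto\ +\tfrac{1}{2},\]

so each zero-crossing of an eigenvalue of \(\not{\partial}_{g_{t}}\) or \(\not{\partial}^{E_{\alpha}}_{g_{t}}\) changes the corresponding term in (24) by an integer.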
**Theorem 4.1**.: _(i) Let \((M,\partial M,g)\) be a spin manifold with boundary with \(\eta_{1}>0\). Assume that \(\pi_{1}(\partial M)\) admits a unitary representation \(\alpha\) that extends to a unitary representation of \(\pi_{1}(M).\) Then \(\xi_{\alpha}(\partial M,g)=0.\)_
_(ii) If \((M,g)\) has \(R_{g}>0\) and \(h_{g}\geqslant 0\) and we have the representation hypothesis, then \(\xi_{\alpha}(\partial M,g)=0.\)_
Proof.: As usual, (i) implies (ii). For (i), the left hand side of (23) vanishes for \(\tilde{g}\) as in Lem. 4.1, and hence for \(g\) by Lemma 4.2. \(P(g)=P(\Omega_{g},\omega_{g})\) is an invariant polynomial in the components of the curvature and connection forms of \(g\) [11, SS3]. On the right hand side, by Corollary 3.2 there is a different metric \(g^{\prime}\) conformal to \(g\) with \(\ker(\not{\partial}_{i^{*}g^{\prime}})=0\) on \(\partial M\). By the conformal invariance of this kernel, \(\ker(\not{\partial}_{i^{*}g})=0\). Therefore,
\[\eta_{g}(0)=-2\int_{M}\hat{A}(\Omega_{g})-2\int_{\partial M}P(g) \tag{25}\]
is the integral of locally computed curvature and connection terms.
The APS formula for \(\not{\partial}_{g}^{E}\) is
\[\operatorname{ind}(\not{\partial}_{g}^{E}) =\int_{M}\hat{A}(\Omega_{g})ch(\Omega_{E})+\int_{\partial M}P^{E}( g)-\frac{1}{2}(\dim\,\ker(\not{\partial}_{i^{*}g}^{E})+\eta_{g}^{E}(0))\] \[=k\int_{M}\hat{A}(\Omega_{g})+k\int_{\partial M}P(g)-\frac{1}{2}( \dim\,\ker(\not{\partial}_{i^{*}g}^{E})+\eta_{g}^{E}(0)).\]
This follows from
\[P^{E}(g) =c\int_{0}^{1}\operatorname{Tr}(\omega_{t}^{S\otimes E},\Omega_{t }^{S\otimes E},\dots,\Omega_{t}^{S\otimes E})dt=c\int_{0}^{1}\operatorname{ Tr}(\omega_{t}^{S}\otimes\operatorname{Id},\Omega_{t}^{S}\otimes\operatorname{Id}, \dots,\Omega_{t}^{S}\otimes\operatorname{Id})dt\] \[=ck\int_{0}^{1}\operatorname{Tr}(\omega_{t}^{S},\Omega_{t}^{S}, \dots,\Omega_{t}^{S})dt\]
where \(c\) is a dimension constant, in the usual notation for Chern-Simons forms. As in the last paragraph, we can assume that \(\operatorname{ind}(\not{\partial}_{g}^{E})=0\). The argument in Cor. 3.2 applies to \(\not{\partial}_{i^{*}g}^{E}\), by the twisted Licherowicz formula (21). Thus we can also assume \(\dim\,\ker(\not{\partial}_{i^{*}g}^{E})=0\). We obtain
\[\eta_{g}^{E}(0)=-2k\int_{M}\hat{A}(\Omega_{g})-2k\int_{\partial M}P(g). \tag{26}\]
From (25), (26), we get
\[\xi_{\alpha}(\partial M,g) =\frac{1}{2}\eta_{g}^{E_{\alpha}}(0)-\frac{k}{2}\eta_{g}(0)\] \[=-k\left(\int_{M}\hat{A}(\Omega_{g})+\int_{\partial M}P(g)\right) +k\left(\int_{M}\hat{A}(\Omega_{g})+\int_{\partial M}P(g)\right)\] \[=0.\]
**Remark 4.1**.: (i) The Theorem implies that the nonvanishing of the topological invariant \(\tilde{\xi}_{\alpha}(Y)\) is an obstruction to \(\eta_{1}>0\), but in practice computing \(\tilde{\xi}_{\alpha}(Y)\) comes down to computing \(\xi_{\alpha}(Y,g)\) and reducing mod \(\mathbb{Z}\).
(ii) (25) is proved in [2, Thm. 3.9], if \(M\) admits a psc metric \(g\) which is a product near \(\partial M.\) Such a metric has vanishing second fundamental form, so \(h_{g}=0\). Thus these metrics have \(\eta_{1}>0\), and so form a special case of the Theorem.
## 5. Manifolds with \(3\)-dimensional lens space boundaries
There are explicit formulas for the \(\tilde{\xi}\)-invariants of spherical space forms; these are particularly tractable for lens spaces with odd order fundamental group. Making use of these formulas, we present some concrete examples of spin \(4\)-manifolds with boundary a disjoint union of lens spaces that cannot admit a metric with \(\eta_{1}>0\), and hence have no psc metric with minimal boundary (or a metric with nonnegative scalar curvature and positive mean curvature).
The lens space \(L(p,q)\) is the quotient of \(S^{3}\subset\mathbb{C}^{2}\) by the action of \(\mathbb{Z}_{p}\) generated by
\[T\cdot(z_{1},z_{2})=(\omega z_{1},\omega^{q}z_{2}),\text{ where }\omega=e^{2\pi i /p}.\]
Here \(T\) is a fixed generator of \(\mathbb{Z}_{p}\), which can also be viewed as a fixed generator of \(\pi_{1}(L)\). The round metric on \(S^{3}\) descends to a metric on \(L\), which evidently has positive scalar curvature. The \(\tilde{\xi}\)-invariants with respect to this standard metric have been computed by Gilkey. The precise formula is a little unwieldy, but we only need a non-vanishing result.
**Lemma 5.1** (Gilkey).: _Let \(\alpha:\pi_{1}(L(p,q))\to\operatorname{U}(1)\) take \(T\) to \(\omega\). Then, with respect to any metric \(g\) of positive scalar curvature, \(\xi_{\alpha}(L(p,q),g)\neq 0\)._
Proof.: For \(p\) odd (so there is a unique spin structure), Gilkey [12, Theorem 2.5 (c)] shows that with respect to the standard round metric \(g_{0}\) on \(L(p,q)\)
\[\xi_{\alpha}(L(p,q),g_{0})\;=\;-(d/p)\cdot(p+1)/2\pmod{\mathbb{Z}},\]
where \(d\) is a certain integer relatively prime to \(48\) and \(p\). One can easily see that the right hand side is never zero modulo the integers, and hence \(\xi_{\alpha}(L(p,q),g)\) is not zero as a real number.
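The elementary step here is the following: since \(2\cdot\frac{p+1}{2}=p+1\equiv 1\pmod{p}\), the integer \(\frac{p+1}{2}\) is invertible modulo \(p\), and \(\gcd(d,p)=1\), so

\[d\cdot\frac{p+1}{2}\not\equiv 0\pmod{p}\qquad\text{and hence}\qquad\frac{d}{p}\cdot\frac{p+1}{2}\notin\mathbb{Z}.\]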
A recent result of Bamler and Kleiner [3] shows that the space of psc metrics on any \(3\)-manifold is contractible, and in particular is path-connected. By the vanishing of the kernel of the Dirac operator on a manifold with a psc metric, there is no spectral flow for the \(\eta\)-invariant of the Dirac operator along a path of psc metrics. The same holds for the twisted Dirac operator, and so \(\xi_{\alpha}\) is constant along a path of psc metrics. It follows that \(\xi_{\alpha}(L(p,q),g)\) is non-zero for any psc metric \(g\) on \(L(p,q)\).
### A rational coboundary for lens spaces
An argument with the Atiyah-Hirzebruch spectral sequence (AHSS) [8] says that the bordism group \(\Omega^{\operatorname{spin}}_{2k+1}(BG)\) is torsion for any finite group \(G\). In other words, for any odd-dimensional spin manifold \(Y\) and unitary representation \(\alpha\) of \(\pi_{1}(Y)\) that factors through a finite group, there is a spin manifold \(M\) with a unitary representation \(\hat{\alpha}\) of its fundamental group, and an integer \(N\) with \(\partial(M,\hat{\alpha})=N(Y,\alpha)\).
Here is a brief outline of the argument. The AHSS has \(E^{2}_{p,q}\cong H_{p}(BG;\Omega^{\operatorname{spin}}_{q})\) and converges to \(\Omega^{\operatorname{spin}}_{*}(BG)\). It is standard that for a finite group, the rational homology of \(BG\) vanishes in degree greater than zero. Moreover, the Anderson-Brown-Peterson calculation of the spin cobordism group [1] (see [22, page 336] for the rational calculation) implies that \(\Omega^{\operatorname{spin}}_{q}\otimes\mathbb{Q}=0\) when \(q\) is odd. It follows that the AHSS for \(\Omega^{\operatorname{spin}}_{*}\otimes\mathbb{Q}\) collapses immediately at the \(E^{2}\) term, and so \(\Omega^{\operatorname{spin}}_{2k+1}(BG)\otimes\mathbb{Q}=0\).
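Concretely, the rational \(E^{2}\)-page is

\[E^{2}_{p,q}\otimes\mathbb{Q}\;\cong\;H_{p}(BG;\Omega^{\operatorname{spin}}_{q}\otimes\mathbb{Q})\;=\;0\quad\text{unless }p=0\text{ and }q\equiv 0\ (\mathrm{mod}\ 4),\]

so every entry in total odd degree vanishes rationally and \(\Omega^{\operatorname{spin}}_{2k+1}(BG)\otimes\mathbb{Q}=0\).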
For the lens spaces with odd order fundamental group, we can give an explicit construction. The multiple \(N\) we find is not optimal; for instance, we find a spin \(4\)-manifold \(M\) with \(\partial(M,\hat{\alpha})=p^{2}(L(p,q),\alpha)\), whereas a closer examination of the AHSS shows that \(\Omega^{\operatorname{spin}}_{3}(B\mathbb{Z}_{p})\cong\mathbb{Z}_{p}\), so \(N=p\) would do.
The construction begins with the surface \(\Sigma^{0}_{p}\) drawn (for \(p=3\)) in Figure 1, inspired by Figure 1 in [13]. It may be described as a stack of \(p\) discs, joined by \(p\) half-twisted bands as shown in the figure. The surface \(\Sigma^{0}_{p}\) has a \(p\)-fold symmetry \(\tau\), given by counterclockwise rotation by angle \(2\pi/p\) with fixed point set \(p\) points, one on each of the discs. Furthermore \(\partial\Sigma^{0}_{p}\) is \(p\) circles, cyclically permuted
by \(\tau\). Hence the symmetry extends to an action of \(\mathbb{Z}_{p}\) on the closed surface
\[\Sigma_{p}=\Sigma_{p}^{0}\cup pD^{2}.\]
Note that the symmetry \(\tau\) on \(\Sigma_{p}^{0}\) comes from the \(p\)-fold symmetry of \(S^{3}\) given by rotation around the indicated axis. Now \(S^{3}\) is a spin manifold, and it is standard that this symmetry lifts to the spin bundle of \(S^{3}\). Therefore \(\tau\) lifts to the spin bundle of the restriction of the spin structure on \(S^{3}\) to \(\Sigma_{p}^{0}\).
We claim that this equivariant spin structure \(\mathfrak{s}\) extends to \(\Sigma_{p}\) as well. To see this, it suffices to show that the induced spin structure on each boundary component of \(\Sigma_{p}^{0}\) is the bounding spin structure on \(S^{1}\) (and hence extends, uniquely, over each of the \(p\) discs that are added). By construction, \(p\) copies of the spin structure on the circle form a boundary, for they bound \(\Sigma_{p}^{0}\). But the spin cobordism group \(\Omega_{1}^{\text{spin}}\) is \(\mathbb{Z}_{2}\), and since \(p\) is odd, it follows that \((S^{1},\mathfrak{s})=0\) in \(\Omega_{1}^{\text{spin}}\). Alternatively, one can note that the normal framing on each boundary component induced by the inward normal of \(\Sigma_{p}^{0}\) is \(p-1\), which is even. Hence the \(4\)-manifold obtained by adding \(2\)-handles to \(S^{3}\times I\) along \(\partial\Sigma_{p}^{0}\) with framing \(p-1\) is spin, and contains \(\Sigma_{p}\). The spin structure on this \(4\)-manifold induces a spin structure on \(\Sigma_{p}\) extending the given one on \(\Sigma_{p}^{0}\).
To construct a bounding manifold for \(p^{2}L(p,q)\), consider the product of two copies of \(\Sigma_{p}\). There is an action of \(\mathbb{Z}_{p}\) on \((\Sigma_{p})^{2}\) where the generator, say \(T\), acts by \(\tau\) on the first factor of \(\Sigma_{p}\) and by \(\tau^{q}\) on the second factor. The fixed point set of \(T\) therefore consists of \(p^{2}\) isolated points, where the local representation is exactly the action that defines the lens space \(L(p,q)\). Now remove an invariant \(D^{4}\) neighborhood of each fixed point, and take the quotient by \(\mathbb{Z}_{p}\). The quotient is a spin manifold with boundary \(p^{2}L(p,q)\). The orientation on the lens space boundaries is opposite to the orientation implicit in the definition, so we define \(M=M(p,q)\) to be this spin manifold with orientation reversed. By construction, there is a homomorphism \(\hat{\alpha}:\pi_{1}(M)\to\mathbb{Z}_{p}\) that restricts to \(\alpha\) on each copy of \(L(p,q)\).
Combining Thm. 4.1 with the construction above gives our main example.
**Theorem 5.1**.: \(M(p,q)\) _has no metric with \(\eta_{1}>0.\) In particular, \(M(p,q)\) has no psc metric with nonnegative mean curvature._
Proof.: If \(M(p,q)\) has a metric \(g\) with \(\eta_{1,g}>0,\) then by Cor. 3.1 we can assume that \(R_{i^{*}\tilde{g}}>0\) for some \(\tilde{g}\) conformal to \(g\). By Lem. 5.1, \(\xi_{\alpha}(L(p,q),\tilde{g})\neq 0.\) By Thm. 4.1, we have \(\eta_{1,\tilde{g}}\leq 0\) and hence \(\eta_{1,g}\leq 0,\) a contradiction.
### Higher dimensions
Let \((Y,g,\alpha)\) be a \((4k-1)\)-dimensional Riemannian spin manifold with \(k>1,\) where \(\alpha\) is a representation of \(\pi_{1}(Y)\) that factors through a finite group. We know \(N(Y,\alpha)=\partial(M,\hat{\alpha})\) for some \(N\) with \(M\) spin. If \(\xi_{\alpha}(Y,g)\neq 0,\) then \(M\) admits no metric \(h\) with \(\eta_{1,h}>0\) and \(i^{*}h=g\) on each component of \(\partial M.\) Thus the \(\xi\)-invariant is an obstruction to extending \(g\) from \(NY\) to a metric \(h\) on a bounding manifold \(M\) such that \(h\) is psc with minimal boundary.
In the quasi-topological direction, we list two cases where \(M\) admits no metric with \(\eta_{1}>0,\) but we know of no examples.
1. Assume \(\tilde{\xi}_{\alpha}(Y)\neq 0\in\mathbb{R}/\mathbb{Z}.\) If a fixed metric \(g\) on \(Y\) has \(\eta_{1,g}>0,\) then \(\xi_{\alpha}(Y,g)\neq 0\in\mathbb{R}.\) As in Cor. 3.1, we may assume that \(i^{*}g\) is psc. If the space of psc metrics is connected, then as in Lem. 5.1, \(\xi_{\alpha}(NY,g)=N\xi_{\alpha}(Y,g)\neq 0.\) This is a contradiction. There are examples of higher dimensional lens spaces with nonzero \(\xi\)-invariants, but we don't have an example of a higher dimensional manifold with connected space of psc metrics.
2. If the space of psc metrics is not connected, then for \(NY=\amalg_{i=1}^{N}Y_{i},\) \(\xi_{\alpha}(NY,g)=\sum_{i=1}^{N}\xi_{\alpha}(Y_{i},g)\) may vanish, since the \(\xi_{\alpha}(Y_{i},g)\) differ by integers. However, this cannot occur if \(\xi_{\alpha}(Y_{i},g)\not\in\mathbb{Q}.\) While irrational \(\xi\)-invariants occur for \(Y=S^{1},\) we again know of no higher dimensional examples. In particular, the examples in [6, Lemma 2.3] are all rational.
|
2303.12374 | Kernel Launcher: C++ Library for Optimal-Performance Portable CUDA
Applications | Graphics Processing Units (GPUs) have become ubiquitous in scientific
computing. However, writing efficient GPU kernels can be challenging due to the
need for careful code tuning. To automatically explore the kernel optimization
space, several auto-tuning tools - like Kernel Tuner - have been proposed.
Unfortunately, these existing auto-tuning tools often do not concern themselves
with integration of tuning results back into applications, which puts a
significant implementation and maintenance burden on application developers. In
this work, we present Kernel Launcher: an easy-to-use C++ library that
simplifies the creation of highly-tuned CUDA applications. With Kernel
Launcher, programmers can capture kernel launches, tune the captured kernels
for different setups, and integrate the tuning results back into applications
using runtime compilation. To showcase the applicability of Kernel Launcher, we
consider a real-world computational fluid dynamics code and tune its kernels
for different GPUs, input domains, and precisions. | Stijn Heldens, Ben van Werkhoven | 2023-03-22T08:21:42Z | http://arxiv.org/abs/2303.12374v1 | # Kernel Launcher: C++ Library for
###### Abstract
_Graphics Processing Units_ (GPUs) have become ubiquitous in scientific computing. However, writing efficient GPU kernels can be challenging due to the need for careful code tuning. To automatically explore the kernel optimization space, several auto-tuning tools - like Kernel Tuner - have been proposed. Unfortunately, these existing auto-tuning tools do not concern themselves with integration of tuning results back into applications, which puts a significant implementation and maintenance burden on application developers. In this work, we present _Kernel Launcher_: an easy-to-use C++ library that simplifies the creation of highly-tuned CUDA applications. With Kernel Launcher, programmers can _capture_ kernel launches, _tune_ the captured kernels for different setups, and _integrate_ the tuning results back into applications using runtime compilation. To showcase the applicability of Kernel Launcher, we consider a real-world computational fluid dynamics code and tune its kernels for different GPUs, input domains, and precisions.
## 1 Introduction
Computer systems used for _High-Performance Computing_ (HPC) are becoming more complex [1], making it increasingly more difficult to achieve good performance. This complexity is due in part to the widespread use of _Graphics Processing Units_ (GPUs). For instance, in November 2022, seven of the top ten systems in the TOP500 [2] achieved their performance solely thanks to GPUs.
Programming and optimizing applications for GPUs is generally considered to be more challenging than for CPUs [3]. One of the key challenges of GPU programming is the need to write GPU-specific functions, called _kernels_, that are executed by millions of threads in parallel. Deciding how to divide and assign work to threads is critical to achieving optimal performance. Additionally,
the complex memory hierarchy of GPUs means that the decision on where and how to store data significantly impacts performance. Furthermore, optimizing code for GPUs involves trade-offs: certain code optimizations may improve performance in some cases, but could be detrimental in others. For example, loop unrolling, spatial blocking, prefetching and vectorization can all affect performance in different ways. We refer to the work by Hijma et al. [4] for an extensive overview of GPU code optimizations discovered over the last 14 years.
Optimizing GPU programs thus requires identifying performance-affecting parameters and tuning these parameters to achieve optimal performance. Although tuning can be done manually, _auto-tuning_ tools can also be used to search for the optimal kernel configuration in an automated way. One such tool is _Kernel Tuner_[5], an easy-to-use tool for tuning GPU code that supports many search optimization methods. With Kernel Tuner, GPU programmers create simple Python scripts that specify the kernel code, the tunable parameters, and the input data. The tool then finds the best-performing kernel configuration by exploring the optimization search space.
However, these kernel configurations are often not portable since performance heavily depends on the specific input problem (e.g., dimensions or precision of input data) and the exact GPU architecture. The kernel configuration that gives optimal performance for one particular workload or GPU may result in poor performance for a different setup. For example, allowing threads to process more than one item could improve performance for larger datasets, but it limits concurrency for smaller datasets. Similarly, unrolling loops may be preferred for GPUs with large register files, as it maximizes data reuse in registers and increases instruction-level parallelism, but it results in excessive register spilling on GPUs with smaller register files.
In practice, GPU programmers may tune a CUDA kernel for a single GPU and input problem, and then falsely assume that the optimal configuration is sufficiently portable to all possible inputs and GPUs. A performance-portable GPU application would require tuning for every possible combination of input data set and GPU, and integrating this knowledge into the application. Compile-time kernel selection could be used, but this puts a maintenance burden on the application developers. Ideally, it should be possible to tune kernels for many different setups and the application should then be able to select the optimal kernel automatically based on the situation at hand.
To address this issue, we present _Kernel Launcher_: A C++ library that simplifies the integration of auto-tuning in CUDA applications by performing offline tuning of GPU kernels using Kernel Tuner. We make the following contributions:
* We introduce an API for tunable kernel definitions that unifies the host code to launch the kernel with the description of the kernel's tunable parameters.
* We introduce the concept of kernel _captures_ that automatically export all information required to execute a kernel.
* We fully automate the use of Kernel Tuner for CUDA applications by _replaying_ the captured launches for different parameter configurations.
* We demonstrate that only tuning for one dataset or one GPU (even of the same vendor and architecture) is insufficient to achieve optimal performance in all scenarios. Similarly, we show how tuning a kernel for only one particular scenario may result in worse performance compared to not tuning the code at all for other scenarios.
* Finally, we show that Kernel Launcher's runtime kernel selection and compilation can be used to create optimal-performance portable CUDA applications.
## 2 Related Work
We distinguish between two different research topics in the field of auto-tuning research. Some auto-tuning research focuses on (1) auto-tuning compiler-generated code optimizations [6, 7, 8, 9], whereas this paper focuses on (2) software auto-tuning [10, 11]. Ashouri et al. [12] wrote an excellent survey on machine-learning methods for compiler-based auto-tuning. In this paper, we focus on (2) auto-tuning software on the source code level, sometimes called _automated design space exploration_[13]. Software auto-tuning is not limited to conservative assumptions that a compiler can make and, as such, makes it possible to automatically optimize computational functions in isolation. For example, it is possible to automatically select among many different optimization techniques [4] or entirely different implementations of the same algorithm. Software auto-tuning is commonly employed by highly-optimized libraries and CPU applications (e.g., ATLAS [14] and FFTW [15]) as well as GPU applications [16, 10, 17, 11, 18, 19, 20]. However, the technology used to tune these applications is application-specific.
Several generic auto-tuning frameworks have been introduced in recent years to create a reusable application-independent solution. In this way, advances in auto-tuning implementations can be reused for new projects and new results in auto-tuning research no longer have to be implemented in multiple application-specific tuners. _OpenTuner_[21] was one of the first generic software auto-tuning frameworks, but with no support for tuning individual GPU kernels. _GPTune_[22] is a tuning framework for MPI-based applications with support for parallel tuning. _HyperMapper_[13] is a tuning framework for multi-objective optimization on FPGAs.
_CLTune_[23] was the first generic auto-tuning framework with support specifically for tuning OpenCL kernels; it has been used to implement an auto-tuned BLAS library in OpenCL [24]. _Auto-Tuning Framework_ (ATF) [25] generates auto-tuning search spaces efficiently using a chain-of-trees search space structure. _Kernel Tuning Toolkit_ (KTT) [26] is a C++ auto-tuning framework explicitly developed to support online auto-tuning and pipeline tuning, which allows exploring combinations of tunable parameters over multiple kernels. Online auto-tuning is a powerful technique that requires substantial
modification of the host application, but allows applications to be tuned at runtime, during normal execution of the application. Online auto-tuning can be advantageous when performance strongly depends on input data. In this paper, we take an orthogonal approach, where we tune offline, once for each problem, and store the obtained performance results to disk.
In earlier work, we have introduced _Kernel Tuner_[5], a compile-time auto-tuning framework that supports many different optimization algorithms [27], including Bayesian Optimization [28]. The compile-time auto-tuning approach taken by Kernel Tuner allows auto-tuned kernels to be integrated into applications with minimal modification to the host application. Since the auto-tuning process happens at compile-time, there are no limitations on the programming language of the host application.
Falch and Elster [29] have used machine learning for run-time kernel selection in OpenCL applications, using auto-tuning to create performance-portable applications. A neural network is trained using benchmark data, which is, in turn, used to select parts of the search space that are explored exhaustively. By building on top of Kernel Tuner, Kernel Launcher simply uses the results obtained by one of the optimization strategies implemented in Kernel Tuner.
Lawson [30, 31] uses unsupervised clustering to pre-select a subset of the compiled kernel configurations and uses a classification method to select the best configuration at runtime. Kernel Launcher uses runtime compilation and, therefore, does not include pre-compiled kernel configurations with its applications, as there is no need to pre-select configurations.
Somewhat related to our framework is CERE [32], a framework for code isolation that can extract "hotspots" of an application as isolated code fragments for tuning. However, CERE explicitly targets LLVM code generation for CPUs, while Kernel Launcher is aimed at GPU kernels.
## 3 Background
This section introduces GPU auto-tuning with Kernel Tuner and explains how Kernel Launcher improves the software engineering experience by extending Kernel Tuner's capabilities.
Kernel Tuner is designed for tuning GPU kernels - written in CUDA or OpenCL - that are usually extracted from a larger application. While Kernel Tuner is written in Python, no code that uses Kernel Tuner needs to become part of the host application, nor does Kernel Tuner insert any dependencies into the kernel source code, which can still be compiled with the regular compilers after tuning. This means that the host application can be written in any programming language.
To tune a kernel (see Listing 1), application developers create a small Python script (see Listing 2) that specifies the kernel source code, name, problem size, arguments, and tunable parameters:
* The kernel _source code_ is specified by pointing to a file that should be able to compile in isolation.
* The kernel _name_ in the source code. The name can include tunable parameters as C++ template arguments.
* The _problem size_ denotes the dimensions of the problem domain of the GPU kernel. For most kernels, the problem size is related to the size of the input/output domain of the GPU kernel. Kernel Tuner uses the problem size to calculate the total number of thread blocks.
* The kernel _arguments_ are a list of multi-dimensional arrays and scalar values. These arguments are passed as function arguments to the GPU kernel. In most cases, the tuning script generates random arrays for the input data and declares empty arrays for the output data.
* The _tunable parameters_ are specified as named parameters with lists of possible values. The parameters are passed as compile-time constants to the kernel code (a sketch of such a kernel is given after this list).
Kernel Tuner supports many additional arguments to specify, for example, search space restrictions, output verification, user-defined metrics, or tuning objectives. We refer to the Kernel Tuner documentation1 for an extensive description of the function tune_kernel.
Footnote 1: [http://kerneltuner.github.io](http://kerneltuner.github.io)
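To make this concrete, the following is a minimal, hedged sketch of what such a tunable CUDA kernel can look like (the paper's own Listing 1 is not reproduced here, and the parameter name block_size_x is only an illustrative choice). The tuner compiles one variant of the kernel per configuration, supplying each tunable parameter as a preprocessor definition so that it behaves as a compile-time constant inside the kernel:

```cuda
// Illustrative tunable vector-add kernel. During tuning, a definition such as
// "#define block_size_x 128" is prepended (or passed via compiler flags) for
// each configuration that is benchmarked.
#ifndef block_size_x
#define block_size_x 256   // default when the kernel is compiled outside the tuner
#endif

extern "C" __global__ void vector_add(float* c, const float* a, const float* b, int n) {
    int i = blockIdx.x * block_size_x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}
```

Because the thread block size is baked into the code as a constant, the compiler can take it into account for unrolling and register allocation decisions.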
Kernel Tuner is designed to automatically explore the search space by compiling configuration-specific kernels and then running benchmarks on the GPU. However, as the number of tunable parameters increases, the search space generally grows exponentially, which can make it impractical to explore exhaustively. In order to address this challenge, Kernel Tuner incorporates several search optimization strategies to accelerate the auto-tuning process.
Once the optimal kernel configuration has been determined by the tuner, this information needs to be integrated back into the target application. Kernel Tuner supports _compile-time_ kernel selection through an API that can generate C header files. These header files include multiple targets, one for each GPU, that allow a build system (e.g., Make or CMake) to select the optimal kernel configuration for a particular target GPU during compilation. In this way, the application can achieve optimal performance on a particular GPU, as long as the target GPU is known at compile-time and performance does not depend on the input size.
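As a purely illustrative sketch (the header format actually generated by Kernel Tuner is not shown in this paper and may differ), such compile-time selection amounts to a header in which the build system picks a target via a preprocessor definition:

```cpp
/* Hypothetical generated header: one block of tuned constants per target GPU.
 * The macro names and the structure are illustrative only. */
#if defined(TARGET_A100)
  #define block_size_x 128
  #define tile_factor_x 4
#elif defined(TARGET_A4000)
  #define block_size_x 256
  #define tile_factor_x 2
#else
  /* Fallback: default configuration when no target is specified. */
  #define block_size_x 256
  #define tile_factor_x 1
#endif
```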
There are three areas where Kernel Launcher aims to improve both the software engineering process and the performance portability of the resulting tuned application. First, currently, users need to create a separate Python script for each kernel that requires tuning. While this approach may work for applications with only a few bottleneck kernels, it does not scale well and becomes unmanageable for large applications that have dozens of kernels.
Second, Kernel Tuner requires the user to generate valid input data to be used by the kernel. This works well for kernels that use simple data structures or accept randomly generated data. However, for kernels that operate on complex data structures, like lookup tables or graphs, generating realistic input data
that matches the data used in real runs of the application can be challenging. This places a significant burden on application developers.
Third, while the compile-time kernel selection functionality offered by Kernel Tuner has the advantage that little to no modification of the host application is required, it has some clear limitations. To achieve optimal performance on every GPU and problem size, the application needs to be recompiled every time. Additionally, using a compile-time kernel selection approach in applications where the same kernel may be executed on different problems within a single run is complex and requires considerable engineering effort.
Kernel Launcher aims to enhance the software engineering experience of using Kernel Tuner for developing tunable applications. It goes beyond the existing capabilities of Kernel Tuner by offering runtime kernel selection and compilation. With these features, Kernel Launcher enables the creation of optimal, performance-portable applications that can reuse the same tunable kernel for different problems within a single execution of the same application.
## 4 Kernel Launcher
In this section, we describe the implementation of _Kernel Launcher_, which is implemented as a C++ library. Figure 1 shows how the library is integrated into applications and interacts with Kernel Tuner. The remainder of this section explains how Kernel Launcher is utilized within applications, and outlines each of the four steps shown in Figure 1. First, the user writes the _kernel definition_ using Kernel Launcher in C++ to allow kernels to be _captured_. Then, Kernel Tuner tunes the kernels, producing wisdom files. These files allow Kernel Launcher to perform _dynamic kernel selection_ using _runtime kernel compilation_.
### Kernel Definition
To make a CUDA kernel tunable, the programmer defines its specifications using the Kernel Launcher API. These specifications consist of three elements: The configuration space, the compilation specifications, and the launch parameters.
For the configuration space, the definition includes the tunable parameters, the allowable values for those parameters, and any constraints on the search space (i.e., boolean expressions). For the compilation specifications, the programmer must provide details such as the source code, kernel name, compiler flags, template arguments, and preprocessor definitions. To launch the kernel, the programmer must specify how the thread block size, the number of thread blocks, and the amount of shared memory are derived from the kernel arguments.
Once a kernel has been defined, it can be launched using the Kernel Launcher API by providing the kernel arguments in a way that is similar to launching a regular kernel in plain CUDA (see Listing 3, line 20). If the kernel has not yet been tuned, the default values for the tunable parameters are used.
Kernel Launcher thus consolidates the description of the tunable aspects of kernels and the code for launching the kernel, eliminating duplication between the Kernel Tuner script and the host code of the application, and moving this definition into the source code of the host application. Previously, with Kernel Tuner, the definition of the tunable kernel and its parameters resided in separate Python files, and the relationship between the problem size and tunable parameters was described in both the Kernel Tuner script and the host application. This distribution of the kernel definition across multiple files led to increased maintenance costs since all files need to be kept up to date when changes are made to the kernel source code. However, with Kernel Launcher, the kernel definition and its launch code are integrated in the source code of the host application, resulting in significant maintenance cost savings for C++ applications with many tunable kernels.
### Kernel Capturing
An important concept in Kernel Launcher is the ability to _capture_ a kernel launch, which enables storage of all information necessary to execute the kernel,
including the kernel definition and possibly the input data. By capturing a kernel, auto-tuning tools can 'replay' the exact same kernel launch for different configurations of the tunable parameters.
There are two main advantages of this capturing concept. First, it removes the need for programmers to have to generate input data when tuning kernels and, instead, makes it possible to tune the kernel directly on real-world data extracted from the application. This approach can be particularly useful for complex data sets that are difficult to generate off-line. Second, it enables offline tuning instead of having to tune at runtime. Offline tuning means that more accurate measurements can be obtained since the kernel is executed in isolation. In contrast, runtime tuning could lead to inconsistent measurements if, for example, the GPU is also executing other kernels concurrently or the CPU is overloaded.
To capture kernel launches in our implementation, the programmer needs to set the environment variable KERNEL_LAUNCHER_CAPTURE to a list of kernel names separated by commas. It is possible to capture multiple kernels in a single run of the application.
### Kernel Tuning
After capturing a kernel launch, the associated kernel can be tuned. Kernel Launcher includes a command-line script that, in turn, uses Kernel Tuner [5] as the main tool for auto-tuning, although in principle, other GPU auto-tuners, such as KTT [33], could be used as well.
Several command-line options can be provided to the script, such as the search strategy and the termination condition. The default setting is to tune each kernel for at most 15 minutes using the Bayesian Optimization search strategy [28].
Figure 1: Overview of the designed interaction between the host application, Kernel Launcher, and Kernel Tuner.
### Wisdom Files
After the auto-tuner terminates, the best-performing configuration is written to a human-readable file, which we refer to as a _wisdom file_. The term "wisdom" was originally introduced by FFTW [34], but Kernel Launcher borrows only the terminology and uses a different file format.
For each kernel in the application, a corresponding wisdom file is available that stores the results of all tuning sessions for that specific kernel. Re-tuning the same kernel multiple times for various problem sizes or GPUs adds new results to the existing wisdom file.
A wisdom file consists of a sequence of records. Each record represents the best-performing configuration found during an auto-tuning session for one particular GPU and _problem size_. The problem size is a multi-dimensional vector that indicates the size of the workload. The interpretation of the problem size varies depending on the specific problem and is defined as part of the kernel definition. For example, for matrix multiplication of matrices of sizes \(n{\times}m\) and \(m{\times}k\), the problem size would be \((n,k,m)\). In addition to the tuning results, each record includes provenance information associated with the tuning sessions, such as the date, software versions, GPU properties, and the host name.
### Kernel Selection and Compilation
During application execution, Kernel Launcher selects the optimal configuration for each combination of kernel, GPU, and problem size. The first time a kernel is launched, Kernel Launcher processes the wisdom file for that kernel and selects one record based on the GPU and problem size, using the following heuristic:
* If a record exists that matches the GPU and the problem size, that record is chosen.
* If no such record exists, the record that matches the GPU and whose problem size is closest in Euclidean distance is chosen.
* If no record exists that matches the current GPU, the record that matches the GPU _architecture_ and whose problem size is closest is chosen.
* If no record exists that matches the current GPU architecture, the record that has the closest problem size is chosen.
* If the wisdom file is empty or missing, the default configuration is selected.
After selecting a configuration, Kernel Launcher compiles the kernel code at runtime using NVRTC, the NVIDIA runtime compilation library, and loads the compiled code onto the GPU. Note that kernel selection and compilation only happen on the first launch of a kernel for a given problem size; subsequent calls for the same problem size reuse the compiled instance of the kernel configuration. A sketch of this selection logic is given below.
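The following is a hedged sketch of the selection heuristic described above, written only as an illustration; the type and field names are hypothetical and do not reflect Kernel Launcher's internal implementation:

```cpp
// Hedged sketch of the wisdom-record selection heuristic (illustration only).
#include <cmath>
#include <cstddef>
#include <optional>
#include <string>
#include <vector>

struct WisdomRecord {
    std::string gpu_name;              // e.g. "NVIDIA A100"
    std::string gpu_arch;              // e.g. "Ampere"
    std::vector<double> problem_size;  // multi-dimensional workload size
    std::string config;                // best configuration found while tuning
};

// Euclidean distance between two problem sizes (records for one kernel are
// assumed to have the same dimensionality).
static double distance(const std::vector<double>& a, const std::vector<double>& b) {
    double d = 0.0;
    for (std::size_t i = 0; i < a.size() && i < b.size(); ++i)
        d += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(d);
}

// Returns the chosen record, or std::nullopt to signal "use the default configuration".
std::optional<WisdomRecord> select_record(const std::vector<WisdomRecord>& records,
                                          const std::string& gpu_name,
                                          const std::string& gpu_arch,
                                          const std::vector<double>& problem_size) {
    const WisdomRecord* best = nullptr;
    int best_rank = 3;   // 0 = same GPU, 1 = same architecture, 2 = any record
    double best_dist = 0.0;
    for (const WisdomRecord& r : records) {
        int rank = (r.gpu_name == gpu_name) ? 0 : (r.gpu_arch == gpu_arch) ? 1 : 2;
        double dist = distance(r.problem_size, problem_size);
        // Prefer a better rank; within the same rank, prefer the closest problem size.
        if (rank < best_rank || (rank == best_rank && dist < best_dist)) {
            best = &r;
            best_rank = rank;
            best_dist = dist;
        }
    }
    if (best == nullptr)
        return std::nullopt;  // wisdom file empty or missing
    return *best;
}
```

An exact GPU and problem-size match has distance zero and therefore always wins, which reproduces the first rule of the heuristic.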
### Code Example
Kernel Launcher is implemented as an easy-to-use C++ library. Listing 3 shows an example of the code needed to integrate a tunable vector addition kernel written in CUDA into a C++ host application using Kernel Launcher. First, a KernelBuilder is instantiated that specifies the kernel name and the source file that contains the tunable CUDA kernel code. Second, the tunable parameters and valid values are specified. Then, the problem size, template arguments and thread block dimensions are specified. Note that these may be specified using the kernel arguments or tunable parameters. Next, a WisdomKernel object is instantiated, which searches the wisdom file and prepares the kernel for runtime compilation. Finally, the kernel is launched on line 20. Note that the thread block and grid dimensions are calculated by Kernel Launcher and should not be passed by the user.
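The listing itself is not reproduced here, so the following is a hedged sketch of what such an integration can look like. Only the KernelBuilder and WisdomKernel class names come from the description above; the method names (tune, problem_size, block_size), the argument placeholder arg3, and the header and file names are assumptions made for illustration and may not match the library's actual API:

```cpp
// Hedged sketch of defining and launching a tunable vector-add kernel with
// Kernel Launcher. API details other than KernelBuilder/WisdomKernel are assumed.
#include "kernel_launcher.h"

namespace kl = kernel_launcher;

void launch_vector_add(float* dev_c, const float* dev_a, const float* dev_b, int n) {
    // 1. Kernel name and the source file that contains the tunable CUDA code.
    kl::KernelBuilder builder("vector_add", "vector_add_kernel.cu");

    // 2. Tunable parameters and their valid values.
    auto threads_per_block = builder.tune("block_size_x", {32, 64, 128, 256, 512, 1024});

    // 3. Problem size and thread block dimensions, expressed in terms of the
    //    kernel arguments and the tunable parameters.
    builder.problem_size(kl::arg3);        // the 4th kernel argument (n) is the problem size
    builder.block_size(threads_per_block);

    // 4. The WisdomKernel consults the wisdom file for this kernel and prepares
    //    runtime compilation of the selected configuration.
    kl::WisdomKernel vector_add(builder);

    // 5. Launch: only the kernel arguments are passed; grid and block dimensions
    //    are derived by Kernel Launcher from the definition above.
    vector_add(dev_c, dev_a, dev_b, n);
}
```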
## 5 Experimental Evaluation
In this section, we present an experimental evaluation of Kernel Launcher. Experiments were performed on the VU site of the DAS6 distributed supercomputer [35]. Table 1 lists the properties of the GPUs used for the evaluation. Software versions were CUDA toolkit 11.7, CUDA driver 515.65.01, Kernel Tuner 0.4.3, and CentOS 8.5.
### Application
To evaluate the usability and performance of Kernel Launcher, we have integrated our library into MicroHH [36, 37], a computational fluid dynamics software for direct numerical simulation and large-eddy simulation of turbulent flows in the atmospheric boundary layer. This C++ application runs on both multi-core CPUs and CUDA-enabled GPUs. The code supports many different integration schemes and numerous configuration options, including the size of the three-dimensional domain grid and floating-point precision (single or double precision). MicroHH is an excellent candidate for runtime kernel selection since achieving high performance requires distinct tuning parameters for each combination of GPU architecture, grid size, and floating-point precision.
### Tunable Parameters
For this paper, we selected two kernels from MicroHH for analysis. The first kernel is called advec_u and corresponds to a large stencil operation. In particular, the kernel performs advection along the X-axis and is part of the second order advection scheme with fifth order interpolation. The second kernel is called diff_uvw and is an element-wise operation. This kernel is part of the second order Smagorinsky diffusion for large-eddy simulation.
Both kernels work on a three-dimensional grid where each grid point requires an operation to be performed. The original kernels launch a single CUDA thread for each grid point, with each thread processing the point associated with its 3D global thread index. However, we have rewritten the kernels and introduced several code optimizations that resulted in the following tunable parameters (see Table 2; a sketch of how these parameters enter the kernel code is given after the table):
* **Block Size XYZ**. The number of threads in each CUDA thread block along the X, Y, and Z axis. The default thread block size is \((256,1,1)\)
* **Tiling Factor XYZ**. We modified the kernel so that each thread processes multiple grid points along the X, Y, and Z axis. For example, tiling factors \((3,1,2)\) indicate that each thread processes 6 points: 3 along the X-axis and 2 along the Z-axis. Processing multiple items per thread can increase cache utilization and reduce thread scheduling overhead, but assigning too many items to each thread results in insufficient concurrency. By default, no tiling is used.
* **Loop Unrolling XYZ**. The tiling involves three nested loops, one for each axis of the 3D grid. Each of the three loops can either be _fully_ unrolled or _not_ unrolled.
\begin{table}
\begin{tabular}{l|l|r|r|r|r} \hline GPU & Architecture & Cores & BW & Peak SP & Peak DP \\ \hline RTX A4000 & Ampere (GA104) & 6,144 & 448 & 19,170 & 599 \\ Tesla A100 & Ampere (GA100) & 6,912 & 1,555 & 19,500 & 9700 \\ \hline \end{tabular}
\end{table}
Table 1: GPUs used in our experiments. Bandwidth (BW) in GB/s. Peak single precision (SP) and double precision (DP) performance in GFLOP/s.
Unrolling a loop increases instruction-level parallelism and could result in additional compiler optimizations, at the cost of increased register usage and instruction count. There is a boolean parameter for each axis.
* **Tiling stride XYZ**. Since each thread processes multiple items, there are multiple ways to assign the grid points to threads. We implemented two options for each axis. The first option is that each thread processes the points that are consecutive along one axis (e.g., item \(x,x+1,x+2,\ldots\)). The second option is that the points are block-strided (e.g., item \(x,x+b_{X},x+2b_{X}\),... where \(b_{X}\) is the block size along the X axis). The first option can lead to better data reuse for stencil-like kernels (especially if the loop is unrolled), while the second option may give better memory access patterns.
* **Unravel permutation**. The thread blocks are launched as a one-dimensional list of blocks. Each thread then _unravels_ its 1D thread block identifier into a 3D grid index. Since there are six possible permutations of \((X,Y,Z)\), there are six possible ways to perform this unraveling. We added this parameter since it affects the scheduling order of thread blocks and thus impacts cache utilization on the GPU. For example, for the unravel permutation \((Z,X,Y)\), the order in which thread blocks are scheduled is first along the Z axis, then X axis, and finally the Y axis.
* **Thread blocks per SM**. The number of threads that can reside on a _streaming multiprocessor_ (SM) impacts performance. However, there is a delicate trade-off between available concurrency and register usage: More threads per SM means more concurrency, at the cost of fewer available registers per thread. We use CUDA's __launch_bounds__ compiler hint to tune the number of thread blocks per SM, and thus the register usage.
\begin{table}
\begin{tabular}{l||l|l} \hline \hline Name & Values & Default value \\ \hline Block size X & 16, 32, 64, 128, 256 & 256 \\ Block size Y & 1, 2, 4, 8, 16 & 1 \\ Block size Z & 1, 2, 4, 8, 16 & 1 \\ Tile factor X & 1, 2, 4 & 1 \\ Tile factor Y & 1, 2, 4 & 1 \\ Tile factor Z & 1, 2, 4 & 1 \\ Unroll X & true, false & false \\ Unroll Y & true, false & false \\ Unroll Z & true, false & false \\ Tile contiguous X & true, false & false \\ Tile contiguous Y & true, false & false \\ Tile contiguous Z & true, false & false \\ Unravel permutation & XYZ, XZY, YXZ, YZX, ZXY, ZYX & XYZ \\ Min. blocks per SM & 1, 2, 3, 4, 5, 6 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: List of tunable parameters and their default value.
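To illustrate how these parameters enter the kernel source, the following is a simplified, hedged sketch. It is not MicroHH's actual advec_u or diff_uvw code: it performs a plain element-wise operation, shows only a subset of the parameters of Table 2, and uses a fixed 3D launch with an XYZ ordering, whereas the real kernels launch thread blocks as a one-dimensional list and also tune the unravel permutation. The macro values below stand in for the definitions that the tuner generates for each configuration.

```cuda
// Hedged, simplified sketch of a tunable 3D kernel; parameter names mirror Table 2,
// everything else is illustrative.
#define BLOCK_SIZE_X 32
#define BLOCK_SIZE_Y 4
#define BLOCK_SIZE_Z 2
#define TILE_X 2
#define TILE_Y 1
#define TILE_Z 2
#define UNROLL_X 1           // 1 = fully unroll the X tiling loop, 0 = do not unroll
#define TILE_CONTIGUOUS_X 0  // 1 = consecutive items per thread, 0 = block-strided items
#define BLOCKS_PER_SM 2

__global__ void
__launch_bounds__(BLOCK_SIZE_X * BLOCK_SIZE_Y * BLOCK_SIZE_Z, BLOCKS_PER_SM)
scale_field(double* out, const double* in, double factor, int nx, int ny, int nz) {
    // Base grid point of this thread; each thread processes TILE_X*TILE_Y*TILE_Z points.
    const int i0 = blockIdx.x * (BLOCK_SIZE_X * TILE_X) + threadIdx.x;
    const int j0 = blockIdx.y * (BLOCK_SIZE_Y * TILE_Y) + threadIdx.y;
    const int k0 = blockIdx.z * (BLOCK_SIZE_Z * TILE_Z) + threadIdx.z;

#if UNROLL_X
#pragma unroll
#endif
    for (int tx = 0; tx < TILE_X; ++tx) {
        // Tiling stride along X: consecutive items versus block-strided items.
#if TILE_CONTIGUOUS_X
        const int i = blockIdx.x * (BLOCK_SIZE_X * TILE_X) + threadIdx.x * TILE_X + tx;
#else
        const int i = i0 + tx * BLOCK_SIZE_X;
#endif
        for (int ty = 0; ty < TILE_Y; ++ty) {
            const int j = j0 + ty * BLOCK_SIZE_Y;
            for (int tz = 0; tz < TILE_Z; ++tz) {
                const int k = k0 + tz * BLOCK_SIZE_Z;
                if (i < nx && j < ny && k < nz) {
                    const int idx = (k * ny + j) * nx + i;
                    out[idx] = factor * in[idx];
                }
            }
        }
    }
}
```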
### Capture Results
We executed MicroHH for two grid sizes (256\(\times\)256\(\times\)256 and 512\(\times\)512\(\times\)512) and two floating-point precisions (float and double). Table 3 shows the time required to capture each kernel and the size of the capture on disk. The captures are written to persistent storage on a shared file system (NFS); thus, IO times depend heavily on the load of the file system. The table shows that the capture time required scales with the size of the capture which, in turn, scales with the grid size.
Considering all tunable parameters and their possible values, the entire search space consists of more than 7.7 million kernel configurations, which means exhaustively tuning each kernel in the application is infeasible. As such, we tune the eight captured kernels using two search strategies from Kernel Tuner: a _random_ search, since it gives an unbiased performance distribution, and _Bayesian optimization_, since previous work [28] shows it outperforms other search strategies. Figure 3 shows two examples of the tuning sessions for a grid size of 256\({}^{3}\), single precision, and an NVIDIA A100 GPU. The other tuning sessions show similar results and are, therefore, not included.
These results show that both search strategies find increasingly better performing kernel configurations. Since an exhaustive search is infeasible, we consider the best-performing configuration found after one hour of tuning to be the _optimum_. Bayesian optimization strongly prefers better-performing kernel configurations, whereas the configurations selected by random search show a much larger spread. We found that Bayesian optimization takes, on average, 3.4 minutes (max: 6.5 minutes) to find a configuration 10% away from the optimum and 7.5 minutes (max: 19 minutes) for a 5% difference. A complete comparison of the different optimization strategies in Kernel Tuner is outside the scope of this work and we refer interested readers to the work by Schoonhoven et al. [27].
### Tuning Results
Next, we evaluate the tuning results of all eight captured kernels for the two GPUs: the A4000 and the A100.
\begin{table}
\begin{tabular}{l|l|l||l|l} \hline \hline Kernel & Grid size & Precision & Capture time & Capture size \\ \hline advec\_u & 256\({}^{3}\) & float & 2.3 sec & 70.8 MB \\ advec\_u & 256\({}^{3}\) & double & 4.6 sec & 141.7 MB \\ advec\_u & 512\({}^{3}\) & float & 18.2 sec & 551.6 MB \\ advec\_u & 512\({}^{3}\) & double & 43.2 sec & 1103 MB \\ diff\_uvw & 256\({}^{3}\) & float & 5.6 sec & 212.8 MB \\ diff\_uvw & 256\({}^{3}\) & double & 11.9 sec & 425.6 MB \\ diff\_uvw & 512\({}^{3}\) & float & 43.3 sec & 1656 MB \\ diff\_uvw & 512\({}^{3}\) & double & 82.3 sec & 3312 MB \\ \hline \hline \end{tabular}
\end{table}
Table 3: Time and size required to capture kernel.
In the remainder of this paper, we shall refer to each combination of kernel, grid size, precision, and GPU as a _scenario_. We will denote each scenario as _kernel-grid-precision-GPU_ (e.g., advec_u-256\({}^{3}\)-float-A100).
Figure 2 shows 16 histograms of the kernel times measured by the random search strategy for each scenario. The performance of each configuration is expressed as the fraction relative to the best performing configuration (i.e., the _optimum_) for that particular scenario. For example, a fraction of 0.8 indicates that a configuration only reached 80% of the performance compared to the optimum and is, thus, \(1/0.8=1.25\) times slower. The histograms show how difficult it would be to tune these kernels by hand. For example, for the scenario advec_u-256\({}^{3}\)-float-A100, only 0.8% of the configurations are within 10% of the optimum. As such, it would be challenging and time-consuming to find this optimum manually, which is why auto-tuning tools are a necessity.
Each graph includes a _black_ arrow that indicates the performance of the default configuration. The Figure shows that, for each graph, the default configuration is not near the optimum, meaning that auto-tuning can significantly increase performance. The default configuration across all 16 scenarios reaches only an average performance of 75%, meaning tuning can increase performance by an average of 25%.
We found that the optimal configuration is different for each of the 16 scenarios, and any particular configuration that gives high performance for one scenario usually performs poorly for others. To illustrate this, we consider the best kernel configuration \(\mathcal{C}\) for the scenario advec_u-256\({}^{3}\)-float-A100. Figure 2 shows a _red_ arrow in each histogram that indicates the performance of \(\mathcal{C}\) for each of the 16 scenarios. The Figure shows that while \(\mathcal{C}\) performs well in certain scenarios, it performs exceptionally poorly in other scenarios. Configuration \(\mathcal{C}\) performs worse than the default configuration in 11 of the 16 scenarios. This result indicates that a kernel tuned for one particular scenario may not perform well in other scenarios, demonstrating the need to individually tune the kernels for all scenarios.
Figure 3: Tuning sessions for both random and Bayesian optimization (bayes) search strategy. Each dot represents one configuration evaluated by the tuner. The vertical axis is the measured kernel time (note: does not start at zero). The horizontal axis is the wall clock time in minutes of the tuning session. The dashed lines indicate the performance of the best configuration found during the tuning session.
### Performance Portability
To quantify the portability of the optimal configuration found when tuning from one scenario to another, Figure 4 shows the normalized relative performance.
For example, when the optimal configuration found for advec_u-256\({}^{3}\)-float-A4000 is applied to the advec_u-512\({}^{3}\)-float-A4000 kernel, it achieves only 44% of the performance of the best-known configuration for that problem. It is important to realize that between these two kernels, only the grid size has changed and both use single precision on the A4000.
Conversely, for double precision, the optimal performance is very close for both 256\({}^{3}\) and 512\({}^{3}\). This is because of the design of the A4000, which has a limited number of double-precision _floating-point units_ (FPUs): only 1/32nd compared to the number of single-precision FPUs. Therefore, the kernels using double precision are compute-bound on the A4000, meaning many kernel configurations in the search space show similar performance once their memory-throughput is efficient enough to run into the bottleneck introduced by the limited number of double-precision FPUs. This observation is confirmed by the narrow search space distributions for kernels using double precision on the A4000.
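This 1/32 ratio is consistent with the peak numbers listed in Table 1 for the A4000:

\[
\frac{19{,}170\ \text{GFLOP/s}}{32}\approx 599\ \text{GFLOP/s},
\]

which is exactly the double-precision peak reported for that GPU, while the A100's ratio is \(9{,}700/19{,}500\approx 1/2\).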
The A100 has many more double-precision FPUs, as its double-precision peak performance is half the single-precision peak performance. Consequently, we see more differences and, therefore, even less portability of the optimal configuration from one scenario to the next. For example, where the optima of advec_u-256\({}^{3}\)-double-A4000 and advec_u-512\({}^{3}\)-double-A4000 show near-identical performance, advec_u-256\({}^{3}\)-double-A100 and advec_u-512\({}^{3}\)-double-A100 only achieve 76% of the optimal performance when applied to each other's scenario. The results for the diff_uvw kernel are similar, except that varying only the problem size leads to suboptimal performance only for double precision on the A100.
Overall, we see that the configuration that is optimal for a given precision and problem size on the A100 only yields 70%-77% for advec_u and 69%-82% for diff_uvw of the performance that could have been achieved on A4000, if the optimal configuration for that scenario had been used instead. Similarly, using the optimal configurations for A4000 only achieves 67%-83% for advec_u and 50%-77% for diff_uvw of the performance that could have been achieved on A100. This is surprising since both GPUs are from the same vendor and even belong to the same architecture (Ampere). These results demonstrate the need for software development tools to use auto-tuning results obtained in different scenarios.
We computed the _performance portability metric_[38] (PPM) to quantify the portability of the default and optimal configurations to all scenarios, as shown in Tables 4 and 5. As can be seen in both tables, the performance portability score of the default configuration is in the same range as the scores for the configurations that have been tuned for one scenario. This means that tuning for only one scenario and then using this configuration in all scenarios may lead to worse overall performance compared to not tuning at all. However, using Kernel
Launcher's runtime kernel selection, the application always selects the optimal configuration to achieve 100% of the potential performance in each scenario. To create performance portable applications that can achieve optimal performance in every scenario, it is necessary to select between different configurations based on the problem at hand. Without Kernel Launcher's runtime kernel selection and runtime kernel compilation capabilities, a similar result could only have been achieved by the user recompiling the application every time it is executed on a different problem.
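For reference, and assuming that [38] denotes the commonly used metric of Pennycook et al., the performance portability of an application \(a\) solving problem \(p\) over a set of platforms \(H\) is the harmonic mean of the efficiencies achieved on each platform, and zero if any platform is unsupported:

\[
PP(a,p,H)=\begin{cases}\dfrac{|H|}{\sum_{i\in H}\dfrac{1}{e_{i}(a,p)}}&\text{if }a\text{ is supported on every platform }i\in H,\\ 0&\text{otherwise.}\end{cases}
\]

In Tables 4 and 5 we read each scenario as a platform and the relative performance with respect to the per-scenario optimum as the efficiency \(e_{i}(a,p)\); this interpretation is ours and is stated only to make the PPM column concrete.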
### Kernel Launcher Overhead
For each kernel, Kernel Launcher introduces some overhead on the first launch since the kernel's source code must be compiled dynamically at run time.
\begin{table}
\begin{tabular}{l|c|c|c} \hline Configuration tuned for & Best & Worst & PPM \\ \hline (default configuration) & 0.90 & 0.53 & 0.69 \\ A100, float, \(256^{3}\) & 1.00 & 0.36 & 0.62 \\ A100, float, \(512^{3}\) & 1.00 & 0.71 & 0.88 \\ A100, double, \(256^{3}\) & 1.00 & 0.68 & 0.81 \\ A100, double, \(512^{3}\) & 1.00 & 0.68 & 0.83 \\ A4000, float, \(256^{3}\) & 1.00 & 0.44 & 0.71 \\ A4000, float, \(512^{3}\) & 1.00 & 0.57 & 0.72 \\ A4000, double, \(256^{3}\) & 1.00 & 0.69 & 0.75 \\ A4000, double, \(512^{3}\) & 1.00 & 0.62 & 0.78 \\ Kernel Launcher & 1.00 & 1.00 & 1.00 \\ \hline \end{tabular}
\end{table}
Table 4: Performance portability metric (PPM) for the advec_u kernel using the default or optimal configurations applied to each scenario. Kernel Launcher always selects the optimal configuration.
\begin{table}
\begin{tabular}{l|c|c|c} \hline Configuration tuned for & Best & Worst & PPM \\ \hline (default configuration) & 0.78 & 0.67 & 0.74 \\ A100, float, \(256^{3}\) & 1.00 & 0.69 & 0.82 \\ A100, float, \(512^{3}\) & 1.00 & 0.69 & 0.82 \\ A100, double, \(256^{3}\) & 1.00 & 0.66 & 0.84 \\ A100, double, \(512^{3}\) & 1.00 & 0.66 & 0.81 \\ A4000, float, \(256^{3}\) & 1.00 & 0.66 & 0.83 \\ A4000, float, \(512^{3}\) & 1.00 & 0.60 & 0.79 \\ A4000, double, \(256^{3}\) & 1.00 & 0.49 & 0.63 \\ A4000, double, \(512^{3}\) & 1.00 & 0.43 & 0.60 \\ Kernel Launcher & 1.00 & 1.00 & 1.00 \\ \hline \end{tabular}
\end{table}
Table 5: Performance portability metric (PPM) for the diff_uvw kernel using the default or optimal configurations applied to each scenario. Kernel Launcher always selects the optimal configuration.
We found that the first kernel call takes, on average, 294 ms for our benchmarks. Figure 5 shows a breakdown of this time. There are four sources of overhead: reading the wisdom file, runtime compilation using NVRTC (nvrtcCompileProgram), loading the compiled code into the GPU (cuModuleLoad), and scheduling the kernel onto the GPU (cuLaunchKernel). The figure shows that the runtime compilation (NVRTC) is the most expensive stage and constitutes around 80% of this overhead.
For subsequent launches, runtime compilation is not necessary since the compiled kernel is cached. We found that subsequent kernel calls using Kernel Launcher take just 3 \(\mu s\) on average. This is comparable to the overhead of CUDA when launching a GPU kernel without the use of Kernel Launcher.
## 6 Conclusion
In this work, we have presented Kernel Launcher: A C++ library that eases the development of auto-tuned CUDA applications. Kernel Launcher introduces tunable kernel definitions that merge the tuning code into the host application, reducing redundancy and fragmentation in the source code. Kernel Launcher introduces _capturing_ tunable kernels by storing all the information required to tune the kernel during execution, fully automating the process of tuning kernels using Kernel Tuner. Kernel Launcher then uses wisdom files to select and compile the optimal kernel configuration at runtime. Finally, we have demonstrated that Kernel Launcher can be used to create optimal-performance portable CUDA applications, with a clean API that resembles the CUDA runtime API and with little extra work for application developers.
We have evaluated Kernel Launcher using MicroHH, a computational fluid dynamics code, on two GPUs, the A100 and A4000. Even though these GPUs are based on the same architecture (Nvidia Ampere), using a configuration tuned on one GPU, for a specific problem size and precision, on the other GPU only results in 70%-77% and 69%-82% (A100 to A4000), and 67%-83% and 50%-77% (A4000 to A100), of the potential performance for two kernels in MicroHH. Moreover, using the performance portability metric, we have shown that tuning only for one scenario and then using this configuration in all scenarios may lead to worse overall performance compared to not tuning the code at all. Using Kernel Launcher, the application always selects the optimal configuration to achieve 100% of the potential performance in each scenario, thus achieving
optimal performance portability.
Figure 5: Average time required for the first and subsequent launches of a kernel using Kernel Launcher. See Section 5.6 for explanation.
This work is not without limitations. Applications for which the kernel problem size is unknown beforehand cannot be tuned at compile time. Kernel Launcher does support fuzzy matching to select kernels with the closest matching problem size, so if the distribution of problem sizes can be sampled Kernel Launcher can still be used, but performance may be suboptimal. Additionally, Kernel Launcher uses the problem size as the primary descriptive feature on which configurations are selected for one GPU. For irregular problems, such as graphs or sparse matrices, performance may depend strongly on the data itself rather than its size. Kernel Launcher currently does not support selecting on features other than the problem size and GPU, and as such, performance may be suboptimal for irregular algorithms.
## Acknowledgments
The CORTEX project has received funding from the Dutch Research Council (NWO) in the framework of the NWA-ORC Call (file number NWA.1160.18.316). The ESiWACE2 and ESiWACE3 projects have received funding from EU Horizon 2020 (No 823988) and EuroHPC JU (No 101093054).
|
2310.05730 | Clairaut conformal submersions from Ricci solitons | In the present article, we characterize Clairaut conformal submersions whose
total manifolds admit a Ricci soliton and provide a non-trivial example of such
Clairaut conformal submersions. We firstly calculate scalar curvature and Ricci
tensors of total manifolds of Clairaut conformal submersions and provide
necessary conditions for the fibres of such Clairaut conformal submersions to
be almost Ricci solitons and Einstein. Further, we provide necessary conditions
for the base manifold to be Ricci soliton and Einstein. Then, we find a
necessary condition for vector field $\dot{\zeta}$ to be conformal vector field
and killing vector field. Besides, we indicate that if total manifolds of
Clairaut conformal submersions admit a Ricci soliton with the potential mean
curvature vector field $H$ of $Ker\vartheta _{\ast }, $ then the total
manifolds of Clairaut conformal submersions admit a gradient Ricci soliton.
Finally, by solving Poisson equation, we acquire a necessary and sufficient
condition for Clairaut conformal submersions to be harmonic. | Murat Polat | 2023-10-09T14:02:39Z | http://arxiv.org/abs/2310.05730v1 | # Clairaut conformal submersions from Ricci solitons
###### Abstract.
In the present article, we characterize Clairaut conformal submersions whose total manifolds admit a Ricci soliton and provide a non-trivial example of such Clairaut conformal submersions. We firstly calculate scalar curvature and Ricci tensors of total manifolds of Clairaut conformal submersions and provide necessary conditions for the fibres of such Clairaut conformal submersions to be almost Ricci solitons and Einstein. Further, we provide necessary conditions for the base manifold to be Ricci soliton and Einstein. Then, we find a necessary condition for vector field \(\hat{\zeta}\) to be conformal vector field and killing vector field. Besides, we indicate that if total manifolds of Clairaut conformal submersions admit a Ricci soliton with the potential mean curvature vector field \(H\) of \(Ker\vartheta_{*}\), then the total manifolds of Clairaut conformal submersions admit a gradient Ricci soliton. Finally, by solving Poisson equation, we acquire a necessary and sufficient condition for Clairaut conformal submersions to be harmonic.
**Keywords:** Ricci soliton, Clairaut conformal submersion, harmonic map, Riemannian manifold, Riemannian submersion.
**AMS. 2020:** 53C12; 53B20; 53C43; 53C25.
## 1. Introduction
In [22], O'Neill introduced conformal submersions as a generalization of Riemannian submersions. Ishihara [14] and Fuglede [10] defined horizontally conformal maps, which have applications in computer graphics and medical imaging (brain imaging). Gudmundsson found the fundamental equations for conformal submersions in [11]. In [8], Falcitelli et al. showed under what conditions conformal submersions become Riemannian submersions. In [2], Baird et al. indicated that a smooth map \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) is called weakly conformal at \(p_{1}\in M_{1}\) if there exists a number \(\sigma^{2}(p_{1})\) such that
\[g_{M_{2}}(\vartheta_{*}X_{1},\vartheta_{*}X_{2})=\sigma^{2}(p_{1})g_{M_{1}}(X _{1},X_{2}), \tag{1.1}\]
for \(X_{1},X_{2}\in\Gamma(T_{p_{1}}M_{1})\).
Let \((M_{1}^{d_{1}},g_{M_{1}})\) and \((M_{2}^{d_{2}},g_{M_{2}})\) be Riemannian manifolds; then \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) is called a horizontally weakly conformal map at \(p_{1}\in M_{1}\) if either \(i\). \(\vartheta_{*p_{1}}=0\), or \(ii\). \(\vartheta_{*p_{1}}\) maps the horizontal space \(\mathcal{H}_{p_{1}}=(Ker\vartheta_{*p_{1}})^{\perp}\) conformally onto \(T_{\vartheta(p_{1})}M_{2}\), i.e. \(\vartheta_{*p_{1}}\) satisfies (1.1) for \(X_{1},X_{2}\in\mathcal{H}_{p_{1}}\) and is surjective. If a point \(p_{1}\) is of type \(i\)., then \(p_{1}\in M_{1}\) is said to be a critical point of \(\vartheta\), and if \(p_{1}\) is of type \(ii\)., then it is said to be a regular point. The non-negative number \(\sigma^{2}(p_{1})\) is called the square dilation (its square root \(\sigma(p_{1})\) is called the dilation). If the gradient of its
dilation \(\sigma\) is vertical, i.e. \(\mathcal{H}(grad\ \sigma)=0\), then the horizontally weakly conformal map \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) is called horizontally homothetic. If \(\vartheta\) has no critical points, then the horizontally weakly conformal map \(\vartheta\) is called a horizontally conformal submersion (HCS) [2]. Therefore, a Riemannian submersion is a HCS with dilation identically one.
If \(r\) is the distance to the axis of a surface of revolution and \(\theta\) is the angle between a meridian and the velocity vector of a geodesic, then Clairaut's relation [7] states that \(r\sin\theta\) is constant. In [1], Bishop introduced Clairaut submersions and provided a necessary and sufficient condition for Riemannian submersions to be Clairaut Riemannian submersions. Further, the generalization of Clairaut Riemannian submersions to Clairaut Riemannian maps was introduced in [27] and [18]. Recently, the notion of Clairaut conformal submersion (CCS) was introduced by Meena and Zawadzki in [16]. The authors related the geometry of Clairaut Riemannian submersions with the CCS through conformal changes in the metric.
In [13], Hamilton introduced the idea of a Ricci soliton in order to obtain a desired metric on a Riemannian manifold \((M_{1},g_{M_{1}})\). The Ricci flow is an evolution equation (analogous to the heat equation) and can be formulated as
\[\frac{\partial}{\partial s}g_{M_{1}}(s)=-2Ric.\]
Furthermore, Ricci solitons are a generalization of Einstein metrics, and the self-similar solutions of the Ricci flow are Ricci solitons [4]. \((M_{1},g_{M_{1}},\xi,\mu)\) is called a Ricci soliton if there exists a potential vector field (PVF) \(\xi\) which satisfies
\[\frac{1}{2}(L_{\xi}g_{M_{1}})+Ric+\mu g_{M_{1}}=0, \tag{1.2}\]
where \(L_{\xi}g_{M_{1}}\) denotes the Lie derivative of \(g_{M_{1}}\), \(Ric\) denotes the Ricci tensor of the Riemannian manifold and \(\mu\) is a constant. If \(\mu<0\), \(\mu=0\) or \(\mu>0\), then \((M_{1},g_{M_{1}},\xi,\mu)\) is said to be shrinking, steady or expanding, respectively. Furthermore, \(\xi\) is said to be a conformal vector field [6] if it satisfies \(L_{\xi}g_{M_{1}}=2\beta g_{M_{1}}\), where \(\beta\) is the potential function of the PVF.
In [23], Perelman showed that Ricci solitons are very useful in solving the Poincaré conjecture. In [24], Pigola et al., by considering \(\mu\) as a variable function, introduced the almost Ricci soliton. In this way, the geometry of Ricci solitons became an important subject of research because of its wide applications in theoretical physics and its geometric significance.
Lately, Riemannian submersions whose total manifolds admit a Ricci soliton, almost Ricci soliton, almost \(\eta\)-Ricci-Bourguignon soliton, \(\eta\)-Ricci soliton, \(\eta\)-Ricci-Yamabe soliton, almost Yamabe soliton, or conformal \(\eta\)-Ricci soliton were investigated in [20], [3], [5], [9], [19], [28] and [29]. Also, Riemannian maps whose total or base manifolds admit a Ricci soliton were investigated in [30], [32], [12], [31] and [18]. Recently, in [17], conformal submersions whose total manifolds admit a Ricci soliton were studied by Meena and Yadav. Also, Polat studied Clairaut pointwise slant submersions from locally product Riemannian manifolds in [25]. In the present article, we investigate CCS whose total manifolds admit a Ricci soliton.
The article is organized as follows: in Sect. 2, we gather some basic concepts on conformal submersions and CCS which are required for this article. In Sect. 3, we obtain the scalar curvature and Ricci tensor by using the Ricci soliton equation. Then we provide necessary conditions for the fibers of a CCS \(\vartheta\) to be almost Ricci solitons and Einstein. Moreover, we provide necessary conditions for the base manifold to
be a Ricci soliton and Einstein. Also, we discuss the harmonicity of CCS between Riemannian manifolds whose total space is a Ricci soliton. Finally, we provide an illustrative example to support the theory of this article.
## 2. Preliminaries
In this part, we recall some basic facts about the notions of submersion, Riemannian submersion, conformal submersion and CCS between Riemannian manifolds.
Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a map between Riemannian manifolds \((M_{1}^{d_{1}},g_{M_{1}})\) and \((M_{2}^{d_{2}},g_{M_{2}}).\) Then \(\vartheta\) is called a \(C^{\infty}\)-submersion if it has maximal rank at every point \(p_{1}\in M_{1}\). The fibers of \(\vartheta\) are defined as \(\vartheta^{-1}(p_{2}),\) for any \(p_{2}\in M_{2}\). The vectors tangent to the fibers form the smooth vertical distribution, denoted by \(\nu_{p_{1}}=Ker\vartheta_{*p_{1}}\), and its orthogonal complement is called the horizontal distribution, denoted by \(\mathcal{H}_{p_{1}}\). \(\nu\) and \(\mathcal{H}\) denote the projections on the vertical and horizontal distributions, respectively. A vector field \(D\) on \(M_{1}\) is said to be projectable if there exists a vector field \(\tilde{D}\) on \(M_{2}\) such that \(\vartheta_{*p_{1}}(D)=\tilde{D}_{\vartheta(p_{1})}\). Then \(D\) and \(\tilde{D}\) are called \(\vartheta\)-related, and \(D\) is called the horizontal lift of \(\tilde{D}\).
Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a submersion between Riemannian manifolds \((M_{1}^{d_{1}},g_{M_{1}})\) and \((M_{2}^{d_{2}},g_{M_{2}}).\) We say that \(\vartheta\) is a Riemannian submersion if \(\vartheta_{*p_{1}}\) preserves the length of the horizontal vectors.
Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a Riemannian submersion. If \(\vartheta_{*}\) restricted to horizontal distribution of \(\vartheta\) is a conformal map, namely,
\[g_{M_{2}}(\vartheta_{*}X_{1},\vartheta_{*}X_{2})=\sigma^{2}(p_{1})g_{M_{1}}(X_ {1},X_{2}), \tag{2.1}\]
\(\forall X_{1},X_{2}\in\Gamma(Ker\vartheta_{*})^{\perp}\) and \(p_{1}\in M_{1}\), for a smooth function \(\sigma:M_{1}\rightarrow\mathbb{R}^{+}\), then \(\vartheta\) is said to be a conformal submersion. The O'Neill tensors \(S\) and \(T\) are formulated as [22]
\[S_{E_{1}}E_{2}=\mathcal{H}\nabla_{\mathcal{H}E_{1}}\nu E_{2}+\nu\nabla_{ \mathcal{H}E_{1}}\mathcal{H}E_{2}, \tag{2.2}\]
\[T_{E_{1}}E_{2}=\mathcal{H}\nabla_{\nu E_{1}}\nu E_{2}+\nu\nabla_{\nu E_{1}} \mathcal{H}E_{2}, \tag{2.3}\]
for any \(E_{1},E_{2}\in\Gamma(TM_{1})\), here \(\nabla\) denotes the Levi-Civita connection of metric tensor \(g_{M_{1}}\). \(\forall E_{1}\in\Gamma(TM_{1})\), \(T_{E_{1}}\) and \(S_{E_{1}}\) are skew-symmetric operators on \((\Gamma(TM_{1}),g_{M_{1}})\). We can easily see that \(T_{E_{1}}=T_{\nu E_{1}}\) i.e, \(T\) is vertical, and \(S_{E_{1}}=S_{\mathcal{H}E_{1}}\) i.e, \(S\) is horizontal and \(T\) satisfies \(T_{U_{1}}U_{2}=T_{U_{2}}U_{1},\ \forall U_{1},U_{2}\in\Gamma(Ker\vartheta_{*})\). Now, from (2.2) and (2.3), we have
\[\nabla_{U_{1}}U_{2}=T_{U_{1}}U_{2}+\nu\nabla_{U_{1}}U_{2}, \tag{2.4}\]
\[\nabla_{X_{1}}U_{1}=S_{X_{1}}U_{1}+\nu\nabla_{X_{1}}U_{1}, \tag{2.5}\]
\[\nabla_{X_{1}}X_{2}=S_{X_{1}}X_{2}+\mathcal{H}\nabla_{X_{1}}X_{2}, \tag{2.6}\]
where \(U_{1},U_{2}\in\Gamma(Ker\vartheta_{*})\) and \(X_{1},X_{2}\in\Gamma(Ker\vartheta_{*})^{\perp}\). A conformal submersion \(\vartheta\) with
\[T_{U_{1}}U_{2}=g_{M_{1}}(U_{1},U_{2})H\ \text{or}\ T_{U_{1}}X_{1}=-g_{M_{1}}(H,X_{1})U_ {1}, \tag{2.7}\]
is said to have totally umbilical fibers [33, 34]. Herein, \(\forall U_{1},U_{2}\in\Gamma(Ker\vartheta_{*})\), \(X_{1}\in\Gamma(Ker\vartheta_{*})^{\perp}\), and \(H\) denotes the mean curvature vector field of the fibers.
**Proposition 1**.: _[_11_]_ _Let \(\vartheta:(M_{1},g_{M_{1}})\rightarrow(M_{2},g_{M_{2}})\) be a HCS such that \((Ker\vartheta_{*})^{\perp}\) is integrable. Then the horizontal space is totally umbilical in \((M_{1},g_{M_{1}})\), i.e. \(S_{X_{1}}X_{2}=g_{M_{1}}(X_{1},X_{2})H^{\prime}\ \forall X_{1},X_{2}\in\Gamma(Ker \vartheta_{*})^{\perp}\), where \(H^{\prime}\) is the mean curvature vector field of \((Ker\vartheta_{*})^{\perp}\) and stated_
\[H^{\prime}=-\frac{\sigma^{2}}{2}\left(\nabla_{\nu}\frac{1}{\sigma^{2}}\right). \tag{2.8}\]
The second fundamental form of \(\vartheta\)[21] is given by \((\nabla\vartheta_{*})(X_{1},X_{2})=\nabla_{X_{1}}^{\vartheta}\vartheta_{*}X _{2}-\vartheta_{*}(\nabla_{X_{1}}^{M_{1}}X_{2}),\ \forall X_{1},X_{2}\in\Gamma(TM_{1})\), or
\[(\nabla\vartheta_{*})(X_{1},X_{2})=\nabla_{\vartheta_{*}X_{1}}^{M_{2}} \vartheta_{*}X_{2}-\vartheta_{*}(\nabla_{X_{1}}^{M_{1}}X_{2}),\ \forall X_{1},X_{2}\in\Gamma(TM_{1}). \tag{2.9}\]
**Lemma 1**.: _[_11_]_ _Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a HCS. Then_
\[\vartheta_{*}(\mathcal{H}\nabla_{X_{1}}X_{2})\] \[= \frac{\sigma^{2}}{2}\left\{-g_{M_{1}}(X_{1},X_{2})\vartheta_{*}( grad_{\mathcal{H}}\ (\frac{1}{\sigma^{2}}))+X_{1}(\frac{1}{\sigma^{2}})\vartheta_{*}X_{2}+X_{2}( \frac{1}{\sigma^{2}})\vartheta_{*}X_{1}\right\}\] \[+\nabla_{\vartheta_{*}X_{1}}^{M_{2}}\vartheta_{*}X_{2},\]
_where \(\nabla\) Levi-Civita connection on \(M_{1}\) and \(X_{1},X_{2}\) are basic vector fields._
Now we give general information about the gradient (grad), divergence (div) and Hessian \((Hess)\) [26]. Let \(\beta\in\mathcal{F}(M_{1})\); then the gradient of the function \(\beta\), denoted by \(grad\,\beta\) or \(\nabla\beta\), is given by
\[g_{M_{1}}(grad\ \beta,X_{1})=X_{1}(\beta), \tag{2.10}\]
for \(X_{1}\in\Gamma(TM_{1}).\) Let \(\{e_{k}\}_{1\leq k\leq d_{1}}\) be an orthonormal basis of \(T_{p_{1}}M_{1}\) then, we have
\[g_{M_{1}}(X_{1},X_{2})=\sum_{k=1}^{d_{1}}g_{M_{1}}(X_{1},e_{k})g_{M_{1}}(X_{2}, e_{k}). \tag{2.11}\]
The divergence of \(X_{1}\) given by
\[div(X_{1})=\sum_{k=1}^{d_{1}}g_{M_{1}}(\nabla_{e_{k}}X_{1},e_{k}), \tag{2.12}\]
for any \(X_{1}\in\Gamma(TM_{1}).\) The Hessian tensor \(h_{\beta}\) of \(\beta\) is given by
\[h_{\beta}(X_{1})=\nabla_{X_{1}}\nabla\beta,\]
for \(X_{1}\in\Gamma(TM_{1}).\) The Hessian form of \(\beta\) is given by
\[Hess\beta(X_{1},X_{2})=g_{M_{1}}(h_{\beta}(X_{1}),X_{2}), \tag{2.13}\]
for any \(X_{1},X_{2}\in\Gamma(TM_{1}).\) The Laplacian of \(\beta\) is given by
\[\Delta\beta=div(\nabla\beta). \tag{2.14}\]
**Lemma 2**.: _[_11_]_ _Let \(\beta:M_{1}\rightarrow\mathbb{R}\) be a smooth function and \((M_{1},g_{M_{1}})\) be a Riemannian manifold. Then_
\[g_{M_{1}}(\nabla_{X_{1}}grad(\beta),X_{2})=g_{M_{1}}(\nabla_{X_{2}}grad(\beta ),X_{1}),\]
_for \(X_{1},X_{2}\in\Gamma(TM_{1}).\)_
**Proposition 2**.: _[_11_]_ _Let \(\vartheta:(M_{1},g_{M_{1}})\to(M_{2},g_{M_{2}})\) be a HCS with dilation \(\sigma\). Then_
\[S_{X_{1}}X_{2}=\frac{1}{2}\left\{\nu[X_{1},X_{2}]-\sigma^{2}g_{M_{1}}(X_{1},X_{ 2})\left(\nabla_{\nu}\frac{1}{\sigma^{2}}\right)\right\},\]
_for any \(X_{1},X_{2}\in\Gamma(Ker\vartheta_{*})^{\perp}\)._
**Theorem 1**.: _[_17_]_ _Let \(\vartheta:(M_{1},g_{M_{1}})\to(M_{2},g_{M_{2}})\) be a HCS with dilation \(\sigma\) such that \((Ker\vartheta_{*})^{\perp}\) is totally geodesic. Then \(\sigma\) is constant on \(Ker\vartheta_{*}\)._
**Theorem 2**.: _[_17_]_ _Let \(\vartheta:(M_{1},g_{M_{1}})\to(M_{2},g_{M_{2}})\) be a HCS. If \(S\) is parallel then \(\sigma\) is constant on \(Ker\vartheta_{*}\)._
**Theorem 3**.: _[_17_]_ _Let \(\vartheta:(M_{1},g_{M_{1}})\to(M_{2},g_{M_{2}})\) be a HCS. Then \(\vartheta\) is totally geodesic if and only if fibers of \(\vartheta\) are totally geodesic, \((Ker\vartheta_{*})^{\perp}\) is totally geodesic and \(\vartheta\) is homothetic._
**Lemma 3**.: _[_17_]_ _Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\to(M_{2}^{d_{2}},g_{M_{2}})\) be a HCS with dilation \(\sigma\) such that \((Ker\vartheta_{*})^{\perp}\) is integrable. Then followings are valid:_
1. \(\sum\limits_{l=1}^{d_{2}}g_{M_{1}}(S_{X_{l}}U_{1},S_{X_{l}}U_{2})=d_{2}^{2} \frac{\sigma^{4}}{4}g_{M_{1}}(\nabla_{\nu}\frac{1}{\sigma^{2}},U_{1})g_{M_{1} }(\nabla_{\nu}\frac{1}{\sigma^{2}},U_{2})\)_,_
2. \(\sum\limits_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{U_{1}}S)_{X_{l}}X_{l},U_{2})=d_{2} g_{M_{1}}(\nabla_{U_{1}}H^{\prime},U_{2})\)_,_
3. \(\sum\limits_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{X_{1}}S)_{X_{l}}X_{l},U_{1})=d_{2} g_{M_{1}}(\nabla_{X_{1}}H^{\prime},U_{1})\)_,_
4. \(\sum\limits_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{X_{l}}S)_{X_{1}}X_{l},U_{1})=g_{M_ {1}}(X_{1},X_{l})g_{M_{1}}(\nabla_{X_{l}}H^{\prime},U_{1})\)_,_
5. \(\sum\limits_{k=d_{2}+1}^{d_{1}}g_{M_{1}}((\nabla_{U_{k}}S)_{X_{1}}X_{2},U_{k})=div(H^{\prime})g_{M_{1}}(X_{1},X_{2})\)_,_
6. \(\sum\limits_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(S_{X_{1}}U_{k},S_{X_{2}}U_{k})=\frac{ \sigma^{4}}{4}|\nabla_{\nu}\frac{1}{\sigma^{2}}|^{2}g_{M_{1}}(X_{1},X_{2})\)_,_
_here \(\{U_{k}\}_{d_{2}+1\leq k\leq d_{1}}\) is orthonormal bases of \(Ker\vartheta_{*}\) and \(\{X_{l}\}_{1\leq l\leq d_{2}}\) is orthonormal bases of \((Ker\vartheta_{*})^{\perp}\), for \(U_{1},U_{2}\in\Gamma(Ker\vartheta_{*})\) and \(X_{1},X_{2}\in\Gamma(Ker\vartheta_{*})^{\perp}\)._
**Proposition 3**.: _Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\to(M_{2}^{d_{2}},g_{M_{2}})\) be a HCS with dilation \(\sigma\). Then_
\[Ric(U_{1},U_{2})\] \[= Ric^{\nu}(U_{1},U_{2})-(d_{1}-d_{2})g_{M_{1}}(T_{U_{1}}U_{2},H)\] \[+\sum\limits_{l=1}^{d_{2}}g_{M_{1}}(S_{X_{l}}U_{1},S_{X_{l}}U_{2})+\sum\limits_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{U_{1}}S)_{X_{l}}X_{l},U_{2})\] \[-\sum\limits_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{X_{l}}T)_{U_{1}}X_{l},U_{2})\] \[-\frac{\sigma^{4}}{2}d_{2}g_{M_{1}}(U_{1},\nabla_{\nu}\frac{1}{\sigma^{2}})g_{M_{1}}(U_{2},\nabla_{\nu}\frac{1}{\sigma^{2}}), \tag{2.15}\]
\[Ric(U_{1},X_{1})\] \[= (d_{1}-d_{2})g_{M_{1}}(\nabla_{U_{1}}H,X_{1})\] \[-\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}((\nabla_{U_{k}}T)_{U_{1}}U_{k},X_{1})+\sum_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{X_{1}}S)_{X_{l}}X_{l},U_{1})\] \[-\sum_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{X_{l}}S)_{X_{1}}X_{l},U_{1})-\sum_{l=1}^{d_{2}}g_{M_{1}}(T_{U_{1}}X_{l},\nu[X_{1},X_{l}]), \tag{2.16}\]
\[Ric(X_{1},X_{2})\] \[= \sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}((\nabla_{U_{k}}S)_{X_{1}}X_{2},U_{k})\] \[-\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}((\nabla_{X_{1}}T)_{U_{k}}X_{2},U_{k})+\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(S_{X_{1}}U_{k},S_{X_{2}}U_{k})\] \[-\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(T_{U_{k}}X_{1},T_{U_{k}}X_{2})+\sigma^{2}g_{M_{1}}(S_{X_{1}}X_{2},\nabla_{\nu}\frac{1}{\sigma^{2}})\] \[+\frac{1}{\sigma^{2}}Ric^{M_{2}}(\tilde{X}_{1},\tilde{X}_{2})+\frac{3}{4}\sum_{l=1}^{d_{2}}g_{M_{1}}(\nu[X_{1},X_{l}],\nu[X_{l},X_{2}])\] \[-\frac{(d_{2}-2)}{2}\sigma^{2}g_{M_{1}}(\nabla_{X_{1}}\nabla\frac{1}{\sigma^{2}},X_{2})\] \[-\frac{\sigma^{2}}{2}g_{M_{1}}(X_{1},X_{2})\left\{\Delta^{\mathcal{H}}\frac{1}{\sigma^{2}}-d_{2}\left(H^{\prime}\frac{1}{\sigma^{2}}\right)\right\}\] \[+\frac{d_{2}\sigma^{4}}{4}g_{M_{1}}(X_{1},X_{2})|\nabla\frac{1}{\sigma^{2}}|^{2}+\frac{\sigma^{4}}{4}(d_{2}-2)(X_{1}\frac{1}{\sigma^{2}})(X_{2}\frac{1}{\sigma^{2}}), \tag{2.17}\]
_here \(\{U_{k}\}_{d_{2}+1\leq k\leq d_{1}}\) is orthonormal bases of \(Ker\vartheta_{*}\) and \(\{X_{l}\}_{1\leq l\leq d_{2}}\) is orthonormal bases of \((Ker\vartheta_{*})^{\perp},\) for \(U_{1},U_{2}\in\Gamma(Ker\vartheta_{*})\) and \(X_{1},X_{2}\in\Gamma(Ker\vartheta_{*})^{\perp}\). Also, \(X_{1}\) and \(X_{2}\) are the horizontal lift of \(\tilde{X}_{1}\) and \(\tilde{X}_{2}\), respectively._
The Clairaut condition for a Riemannian submersion was originally defined by Bishop in [1]. Further, the Clairaut condition was defined for Riemannian maps in [27] and [18]. Now, we recall the definition of a CCS [16]. According to this definition, a conformal submersion \(\vartheta:(M_{1},g_{M_{1}})\rightarrow(M_{2},g_{M_{2}})\) between Riemannian manifolds with dilation \(\sigma\) is said to be a CCS if there is a function \(r:M_{1}\rightarrow\mathbb{R}^{+}\) such that, for every geodesic \(\zeta\) on \(M_{1}\), the function \((r\circ\zeta)\sin\omega(s)\) is constant along \(\zeta\), where, for all \(s\), \(\omega(s)\) is the angle between \(\dot{\zeta}(s)\) and the horizontal space at \(\zeta(s)\).
**Theorem 4**.: _[_16_]_ _Let \(\vartheta:(M_{1},g_{M_{1}})\rightarrow(M_{2},g_{M_{2}})\) be a conformal submersion with connected fibers and dilation \(\sigma\). Then \(\vartheta\) is a CCS with \(r=e^{\beta}\) if and only if \(\nabla\beta\) is horizontal, fibers of \(\vartheta\) are totally umbilical with \(H=-\nabla\beta\) i.e. \(T_{U_{1}}U_{2}=-g_{M_{1}}(U_{1},U_{2})\nabla\beta\) and \(\sigma\) is constant along the fibers of \(\vartheta\)._
## 3. Clairaut conformal submersions whose total manifolds admit a Ricci soliton
In this part, we consider a CCS whose total space is a Ricci soliton and study its geometry.
**Proposition 4**.: _Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\to(M_{2}^{d_{2}},g_{M_{2}})\) be a CCS with dilation \(\sigma\). Then the Ricci tensor_
\[Ric(U_{1},U_{2})\] \[= Ric^{\nu}(U_{1},U_{2})-(d_{1}-d_{2})g_{M_{1}}(U_{1},U_{2})|\nabla\beta|^{2}-g_{M_{1}}(U_{1},U_{2})div(\nabla\beta), \tag{3.1}\]
\[Ric(U_{1},X_{1})\] \[= (d_{1}-d_{2}+1)g_{M_{1}}(S_{X_{1}}U_{1},\nabla\beta)-\sum_{l=1}^{d_{2}}\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(\nabla_{X_{l}}S_{X_{1}}X_{l},U_{k}), \tag{3.2}\]
\[Ric(X_{1},X_{2})\] \[= div(S_{X_{1}}X_{2})-2\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(S_{X_{1}}U_{k},S_{X_{2}}U_{k})\] \[-(d_{1}-d_{2})g_{M_{1}}(X_{2},\nabla_{X_{1}}\nabla\beta)-X_{1}(\beta)X_{2}(\beta)(d_{1}-d_{2})\] \[+\frac{1}{\sigma^{2}}Ric^{M_{2}}(\tilde{X}_{1},\tilde{X}_{2})-\frac{(d_{2}-2)}{2}\sigma^{2}g_{M_{1}}(\nabla_{X_{1}}\nabla\frac{1}{\sigma^{2}},X_{2})\] \[-\frac{\sigma^{2}}{2}g_{M_{1}}(X_{1},X_{2})\Delta^{\mathcal{H}}\frac{1}{\sigma^{2}}+\frac{d_{2}\sigma^{4}}{4}g_{M_{1}}(X_{1},X_{2})|\nabla\frac{1}{\sigma^{2}}|^{2}\] \[+\frac{\sigma^{4}}{4}(d_{2}-2)(X_{1}\frac{1}{\sigma^{2}})(X_{2}\frac{1}{\sigma^{2}}), \tag{3.3}\]
_here \(\{U_{k}\}_{d_{2}+1\leq k\leq d_{1}}\) is orthonormal bases of \(Ker\vartheta_{*}\) and \(\{X_{l}\}_{1\leq l\leq d_{2}}\) is orthonormal bases of \((Ker\vartheta_{*})^{\perp},\) for \(U_{1},U_{2}\in\Gamma(Ker\vartheta_{*})\) and \(X_{1},X_{2}\in\Gamma(Ker\vartheta_{*})^{\perp}\). Also, \(X_{1}\) and \(X_{2}\) are the horizontal lift of \(\tilde{X}_{1}\) and \(\tilde{X}_{2}\), respectively._
Proof.: Since \(\vartheta\) is CCS, utilizing Theorem 4, we obtain
\[g_{M_{1}}(T_{U_{1}}U_{2},H)=g_{M_{1}}(U_{1},U_{2})|\nabla\beta|^{2}, \tag{3.4}\]
and
\[\sum_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{X_{l}}T)_{U_{1}}X_{l},U_{2})=g_{M_{1}}(U_ {1},U_{2})div(\nabla\beta). \tag{3.5}\]
Also, by using Theorem 4 in Lemma 3, we have
\[\sum_{l=1}^{d_{2}}g_{M_{1}}(S_{X_{l}}U_{1},S_{X_{l}}U_{2})=0, \tag{3.6}\]
and
\[\sum_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{U_{1}}S)_{X_{1}}X_{l},U_{2})=0. \tag{3.7}\]
Then using (3.4)-(3.7) in (2.15), we get (3.1).
Also, by using Theorem 4, we obtain
\[g_{M_{1}}(\nabla_{U_{1}}H,X_{1})=-g_{M_{1}}(\nabla_{U_{1}}\nabla\beta,X_{1}) \tag{3.8}\]
\[\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}((\nabla_{U_{k}}T)_{U_{1}}U_{k},X_{1})=g_{M_{1 }}(S_{X_{1}}U_{1},\nabla\beta) \tag{3.9}\]
\[\sum_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{X_{1}}S)_{X_{l}}X_{l},U_{1})=0 \tag{3.10}\]
\[\sum_{l=1}^{d_{2}}g_{M_{1}}((\nabla_{X_{l}}S)_{X_{1}}X_{l},U_{1})\] \[= \sum_{l=1}^{d_{2}}\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}((\nabla_{X_{l} }S)_{X_{1}}X_{l},U_{k})g_{M_{1}}(U_{k},U_{k}) \tag{3.11}\]
\[\sum_{l=1}^{d_{2}}g_{M_{1}}(T_{U_{1}}X_{l},\nu[X_{1},X_{l}])=-2g_{M_{1}}(S_{X_{ 1}}U_{1},\nabla\beta). \tag{3.12}\]
Then using (3.8)-(3.12) in (2.16), we get (3.2). Finally,
\[\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}((\nabla_{U_{k}}S)_{X_{1}}X_{2},U_{k})=div(S_ {X_{1}}X_{2}) \tag{3.13}\]
\[\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(S_{X_{1}}U_{k},S_{X_{2}}U_{k})=0 \tag{3.14}\]
\[\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}((\nabla_{X_{1}}T)_{U_{k}}X_{2},U_{k})=(d_{1} -d_{2})g_{M_{1}}(X_{2},\nabla_{X_{1}}\nabla\beta) \tag{3.15}\]
\[\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(T_{U_{k}}X_{1},T_{U_{k}}X_{2})=g_{M_{1}}\left( \nabla_{X_{1}}\mathcal{H}\nabla\frac{1}{\sigma^{2}},X_{2}\right). \tag{3.16}\]
Then using (3.13)-(3.16) in (2.17), we get (3.3).
**Theorem 5**.: _Let \((M_{1},g_{M_{1}},\xi,\mu)\) be a Ricci soliton with the PVF \(\xi\in\Gamma(TM_{1})\) and \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a CCS between Riemannian manifolds. Then the following options are valid:_
_(i) any fiber of \(\vartheta\) is an almost Ricci soliton if \(\xi=W\in\Gamma(Ker\vartheta_{*})\),_
_(ii) any fiber of \(\vartheta\) is Einstein if \(\xi=X_{1}\in\Gamma(Ker\vartheta_{*})^{\perp}\)._
Proof.: Since \((M_{1},g_{M_{1}},\xi,\mu)\) is a Ricci soliton, then using (1.2), we obtain
\[\frac{1}{2}(L_{\xi}g_{M_{1}})(U_{1},U_{2})+Ric(U_{1},U_{2})+\mu g_{M_{1}}(U_{1},U_{2})=0, \tag{3.17}\]
for \(U_{1},U_{2}\in\Gamma(Ker\vartheta_{*}).\) Using (3.1) in (3.17), we have
\[\frac{1}{2}(L_{\xi}g_{M_{1}})(U_{1},U_{2})+Ric^{\nu}(U_{1},U_{2})\] \[+g_{M_{1}}(U_{1},U_{2})(\mu-(d_{1}-d_{2})|\nabla\beta|^{2}-div(\nabla\beta))\] \[= 0. \tag{3.18}\]
If \(\xi=W\in\Gamma(Ker\vartheta_{*})\), then from (3.18), we acquire
\[\frac{1}{2}(L_{W}g_{M_{1}})(U_{1},U_{2})+Ric^{\nu}(U_{1},U_{2})+\mu^{\prime}g_ {M_{1}}(U_{1},U_{2})=0,\]
where \(\mu^{\prime}=\mu-(d_{1}-d_{2})|\nabla\beta|^{2}-div(\nabla\beta)\) is a smooth function on \(M_{1}\). This implies \((i)\). Now, if \(\xi=X_{1}\in\Gamma(Ker\vartheta_{*})^{\perp}\), then (3.18) implies
\[\frac{1}{2}\{g_{M_{1}}(\nabla_{U_{1}}X_{1},U_{2})+g_{M_{1}}( \nabla_{U_{2}}X_{1},U_{1})\}\] \[+Ric^{\nu}(U_{1},U_{2})+g_{M_{1}}(U_{1},U_{2})(\mu-(d_{1}-d_{2})| \nabla\beta|^{2}-div(\nabla\beta))\] \[= 0,\]
which implies
\[-g_{M_{1}}(\mathcal{H}\nabla_{U_{1}}U_{2},X_{1})+Ric^{\nu}(U_{1},U _{2})\] \[+g_{M_{1}}(U_{1},U_{2})(\mu-(d_{1}-d_{2})|\nabla\beta|^{2}-div( \nabla\beta))\] \[= 0.\]
Using (2.4) and Theorem 4 in above equation, we get
\[Ric^{\nu}(U_{1},U_{2})+g_{M_{1}}(U_{1},U_{2})(\mu-(d_{1}-d_{2})|\nabla\beta|^{ 2}-div(H)+X_{1}(\beta))=0.\]
which implies \((ii)\).
**Theorem 6**.: _Let \((M_{1},g_{M_{1}},\xi,\mu)\) be a Ricci soliton with the PVF \(\xi\in\Gamma(TM_{1})\) and \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a totally geodesic CCS between Riemannian manifolds. Then the following options are valid:_
_(i) \(M_{2}\) is a Ricci soliton if \(\xi=Z\in\Gamma(Ker\vartheta_{*})^{\perp}\),_
_(ii) \(M_{2}\) is Einstein if \(\xi=W\in\Gamma(Ker\vartheta_{*})\)._
Proof.: Since \((M_{1},g_{M_{1}},\xi,\mu)\) is a Ricci soliton then using (1.2), we obtain
\[\frac{1}{2}(L_{\xi}g_{M_{1}})(X_{1},X_{2})+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1 },X_{2})=0,\]
which can be written as
\[\frac{1}{2}\left\{g_{M_{1}}(\nabla_{X_{1}}\xi,X_{2})+g_{M_{1}}( \nabla_{X_{2}}\xi,X_{1})\right\}\] \[+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})\] \[= 0, \tag{3.19}\]
for \(X_{1},X_{2}\in\Gamma(Ker\vartheta_{*})^{\perp}\). If \(\xi=Z\in\Gamma(Ker\vartheta_{*})^{\perp}\), then utilizing (1.1) in (3.19)
\[\frac{1}{2\sigma^{2}}\left\{g_{M_{2}}(\vartheta_{*}(\nabla_{X_{1 }}Z),\vartheta_{*}X_{2})+g_{M_{2}}(\vartheta_{*}(\nabla_{X_{2}}Z),\vartheta_{* }X_{1})\right\}\] \[+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})\] \[= 0. \tag{3.20}\]
Using (2.9) in (3.20), we have
\[\frac{1}{2\sigma^{2}}\left\{\begin{array}{c}g_{M_{2}}(\nabla_{\tilde{X}_{1}}^{M_{2}}\bar{Z}-(\nabla\vartheta_{*})(X_{1},Z),\tilde{X}_{2})\\ +g_{M_{2}}(\nabla_{\tilde{X}_{2}}^{M_{2}}\bar{Z}-(\nabla\vartheta_{*})(X_{2},Z),\tilde{X}_{1})\end{array}\right\}\] \[+Ric(X_{1},X_{2})+\frac{\mu}{\sigma^{2}}g_{M_{2}}(\tilde{X}_{1},\tilde{X}_{2})\] \[= 0, \tag{3.21}\]
where \(X_{1},X_{2}\) and \(Z\) are the horizontal lifts of \(\tilde{X}_{1},\tilde{X}_{2}\) and \(\bar{Z}\), respectively. Since \(\vartheta\) is totally geodesic, \((\nabla\vartheta_{*})(X_{1},Z)=0\) and \((\nabla\vartheta_{*})(X_{2},Z)=0\). Using Theorem
1, Theorem 2 and Theorem 3 in (3.3), then (3.21) can be written as
\[\frac{1}{2\sigma^{2}}\left\{g_{M_{2}}(\nabla_{X_{1}}^{M_{2}}\bar{Z},\bar{X}_{2})+g_{M_{2}}(\nabla_{\bar{X}_{2}}^{M_{2}}\bar{Z},\bar{X}_{1})\right\}\] \[+\frac{1}{\sigma^{2}}Ric^{M_{2}}(\bar{X}_{1},\tilde{X}_{2})+\frac{ \mu}{\sigma^{2}}g_{M_{2}}(\bar{X}_{1},\tilde{X}_{2})\] \[= 0,\]
which completes proof of (i).
If \(\xi=W\in\Gamma(Ker\vartheta_{*})\) then (3.19) becomes
\[\frac{1}{2}\left\{g_{M_{1}}(\nabla_{X_{1}}W,X_{2})+g_{M_{1}}( \nabla_{X_{2}}W,X_{1})\right\}\] \[+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})\] \[= 0,\]
which can be written as
\[-\frac{1}{2}\left\{g_{M_{1}}(\nabla_{X_{1}}X_{2},W)+g_{M_{1}}( \nabla_{X_{2}}X_{1},W)\right\}\] \[+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})\] \[= 0.\]
Using (2.1) in (2.6), we have
\[-\frac{1}{2}\left\{g_{M_{1}}(S_{X_{1}}X_{2},W)+g_{M_{1}}(S_{X_{2}}X_{1},W)\right\}\] \[+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})\] \[= 0. \tag{3.22}\]
Since \(\vartheta\) is totally geodesic, then using (3.3), Proposition 2 and Theorem 3 in (3.22), we obtain
\[\frac{1}{\sigma^{2}}Ric^{M_{2}}(\bar{X}_{1},\tilde{X}_{2})+\frac{\mu}{\sigma^{ 2}}g_{M_{2}}(\bar{X}_{1},\tilde{X}_{2})=0,\]
where \(X_{1}\) and \(X_{2}\) are the horizontal lift of \(\bar{X}_{1}\) and \(\tilde{X}_{2}\), respectively. This completes the proof of (ii).
**Corollary 1**.: _Let \((M_{1},g_{M_{1}},\xi,\mu)\) be a Ricci soliton with the PVF \(W\in\Gamma(Ker\vartheta_{*})\) and \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a totally geodesic CCS between Riemannian manifolds. If \(Ric^{M_{2}}(\bar{X}_{1},\tilde{X}_{2})=-\mu g_{M_{2}}(\bar{X}_{1},\tilde{X}_{2}),\) then \(W\) is a Killing vector field on \(M_{2}\)._
**Theorem 7**.: _Let \(\zeta\) be a geodesic curve on \(M_{1}\) and \((M_{1},g_{M_{1}},\dot{\zeta},\mu)\) be a Ricci soliton with the PVF \(\dot{\zeta}\in\Gamma(TM_{1}).\) Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a CCS with \(r=e^{\beta}\) between Riemannian manifolds such that fibers of \(\vartheta\) are Einstein. Then \(\dot{\zeta}\) is conformal vector field on \(Ker\vartheta_{*}\)._
Proof.: Since \((M_{1},g_{M_{1}},\dot{\zeta},\mu)\) is a Ricci soliton, the following is obtained:
\[\frac{1}{2}(L_{\dot{\zeta}}g_{M_{1}})(U_{1},U_{2})+Ric(U_{1},U_{2})+\mu g_{M_{1 }}(U_{1},U_{2})=0, \tag{3.23}\]
for \(U_{1},U_{2}\in\Gamma(Ker\vartheta_{*})\) and \(\dot{\zeta}\in\Gamma(TM_{1})\). Using (3.1) in (3.23), we have
\[\frac{1}{2}(L_{\dot{\zeta}}g_{M_{1}})(U_{1},U_{2})+Ric^{v}(U_{1},U_ {2})-(d_{1}-d_{2})g_{M_{1}}(U_{1},U_{2})\left\|\nabla\beta\right\|^{2}\] \[-g_{M_{1}}(U_{1},U_{2})\operatorname{div}(\nabla\beta)+\mu g_{M_{ 1}}(U_{1},U_{2})\] \[= 0.\]
Since the fibers of \(\vartheta\) are Einstein, using \(Ric^{v}(U_{1},U_{2})=-\mu g_{M_{1}}(U_{1},U_{2})\) we get
\[\frac{1}{2}(L_{\dot{\zeta}}g_{M_{1}})(U_{1},U_{2})-\left\{(d_{1}-d_{2})\left\| \nabla\beta\right\|^{2}+\operatorname{div}(\nabla\beta)\right\}g_{M_{1}}(U_{1 },U_{2})=0,\]
which can be written as
\[(L_{\dot{\zeta}}g_{M_{1}})(U_{1},U_{2})=2\beta_{1}g_{M_{1}}(U_{1},U_{2}),\]
where \(\beta_{1}=(d_{1}-d_{2})\left\|\nabla\beta\right\|^{2}+\operatorname{div}( \nabla\beta)\) is a smooth function on \(M_{1}\). Thus \(\dot{\zeta}\) is conformal vector field on \(Ker\vartheta_{*}\).
**Corollary 2**.: _Let \(\zeta\) be a geodesic curve on \(M_{1}\) and \((M_{1},g_{M_{1}},\dot{\zeta},\mu)\) be a Ricci soliton with the PVF \(\dot{\zeta}\in\Gamma(TM_{1})\). Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\to(M_{2}^{d_{2}},g_{M_{2}})\) be a CCS with \(r=e^{\beta}\) between Riemannian manifolds. If the fibers of \(\vartheta\) are totally geodesic, then \(\dot{\zeta}\) is a Killing vector field on \(Ker\vartheta_{*}\)._
**Theorem 8**.: _Let \(\zeta\) be a geodesic curve on \(M_{1}\) and \((M_{1},g_{M_{1}},\dot{\zeta},\mu)\) be a Ricci soliton with the PVF \(\dot{\zeta}\in\Gamma(TM_{1})\). Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\to(M_{2}^{d_{2}},g_{M_{2}})\) be a homothetic CCS with \(r=e^{\beta}\) from a Riemannian manifold \(M_{1}\) to an Einstein manifold \(M_{2}\) such that fibers of \(\vartheta\) are totally geodesic and \((Ker\vartheta_{*})^{\perp}\) is integrable. Then \(\dot{\zeta}\) is conformal vector field on \((Ker\vartheta_{*})^{\perp}\)._
Proof.: Since \((M_{1},g_{M_{1}},\dot{\zeta},\mu)\) is a Ricci soliton then by (1.2) we get
\[\frac{1}{2}(L_{\dot{\zeta}}g_{M_{1}})(X_{1},X_{2})+Ric(X_{1},X_{2})+\mu g_{M_{ 1}}(X_{1},X_{2})=0,\]
for \(X_{1},X_{2}\in\Gamma(Ker\vartheta_{*})^{\perp}\). Using Theorem 1, Theorem 2 and Theorem 3 in (3.3) then we have
\[\frac{1}{2}(L_{\dot{\zeta}}g_{M_{1}})(X_{1},X_{2})+\frac{1}{\sigma ^{2}}Ric^{M_{2}}(\bar{X}_{1},\tilde{X}_{2})-\frac{\sigma^{2}}{2}g_{M_{1}}(X_{ 1},X_{2})\Delta^{\mathcal{H}}\frac{1}{\sigma^{2}}\] \[+\frac{d_{2}\sigma^{4}}{4}g_{M_{1}}(X_{1},X_{2})|\nabla\frac{1}{ \sigma^{2}}|^{2}+\mu g_{M_{1}}(X_{1},X_{2})\] \[= 0.\]
Using (2.1) in the above equation
\[\frac{1}{2}(L_{\dot{\zeta}}g_{M_{1}})(X_{1},X_{2})+\frac{1}{\sigma ^{2}}Ric^{M_{2}}(\bar{X}_{1},\tilde{X}_{2})-\frac{1}{2}g_{M_{2}}(\bar{X}_{1}, \tilde{X}_{2})\Delta^{\mathcal{H}}\frac{1}{\sigma^{2}}\] \[+\frac{d_{2}\sigma^{4}}{4}g_{M_{2}}(\bar{X}_{1},\tilde{X}_{2})| \nabla\frac{1}{\sigma^{2}}|^{2}+\frac{\mu}{\sigma^{2}}g_{M_{2}}(\bar{X}_{1}, \tilde{X}_{2})\] \[= 0,\]
which can be written as
\[\frac{1}{2}(L_{\dot{\zeta}}g_{M_{1}})(X_{1},X_{2})+\frac{1}{\sigma^{2}}Ric^{M _{2}}(\bar{X}_{1},\tilde{X}_{2})+\beta_{1}g_{M_{2}}(\bar{X}_{1},\tilde{X}_{2})=0,\]
or
\[\frac{1}{2}(L_{\dot{\zeta}}g_{M_{1}})(X_{1},X_{2})+\frac{1}{\sigma^{2}}\left\{ Ric^{M_{2}}(\bar{X}_{1},\tilde{X}_{2})+\beta_{2}g_{M_{2}}(\bar{X}_{1},\tilde{X}_{2}) \right\}=0, \tag{3.24}\]
where \(\beta_{1}=-\frac{1}{2}\Delta^{\mathcal{H}}\frac{1}{\sigma^{2}}+\frac{d_{2}\sigma^{4}}{4}|\nabla\frac{1}{\sigma^{2}}|^{2}+\frac{\mu}{\sigma^{2}}\) and \(\beta_{2}=\sigma^{2}\beta_{1}\). Since \(M_{2}\) is Einstein, putting \(Ric^{M_{2}}(\tilde{X}_{1},\tilde{X}_{2})=\beta_{2}g_{M_{2}}(\tilde{X}_{1},\tilde{X}_{2})\) in (3.24), we obtain
\[\frac{1}{2}(L_{\dot{\zeta}}g_{M_{1}})(X_{1},X_{2})+2\beta_{1}g_{M_{1}}(X_{1},X_ {2})=0,\]
which implies that \(\dot{\zeta}\) is conformal vector field on \((Ker\vartheta_{*})^{\perp}\).
**Theorem 9**.: _[_17_]__Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a totally geodesic HCS with dilation \(\sigma\). Then_
\[s=s^{v}+\frac{1}{\sigma^{2}}s^{M_{2}},\]
_where \(s,s^{v}\) and \(s^{M_{2}}\) indicate the scalar curvatures of \(M_{1},Ker\vartheta_{*}\) and \(M_{2},\) respectively._
Using Theorem 9, the following result can be written
**Theorem 10**.: _Let \((M_{1},g_{M_{1}},\dot{\zeta},\mu)\) be a Ricci soliton with the PVF \(\dot{\zeta}\in\Gamma(TM_{1})\) and \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a totally geodesic CCS with \(r=e^{\beta}\) between Riemannian manifolds such that \((Ker\vartheta_{*})^{\perp}\) is integrable. Then \(M_{1}\) has constant scalar curvature \(-\mu d_{1}\)._
Proof.: Since \((M_{1},g_{M_{1}},\dot{\zeta},\mu)\) is a Ricci soliton then by (1.2) we have
\[\frac{1}{2}(g_{M_{1}}(\nabla_{X_{1}}\dot{\zeta},X_{2})+g_{M_{1}}(\nabla_{X_{2}}\dot{\zeta},X_{1}))\] \[+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})\] \[= 0, \tag{3.25}\]
for \(X_{1},X_{2},\dot{\zeta}\in\Gamma(TM_{1}).\) Now, we decompose \(X_{1},X_{2}\) and \(\dot{\zeta}\) such that \(X_{1}=\nu X_{1}+\mathcal{H}X_{1},X_{2}=\nu X_{2}+\mathcal{H}X_{2}\) and \(\dot{\zeta}=\nu W+\mathcal{H}Z\). Then (3.25) becomes
\[\frac{1}{2}\left\{\begin{array}{l}g_{M_{1}}(\nabla_{\nu X_{1}+\mathcal{H}X_{1}}\nu W+\mathcal{H}Z,\nu X_{2}+\mathcal{H}X_{2})\\ +g_{M_{1}}(\nabla_{\nu X_{2}+\mathcal{H}X_{2}}\nu W+\mathcal{H}Z,\nu X_{1}+\mathcal{H}X_{1})\end{array}\right\}\] \[+Ric(\nu X_{1},\nu X_{2})+Ric(\mathcal{H}X_{1},\mathcal{H}X_{2})+Ric(\nu X_{1},\mathcal{H}X_{2})\] \[+Ric(\mathcal{H}X_{1},\nu X_{2})+\mu\left\{g_{M_{1}}(\nu X_{1},\nu X_{2})+g_{M_{1}}(\mathcal{H}X_{1},\mathcal{H}X_{2})\right\}\] \[= 0. \tag{3.26}\]
Taking trace of (3.26), we have
\[\frac{1}{2}\left\{2\sum_{l=1}^{d_{2}}g_{M_{1}}(\nabla_{X_{l}}X_{l},X_{l})+2\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(\nabla_{U_{k}}U_{k},U_{k})\right\}\] \[+\sum_{k=d_{2}+1}^{d_{1}}Ric(U_{k},U_{k})+\sum_{l=1}^{d_{2}}Ric(X_{l},X_{l})+2\sum_{k,l}Ric(U_{k},X_{l})\] \[+\mu\left\{\sum_{l=1}^{d_{2}}g_{M_{1}}(X_{l},X_{l})+\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(U_{k},U_{k})\right\}\] \[= 0, \tag{3.27}\]
where \(\{U_{k}\}_{d_{2}+1\leq k\leq d_{1}}\) is orthonormal bases of \(Ker\vartheta_{*}\) and \(\{X_{l}\}_{1\leq l\leq d_{2}}\) is orthonormal bases of \((Ker\vartheta_{*})^{\perp}\). Since \(\vartheta\) is totally geodesic and \((Ker\vartheta_{*})^{\perp}\) is integrable, then
using Theorem 1, Theorem 2 and Theorem 3 in (3.1), (3.2) and (3.3), we can write (3.27) as
\[\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(\nabla_{Uk}U_{k},U_{k})+\sum_{l=1 }^{d_{2}}g_{M_{1}}(\nabla_{X_{l}}X_{l},X_{l})\] \[+Ric^{v}(U_{k},U_{k})+\frac{1}{\sigma^{2}}Ric^{M_{2}}(\bar{X}_{1}, \bar{X}_{2})+\mu(d_{1}-d_{2}+d_{2})\] \[= 0, \tag{3.28}\]
where \(Ric^{v}\) and \(Ric^{M_{2}}\) indicate the scalar curvatures of \(Ker\vartheta_{*}\) and \(M_{2}\), respectively. From Theorem 9, (3.28) and since \(\nabla\) is metric connection on \(M_{1}\), then we have
\[\frac{1}{2}\left\{\sum_{l=1}^{d_{2}}\nabla_{X_{l}}g_{M_{1}}(X_{l},X_{l})+\sum_{k=d_{2}+1}^{d_{1}}\nabla_{U_{k}}g_{M_{1}}(U_{k},U_{k})\right\}\] \[+Ric^{v}+\frac{1}{\sigma^{2}}Ric^{M_{2}}+\mu d_{1}\] \[= 0.\]
Thus we obtain \(s+\mu d_{1}=0\), where \(s=Ric^{v}+\frac{1}{\sigma^{2}}Ric^{M_{2}}\) is the scalar curvature on \(M_{1}\).
**Theorem 11**.: _Let \((M_{1},g_{M_{1}},-H,\mu)\) be a Ricci soliton with the PVF \(-H\in(Ker\vartheta_{*})^{\perp}\) and \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a CCS with \(r=e^{\beta}\) between Riemannian manifolds. Then the following options are true:_
_(i) \(M_{1}\) admits a gradient Ricci soliton,_
_(ii) The mean curvature vector field of \(Ker\vartheta_{*}\) is constant._
Proof.: Since \((M_{1},g_{M_{1}},-H,\mu)\) is a Ricci soliton then using (1.2), we obtain
\[-\frac{1}{2}(g_{M_{1}}(\nabla_{X_{1}}H,X_{2})+g_{M_{1}}(\nabla_{X_{2}}H,X_{1}))\] \[+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})\] \[= 0, \tag{3.29}\]
for \(X_{1},X_{2}\in\Gamma(TM_{1})\) and \(-H\in\Gamma(Ker\vartheta_{*})^{\perp}\). Using Theorem 4, then (3.29) can be written as
\[\frac{1}{2}(g_{M_{1}}(\nabla_{X_{1}}\nabla\beta,X_{2})+g_{M_{1}} (\nabla_{X_{2}}\nabla\beta,X_{1}))\] \[+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})\] \[= 0,\]
which is equal to
\[\frac{1}{2}(g_{M_{1}}(h_{\beta}(X_{1}),X_{2})+g_{M_{1}}(h_{\beta}(X_{2}),X_{1}))\] \[+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})\] \[= 0. \tag{3.30}\]
Since \(h_{\beta}\) is self-adjoint, (3.30) can be written as
\[g_{M_{1}}(h_{\beta}(X_{1}),X_{2})+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})=0.\]
Using (2.13) in above equation
\[Hess\beta(X_{1},X_{2})+Ric(X_{1},X_{2})+\mu g_{M_{1}}(X_{1},X_{2})=0, \tag{3.31}\]
which completes the proof of (i). Taking the trace of (3.31) and using (2.14), we have
\[\Delta\beta+s+\mu d_{1}=0.\]
Thus, the Poisson equation on \((M_{1},g_{M_{1}})\) is
\[\Delta\beta=\mbox{div}(\nabla\beta)=-s-\mu d_{1}, \tag{3.32}\]
where \(\Delta\) is the Laplace operator, \(\beta\) is a smooth function on \(M_{1}.\) To determine its solution using Theorem 10 in (3.32), we obtain
\[\Delta\beta=\mbox{div}(\nabla\beta)=0,i.e.\nabla(H)=0,\]
which means \(H\) is constant. This completes the proof of (ii).
**Definition 1**.: **[**26**]** _Let \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a smooth map between Riemannian manifolds. Then \(\vartheta\) is harmonic if and only if the tension field \(\tau(\vartheta)\) of \(\vartheta\) vanishes at every point \(p_{1}\in M_{1}.\)_
Considering this definition, the following theorem can be written:
**Theorem 12**.: _Let \((M_{1},g_{M_{1}},W,\mu)\) be a Ricci soliton with the PVF \(W\in\Gamma(Ker\vartheta_{*})\) and \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) be a totally geodesic CCS with \(r=e^{\beta}\) between Riemannian manifolds such that \((Ker\vartheta_{*})^{\perp}\) is totally geodesic. Then \(\vartheta\) is harmonic if and only if the scalar curvature of \(Ker\vartheta_{*}\) is \(-\mu(d_{1}-d_{2}).\)_
Proof.: Since \(\vartheta\) is a conformal submersion then we have
\[\tau(\vartheta)=(d_{1}-d_{2})\vartheta_{*}(\nabla\beta)+(d_{2}-2)\frac{ \sigma^{2}}{2}\vartheta_{*}(\nabla_{\mathcal{H}}\frac{1}{\sigma^{2}}). \tag{3.33}\]
Now, putting \(\xi=W\) in (3.18), we have
\[\frac{1}{2}\left\{g_{M_{1}}(\nabla_{U_{1}}W,U_{2})+g_{M_{1}}( \nabla_{U_{2}}W,U_{1})\right\}+Ric^{\nu}(U_{1},U_{2})\] \[+g_{M_{1}}(U_{1},U_{2})\left\{\mu-(d_{1}-d_{2})|\nabla\beta|^{2}- div(\nabla\beta)\right\}\] \[= 0.\]
Taking trace, we have
\[\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(\nabla_{U_{k}}U_{k},U_{k})+\sum _{k=d_{2}+1}^{d_{1}}Ric^{v}(U_{k},U_{k})\] \[-\left\{\mu-(d_{1}-d_{2})|\nabla\beta|^{2}-div(\nabla\beta) \right\}\sum_{k=d_{2}+1}^{d_{1}}g_{M_{1}}(U_{k},U_{k})\] \[= 0, \tag{3.34}\]
herein \(\left\{U_{k}\right\}_{d_{2}+1\leq k\leq d_{1}}\) is an orthonormal bases of \(Ker\vartheta_{*}.\) Since \(\nabla\) is metric connection, then (3.34) can be written as
\[s^{Ker\vartheta_{*}}-\left\{\mu-(d_{1}-d_{2})|\nabla\beta|^{2}(d_{1}-d_{2})- div(\nabla\beta)\right\}=0, \tag{3.35}\]
where \(s^{Ker\vartheta_{*}}\) is the scalar curvature of \(Ker\vartheta_{*}.\) Since \(s^{Ker\vartheta_{*}}=-\mu(d_{1}-d_{2})\) then (3.35) implies
\[(d_{1}-d_{2})^{2}|\nabla\beta|^{2}+div(\nabla\beta)(d_{1}-d_{2})=0\Leftrightarrow \nabla\beta=0. \tag{3.36}\]
Thus from (3.36) and Lemma 1 we obtain \(\tau(\vartheta)=0\Leftrightarrow\nabla\beta=0.\) This completes the proof.
**Example 1**.: _Let \(M_{1}=\left\{(u_{1},u_{2},u_{3})\in\mathbb{R}^{3}:u_{1}\neq 0\right\}\) be a Riemannian manifold with Riemannian metric \(g_{M_{1}}\) on \(M_{1}\) stated \(g_{M_{1}}=e^{-2u_{1}}du_{1}^{2}+e^{-2u_{1}}du_{2}^{2}+e^{-2u_{1}}du_{3}^{2}\). Let \(M_{2}=\left\{(v_{1},v_{2})\in\mathbb{R}^{2}\right\}\) be a Riemannian manifold with Riemannian metric \(g_{M_{2}}\) on \(M_{2}\) stated \(g_{M_{2}}=e^{2u_{1}}dv_{1}^{2}+e^{2u_{1}}dv_{2}^{2}.\) Define a map \(\vartheta:(M_{1}^{d_{1}},g_{M_{1}})\rightarrow(M_{2}^{d_{2}},g_{M_{2}})\) by_
\[\vartheta(u_{1},u_{2},u_{3})=(u_{1},u_{2}).\]
_By direct computations,_
\[Ker\vartheta_{*}=Span\left\{V=e_{3}\right\},\]
_and_
\[(Ker\vartheta_{*})^{\bot}=Span\left\{X_{1}=e_{1},X_{2}=e_{2}\right\},\]
_herein \(\left\{e_{1}=e^{u_{1}}\frac{\partial}{\partial u_{1}},e_{2}=e^{u_{1}}\frac{ \partial}{\partial u_{2}},e_{3}=e^{u_{1}}\frac{\partial}{\partial u_{3}}\right\}\) are bases of \(T_{p_{1}}M_{1}\) and \(\left\{e_{1}^{*}=e^{u_{1}}\frac{\partial}{\partial v_{1}},e_{2}^{*}=e^{u_{1} }\frac{\partial}{\partial v_{2}}\right\}\) are bases of \(T_{\vartheta(p_{1})}M_{2},\) for any \(p_{1}\in M_{1}.\) By direct computations, one can see that \(\vartheta_{*}X_{1}=e_{1}^{*},\vartheta_{*}X_{2}=e_{2}^{*},\vartheta_{*}V=0\) and \(g_{M_{2}}(\vartheta_{*}X_{i},\vartheta_{*}X_{j})=\sigma^{2}g_{M_{1}}(X_{i},X_ {j}),\) for \(\sigma=e^{2u_{1}}.\) Thus \(\vartheta\) is conformal submersion with dilation \(\sigma=e^{u_{1}}.\) Now, we will find a smooth function \(\beta\) on \(M_{1}\) satisfying \(T_{V}V=-g_{M_{1}}(V,V)\nabla\beta\) for \(V\in\Gamma(Ker\vartheta_{*}).\) we can simply calculate that_
\[\Gamma_{11}^{1} = -1,\Gamma_{11}^{2}=\Gamma_{11}^{3}=0,\] \[\Gamma_{21}^{2} = -1,\Gamma_{21}^{1}=\Gamma_{21}^{3}=0,\] \[\Gamma_{22}^{1} = 1,\Gamma_{22}^{2}=\Gamma_{22}^{3}=0,\] \[\Gamma_{33}^{1} = 1,\Gamma_{33}^{2}=\Gamma_{33}^{3}=0,\] \[\Gamma_{31}^{3} = -1,\Gamma_{31}^{1}=\Gamma_{31}^{2}=0,\] \[\Gamma_{13}^{1} = \Gamma_{32}^{2}=\Gamma_{32}^{3}=0,\] \[\Gamma_{23}^{1} = \Gamma_{23}^{2}=\Gamma_{23}^{3}=0. \tag{3.37}\]
_By using (3.37), we get_
\[\nabla_{e_{1}}e_{1} = 0,\nabla_{e_{2}}e_{2}=e^{2u_{1}}\frac{\partial}{\partial u_{1}}, \nabla_{e_{3}}e_{3}=e^{2u_{1}}\frac{\partial}{\partial u_{1}},\] \[\nabla_{e_{1}}e_{2} = 0,\nabla_{e_{2}}e_{1}=-e^{2u_{1}}\frac{\partial}{\partial u_{2}}, \nabla_{e_{3}}e_{1}=-e^{2u_{1}}\frac{\partial}{\partial u_{3}},\] \[\nabla_{e_{3}}e_{2} = \nabla_{e_{1}}e_{3}=\nabla_{e_{2}}e_{3}=0. \tag{3.38}\]
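_The Christoffel symbols (3.37) and the covariant derivatives (3.38) can be checked mechanically. The following short SymPy sketch is an added verification aid (it is not part of the original computation; indices in the code are 0-based, so, e.g., `christoffel(0, 1, 1)` stands for \(\Gamma_{22}^{1}\)):_

```python
import sympy as sp

u1, u2, u3 = sp.symbols('u1 u2 u3', real=True)
x = [u1, u2, u3]
# Metric of Example 1: g_{M_1} = e^{-2u_1}(du_1^2 + du_2^2 + du_3^2)
g = sp.diag(sp.exp(-2*u1), sp.exp(-2*u1), sp.exp(-2*u1))
ginv = g.inv()

def christoffel(k, i, j):
    # Gamma^k_{ij} = (1/2) g^{kl} (d_i g_{jl} + d_j g_{il} - d_l g_{ij})
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[k, l] * (sp.diff(g[j, l], x[i]) + sp.diff(g[i, l], x[j]) - sp.diff(g[i, j], x[l]))
        for l in range(3)))

assert christoffel(0, 0, 0) == -1   # Gamma^1_{11} = -1
assert christoffel(1, 1, 0) == -1   # Gamma^2_{21} = -1
assert christoffel(0, 1, 1) == 1    # Gamma^1_{22} = 1
assert christoffel(0, 2, 2) == 1    # Gamma^1_{33} = 1
assert christoffel(2, 2, 0) == -1   # Gamma^3_{31} = -1
assert christoffel(0, 0, 1) == 0    # Gamma^1_{12} = 0; the remaining symbols vanish similarly
```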
_Using (3.38), we get_
\[\nabla_{V}V=e^{u_{1}}X_{1}.\]
_This means that \(\mathcal{H}\nabla_{V}V=e^{u_{1}}X_{1},v\nabla_{V}V=0.\) By (2.4), we have_
\[T_{V}V=e^{u_{1}}X_{1}.\]
_Similarly, if we take \(X=\lambda_{1}X_{1}+\lambda_{2}X_{2}\) for \(\lambda_{1},\lambda_{2}\in\mathbb{R}\) then_
\[\nabla_{V}X=-\lambda_{1}e^{u_{1}}V.\]
_This means that \(\mathcal{H}\nabla_{V}X=0,v\nabla_{V}X=-\lambda_{1}e^{u_{1}}V.\) Also,_
\[\nabla_{X}V=0,\]
_i.e._\(\mathcal{H}\nabla_{X}V=0,v\nabla_{X}V=0.\) _Then by (2.5), we have_
\[S_{X}V=0, \tag{3.39}\]
_and_
\[\nabla_{X}X=\lambda_{2}^{2}e^{u_{1}}X_{1}-\lambda_{1}\lambda_{2}e^{u_{1}}X_{2},\]
_i.e._\(\mathcal{H}\nabla_{X}X=\lambda_{2}^{2}e^{u_{1}}X_{1}-\lambda_{1}\lambda_{2}e^{u_{1}}X_{2},v\nabla_{X}X=0.\) _Then by (2.6), we have_
\[S_{X}X=0. \tag{3.40}\]
_For any smooth function on \(M_{1},\) the gradient of \(\beta\) with respect to the metric \(g_{M_{1}}\) is given by \(\nabla\beta=\sum\limits_{i,j=1}^{3}g_{M_{1}}^{ij}\frac{\partial\beta}{\partial u _{i}}\frac{\partial}{\partial u_{j}}.\) Therefore,_
\[\nabla\beta=e^{2u_{1}}\frac{\partial\beta}{\partial u_{1}}\frac{\partial}{ \partial u_{1}}+e^{2u_{1}}\frac{\partial\beta}{\partial u_{2}}\frac{\partial} {\partial u_{2}}+e^{2u_{1}}\frac{\partial\beta}{\partial u_{3}}\frac{\partial }{\partial u_{3}}.\]
_Hence,_
\[\nabla\beta = -e^{2u_{1}}\frac{\partial}{\partial u_{1}}\] \[= -e^{u_{1}}X_{1}, \tag{3.41}\]
_for the function \(\beta=-u_{1}.\) On the other hand, we have_
\[g_{M_{1}}(V,V)=1.\]
_Then it is easy to verify that \(T_{V}V=-g_{M_{1}}(V,V)\nabla\beta\); indeed, \(-g_{M_{1}}(V,V)\nabla\beta=-(1)(-e^{u_{1}}X_{1})=e^{u_{1}}X_{1}=T_{V}V\). Thus, by (2.7) and Theorem 4, we see that this map is a CCS with \(r=e^{-u_{1}}.\) Now, we will show that \(M_{1}\) admits a Ricci soliton, i.e._
\[\frac{1}{2}(L_{E}g_{M_{1}})(F,G)+Ric(F,G)+\mu g_{M_{1}}(F,G)=0, \tag{3.42}\]
_for any \(E,F,G\in\Gamma(TM_{1}).\) Since here the dimension of \(Ker\vartheta_{*}\) is one and the dimension of \((Ker\vartheta_{*})^{\perp}\) is two, therefore we can decompose \(E,F\) and \(G\) such that \(F=\lambda_{1}V+\lambda_{2}X_{1}+\lambda_{3}X_{2},G=\lambda_{4}V+\lambda_{5}X_{ 1}+\lambda_{6}X_{2}\) and \(E=\lambda_{7}V+\lambda_{8}X_{1}+\lambda_{9}X_{2},\) where \(V\) denotes for component of \(Ker\vartheta_{*}\) and \(X_{1},X_{2}\) denote for component of \((Ker\vartheta_{*})^{\perp}\) and \(\left\{\lambda_{i}\right\}_{1\leq i\leq 9}\in\mathbb{R}\) are some scalars. Now, since_
\[\frac{1}{2}(L_{E}g_{M_{1}})(F,G)=\frac{1}{2}\left\{g_{M_{1}}(\nabla_{F}E,G)+g_ {M_{1}}(\nabla_{G}E,F)\right\},\]
_which is equal to_
\[\frac{1}{2}(L_{E}g_{M_{1}})(F,G)\] \[= \frac{1}{2}\left\{\begin{array}{c}g_{M_{1}}(\nabla_{\lambda_{1} V+\lambda_{2}X_{1}+\lambda_{3}X_{2}}\lambda_{7}V+\lambda_{8}X_{1}+\lambda_{9}X_{2},\lambda_{4}V+\lambda_{5}X_{1}+\lambda_{6}X_{2})\\ +g_{M_{1}}(\nabla_{\lambda_{4}V+\lambda_{5}X_{1}+\lambda_{6}X_{2}}\lambda_{7} V+\lambda_{8}X_{1}+\lambda_{9}X_{2},\lambda_{1}V+\lambda_{2}X_{1}+\lambda_{3}X_{2}) \end{array}\right\},\]
_which implies_
\[\frac{1}{2}(L_{E}g_{M_{1}})(F,G)=\frac{1}{2}\left\{\begin{array}{c}e^{u_{1}} (\lambda_{1}\lambda_{5}\lambda_{7}-2\lambda_{1}\lambda_{4}\lambda_{8}\\ -2\lambda_{3}\lambda_{6}\lambda_{8}+\lambda_{3}\lambda_{5}\lambda_{9}+\lambda_ {2}\lambda_{4}\lambda_{7}+\lambda_{2}\lambda_{6}\lambda_{9})\end{array}\right\}. \tag{3.43}\]
_Also,_
\[g_{M_{1}}(F,G)=\lambda_{1}\lambda_{4}+\lambda_{2}\lambda_{5}+\lambda_{3}\lambda _{6}, \tag{3.44}\]
_and_
\[Ric(F,G)\] \[= \lambda_{1}\lambda_{4}Ric(V,V)+\lambda_{1}\lambda_{5}Ric(V,X_{1})+ \lambda_{1}\lambda_{6}Ric(V,X_{2})\] \[+\lambda_{2}\lambda_{4}Ric(X_{1},V)+\lambda_{2}\lambda_{5}Ric(X_{1},X_{1})+\lambda_{2}\lambda_{6}Ric(X_{1},X_{2})\] \[+\lambda_{3}\lambda_{4}Ric(X_{2},V)+\lambda_{3}\lambda_{5}Ric(X_{2},X_{1})+\lambda_{3}\lambda_{6}Ric(X_{2},X_{2}). \tag{3.45}\]
_Since \(\dim(Ker\vartheta_{*})=1,\) we have \(Ric^{v}(V,V)=0\), and using (3.39), (3.40) and (3.41) in (3.1), we get_
\[Ric(V,V)=e^{2u_{1}}. \tag{3.46}\]
_From (3.39) and (3.40) we have_
\[Ric(V,X_{1})=Ric(V,X_{2})=0. \tag{3.47}\]
_From (3.3), we have_
\[Ric(X_{1},X_{1})=e^{2u_{1}} \tag{3.48}\]
\[Ric(X_{2},X_{2})=e^{2u_{1}}. \tag{3.49}\]
_Using (3.46), (3.47), (3.48) and (3.49) in (3.45), we have_
\[Ric(F,G)=e^{2u_{1}}\left\{\lambda_{1}\lambda_{4}+\lambda_{2}\lambda_{5}+ \lambda_{3}\lambda_{6}\right\}. \tag{3.50}\]
_Now, by using (3.43), (3.44) and (3.50) in (3.42), we have_
\[\left.\begin{array}{l}\frac{1}{2}\left\{\begin{array}{c}e^{u_{1}}(\lambda_ {1}\lambda_{5}\lambda_{7}-2\lambda_{1}\lambda_{4}\lambda_{8}-2\lambda_{3} \lambda_{6}\lambda_{8}\\ +\lambda_{3}\lambda_{5}\lambda_{9}+\lambda_{2}\lambda_{4}\lambda_{7}+\lambda_{2 }\lambda_{6}\lambda_{9})\end{array}\right\}\\ +e^{2u_{1}}\left\{\lambda_{1}\lambda_{4}+\lambda_{2}\lambda_{5}+\lambda_{3} \lambda_{6}\right\}+\mu(\lambda_{1}\lambda_{4}+\lambda_{2}\lambda_{5}+\lambda_ {3}\lambda_{6})\\ = 0.\end{array}\right.\]
_So, \((M_{1},g_{M_{1}})\) admits a Ricci soliton for_
\[\mu=-e^{2u_{1}}-\frac{e^{u_{1}}(\lambda_{1}\lambda_{5}\lambda_{7}-2\lambda_{1 }\lambda_{4}\lambda_{8}-2\lambda_{3}\lambda_{6}\lambda_{8}+\lambda_{3}\lambda_ {5}\lambda_{9}+\lambda_{2}\lambda_{4}\lambda_{7}+\lambda_{2}\lambda_{6} \lambda_{9})}{\lambda_{1}\lambda_{4}+\lambda_{2}\lambda_{5}+\lambda_{3}\lambda _{6}},\]
_where \(\lambda_{1}\lambda_{4}+\lambda_{2}\lambda_{5}+\lambda_{3}\lambda_{6}\neq 0.\) The Ricci soliton \((M_{1},g_{M_{1}})\) becomes expanding, steady or shrinking for some values of the \(\lambda_{i}\)'s according to whether \(\mu>0,\)\(\mu=0\) or \(\mu<0,\) respectively._
|
2305.10835 | Ahead-of-Time P-Tuning | In this paper, we propose Ahead-of-Time (AoT) P-Tuning, a novel
parameter-efficient fine-tuning method for pre-trained Language Models (LMs)
that adds input-dependent bias before each Transformer layer. We evaluate AoT
P-Tuning on GLUE and SuperGLUE benchmarking datasets using RoBERTa and DeBERTa
models, showing that it outperforms BitFit and is comparable or better than
other baseline methods for efficient fine-tuning. Additionally, we assess the
inference overhead of AoT P-Tuning and demonstrate that it introduces
negligible overhead compared to established baseline methods. Our method
enables multi-task inference with a single backbone LM, making it a practical
solution for real-world applications. | Daniil Gavrilov, Nikita Balagansky | 2023-05-18T09:24:53Z | http://arxiv.org/abs/2305.10835v1 | # Ahead-of-Time P-Tuning
###### Abstract
In this paper, we propose Ahead-of-Time (AoT) P-Tuning, a novel parameter-efficient fine-tuning method for pre-trained Language Models (LMs) that adds input-dependent bias before each Transformer layer. We evaluate AoT P-Tuning on GLUE and SuperGLUE benchmarking datasets using RoBERTa and DeBERTa models, showing that it outperforms BitFit and is comparable or better than other baseline methods for efficient fine-tuning. Additionally, we assess the inference overhead of AoT P-Tuning and demonstrate that it introduces negligible overhead compared to established baseline methods. Our method enables multi-task inference with a single backbone LM, making it a practical solution for real-world applications.
## 1 Introduction
As pre-trained Language Models (LMs) grow in size (Radford et al., 2019; Devlin et al., 2019; Raffel et al., 2020), developing new methods for handling these large models becomes increasingly important. One potential solution is parameter-efficient fine-tuning, where only a subset of the model's parameters are optimized from scratch or using existing weights (Liu et al., 2021, 2021, 2019; Hu et al., 2022; Ben Zaken et al., 2022; Lester et al., 2021).
Typically, the success of these parameter-efficient fine-tuning methods is judged based on the number of optimized parameters and the model's performance on downstream tasks. However, in this paper, we also consider two additional factors: the inference overhead that results from fine-tuning and the model's ability to perform multi-task inference. The latter refers to using a single LM to handle multiple tasks during inference, which is advantageous for real-world applications (Lester et al., 2021).
Most current methods, such as P-Tuning (Liu et al., 2021, 2021, 2021), Adapters (Houlsby et al., 2019), and LoRA (Hu et al., 2022), have a trade-off between introducing inference overhead and the ability to perform multi-task inference. BitFit (Ben Zaken et al., 2022) does not introduce any overhead and can handle multi-task inference, but its performance is lacking compared to other methods.
In this paper, we propose a simple parameter-efficient fine-tuning method for LMs called Ahead-of-Time (AoT) P-Tuning. This method involves adding input-dependent bias before each Transformer layer (Vaswani et al., 2017). Furthermore, AoT P-Tuning can be used in multi-task inference setups with a single backbone LM.
Our contributions in this paper can be summarized as follows:
1. We introduce Ahead-of-Time (AoT) P-Tuning, a parameter-efficient fine-tuning method that adds input-dependent bias before each Transformer layer.
2. We test the proposed method on GLUE and SuperGLUE benchmarking datasets (Wang et al., 2018, 2019) using RoBERTa (Liu et al., 2019) and DeBERTa (He et al., 2020) models.
Our experiments show that AoT P-Tuning outperforms BitFit and is comparable or better than other baseline methods for efficient fine-tuning.
3. We measure the inference overhead of AoT P-Tuning and demonstrate that it introduces negligible overhead compared to established baseline methods.
## 2 Recent Works
Currently, a wide range of different methods could be referenced with P-Tuning. Liu et al. (2021) proposed to add soft prompts to the embeddings of GPT-2's input sequence (Radford et al., 2019) to train it on classification tasks. Lester et al. (2021) proposed a scheme similar to the one used in Liu et al. (2021), but trained a T5 model (Raffel et al., 2020) with P-Tuning to show how the performance of the method changes with the increased scale of the backbone model.
Recently, Qin and Eisner (2021); Li and Liang (2021); Liu et al. (2021) proposed to add prefixes not only to input embeddings but also at each layer of the Transformer model. In addition, Liu et al. (2021) suggested training a linear classification head on top of the backbone model instead of utilizing a LM head to obtain classification results.
Due to this range of similar methods, we will follow the naming used by Liu et al. (2021) and refer to Prompt-Tuning (adding soft prompts to the input embeddings) as P-Tuning v1 and to Prefix-Tuning (adding soft prefixes at each layer of Transformer backbone) as P-Tuning v2.
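As a concrete illustration of the P-Tuning v1 idea described above, the following minimal sketch (written for this description, not code from the cited papers; all names are illustrative) prepends trainable soft prompts to the input embeddings. This is also the source of the inference overhead discussed later: the sequence length, and therefore the cost of every Transformer layer, grows with the number of prompt tokens.

```python
import torch
import torch.nn as nn

class SoftPromptedEmbedding(nn.Module):
    """Prepend trainable soft prompts to the token embeddings (P-Tuning v1 style)."""
    def __init__(self, word_emb: nn.Embedding, n_prompt: int):
        super().__init__()
        self.word_emb = word_emb                 # frozen embedding of the backbone LM
        self.prompt = nn.Parameter(0.02 * torch.randn(n_prompt, word_emb.embedding_dim))

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        tok = self.word_emb(input_ids)                                  # [b, n, d]
        prompt = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)   # [b, n_prompt, d]
        return torch.cat([prompt, tok], dim=1)                          # [b, n_prompt + n, d]
```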
Hu et al. (2022) proposed to train low-rank changes of attention weights, while Houlsby et al. (2019) fine-tuned additional model layers, which can also be considered parameter-efficient. Ben Zaken et al. (2022) proposed to fine-tune only bias terms of the model.
\begin{table}
\begin{tabular}{r|c|c|c} \hline \hline Method & Parameter Efficient & Zero-Cost & Multi-Task Inference \\ \hline Fine-Tuning & ✗ & ✓ & ✗ \\ \hline LoRA & ✓ & ✗ & ✓ \\ LoRA Fused & ✓ & ✓ & ✗ \\ Adapters & ✓ & ✗ & ✓ \\ BitFit & ✓ & ✓ & ✓ \\ P-Tuning v1/v2 & ✓ & ✗ & ✓ \\ AoT P-Tuning (ours) & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Schematic comparison of recent fine-tuning methods with AoT P-Tuning. Recent fine-tuning approaches either allow inference with no computational overhead or multi-task inference. See Section 3.1 for details.
Figure 1: Schematic comparison of P-Tuning v2 (left), and AoT P-Tuning (right). Since the sequence length is not increased, AoT P-Tuning takes significantly less time to evaluate, only requiring the overhead of adding biases to the input sequence (See Section 4.4 for experiments with inference speed).
## 3 Ahead-of-Time P-Tuning
### Motivation
Parameter-efficient fine-tuning methods are crucial for reducing computational and memory requirements, enabling deploying of advanced AI models in resource-constrained environments and real-world applications. In discussions of parameter-efficient fine-tuning methods, recent research often refers to the number of parameters utilized for optimization (Liu et al., 2021, 2019; Houlsby et al., 2019; Hu et al., 2022; Ben Zaken et al., 2022; Lester et al., 2021). A method can be considered superior to another if it uses fewer trained parameters and achieves better performance in terms of metrics on downstream tasks.
In this study, we categorize methods based on two additional characteristics: the inference overhead produced by the parameter-efficient fine-tuning method and the capability to perform multi-task inference after fine-tuning multiple models for distinct tasks. While minimizing inference overhead is beneficial for faster model evaluation, the practicality of multi-task inference is debatable. However, multi-task inference is desirable for high-demand systems deployed in real-world scenarios. It can enable more efficient resource allocation, reduce the overall number of models required, and facilitate dynamic workload adjustment based on different tasks as needed. Using the same backbone model for various tasks promotes more streamlined serving, as all workers share the same model in memory. This approach enables dynamic workload adjustment based on different tasks as needed. In contrast, without multi-task inference, some workers might be underutilized depending on external conditions, resulting in suboptimal resource allocation that could have been used for other tasks.
Table 1 compares recent methods concerning these properties. P-Tuning v1/v2 has inference overhead because the length of sequences passed to the model increases, while adapters introduce overhead by adding additional layers to the model computation. LoRA's analysis is more complex, as it can be evaluated in two modes: one in which we fuse the weights of LoRA (Hu et al., 2022), resulting in no inference overhead since the model is now identical to the original pre-trained model, and another where we do not fuse weights, leading to inference overhead due to the need for additional matrix multiplications during evaluation.
In terms of multi-task inference capability, P-Tuning v1/v2 allows for this, as different prompts can be stacked in a batch and propagated through the backbone model. For Adapters, weights can also be stacked in a batch and evaluated using batched matrix multiplication. However, it is important to note that these layers must have the same shape, which is a tunable hyperparameter. While multi-task inference with a fused LoRA is theoretically possible, similar to Adapters, it demands a substantial amount of GPU memory. For example, for the DeBERTa-XL model with hidden size d = 1024 and l = 48 layers, passing a batch with b sequences necessitates storing 1024 * 1024 * 4 * 48 * b parameters for the fused attention weights, where 4 represents the number of parameter matrices of the Attention module. Notably, with b = 4, the number of original parameters in the model is already exceeded, which is impractical. To address this issue, one can opt not to fuse LoRA weights (to work with factorized weights with low rank, thus reducing the total number of parameters passed as a batch) for multi-task inference, although this can introduce computational overhead.
In contrast, BitFit (Ben Zaken et al., 2022) is advantageous in terms of both criteria: it only modifies biases of the model, thereby not altering its structure or introducing computational overhead, and can easily be parallelized for multi-task inference by stacking biases for different tasks in a batch. However, based on our experimental results in subsequent sections, its performance is inferior to other methods.
In light of these considerations, this paper aims to answer the following research question: _Can we develop a parameter-efficient fine-tuning method that combines the rapid evaluation and multi-task inference capabilities of BitFit while outperforming it on downstream tasks?_
### Proposed Mechanism
With AoT P-Tuning, we propose to augment each Transformer layer with a simple procedure. We define trainable matrices \(\mathbf{P}^{i}\in\mathbb{R}^{|\mathbf{V}|\times d}\) for each layer. Then, before the evaluation of the \(i\)-th layer, we modify the hidden states as follows:
\[\mathbf{H}^{\prime i}=\mathbf{H}^{i}+\{\mathbf{P}^{i}_{x_{1}},\dots,\mathbf{P}^{i}_{x_{n}}\}\in\mathbb{R}^{n\times d}, \tag{1}\]
where \(\mathbf{P}^{i}_{x_{j}}\in\mathbb{R}^{d}\) is a lookup of \(x_{j}\)-th prompt embedding from \(\mathbf{P}^{i}\). Such a scheme saves us significant time during evaluation since AoT P-Tuning does not imply an increase in sequence length and involves only adding biases during the evaluation, which introduces negligible computational overhead.
While \(\mathbf{P}^{i}\) in naive implementation will require much memory to store parameters, we describe reparametrizations that make training more tractable in the following subsections.
Note that AoT P-Tuning could be evaluated in parallel with several tasks in a batch due to the fact that performing look-up from \(\mathbf{P}\) can be easily parallelized. As for other methods for parameter-efficient fine-tuning, we only optimize parameters of \(\mathbf{P}\) and Classification Head during fine-tuning.
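To make the mechanism concrete, the following is a minimal PyTorch-style sketch of the per-layer update in Equation 1 (our illustration, not the reference implementation); for brevity it stores \(\mathbf{P}^{i}\) as a plain embedding table rather than the reparametrized forms discussed below.

```python
import torch
import torch.nn as nn

class AoTBias(nn.Module):
    """Per-layer AoT P-Tuning update (Equation 1): add a trainable,
    token-dependent bias P^i_{x_j} to every hidden state before layer i."""

    def __init__(self, vocab_size: int, hidden_size: int):
        super().__init__()
        # A plain lookup table stands in for the reparametrized P^i of Section 3.3.
        self.prompt = nn.Embedding(vocab_size, hidden_size)
        nn.init.zeros_(self.prompt.weight)  # start from the unmodified backbone

    def forward(self, hidden_states: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq, d); input_ids: (batch, seq)
        # H'^i = H^i + {P^i_{x_1}, ..., P^i_{x_n}}
        return hidden_states + self.prompt(input_ids)
```

One such module is attached to each Transformer layer and applied to that layer's input; during fine-tuning only these tables and the classification head receive gradients.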
### On the Parameter Efficiency of AoT P-Tuning
It is notable that, in most cases, it is not feasible to optimize the weight \(\mathbf{P}\in\mathbb{R}^{|\mathbf{V}|\times d}\) for each layer. If we consider training RoBERTa-Large with such a scheme (which has \(|\mathbf{V}|=50265\), \(d=1024\) and \(l=24\)), then storing all biases \(\mathbf{P}\) will exceed \(1.2\)B parameters, while the model itself has roughly \(350\)M parameters.
To overcome this limitation, we propose two reparametrizations of \(\mathbf{P}\) so that it can use fewer parameters during training.
The first is based on the Kronecker product (namely, **Kronecker AoT P-Tuning**). More specifically, we reparametrize \(\mathbf{P}\) as
\[\mathbf{P}=(\mathbf{W}_{L}\otimes\mathbf{W}_{M})\mathbf{W}_{R}, \tag{2}\]
where \(\mathbf{W}_{L}\in\mathbb{R}^{a\times r}\), \(\mathbf{W}_{M}\in\mathbb{R}^{b\times r}\), \(\mathbf{W}_{R}\in\mathbb{R}^{r^{2}\times d}\), \(a\) and \(b\) are selected in such a way so \(a*b=|\mathbf{V}|\), \(r\) is the factorization rank which is a hyperparameter to tune, and \(\otimes\) denotes the Kronecker product.
With this reparametrization, training AoT P-Tuning becomes tractable. E.g., for RoBERTa-Large, with \(a=256\), \(b=200\), and \(r=20\), \(\mathbf{P}\) will contain roughly \(10\)M parameters, which is less than \(3\%\) of the total number of parameters in the model1.
Footnote 1: One may note that \(256*200=51200\neq 50265\). However, \(50265\) is difficult to factorize efficiently since \(50265=1117*3^{2}*5\). Because of this, we chose to mostly factorize \(\mathbf{P}\) in such a way as to make it slightly larger than the original vocabulary size. Doing so allows us to select more appropriate \(a\) and \(b\) from the perspective of parameter and computational efficiency.
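A minimal sketch of the Kronecker reparametrization (our illustration; following Section 4.1, \(\mathbf{W}_{R}\) is initialized to zero so that \(\mathbf{P}\) starts at zero):

```python
import torch
import torch.nn as nn

class KroneckerPrompt(nn.Module):
    """P = (W_L ⊗ W_M) W_R with a * b >= |V| and factorization rank r (Equation 2)."""

    def __init__(self, a: int, b: int, r: int, hidden_size: int):
        super().__init__()
        self.w_l = nn.Parameter(torch.randn(a, r) * 0.02)
        self.w_m = nn.Parameter(torch.randn(b, r) * 0.02)
        self.w_r = nn.Parameter(torch.zeros(r * r, hidden_size))  # zero init: P = 0 at start

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # (a*b, r*r) @ (r*r, d) -> (a*b, d); rows with index >= |V| are simply never looked up.
        # A memory-careful implementation would assemble only the rows needed for the batch
        # directly from the factors instead of materializing the full (a*b, d) matrix.
        prompt = torch.kron(self.w_l, self.w_m) @ self.w_r
        return prompt[input_ids]  # (batch, seq, d)
```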
The second approach to work with \(\mathbf{P}\), which we used in our experiments, is based on passing the embeddings matrix \(\mathbf{E}\) through a learnable Fully Connected network (namely, **FC AoT P-Tuning**). Thus, we reparametrize \(\mathbf{P}\) as
\[\mathbf{P}=f(\mathbf{E}\mathbf{W}_{1}+\mathbf{b}_{1})\mathbf{W}_{2}+\mathbf{b}_{2}, \tag{3}\]
where \(\mathbf{W}_{1}\in\mathbb{R}^{d\times r}\), \(\mathbf{b}_{1}\in\mathbb{R}^{r}\), \(\mathbf{W}_{2}\in\mathbb{R}^{r\times d}\), \(\mathbf{b}_{2}\in\mathbb{R}^{d}\), \(f\) is a non-linearity, and \(r\) is the mapping rank, which is also a hyperparameter to tune, same as for Kronecker AoT P-Tuning.
With FC AoT P-Tuning, we utilize knowledge stored in the pre-trained embeddings matrix \(\mathbf{E}\), which should hypothetically perform better than training \(\mathbf{P}\) from scratch as Kronecker AoT P-Tuning.
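The FC reparametrization admits an equally short sketch (our illustration; the non-linearity \(f\) is not fixed by Equation 3, so the GELU below is an assumption, and the initialization follows Section 4.1):

```python
import torch
import torch.nn as nn

class FCPrompt(nn.Module):
    """P = f(E W_1 + b_1) W_2 + b_2 (Equation 3), computed only for the tokens in the batch."""

    def __init__(self, embeddings: torch.Tensor, r: int):
        super().__init__()
        _, hidden_size = embeddings.shape
        self.register_buffer("emb", embeddings.detach())     # frozen pre-trained E
        self.w1 = nn.Parameter(torch.randn(hidden_size, r) * 0.02)  # W_1 random
        self.b1 = nn.Parameter(torch.zeros(r))
        self.w2 = nn.Parameter(torch.zeros(r, hidden_size))  # W_2, b_1, b_2 zero: P = 0 at start
        self.b2 = nn.Parameter(torch.zeros(hidden_size))
        self.act = nn.GELU()                                  # stand-in for the unspecified f

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        e = self.emb[input_ids]                               # (batch, seq, d)
        return self.act(e @ self.w1 + self.b1) @ self.w2 + self.b2
```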
Note that for both Kronecker and FC AoT P-Tuning, we can evaluate only specific rows \(\{\mathbf{P}_{x_{i}},\dots,\mathbf{P}_{x_{n}}\}\) for input sequence \(\{x_{1},\dots,x_{n}\}\), making training more efficient.
For both reparametrizations, \(\mathbf{P}\) could be fused once training is complete, and thus the rank of factorization \(r\) does not affect inference speed. During the evaluation, there is no need to store the full \(\mathbf{P}\) in GPU memory. Instead, it could be stored in RAM, and only rows of these matrices should be placed in GPU memory to be added to the hidden states before each layer.
From a certain perspective, choosing between AoT P-Tuning and P-Tuning is a trade-off between evaluation speed and RAM consumption during inference. If RAM is limited, then usual P-Tuning
could be used at the cost of slower inference. In other cases, AoT P-Tuning is viable if there is enough RAM and inference speed is crucial. In most cases, though, the \(\mathbf{P}\) matrices for different tasks can easily be stored in RAM. For RoBERTa-Large, the \(\mathbf{P}\) matrices for a single task require roughly \(2.4\)Gb if stored in half-precision.
### Intuition Behind AoT P-Tuning and Connection to Other Methods
Having \(\mathbf{H}^{\prime}\), after passing through \(\mathbf{W}_{Q}\), \(\mathbf{W}_{K}\), and \(\mathbf{W}_{V}\) we obtain \(\mathbf{Q}^{\prime}\), \(\mathbf{K}^{\prime}\), and \(\mathbf{V}^{\prime}\). Note that \(\mathbf{V}^{\prime}=\mathbf{H}\mathbf{W}_{V}+\{\mathbf{P}_{x_{1}},\dots,\mathbf{P}_{x_{n}}\}\mathbf{W} _{V}\stackrel{{\text{def}}}{{=}}\mathbf{V}+\mathbf{P}_{x}\mathbf{W}_{V}\).
The result of evaluating Attention with AoT P-Tuning could be seen as:
\[\mathbf{A}^{\prime}_{i}= \sum_{j=1}^{n}\mathbf{a}_{j}(\mathbf{Q}^{\prime}_{i},\mathbf{K}^{\prime})\mathbf{ P}_{x_{j}}\mathbf{W}_{V}+\sum_{j=1}^{n}\mathbf{a}_{j}(\mathbf{Q}^{\prime}_{i},\mathbf{K}^{ \prime})\mathbf{V}_{j}. \tag{4}\]
From such a perspective, there is a clear connection between AoT P-Tuning (Equation 4) and P-Tuning v2 (Equation 9) with the following changes:
1. For AoT P-Tuning, attention weights \(\mathbf{a}_{j}\), \(j\in\overline{1,n}\) are used for both terms in Equation 4.
2. For AoT P-Tuning, attention is evaluated on modified \(\mathbf{Q}^{\prime}\). In addition, there is a difference in the form of dependency of \(\mathbf{K}^{\prime}\) and \(\mathbf{V}^{\prime}\) on prefix weight. For AoT P-Tuning, we add prefixes to \(\mathbf{K}\) and \(\mathbf{V}\), while for P-Tuning v2, prefixes are concatenated to these matrices.
3. For AoT P-Tuning, the first term of Equation 4 implies evaluation of Attention with a prompt which is dependent on the input text, while for P-Tuning v2, the prompt \(\mathbf{P}_{V}\) is constant.
Considering Equation 4, AoT can be seen as a form of the P-Tuning method, for which we embed prefixes before evaluating the attention layer2.
Footnote 2: It is possible to think of AoT P-Tuning as a method which adds bias **after** the evaluation of the Transformer layer. In this case, it could be seen as a method that directly models the result of the evaluation of P-Tuning v2 with a slightly different computation order. However, we believe that this way is more difficult to consider.
Also, one may note that AoT P-Tuning is highly related to BitFit (Ben Zaken et al., 2022). Both methods are identical in their practical properties as they are zero-cost during the inference and allow multi-task inference (See Table 1). Although, there is a clear difference in the form of biases added during model evaluation: BitFit adds constant bias to each element in the sequence as \(\mathbf{H}^{\prime i}=\mathbf{H}^{i}+\{\mathbf{b},\dots,\mathbf{b}\}\). Such bias leads to implied Attention results in the form of:
\[\mathbf{A}^{\prime}_{{}_{BitFit}}=\mathbf{b}\mathbf{W}_{V}+\sum_{j=1}^{n}\mathbf{a}_{j}(\mathbf{Q}^{\prime}_{i},\mathbf{K}^{\prime})\mathbf{V}_{j}, \tag{5}\]
which no longer modifies the attention map with input-dependent bias. In the latter sections, we will show that **such discrepancy leads to poor performance of BitFit during fine-tuning compared to AoT P-Tuning**.
## 4 Experiments
### Experimental Details
We compared AoT P-Tuning (Kronecker and FC reparametrizations of \(\mathbf{P}\)) with other fine-tuning methods capable of performing multi-task inference: P-Tuning v1, P-Tuning v2 on GLUE and SuperGLUE (Wang et al., 2018, 2019) Benchmarking Datasets. We also evaluated plain fine-tuning, LoRA, Adapters, and BitFit for reference. For each fine-tuning approach, we experimented with the RoBERTa-Base, RoBERTa-Large, and DeBERTa-XL backbone models.
For each task, we performed a grid hyperparameter search (see Appendix Table 2 for hyperparameter ranges). For RoBERTa models, we evaluated each hyperparameter set with \(5\) different seed values
and reported median and std score values for each task. For DeBERTa-XL, we assessed each hyperparameter assignment with a single seed due to longer training time. See Appendix Table 1 for a list of metrics used for each task.
We used the Adam (Kingma & Ba, 2015) optimizer with a constant learning rate for each task. We stopped training once the validation metric stopped increasing (see the "patience" parameter in Appendix Table 4).
For Kronecker AoT P-Tuning with RoBERTa models, we parametrized the matrix \(\mathbf{P}=(\mathbf{W}_{L}\otimes\mathbf{W}_{M})\mathbf{W}_{R}\) with \(a=256\), and \(b=200\), while for DeBERTa, we used \(a=b=360\). \(\mathbf{W}_{L}\) and \(\mathbf{W}_{M}\) were initialized randomly, while \(\mathbf{W}_{R}\) was initialized as a zero matrix. For FC AoT P-Tuning, we initialized \(\mathbf{W}_{1}\) randomly, while \(\mathbf{W}_{2}\), \(\mathbf{b}_{1}\), and \(\mathbf{b}_{2}\) were initialized with zeros. For Kronecker AoT P-Tuning, we applied dropout (Srivastava et al., 2014) to the \(\mathbf{P}_{x}\) with a fixed probability equal to \(0.1\). In contrast, for FC AoT P-Tuning, we applied dropout to \(\mathbf{E}\) before multiplying it with \(\mathbf{W}_{1}\).
Each experiment was run on a single NVIDIA A100 GPU with a total computation time of roughly \(1200\) days.
### Results
Refer to Table 2 and Appendix Table 3 for the trained model results.
Our observations indicate that FC AoT P-Tuning outperforms Kronecker AoT P-Tuning. This outcome mainly results from FC reparametrization using a pre-trained embedding matrix rather than learning biases from scratch.
We also observed that FC AoT P-Tuning performs better or is similar to P-Tuning v1/v2 and LoRA measured by Macro score. Additionally, FC AoT P-Tuning primarily yields comparable results to Adapters. In cases where Adapters outperform AoT P-Tuning, the gains are usually marginal based on our experiments. Furthermore, AoT P-Tuning surpasses BitFit in most instances. Only for the RoBERTa-Base backbone evaluated with GLUE Macro score, BitFit performs on par with AoT P-Tuning. We noticed that both AoT P-Tuning reparametrizations predominantly exhibit a lower variance of metrics across different seeds.
Note that the results presented in Table 2 and Appendix Table 3 were obtained using different training hyperparameters, including distinct factorization ranks for LoRA and Adapters. This implies that if we consider having the same rank for different tasks for LoRA or Adapters, which is essential for multi-task inference (see Section 3.1 for details), these results could represent an upper bound of performance. Refer to Figure 2 for SuperGLUE macro scores with varying prefix lengths \(p\), and prefix ranks \(r\) for different methods. Also, remember that while we included AoT P-Tuning in these figures for reference, AoT P-Tuning does not require the same rank for multi-task inference since weights are fused during evaluation, and \(r\) no longer affects any shape. When considering the same rank for Adapters and LoRA, these methods only marginally outperform BitFit. We also observed that LoRA struggled with large \(r\).
Figure 2: SuperGLUE macro score for RoBERTa-Base (a) and DeBERTa-XL (b) models. See Section 4.2 for details.
Based on our experimental results, if the ability to perform multi-task inference without computational overhead is essential, then AoT P-Tuning will not significantly diminish performance compared to LoRA and Adapters and could be utilized.
With per-task Expected Validation Performance (EVP) (Dodge et al., 2019), we observed that AoT P-Tuning is highly dependent on the number of hyperparameter assignments (see Appendix Figures 2, 4). In most cases, however, fewer than \(100\) hyperparameter assignments are enough for AoT P-Tuning to outperform P-Tuning v2, so this dependence is not critical in practice.
\begin{table}
\begin{tabular}{r|c c c c} \hline \hline \multicolumn{5}{c}{RoBERTa-Large} \\ \hline Model & RTE & COPA & WSC & WiC \\ \hline Fine-Tuning & \(88.1\pm 1.5\) & \(87.0\pm 10.2\) & \(80.8\pm 6.3\) & \(73.8\pm 1.6\) \\ \hline Adapters & \(87.7\pm 18.5\) & \(89.0\pm 10.1\) & \(77.9\pm 9.8\) & \(\mathbf{73.5\pm 1.0}\) \\ LoRA & \(87.4\pm 1.6\) & \(\mathbf{91.0\pm 8.5}\) & \(\mathbf{79.8\pm 10.6}\) & \(71.9\pm 1.2\) \\ BitFit & \(87.7\pm 0.8\) & \(\mathbf{91.0\pm 2.3}\) & \(71.2\pm 6.7\) & \(71.3\pm 9.5\) \\ \hline P-Tuning v1 & \(62.8\pm 2.3\) & \(75.0\pm 4.3\) & \(66.3\pm 1.3\) & \(64.1\pm 0.9\) \\ P-Tuning v2 & \(87.4\pm 1.5\) & \(87.0\pm 6.3\) & \(75.0\pm 7.7\) & \(70.8\pm 1.5\) \\ \hline Kron. AoT P-Tuning (ours) & \(84.8\pm 1.3\) & \(72.0\pm 9.1\) & \(67.3\pm 3.0\) & \(71.0\pm 1.0\) \\ FC AoT P-Tuning (ours) & \(\mathbf{88.4\pm 0.9}\) & \(85.0\pm 10.1\) & \(\mathbf{79.8\pm 4.1}\) & \(72.1\pm 1.5\) \\ \hline & MultiRC & CB & BoolQ & Macro \\ \hline Fine-Tuning & \(83.3\pm 1.1\) & \(97.3\pm 2.8\) & \(85.6\pm 0.3\) & \(85.1\) \\ \hline Adapters & \(\mathbf{83.7\pm 20.3}\) & \(\mathbf{100.0\pm 0.0}\) & \(\mathbf{85.7\pm 10.6}\) & \(\mathbf{85.4}\) \\ LoRA & \(75.7\pm 17.4\) & \(\mathbf{100.0\pm 2.6}\) & \(84.6\pm 0.6\) & \(84.3\) \\ BitFit & \(82.5\pm 0.6\) & \(\mathbf{100.0\pm 0.7}\) & \(85.4\pm 1.0\) & \(84.2\) \\ \hline P-Tuning v1 & \(54.3\pm 2.9\) & \(81.4\pm 3.0\) & \(64.3\pm 1.2\) & \(66.9\) \\ P-Tuning v2 & \(82.4\pm 0.6\) & \(\mathbf{100.0\pm 0.8}\) & \(85.0\pm 0.6\) & \(83.9\) \\ \hline Kron. AoT P-Tuning (ours) & \(82.8\pm 0.8\) & \(97.3\pm 2.3\) & \(84.8\pm 0.5\) & \(80.0\) \\ FC AoT P-Tuning (ours) & \(82.7\pm 19.3\) & \(\mathbf{100.0\pm 0.0}\) & \(85.5\pm 10.3\) & \(84.8\) \\ \hline \multicolumn{5}{c}{DeBERTa-XL} \\ \hline Model & RTE & COPA & WSC & WiC \\ \hline Fine-Tuning & 89.9 & 96.0 & 76.9 & 75.9 \\ \hline Adapters & 90.3 & 96.0 & 89.4 & **77.3** \\ LoRA & 90.3 & 97.0 & 89.4 & 75.5 \\ BitFit & 89.2 & 97.0 & 86.5 & 73.7 \\ \hline P-Tuning v1 & 78.3 & 90.0 & 67.3 & 66.8 \\ P-Tuning v2 & 90.6 & 97.0 & 89.4 & 76.5 \\ \hline Kron. AoT P-Tuning (ours) & \(88.8\) & 96.0 & 87.5 & 71.8 \\ FC AoT P-Tuning (ours) & \(\mathbf{91.0}\) & \(\mathbf{98.0}\) & \(\mathbf{94.2}\) & 74.1 \\ \hline & MultiRC & CB & BoolQ & Macro \\ \hline Fine-Tuning & 84.3 & 98.4 & 86.7 & 86.9 \\ \hline Adapters & 86.7 & 97.3 & **88.9** & 89.4 \\ LoRA & 86.0 & **100.0** & 88.3 & **89.5** \\ BitFit & 85.2 & **100.0** & 86.5 & 88.3 \\ \hline P-Tuning v1 & 82.1 & 93.8 & 79.4 & 79.7 \\ P-Tuning v2 & **87.1** & 97.3 & 87.0 & 89.3 \\ \hline Kron. AoT P-Tuning (ours) & \(86.3\) & 83.1 & 87.3 & 85.8 \\ FC AoT P-Tuning (ours) & \(86.5\) & 92.3 & 88.1 & 89.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on the SuperGLUE Dev set. For RoBERTa-Large, each result is median and std across several seeds, and the Macro column is a mean score across all tasks. For DeBERTa-XL, we evaluated each hyperparameter assignment with a single seed and reported its metric score. We bolded the best results and underlined the second best results. Fine-tuning is omitted from comparison with other methods and was not bolded for visibility. See Section 4.2 for details.
### Analysis of Trained Weights
We investigated trained \(\mathbf{P}\) matrices for WSC, COPA, CB, and RTE tasks with the DeBERTa-XL model. Since FC AoT P-Tuning performed better than Kronecker factorization, we selected this reparametrization method to report the results.
More specifically, we sorted rows of \(\mathbf{P}\) matrices for each layer measured by the \(L_{2}\) norm and reported the appropriate tokens for these rows. See Appendix Tables 5, 6, 7, 8 for results.
For the WSC task, there is a clear interpretation of trained rows for \(\mathbf{P}\), since rows with a large \(L_{2}\) norm represent tokens responsible for pronouns and names, which is crucial for solving WSC. For the COPA task, we observed that the model tends to assign large norms for verb tokens. For the RTE and CB tasks, \(\mathbf{P}\) also assigns large norms for name tokens, which often occur in the training data, while CB primarily modifies adverbs for later layers.
### Inference Speed Overhead
In Figure 3 and Appendix Figures 5, 6, we investigated the computational overhead of AoT P-Tuning compared to other baselines.
We estimated inference time for RoBERTa-Base, RoBERTa-Large, and DeBERTa-XL models with batch sizes \(\in[1,16,64]\) and sequence lengths \(\in[64,128,384]\). For batch size equal to \(1\), we evaluated the model \(300\) times, and \(100\) times for other values. For evaluation, we used A100 80Gb GPU. We report mean values of inference time normalized by the inference time of the vanilla model (i.e., plain fine-tuning).
We evaluated AoT P-Tuning on two setups. For the first setup, we fused \(\mathbf{P}\) so that the model can perform on its top speed. We did not perform fusing for the second setup. While a lack of fusing is not required to use AoT P-Tuning, we report these results for reference. We also report LoRA results for the experiments in which we did not fuse weights. This setup makes it possible to use LoRA in a multi-task setup (see Section 3.1 for details).
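For reference, the measurement protocol can be summarized by a simple timing harness of the following form (an illustrative sketch, not the exact benchmarking code used for the figures):

```python
import time
import torch

@torch.no_grad()
def mean_latency(model, batch, n_runs=100, warmup=10):
    """Mean GPU forward-pass latency in seconds for one fixed batch."""
    model.eval()
    for _ in range(warmup):
        model(**batch)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_runs):
        model(**batch)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs

# Relative overhead of a PEFT method is then
#   mean_latency(peft_model, batch) / mean_latency(vanilla_model, batch).
```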
We observed that the proposed method performed differently depending on the experiment's parameters. AoT performed with computational overhead for a small model (i.e., RoBERTa-Base), small sequence length, and small batch size. We observed \(12\%\) overhead on inference speed compared to plain fine-tuning for RoBERTa-Base with batch size \(1\) and sequence length \(64\). However, AoT P-Tuning still performed faster than other baselines by a large margin (i.e., LoRA for RoBERTa-Base with batch size \(1\) and sequence length \(384\) added \(50-70\%\) computational overhead compared to fine-tuning).
Once model size or input size is increased, we observed that the overhead of adding biases for AoT P-Tuning becomes negligible. The proposed method performed the same as plain fine-tuning (for some experiments, we observed \(1-2\%\) overhead, while for others, AoT P-Tuning performed even faster than fine-tuning, which we attribute to variation in the inference time measurements). For the most practical setup with large models (DeBERTa-XL), small batch size (\(=1\)) and long sequence length (\(=384\)), AoT P-Tuning performed slightly faster than fine-tuning, **while other methods performed \(12-25\%\) slower**. LoRA and Adapters showed \(10\%\) overhead for larger batch sizes, while AoT P-Tuning still performed with negligible overhead.
Figure 3: Speed measurements for baseline methods with sequence length equal to \(384\) for DeBERTa-XL. See Appendix Figures 5, 6 for results with other sequence lengths and Section 4.4 for details.
## 5 Conclusion and Future Work
This paper focused on parameter-efficient fine-tuning methods emphasizing inference speed and multi-task capabilities. We observed that widely used methods typically involve a trade-off between fast and multi-task inference, while those that can perform both tend to struggle in downstream task evaluations. To address this challenge, we introduced a novel method called AoT P-Tuning, which has negligible inference overhead while maintaining multi-task performance.
Our experiments demonstrated that AoT P-Tuning outperforms BitFit, which also does not introduce any overhead and could be evaluated in a multi-task manner. Furthermore, AoT P-Tuning is competitive with other methods across various downstream tasks and pre-trained models, offering faster inference times.
Although we investigated two reparametrizations based on the Kronecker product and fully connected (FC) network, further exploration of alternative reparametrizations for weight \(\mathbf{P}\) may enhance the performance of our proposed method. Moreover, while our approach is straightforward, implementing various architectural modifications could improve AoT P-Tuning's performance and decrease the need for hyperparameter tuning.
|
2308.10177 | A partition formula from idempotents | A formula which only involves a partition number and elementary functions is
derived by applying Burnside's Lemma to the set of idempotent maps from a set
to itself. One side involves a summation over a set closely related to the
partition number, however. Some speculation is made as to how to eliminate this
summation. | Charlotte Aten | 2023-08-20T06:51:55Z | http://arxiv.org/abs/2308.10177v1 | # A partition formula from idempotents
###### Abstract.
A formula which only involves a partition number and elementary functions is derived by applying Burnside's Lemma to the set of idempotent maps from a set to itself. One side involves a summation over a set closely related to the partition number, however. Some speculation is made as to how to eliminate this summation.
Key words and phrases: Partition numbers, idempotents, monoid representations. 2020 Mathematics Subject Classification: 11P84, 05E10, 20M30
## 1. Introduction
This paper was largely written at the beginning of the author's time in graduate school. It was the result of a brief investigation of representations of monoids. While attention is increasingly paid to the theory of linear representations of monoids as in the somewhat recent textbook of Steinberg[1], we consider here set-theoretic representations of a particular finite monoid, which are, according to one's viewpoint, either sets equipped with an action of the monoid in question or homomorphic images of the given monoid which can be found in the monoid of maps from a given set to itself. This latter viewpoint is exactly the monoid theoretic analogue of a permutation representation of a group.
The set theoretic representations of the free idempotent monoid on one generator (that is, the monoid of order \(2\) whose only nonidentity element is idempotent) carry a natural action of a particular group. We use Burnside's Lemma to obtain from this a count of the number of partitions of a number \(n\). This in turn leads to the formula
\[\begin{split} p(n)&=\frac{1}{n!}\sum_{g\in V_{n}}\left(\prod_{k=1}^{n}(k-1)!^{g(k)}(g(k))!\binom{n-\sum_{s=1}^{k-1}sg(s)}{g(k)}\right.\\ &\qquad\left.\prod_{v=1}^{g(k)}\binom{n-\sum_{s=1}^{k-1}sg(s)-g(k)-(v-1)(k-1)}{k-1}\right)\end{split} \tag{1}\]
where \(V_{n}\) is
\[\left\{\,g\colon\,\{1,\ldots,n\}\to\{0,\ldots,n\}\,\Bigg{|}\,\sum_{k=1}^{n}kg(k )=n\,\right\}.\]
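Equation 1 can be verified mechanically for small \(n\). The following Python sketch (included here purely as an illustration) enumerates \(V_{n}\) by brute force, evaluates the summation, and compares the result against \(n!\,p(n)\) computed by a standard partition count.

```python
from itertools import product
from math import comb, factorial

def partition_count(n):
    """p(n) via the standard coin-change style dynamic programming."""
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]

def rhs_times_n_factorial(n):
    """The sum over V_n appearing in equation (1), which should equal n! * p(n)."""
    total = 0
    for g in product(range(n + 1), repeat=n):        # g[k-1] plays the role of g(k)
        if sum(k * g[k - 1] for k in range(1, n + 1)) != n:
            continue                                  # g is not in V_n
        term = 1
        for k in range(1, n + 1):
            used = sum(s * g[s - 1] for s in range(1, k))   # sum_{s<k} s*g(s)
            term *= factorial(k - 1) ** g[k - 1] * factorial(g[k - 1])
            term *= comb(n - used, g[k - 1])
            for v in range(1, g[k - 1] + 1):
                term *= comb(n - used - g[k - 1] - (v - 1) * (k - 1), k - 1)
        total += term
    return total

for n in range(1, 8):
    assert rhs_times_n_factorial(n) == factorial(n) * partition_count(n)
print("equation (1) verified for n = 1, ..., 7")
```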
In section 2 we give some background on the relevant monoid representations for the derivation of equation 1. We continue in section 3 by showing that we can reduce the study of such representations to the study of idempotent maps subject to a natural conjugation action. The orbits and isotropy groups of this action are characterized in section 4. In section 5 we use Burnside's Lemma to derive
equation 1 and make some suggestions as to how to eliminate summation over the set \(V\).
Throughout this paper we write \(p(n)\) for the number of partitions of the number \(n\). We use the model-theoretic notation where \(A\) is a set and \(\mathbf{A}\) is a structure with universe \(A\). A distinction is made between equality by definition (written \(A\coloneqq B\)) and the assertion of an equality (written \(A=B\)). We write \(\mathcal{P}(X)\) to indicate the powerset of a set \(X\), \(\operatorname{Im}(f)\) to denote the image of a map \(f\), and \(\mathbf{Stab}(x)\) to denote the stabilizer subgroup for an element \(x\) of a set equipped with a group action. We define \([n]\coloneqq\{1,2,\ldots,n\}\).
## 2. Monoid representation preliminaries
Let \(\mathbf{M}(S)\) denote the free monoid on the set \(S\), let \(\mathbf{T}(X)\) denote the full transformation monoid on the set \(X\), and let \(\Sigma(X)\) denote the units of \(\mathbf{T}(X)\). That is, \(\mathbf{\Sigma}(X)\) is the symmetric group on \(X\). Given a monoid \(\mathbf{A}\) a _representation_ of \(\mathbf{A}\) on \(X\) is a monoid homomorphism \(\rho\colon\mathbf{A}\to\mathbf{T}(X)\). We axiomatize monoids so that the identity element is a constant which must be preserved by homomorphisms. In particular, a representation of a monoid \(\mathbf{A}\) on \(X\) must take the identity \(e\) of \(\mathbf{A}\) to the identity map \(\operatorname{id}_{X}\) on \(X\). That is to say \(\rho(e)=\operatorname{id}_{X}\) for all representations \(\rho\). Given \(\sigma\in\Sigma(X)\) and a representation \(\rho\) we have a _conjugate representation_\(\rho^{\sigma}\) of \(\rho\) given by \(\rho^{\sigma}(a)\coloneqq\sigma\rho(a)\sigma^{-1}\) for \(a\in A\).
We study the representation theory of the free idempotent monoid on one generator. Let \(\mathbf{B}\coloneqq\mathbf{M}(\{x\})/\langle(x,x^{2})\rangle\) and let \(e\) and \(b\) denote the equivalence classes of the identity and \(x\) under \(\langle(x,x^{2})\rangle\), respectively. Given a representation \(\rho\colon\mathbf{B}\to\mathbf{T}(X)\) of \(\mathbf{B}\) we define \(f_{\rho}\coloneqq\rho(b)\). Note that \(f_{\rho}^{2}=f_{\rho}\) so \(f_{\rho}\) is an idempotent in \(\mathbf{T}(X)\). The representation \(\rho\) is in fact completely determined by \(f_{\rho}\) since \(\rho(e)=\operatorname{id}_{X}\) for all representations \(\rho\). In order to study representations of \(\mathbf{B}\) on a set \(X\) we must understand idempotents in \(\mathbf{T}(X)\).
**Proposition 1**.: _Let \(f\in T(X)\). We have that \(f\) is idempotent if and only if \(f=\operatorname{id}_{\operatorname{Im}(f)}\cup g\) for some \(g\colon X\setminus\operatorname{Im}(f)\to\operatorname{Im}(f)\)._
Proof.: Suppose that \(f\in T(X)\) is idempotent and let \(y\in\operatorname{Im}(f)\). We find that \(f(x)=y\) for some \(x\in X\). This implies that \(f(y)=f^{2}(x)=f(x)=y\) so \(f\) acts as the identity on \(\operatorname{Im}(f)\). Define \(g\coloneqq f|_{X\setminus\operatorname{Im}(f)}\). Since every member of \(X\) belongs to either \(\operatorname{Im}(f)\) or \(X\setminus\operatorname{Im}(f)\), but not both, we have that \(f=\operatorname{id}_{\operatorname{Im}(f)}\cup g\) where \(g\colon X\setminus\operatorname{Im}(f)\to\operatorname{Im}(f)\).
Conversely suppose that \(f=\operatorname{id}_{\operatorname{Im}(f)}\cup g\) where \(g\colon X\setminus\operatorname{Im}(f)\to\operatorname{Im}(f)\). Either \(x\in\operatorname{Im}(f)\), in which case \(f^{2}(x)=f(\operatorname{id}(x))=f(x)\), or \(x\in X\setminus\operatorname{Im}(f)\), in which case \(f(x)=g(x)\in\operatorname{Im}(f)\) and hence \(f^{2}(x)=f(f(x))=\operatorname{id}(f(x))=f(x)\). We find that \(f\) is idempotent, so we have exactly characterized the idempotents in \(\mathbf{T}(X)\).
Let \(R_{\mathbf{A}}(X)\) denote the set of representations of a monoid \(\mathbf{A}\) on \(X\).
**Proposition 2**.: _We have that \(\mathbf{\Sigma}(X)\) acts on \(R_{\mathbf{A}}(X)\) by conjugation._
Proof.: Let \(\mathbf{\Sigma}(R_{\mathbf{A}}(X))\) denote the group of permutations of \(R_{\mathbf{A}}(X)\). Define
\[\alpha\colon\Sigma(X)\to\Sigma(R_{\mathbf{A}}(X))\]
by \((\alpha(\sigma))(\rho)\coloneqq\rho^{\sigma}\). We claim that \(\alpha\) is a group action.
We show that \(\alpha(\sigma)\) is indeed a permutation of \(R_{\mathbf{A}}(X)\). Let \(\rho\colon\mathbf{A}\to\mathbf{T}(X)\) be a representation. We show that \(\rho^{\sigma}\) is a monoid homomorphism and hence a
representation of \(\mathbf{A}\). Let \(a,b\in A\) and use that \(\rho\) is a monoid homomorphism to see that
\[\rho^{\sigma}(e)=\sigma\rho(e)\sigma^{-1}=\sigma\operatorname{id}_{X}\sigma^{-1 }=\operatorname{id}_{X}\]
and
\[\rho^{\sigma}(ab)=\sigma\rho(ab)\sigma^{-1}=\sigma\rho(a)\rho(b)\sigma^{-1}=( \sigma\rho(a)\sigma^{-1})(\sigma\rho(b)\sigma^{-1})=\rho^{\sigma}(a)\rho^{ \sigma}(b).\]
Thus, \(\alpha(\sigma)\in T(R_{\mathbf{A}}(X))\). Suppose that \((\alpha(\sigma))(\rho_{1})=(\alpha(\sigma))(\rho_{2})\). This implies that \(\sigma\rho_{1}(a)\sigma^{-1}=\sigma\rho_{2}(a)\sigma^{-1}\) for all \(a\in A\). Canceling we find that \(\rho_{1}=\rho_{2}\) so \(\alpha(\sigma)\) is injective. Given a representation \(\rho\in R_{\mathbf{A}}(X)\) we have that \(\rho^{\sigma^{-1}}\in R_{\mathbf{A}}(X)\) by the same reasoning used above for \(\rho^{\sigma}\). We find that
\[(\alpha(\sigma))(\rho^{\sigma^{-1}})=\sigma\rho^{\sigma^{-1}}\sigma^{-1}= \sigma\sigma^{-1}\rho\sigma\sigma^{-1}=\rho\]
so \(\alpha(\sigma)\) is surjective. As we have that \(\alpha\) is a function from \(\Sigma(X)\) to \(\Sigma(R_{\mathbf{A}}(X))\) it remains to show that \(\alpha\) is a group homomorphism.
Note that
\[((\alpha(\operatorname{id}))(\rho))(a)=\rho^{\operatorname{id}}(a)= \operatorname{id}\rho(a)\operatorname{id}^{-1}=\rho(a)=(\operatorname{id}_{R_ {\mathbf{A}}(X)}(\rho))(a)\]
for any \(a\in A\) and any \(\rho\in R_{\mathbf{A}}(X)\) so \(\alpha(\operatorname{id})=\operatorname{id}_{R_{\mathbf{A}}(X)}\). Given \(\sigma\in\Sigma(X)\) we have that
\[((\alpha(\sigma^{-1}))(\rho))(a)=\rho^{\sigma^{-1}}(a)=\sigma^{-1}\rho(a) \sigma=(((\alpha(\sigma))^{-1})(\rho))(a)\]
so \(\alpha(\sigma^{-1})=(\alpha(\sigma))^{-1}\). Let \(\sigma,\tau\in\Sigma(X)\). Observe that
\[\rho^{\tau\sigma}(a)=\tau\sigma\rho(a)\sigma^{-1}\tau^{-1}=\tau\rho^{\sigma}( a)\tau^{-1}=(\rho^{\sigma})^{\tau}(a)\]
so \(\alpha(\tau\sigma)=\alpha(\tau)\alpha(\sigma)\). Thus, \(\alpha\colon\mathbf{\Sigma}(X)\to\mathbf{\Sigma}(R_{\mathbf{A}}(X))\) is a group homomorphism and hence \(\mathbf{\Sigma}(X)\) acts on \(R_{\mathbf{A}}(X)\) by conjugation.
## 3. Idempotents
Let \(I(X)\) denote the set of idempotents in \(T(X)\). We have an analogous action of \(\mathbf{\Sigma}(X)\) on \(I(X)\). Given \(\sigma\in\Sigma(X)\) and an idempotent \(f\) we have a _conjugate idempotent_\(f^{\sigma}\) of \(f\) given by \(f^{\sigma}(x)\coloneqq\sigma f\sigma^{-1}(x)\) for \(x\in X\).
**Proposition 3**.: _The group \(\mathbf{\Sigma}(X)\) acts on \(I(X)\) by conjugation._
Proof.: Let \(\mathbf{\Sigma}(I(X))\) denote the group of permutations of \(I(X)\). Define
\[\beta\colon\Sigma(X)\to\Sigma(I(X))\]
by \((\beta(\sigma))(f)\coloneqq f^{\sigma}\). We claim that \(\beta\) is a group action.
We show that \(\beta(\sigma)\) is indeed a permutation of \(I(X)\). Let \(f\colon X\to X\) be an idempotent. We show that \(f^{\sigma}\) is also an idempotent. Observe that
\[(f^{\sigma})^{2}=(\sigma f\sigma^{-1})(\sigma f\sigma^{-1})=\sigma f^{2} \sigma^{-1}=\sigma f\sigma^{-1}=f^{\sigma}\]
so \(f^{\sigma}\in I(X)\) and hence \(\beta(\sigma)\in T(I(X))\). Suppose that \((\beta(\sigma))(f)=(\beta(\sigma))(g)\). This implies that \(\sigma f\sigma^{-1}=\sigma g\sigma^{-1}\). Canceling we find that \(f=g\) so \(\beta(\sigma)\) is injective. Given an idempotent \(f\in I(X)\) we have that \(f^{\sigma^{-1}}\in I(X)\) by the same reasoning used above for \(f^{\sigma}\). We find that
\[(\beta(\sigma))(f^{\sigma^{-1}})=\sigma f^{\sigma^{-1}}\sigma^{-1}=\sigma \sigma^{-1}f\sigma\sigma^{-1}=f\]
so \(\beta(\sigma)\) is surjective. As we have that \(\beta\) is a function from \(\Sigma(X)\) to \(\Sigma(I(X))\) it remains to show that \(\beta\) is a group homomorphism.
Note that
\[(\beta(\operatorname{id}))(f)=f^{\operatorname{id}}=\operatorname{id}f\operatorname{id}^{-1}=f=\operatorname{id}_{I(X)}(f)\]
so \(\beta(\operatorname{id})=\operatorname{id}_{I(X)}\). Given \(\sigma\in\Sigma(X)\) we have that
\[(\beta(\sigma^{-1}))(f)=f^{\sigma^{-1}}=\sigma^{-1}f\sigma=((\beta(\sigma))^{- 1})(f)\]
so \(\beta(\sigma^{-1})=(\beta(\sigma))^{-1}\). Let \(\sigma,\tau\in\Sigma(X)\). Observe that
\[f^{\tau\sigma}=\tau\sigma f\sigma^{-1}\tau^{-1}=\tau f^{\sigma}\tau^{-1}=(f^{\sigma})^{\tau}\]
so \(\beta(\tau\sigma)=\beta(\tau)\beta(\sigma)\). Thus, \(\beta\colon\mathbf{\Sigma}(X)\to\mathbf{\Sigma}(I(X))\) is a group homomorphism and hence \(\mathbf{\Sigma}(X)\) acts on \(I(X)\) by conjugation.
We now exploit the extreme similarity of the conjugation actions of \(\mathbf{\Sigma}(X)\) on \(R_{\mathbf{A}}(X)\) and \(I(X)\) in order to examine representations of \(\mathbf{B}\), the free idempotent monoid on one generator. As we noted previously, a representation \(\rho\) of \(\mathbf{B}\) on a set \(X\) is determined by an idempotent \(f_{\rho}\coloneqq\rho(b)\) in \(\mathbf{T}(X)\). Define \(\gamma\colon R_{\mathbf{B}}(X)\to I(X)\) by \(\gamma(\rho)\coloneqq f_{\rho}\) so that \(\gamma\) is the aforementioned map taking a representation to the idempotent which determines it.
Let \(\mathbf{G}_{R_{\mathbf{B}}(X)}\coloneqq(R_{\mathbf{B}}(X),\Sigma(X))\) and \(\mathbf{G}_{I(X)}\coloneqq(I(X),\Sigma(X))\) denote the \(\mathbf{\Sigma}(X)\)-sets given by the conjugation actions on representations of \(\mathbf{B}\) on \(X\) and idempotents in \(I(X)\).
**Proposition 4**.: _We have that \(\gamma\colon\mathbf{G}_{R_{\mathbf{B}}(X)}\to\mathbf{G}_{I(X)}\) is an isomorphism._
Proof.: We show that \(\gamma\) is bijective. Suppose that \(\gamma(\rho_{1})=\gamma(\rho_{2})\). This implies that \(\rho_{1}(b)=\rho_{2}(b)\). Since \(\rho_{1}(e)=\rho_{2}(e)\) for any representations \(\rho_{1}\) and \(\rho_{2}\) we find that \(\rho_{1}=\rho_{2}\). Thus, \(\gamma\) is injective. Given \(f\in I\) we define \(\rho\colon\mathbf{B}\to\mathbf{T}(X)\) by \(\rho(e)\coloneqq\operatorname{id}_{X}\) and \(\rho(b)\coloneqq f\). Since \(f\) is idempotent it is immediate that \(\rho\) is a monoid homomorphism and hence \(\rho\in R\). It follows that \(\gamma(\rho)=f\) so \(\gamma\) is surjective. We conclude that \(\gamma\) is a bijection.
It remains to show that \(\gamma\) is a \(\mathbf{\Sigma}(X)\)-set morphism. Let \(\sigma\in\Sigma(X)\) and observe that
\[\gamma((\alpha(\sigma))(\rho))=\gamma(\rho^{\sigma})=\gamma(\sigma\rho\sigma^{ -1})=\sigma\rho(b)\sigma^{-1}=(\rho(b))^{\sigma}=(\beta(\sigma))(\gamma(\rho)),\]
as desired.
## 4. Orbits and isotropy groups
Since the \(\mathbf{\Sigma}(X)\)-sets \(\mathbf{G}_{R_{\mathbf{B}}(X)}\) and \(\mathbf{G}_{I(X)}\) are isomorphic we can study the more concrete action of individual idempotents under conjugation rather than directly handling the action of representations of \(\mathbf{B}\) under conjugation.
We characterize the orbits of \(I(X)\) under conjugation.
**Proposition 5**.: _Given \(f,g\in I(X)\) we have that \(g=f^{\sigma}\) for some \(\sigma\in\Sigma(X)\) if and only if there exists a bijection \(h\colon\operatorname{Im}(f)\to\operatorname{Im}(g)\) such that \(\left|f^{-1}(x)\right|=\left|g^{-1}(h(x))\right|\) for all \(x\in\operatorname{Im}(f)\). This latter condition says that both \(f\) and \(g\) induce the same partition on \(X\) up to relabeling._
Proof.: Suppose that \(g=f^{\sigma}\) for some \(\sigma\in\Sigma(X)\) and take \(h\coloneqq\sigma|_{\operatorname{Im}(f)}\). Since \(h\) is the restriction of a bijection we have that \(h\) is a bijection from \(\operatorname{Im}(f)\) to \(\operatorname{Im}(h)\). Since \(g=f^{\sigma}\) we have that \(g=\sigma f\sigma^{-1}\) so \(g\sigma=\sigma f\) and hence
\[\sigma(\operatorname{Im}(f))=\sigma(f(X))=g(\sigma(X))=g(X)=\operatorname{Im}( g).\]
It follows that
\[h(\operatorname{Im}(f))=\sigma(\operatorname{Im}(f))=\operatorname{Im}(g)\]
so \(h\) is a bijection from \(\operatorname{Im}(f)\) to \(\operatorname{Im}(g)\). Fix \(x\in\operatorname{Im}(f)\) and let
\[\phi_{x}\colon f^{-1}(x)\to g^{-1}(h(x))\]
be given by \(\phi_{x}\coloneqq\sigma|_{f^{-1}(x)}\). This map is well-defined, as if \(s\in f^{-1}(x)\) then \(f(s)=x\) and hence
\[g(\phi_{x}(s))=g(\sigma(s))=\sigma f\sigma^{-1}(\sigma(s))=\sigma f(s)=\sigma( x)=h(x)\]
so \(\phi_{x}(s)\in g^{-1}(h(x))\). An identical argument shows that \(\phi_{x}\) has an inverse map \(\phi_{x}^{-1}\coloneqq\sigma^{-1}|_{g^{-1}(h(x))}\). Thus, \(\phi_{x}\colon f^{-1}(x)\to g^{-1}(h(x))\) is a bijection. This establishes that \(\big{|}f^{-1}(x)\big{|}=\big{|}g^{-1}(h(x))\big{|}\) for each \(x\in\operatorname{Im}(f)\) so there is indeed a bijection \(h\colon\operatorname{Im}(f)\to\operatorname{Im}(g)\) such that \(\big{|}f^{-1}(x)\big{|}=\big{|}g^{-1}(h(x))\big{|}\) for all \(x\in\operatorname{Im}(f)\).
Conversely, suppose that \(f,g\in T(X)\) are idempotents such that there exists a bijection \(h\colon\operatorname{Im}(f)\to\operatorname{Im}(g)\) with \(\big{|}f^{-1}(x)\big{|}=\big{|}g^{-1}(h(x))\big{|}\) for all \(x\in\operatorname{Im}(f)\). Since \(f\) and \(g\) are idempotents we always have that \(x\in f^{-1}(x)\) and \(h(x)\in g^{-1}(h(x))\) for any \(x\in\operatorname{Im}(f)\). For each \(x\in\operatorname{Im}(f)\) we can then choose a bijection
\[\phi_{x}\colon f^{-1}(x)\to g^{-1}(h(x))\]
such that \(\phi_{x}(x)=h(x)\). Define \(\sigma\coloneqq\bigcup_{x\in\operatorname{Im}(f)}\phi_{x}\). Since the \(\phi_{x}\) are bijections whose domains and codomains do not intersect we have that \(\sigma\in\Sigma(X)\). We claim that \(g=f^{\sigma}\). To see this, take \(x\in X\). Since \(g(x)=s\) for some \(s\in\operatorname{Im}(g)\) we have that \(x\in g^{-1}(h(t))\) where \(h(t)=s\) for a unique \(t\in\operatorname{Im}(f)\). It follows that \(\sigma^{-1}(x)=\phi_{t}^{-1}(x)\in f^{-1}(t)\) so
\[f^{\sigma}(x)=\sigma f\sigma^{-1}(x)=\sigma t=\phi_{t}(t)=h(t)=s=g(x).\]
This shows that \(g=f^{\sigma}\), as desired.
We can produce an \(f\in I(X)\) with \(|\operatorname{Im}(f)|\) having any cardinality \(k\in[|X|]\). Such an \(f\) induces a partition of \(X\) into \(k\) parts. Our previous result then implies that the orbits of idempotents \(f\in I(X)\) with \(|\operatorname{Im}(f)|=k\) under \(\mathbf{\Sigma}(X)\) are in bijection with partitions of \(|X|\) into \(k\) parts. It follows that the number of orbits of all idempotents on \(X\) is the number of partitions of \(|X|\).
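This bijection between conjugation orbits and partitions can also be confirmed by direct computation for small \(n\). The following Python sketch (an illustration only, not part of the argument) builds the idempotents as in Proposition 1, forms the conjugation orbits, and checks that each orbit is labelled by a single induced partition and that the number of orbits equals \(p(n)\).

```python
from itertools import combinations, permutations, product

def idempotents(n):
    """All idempotents on X = {0,...,n-1}, built as in Proposition 1:
    the identity on a chosen image Y together with an arbitrary map from X minus Y into Y."""
    X = range(n)
    for k in range(1, n + 1):
        for Y in combinations(X, k):
            rest = [x for x in X if x not in Y]
            for g in product(Y, repeat=len(rest)):
                f = list(X)                       # identity on all of X ...
                for x, y in zip(rest, g):
                    f[x] = y                      # ... overwritten outside the image
                yield tuple(f)

def fibre_type(f):
    """Sizes of the fibres over Im(f): the partition of n induced by f."""
    return tuple(sorted((sum(1 for x in f if x == y) for y in set(f)), reverse=True))

def conjugation_orbits(n):
    remaining = set(idempotents(n))
    orbits = []
    while remaining:
        f = next(iter(remaining))
        orbit = set()
        for sigma in permutations(range(n)):
            inv = [0] * n
            for i, s in enumerate(sigma):
                inv[s] = i
            orbit.add(tuple(sigma[f[inv[x]]] for x in range(n)))   # f^sigma = sigma f sigma^{-1}
        orbits.append(orbit)
        remaining -= orbit
    return orbits

for n, p_n in zip(range(1, 6), [1, 2, 3, 5, 7]):   # p(1), ..., p(5)
    orbits = conjugation_orbits(n)
    assert len(orbits) == p_n
    assert all(len({fibre_type(f) for f in orbit}) == 1 for orbit in orbits)
```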
Now that we have characterized the orbits of the idempotents in \(\mathbf{T}(X)\) under the conjugation action of \(\mathbf{\Sigma}(X)\) we proceed to describe the isotropy subgroups for idempotents under this action.
**Definition 1** (The group \(\mathbf{G}_{U}\)).: Let \(f\in I(X)\). Define \(\eta\colon\operatorname{Im}(f)\to\mathcal{P}(X)\) by
\[\eta(x)\coloneqq\left\{\,y\in\operatorname{Im}(f)\,\big{|}\,\big{|}f^{-1}(x) \big{|}=\big{|}f^{-1}(y)\big{|}\,\right\}.\]
Given \(U\in\operatorname{Im}(\eta)\) choose a representative \(x_{U}\in U\). Let \(\mathbf{H}_{U}\coloneqq\mathbf{\Sigma}(U)\). Define \(\mathbf{A}_{U}\coloneqq\mathbf{\Sigma}(f^{-1}(x_{U})\setminus\{x_{U}\})\) and define \(\mathbf{K}_{U}\coloneqq\mathbf{A}_{U}^{U}\). Let \(\alpha\colon\mathbf{H}_{U}\to\mathbf{K}_{U}\) be given by
\[\alpha(h)((a_{u})_{u\in U})\coloneqq(a_{h(u)})_{u\in U}.\]
Let \(\mathbf{G}_{U}\) denote the group with universe \(K_{U}\times H_{U}\) whose identity element is \(((e_{U})_{u\in U},\operatorname{id}_{U})\) where \(e_{U}\) is the identity in \(\mathbf{A}_{U}\), whose inverse operation is given by
\[((a_{u})_{u\in U},\varphi)^{-1}\coloneqq\left(\left(a_{\varphi^{-1}(u)}^{-1}\right)_{u\in U},\varphi^{-1}\right),\]
and whose multiplication is given by
\[((a_{u})_{u\in U},\varphi_{1})((b_{u})_{u\in U},\varphi_{2})\coloneqq(\alpha( \varphi_{2})((a_{u})_{u\in U})(b_{u})_{u\in U},\varphi_{1}\varphi_{2}).\]
We show that \(\mathbf{G}_{U}\) is indeed a group. First note that \(\alpha\) is a group homomorphism as
\[\alpha(\mathrm{id}_{U})((a_{u})_{u\in U})=(a_{\mathrm{id}_{U}(u)})_{u\in U}=(a_{u })_{u\in U}\]
so \(\alpha(\mathrm{id}_{U})\) is the identity in \(\mathbf{K}_{U}\), given \(h\in H_{U}\) we have that
\[\alpha(h^{-1})((a_{u})_{u\in U})=(a_{h^{-1}(u)})_{u\in U}=(\alpha(h))^{-1}((a_ {u})_{u\in U}),\]
and given \(h_{1},h_{2}\in H_{U}\) we have that
\[(\alpha(h_{1})\circ\alpha(h_{2}))((a_{u})_{u\in U})=\alpha(h_{1})((a_{h_{2}( u)})_{u\in U})=(a_{h_{1}h_{2}(u)})_{u\in U}=\alpha(h_{1}h_{2})((a_{u})_{u\in U}).\]
It is immediate that \(((e_{U})_{u\in U},\mathrm{id}_{U})\) is an identity element and that inverses are appropriately defined. It remains to show that the given multiplication is associative. Observe that
\[(((a_{u})_{u\in U},\varphi_{1})((b_{u})_{u\in U},\varphi_{2}))((c_ {u})_{u\in U},\varphi_{3}) =(\alpha(\varphi_{2})((a_{u})_{u\in U})(b_{u})_{u\in U},\varphi_{ 1}\varphi_{2})((c_{u})_{u\in U},\varphi_{3})\] \[=(\alpha(\varphi_{3})((a_{\varphi_{2}(u)}b_{u})_{u\in U})(c_{u}) _{u\in U},\varphi_{1}\varphi_{2}\varphi_{3})\] \[=((a_{\varphi_{2}\varphi_{3}(u)}b_{\varphi_{3}(u)}c_{u})_{u\in U },\varphi_{1}\varphi_{2}\varphi_{3})\] \[=((a_{u})_{u\in U},\varphi_{1})((b_{\varphi_{3}(u)}c_{u})_{u\in U },\varphi_{2}\varphi_{3})\] \[=((a_{u})_{u\in U},\varphi_{1})(((b_{u})_{u\in U},\varphi_{2})((c _{u})_{u\in U},\varphi_{3}))\]
so \(\mathbf{G}_{U}\) is a group. Note that different choices of a representative \(x_{U}\) yield isomorphic groups so the notation \(\mathbf{G}_{U}\) is only suppressing an isomorphism.
**Proposition 6**.: _The stabilizer of \(f\) satisfies_
\[\mathbf{Stab}(f)\cong\!\!\!\prod_{U\in\mathrm{Im}(\eta)}\!\!\!\mathbf{G}_{U}.\]
Proof.: Consider a particular \(U\in\mathrm{Im}(\eta)\). For each \(u\in U\) fix a bijection
\[r_{u}\colon f^{-1}(x_{U})\setminus\{x_{U}\}\to f^{-1}(u)\setminus\{u\}\,.\]
Given \(\sigma\in\mathrm{Stab}(f)\) define \(\psi_{\sigma,u}\coloneqq r_{\sigma(u)}^{-1}\circ\sigma\circ r_{u}\) and define \(\varphi_{\sigma}\coloneqq\sigma|_{U}\). Let \(\gamma\colon\mathrm{Stab}(f)\to G_{U}\) be given by
\[\gamma(\sigma)\coloneqq((\psi_{\sigma,u})_{u\in U},\varphi_{\sigma}).\]
We claim that \(\gamma\) is a homomorphism.
Since
\[\gamma(\mathrm{id}_{X})=((\psi_{\mathrm{id}_{X},u})_{u\in U},\mathrm{id}_{U})=( (e_{U})_{u\in U},\mathrm{id}_{U})\]
we have that \(\gamma(\mathrm{id}_{X})\) is the identity of \(\mathbf{G}_{U}\). We have that
\[\gamma(\sigma^{-1})=((\psi_{\sigma^{-1},u})_{u\in U},\varphi_{ \sigma^{-1}}) =((r_{\sigma^{-1}(u)}^{-1}\circ\sigma^{-1}\circ r_{u})_{u\in U}, \varphi_{\sigma}^{-1})\] \[=(((r_{u}^{-1}\circ\sigma\circ r_{\sigma^{-1}(u)})^{-1})_{u\in U}, \varphi_{\sigma}^{-1})\] \[=\left(\left(\psi_{\sigma,\sigma^{-1}(u)}^{-1}\right)_{u\in U}, \varphi_{\sigma}^{-1}\right)\] \[=\left(\left(\psi_{\sigma,\varphi_{\sigma}^{-1}(u)}^{-1}\right)_{u \in U},\varphi_{\sigma}^{-1}\right)\] \[=((\psi_{\sigma,u})_{u\in U},\varphi_{\sigma})^{-1}\] \[=\gamma(\sigma)^{-1}\]
so \(\gamma\) respects taking inverses.
Observe that
\[\gamma(\tau)\gamma(\sigma) =((\psi_{\tau,u})_{u\in U},\varphi_{\tau})((\psi_{\sigma,u})_{u\in U },\varphi_{\sigma})\] \[=(\varphi_{\sigma}((\psi_{\tau,u})_{u\in U})(\psi_{\sigma,u})_{u\in U },\varphi_{\tau}\varphi_{\sigma})\] \[=((\psi_{\tau,\sigma(u)}\psi_{\sigma,u})_{u\in U},\varphi_{\tau \sigma})\] \[=((r_{\tau\sigma(u)}^{-1}\circ\tau\circ r_{\sigma(u)}\circ r_{ \sigma(u)}^{-1}\circ\sigma\circ r_{u})_{u\in U},\varphi_{\tau\sigma})\] \[=((r_{\tau\sigma(u)}^{-1}\circ\tau\sigma\circ r_{u})_{u\in U}, \varphi_{\tau\sigma})\] \[=((\psi_{\tau\sigma,u})_{u\in U},\varphi_{\tau\sigma})\] \[=\gamma(\tau\sigma)\]
so \(\gamma\) is a group homomorphism.
We have that \(\gamma\) is surjective. Since each member of \(\operatorname{Stab}(f)\) can be written as a product of permutations carried by the \(U\in\operatorname{Im}(\eta)\) it follows that \(\operatorname{\mathbf{Stab}}(f)\cong\prod_{U\in\operatorname{Im}(\eta)} \mathbf{G}_{U}\).
## 5. Burnside's Lemma
As an application we can now use Burnside's Lemma to count the number of partitions \(p(n)\) of a natural number \(n\). By our previous work we have that for each finite set \(X\) we have
\[p(|X|)=\frac{1}{|X|!}\sum_{f\in I(X)}\left|\operatorname{Stab}(f)\right|.\]
Suppose that \(X=[n]\). We have that
\[p(n)=\frac{1}{n!}\sum_{f\in I([n])}\left|\operatorname{Stab}(f)\right|.\]
Given \(f\in I([n])\) define \(\varpi(f)\colon[n]\to\{0,\dots,n\}\) by
\[\varpi(f)(k)\coloneqq\left|\left\{\,x\in[n]\,\left|\,\left|f^{-1}(x)\right|=k\,\right.\right\}\right|.\]
Our characterization of \(\operatorname{Stab}(f)\) shows that
\[\left|\operatorname{Stab}(f)\right|=\prod_{k=1}^{n}(k-1)!^{\varpi(f)(k)}( \varpi(f)(k))!.\]
It remains to count how many \(f\in I([n])\) determine each map \(g\colon[n]\to\{0,\dots,n\}\), for if \(\varpi(f)=g\) then
\[\left|\operatorname{Stab}(f)\right|=\prod_{k=1}^{n}(k-1)!^{g(k)}(g(k))!.\]
When \(g=\varpi(f)\) for some \(f\in I([n])\) we have that
\[\left|\left\{\,f\in I(n)\,\left|\,\varpi(f)=g\,\right.\right\}\right|=\] \[\prod_{k=1}^{n}\binom{n-\sum_{s=1}^{k-1}sg(s)}{g(k)}\prod_{v=1}^{ g(k)}\binom{n-\sum_{s=1}^{k-1}sg(s)-g(k)-(v-1)(k-1)}{k-1}.\]
Let \(V_{n}\coloneqq\varpi(I([n]))\). Note that
\[V_{n}=\left\{\,g\colon[n]\to\{0,\dots,n\}\,\left|\,\sum_{k=1}^{n}kg(k)=n\, \right.\right\}.\]
We find that equation 1 follows immediately.
Unfortunately, the summation over \(V_{n}\) forces us to sum over all possible \(n\)-tuples of integers between \(0\) and \(n\) which could be the numbers of parts of each given size for some partition of \(n\), so we would need to perform a task very similar to finding all partitions of \(n\) in order to compute \(p(n)\) directly by this formula. It may, however, be possible to leverage this formula to obtain a different formula for \(p(n)\) which does not have this inadequacy.
For example, we could rewrite equation 1 as
\[n!p(n) =\sum_{g\in V_{n}}\left(\prod_{k=1}^{n}{(k-1)!^{g(k)}(g(k))!}{n- \sum_{s=1}^{k-1}{sg(s)}\choose g(k)}\right.\] \[\left.\prod_{v=1}^{g(k)}{n-\sum_{s=1}^{k-1}{sg(s)}-g(k)-(v-1)(k-1 )\choose k-1}\right).\]
Summing both sides from \(n=1\) to \(n=m\) for some natural number \(m\) yields
\[\sum_{n=1}^{m}n!p(n) =\sum_{n=1}^{m}\sum_{g\in V_{n}}\left(\prod_{k=1}^{n}{(k-1)!^{g(k )}(g(k))!}{n-\sum_{s=1}^{k-1}{sg(s)}\choose g(k)}\right.\] \[\left.\prod_{v=1}^{g(k)}{n-\sum_{s=1}^{k-1}{sg(s)}-g(k)-(v-1)(k-1 )\choose k-1}\right)\] \[=\sum_{g\in\bigcup_{n=1}^{m}{V_{n}}}\left(\prod_{k=1}^{n}{(k-1)!^ {g(k)}(g(k))!}{n-\sum_{s=1}^{k-1}{sg(s)}\choose g(k)}\right.\] \[\left.\prod_{v=1}^{g(k)}{n-\sum_{s=1}^{k-1}{sg(s)}-g(k)-(v-1)(k-1 )\choose k-1}\right).\]
The right-hand side of this last formula looks more promising, but when we sum over choices of \(g\) from any of the \(V_{n}\) we don't quite obtain a summation over \(\left\{0,\dots,m\right\}^{m}\) since some such tuples correspond to partitions of numbers greater than \(m\).
Another idea would be to attempt to reformulate the more desirable quantity
\[\sum_{g\colon[m]\to\left\{0,\dots,m\right\}}\left(\prod_{k=1}^{n (g)}(k-1)!^{g(k)}(g(k))!{n(g)-\sum_{s=1}^{k-1}{sg(s)}\choose g(k)}\right.\] \[\left.\prod_{v=1}^{g(k)}{n(g)-\sum_{s=1}^{k-1}{sg(s)}-g(k)-(v-1)( k-1)\choose k-1}\right)\]
where \(n(g)\coloneqq\sum_{\ell=1}^{m}\ell g(\ell)\) in terms of equation 1. In any case, an elementary formula for the partition numbers remains elusive.
|
2303.12412 | Capelli-Deruyts bitableaux and the classical Capelli generators of the
center of the enveloping algebra $U(gl(n))$ | In this paper, we consider a special class of Capelli bitableaux, namely the
Capelli-Deruyts bitableaux. The main results we prove are the hook coefficient
lemma and the expansion theorem. Capelli-Deruyts bitableaux of rectangular
shape are of particular interest since they are central elements in the
enveloping algebra. The expansion theorem implies that these central element is
explicitely described as a polynomial in the classical Capelli central
elements. The hook coefficient lemma implies that the Capelli-Deruyts
bitableaux are (canonically) expressed as the products of column determinants. | Andrea Brini, Antonio Teolis | 2023-03-22T09:21:31Z | http://arxiv.org/abs/2303.12412v2 | ###### Abstract
###### Abstract
In this paper, we consider a special class of Capelli bitableaux, namely the Capelli-Deruyts bitableaux of the form \({\bf K}^{\lambda}=[Der_{\lambda}^{*}|Der_{\lambda}]\in{\bf U}(gl(n))\). The main results we prove are the hook coefficient lemma and the expansion theorem. Capelli-Deruyts bitableaux \({\bf K}_{n}^{p}\) of rectangular shape are of particular interest since they are central elements in the enveloping algebra \({\bf U}(gl(n))\). The expansion theorem implies that the central element \({\bf K}_{n}^{p}\) is explicitly described as a polynomial in the classical Capelli central elements \({\bf H}_{n}^{(j)}\). The hook coefficient lemma implies that the Capelli-Deruyts bitableaux \({\bf K}_{n}^{p}\) are (canonically) expressed as the products of column determinants.
**Capelli-Deruyts bitableaux**
**and**
**the classical Capelli generators**
**of the center of the enveloping algebra \({\bf U}(gl(n))\)**
A. Brini and A. Teolis
\({}^{\flat}\) _Dipartimento di Matematica, Universita di Bologna_
_Piazza di Porta S. Donato, 5. 40126 Bologna. Italy._
e-mail: [email protected]
**Keyword**: Capelli bitableaux; Capelli-Deruyts bitableaux; Capelli column determinants; central elements in \({\bf U}(gl(n))\); Lie superalgebras.
**AMSC**: 17B10, 05E10, 17B35
###### Contents
* 1 Introduction
* 2 The classical Capelli identities
* 3 The Capelli-Deruyts bitableaux in \({\bf U}(gl(n))\)
* 3.1 Capelli-Deruyts bitableaux \({\bf K}^{\lambda}\) of shape \(\lambda\).
* 3.2 The Capelli-Deruyts bitableaux \({\bf K}_{\bf n}^{\bf p}\) of rectangular shape \(\lambda=n^{p}\).
* 4 The hook eigenvalue Theorem for Capelli-Deruyts bitableaux
* 5 The factorization Theorem for Capelli-Deruyts bitableaux
* 6 The center \(\zeta(n)\) of \({\bf U}(gl(n))\)
* 6.1 The Capelli generators of the center \(\zeta(n)\) of \({\bf U}(gl(n))\)
* 6.2 The factorization Theorem for rectangular Capelli-Deruyts bitableaux \({\bf K}_{\bf n}^{\bf p}\)
* 6.3 The Harish-Chandra isomorphism and the algebra \(\Lambda^{*}(n)\) of shifted symmetric polynomials
* 6.4 The Harish-Chandra isomorphism interpretation of Theorem 1 and Theorem 3
* 6.5 Polynomial identities
* 6.6 The shaped Capelli central elements \({\bf K}_{\lambda}(n)\)
* 7 Proof of Theorem 2
* 7.1 A commutation identity for enveloping algebras of Lie superalgebras
* 7.2 Some preliminary remarks and definitions
* 7.2.1 The virtual algebra and the Capelli devirtualization epimorphism
* 7.2.2 A more readable notation
* 7.2.3 The coproduct in \(\Lambda(V)=\Lambda({\cal L})\), Sweedler notation and _split notation_
* 7.3 Some lemmas
* 7.4 Proof of Theorem 2
* 8 Proof of Theorem 1
* 9 Appendix. A glimpse on the superalgebraic method of virtual variables
* 9.1 The general linear Lie super algebra \(gl(m|n)\)
* 9.2 The supersymmetric algebra \({\mathbb{C}}[M_{m|n,d}]\)
* 9.3 Left superderivations and left superpolarizations
* 9.4 The superalgebra \({\mathbb{C}}[M_{m|n,d}]\) as a \({\bf U}(gl(m|n))\)-module
* 9.5 The virtual algebra \(Virt(m,n)\) and the virtual presentations of elements in \({\bf U}(gl(n))\)
* 9.6 Bitableaux monomials and Capelli bitableaux in \({\bf U}(gl(n))\)
* 9.7 Biproducts and bitableaux in \({\mathbb{C}}[M_{m|n,d}]\)
## 1 Introduction
The study of the center \(\mathbf{\zeta}(n)\) of the enveloping algebra \({\bf U}(gl(n))\) of the general linear Lie algebra \(gl(n,{\mathbb{C}})\), and the study of the algebra \(\Lambda^{*}(n)\) of shifted symmetric polynomials have noble and rather independent origins and motivations. The theme of central elements in \({\bf U}(gl(n))\) is a standard one in the general theory of Lie algebras, see e.g. [18]. It is an old and actual one, since it is an offspring of the celebrated Capelli identity (see e.g. [11], [14], [21], [22], [36], [41], [42]), relates to its modern generalizations and applications (see e.g. [1], [24], [25], [29], [30], [31], [32], [40]) as well as to the theory of _Yangians_ (see, e.g. [27], [28]).
_Capelli bitableaux_\([S|T]\) and their variants (such as _Young-Capelli bitableaux_ and _double Young-Capelli bitableaux_) have been proved to be relevant in the study of the enveloping algebra \({\bf U}(gl(n))={\bf U}(gl(n),{\mathbb{C}})\) of the general linear Lie algebra and of its center \(\zeta(n)\).
To be more specific, the _superalgebraic method of virtual variables_ (see, e.g. [4], [5], [6], [7], [8], [9], [10]) allowed us to express remarkable classes of elements in \({\bf U}(gl(n))\), namely,
* the class of _Capelli bitableaux_\([S|T]\in{\bf U}(gl(n))\)
* the class of _Young-Capelli bitableaux_\([S|\ \underline{T}\ \in{\bf U}(gl(n))\)
* the class of _double Young-Capelli bitableaux_\([\ \underline{S\ |\ T}\ ]\in{\bf U}(gl(n))\)
as the images - with respect to the \(Ad_{gl(n)}\)-adjoint equivariant Capelli _devirtualization epimorphism_ - of simple expressions in an enveloping superalgebra \({\bf U}(gl(m_{0}|m_{1}+n))\) (see, e.g [10]).
Capelli (determinantal) bitableaux are generalizations of the famous _column determinant element_ in \({\bf U}(gl(n))\) introduced by Capelli in 1887 [11] (see, e.g. [9]). Young-Capelli bitableaux were introduced by the present authors several years ago [5], [6], [7] and might be regarded as generalizations of the Capelli column determinant elements in \({\bf U}(gl(n))\) as well as of the _Young symmetrizers_ of the classical representation theory of symmetric groups (see, e.g. [42]). Double Young-Capelli bitableaux play a crucial role in the study of the center \(\mathbf{\zeta}(n)\) of the enveloping algebra ([8], [10]).
In plain words, the Young-Capelli bitableau \([S|\ \underline{T}\ ]\) is obtained by adding a _column symmetrization_ to the Capelli bitableau \([S|T]\) and turns out to be a linear combination of
Capelli bitableaux (see, e.g. [10], Proposition 2.13). The double Young-Capelli bitableau \([\ \underline{S}\ |\ \underline{T}\ ]\) is obtained in an analogous way, by adding column symmetrizations to both tableaux.
In this paper, we consider a special class of Capelli bitableaux, namely the _Capelli-Deruyts bitableaux_
\[{\bf K}^{\lambda}=[Der_{\lambda}^{*}|Der_{\lambda}]\in{\bf U}(gl(n)), \tag{1}\]
where \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{p})\) is a partition with \(\lambda_{1}\leq n\), and
* \(Der_{\lambda}\) is the _Deruyts tableau_ of shape \(\lambda\), that is, the Young tableau of shape \(\lambda\): \[Der_{\lambda}=\left[\begin{array}{cccccc}1&2&\ldots&\ldots&\ldots&\lambda_{1}\\ 1&2&\ldots&\ldots&\lambda_{2}&\\ \ldots&\ldots&\ldots&\ldots&&\\ 1&2&\ldots&\lambda_{p}&&\end{array}\right]\]
* \(Der_{\lambda}^{*}\) is the _reverse Deruyts tableau_ of shape \(\lambda\), that is, the Young tableau of shape \(\lambda\): \[Der_{\lambda}^{*}=\left[\begin{array}{cccccc}\lambda_{1}&\ldots&\ldots&\ldots&2&1\\ \lambda_{2}&\ldots&\ldots&2&1&\\ \ldots&\ldots&\ldots&&&\\ \lambda_{p}&\ldots&2&1&&\end{array}\right].\] Capelli-Deruyts bitableaux arise, in a natural way, as generalizations to arbitrary shapes \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{p})\) of the well-known _Capelli column determinant2_ elements: Footnote 2: The symbol **cdet** denotes the column determinant of a matrix \(A=[a_{ij}]\) with noncommutative entries: \(\textbf{cdet}(A)=\sum_{\sigma}\ (-1)^{|\sigma|}\ a_{\sigma(1),1}a_{\sigma(2),2}\cdots a_{\sigma(n),n}\).
\[\mathbf{H}_{n}^{(n)}=\ \textbf{cdet}\left(\begin{array}{ccccc}e_{1,1}+(n-1)&e_{1,2 }&\ldots&e_{1,n}\\ e_{2,1}&e_{2,2}+(n-2)&\ldots&e_{2,n}\\ \vdots&\vdots&\vdots&\\ e_{n,1}&e_{n,2}&\ldots&e_{n,n}\end{array}\right)\in\mathbf{U}(gl(n)), \tag{2}\]
introduced by Alfredo Capelli [11] in the celebrated identities that bear his name (see, e.g. [11], [14], [21], [22], [36], [41], [42], [1], [24], [25], [29], [30], [31], [32], [40]).
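For instance, in the smallest nontrivial case \(n=2\), formula (2) reads
\[\mathbf{H}_{2}^{(2)}=\ \textbf{cdet}\left(\begin{array}{cc}e_{1,1}+1&e_{1,2}\\ e_{2,1}&e_{2,2}\end{array}\right)=(e_{1,1}+1)\,e_{2,2}-e_{2,1}e_{1,2}\in\mathbf{U}(gl(2)),\]
a central element of \(\mathbf{U}(gl(2))\) (cf. Proposition 3 below).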
The main results we prove are the following:
* **The hook coefficient lemma**: let \(v_{\mu}\) be a \(gl(n,\mathbb{C})\)-highest weight vector of weight \(\mu=(\mu_{1}\geq\mu_{2}\geq\ldots\geq\mu_{n})\), with \(\mu_{i}\in\mathbb{N}\) for every \(i=1,2,\ldots,n\). Then, \(v_{\mu}\) is an _eigenvector_ of the action of the Capelli-Deruyts bitableau \(\mathbf{K}^{\lambda}\) with _eigenvalue_ the (signed) product of _hook numbers_ in the Ferrers diagram of the partition \(\mu\) (Proposition 5).
* **The expansion theorem**: the Capelli-Deruyts bitableau \(\mathbf{K}^{\lambda}\in\mathbf{U}(gl(n))\) expands as a polynomial, with explicit coefficients, in the _Capelli generators_ \[\mathbf{H}_{k}^{(j)}=\sum_{1\leq i_{1}<\cdots<i_{j}\leq k}\ \textbf{cdet}\left(\begin{array}{cccc}e_{i_{1},i_{1}}+(j-1)&e_{i_{1},i_{2}}&\ldots&e_{i_{1},i_{j}}\\ e_{i_{2},i_{1}}&e_{i_{2},i_{2}}+(j-2)&\ldots&e_{i_{2},i_{j}}\\ \vdots&\vdots&&\vdots\\ e_{i_{j},i_{1}}&e_{i_{j},i_{2}}&\ldots&e_{i_{j},i_{j}}\end{array}\right)\]
of the centers of the enveloping algebras \({\bf U}(gl(k))\), \(k=1,2,\ldots,n\), \(j=1,2,\ldots,k\) (Theorem 3).
Capelli-Deruyts bitableaux \({\bf K_{n}^{p}}\) of _rectangular shape_ \(\lambda=n^{p}=(\,\overbrace{n,n,\ldots,n}^{p\ \mathrm{times}}\,)\) are of particular interest since they are _central elements_ in the enveloping algebra \({\bf U}(gl(n))\).
* The expansion theorem implies that the Capelli-Deruyts bitableau \({\bf K_{n}^{p}}\) (with \(p\) rows) equals the product of the Capelli-Deruyts bitableau \({\bf K_{n}^{p-1}}\) (with \(p-1\) rows) and the central element \[{\bf C}_{n}(p-1)=\sum_{j=0}^{n}\;(-1)^{n-j}(p-1)_{n-j}\;{\bf H}_{n}^{(j)}\] (see Corollary 1). Hence, by iterating this procedure, the central element \({\bf K_{n}^{p}}\) is explicitly described as a polynomial in the classical Capelli central elements \({\bf H}_{n}^{(j)}\) (see Corollary 3).
* The hook coefficient lemma implies (via the Harish-Chandra isomorphism) that the element \({\bf C}_{n}(p)\) also equals the column determinant element \[{\bf H}_{n}(p)={\bf cdet}\left[e_{h,k}+\delta_{hk}(-p+n-h)\right]_{h,k=1,\ldots,n}\in{\bf U}(gl(n)).\] Notice that \[{\bf H}_{n}(0)=\ {\bf cdet}\left(\begin{array}{cccc}e_{1,1}+(n-1)&e_{1,2}&\ldots&e_{1,n}\\ e_{2,1}&e_{2,2}+(n-2)&\ldots&e_{2,n}\\ \vdots&\vdots&&\vdots\\ e_{n,1}&e_{n,2}&\ldots&e_{n,n}\end{array}\right)={\bf H}_{n}^{(n)},\] the classical Capelli column determinant element. From these facts, the Capelli-Deruyts bitableaux \({\bf K_{n}^{p}}\) are (canonically) expressed as the products of column determinants: \[{\bf K_{n}^{p}}=(-1)^{n\binom{p}{2}}\;{\bf H}_{n}(p-1)\ \cdots\ {\bf H}_{n}(1)\;{\bf H}_{n}(0)\] (see Corollary 7).
The method of _superalgebraic virtual variables_ ([4], [5], [6], [7], [8], [9], [10]) plays a crucial role in the present paper; we provide a short presentation of the method in the Appendix.
## 2 The classical Capelli identities
The _algebra of algebraic forms \({\bf f}(\underline{x}_{1},\ldots,\underline{x}_{n})\) in \(n\) vector variables \(\underline{x}_{i}=(\underline{x}_{i1},\ldots,\underline{x}_{id})\) of dimension \(d\)_ is the polynomial algebra in \(n\times d\) (commutative) variables:
\[\mathbb{C}[M_{n,d}]=\mathbb{C}[x_{ij}]_{i=1,\ldots,n;j=1,\ldots,d},\]
and \(M_{n,d}\) denotes the matrix with \(n\) rows and \(d\) columns with "generic" entries \(x_{ij}\):
\[M_{n,d}=\left[x_{ij}\right]_{i=1,\ldots,n;j=1,\ldots,d}=\left[\begin{array}[] {ccc}x_{11}&\ldots&x_{1d}\\ x_{21}&\ldots&x_{2d}\\ \vdots&&\vdots\\ x_{n1}&\ldots&x_{nd}\end{array}\right]. \tag{3}\]
The algebra \(\mathbb{C}[M_{n,d}]\) is a \({\bf U}(gl(n))-\)module, with respect to the action:
\[e_{x_{j},x_{i}}\cdot{\bf f}=D^{l}_{x_{j},x_{i}}({\bf f}),\]
for every \({\bf f}\in\mathbb{C}[M_{n,d}]\), where, for any \(i,j=1,2,\ldots,n\), \(D^{l}_{x_{j},x_{i}}\) is the unique _derivation_ of the algebra \(\mathbb{C}[M_{n,d}]\) such that
\[D^{l}_{x_{j},x_{i}}(x_{hk})=\delta_{ih}\ x_{jk},\]
for every \(k=1,2,\ldots,d\).
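For instance (for \(n\geq 2\) and \(d\geq 3\)), the Leibniz rule gives
\[D^{l}_{x_{2},x_{1}}(x_{11}x_{13})=x_{21}x_{13}+x_{11}x_{23},\]
so \(D^{l}_{x_{2},x_{1}}\) acts as the polarization operator \(\sum_{k}x_{2k}\,\partial/\partial x_{1k}\).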
**Proposition 1**.: (**The Capelli identities**, **1887**)__
\[{\bf H}^{(n)}_{n}({\bf f})=\begin{cases}0&\text{if $n>d$}\\ \left[\underline{x}_{1},\ldots,\underline{x}_{n}\right]\,\Omega_{n}({\bf f})& \text{if $n=d$},\end{cases}\]
_where \({\bf f}(\underline{x}_{1},\ldots,\underline{x}_{n})\in\mathbb{C}[M_{n,d}]\) is an algebraic form (polynomial) in the \(n\) vector variables \(\underline{x}_{i}=(x_{i1},\ldots,x_{id})\) of dimension \(d\), and, if \(d=n\), \([\underline{x}_{1},\ldots,\underline{x}_{n}]\) is the bracket_
\[[\underline{x}_{1},\ldots,\underline{x}_{n}]=det\left[\begin{array}{ccc}x_{ 11}&\ldots&x_{1n}\\ \vdots&\vdots&\vdots\\ x_{n1}&\ldots&x_{nn}\end{array}\right],\]
_and \(\Omega_{n}\) is the Cayley \(\Omega\)-process_
\[\Omega_{n}=det\left[\begin{array}{ccc}\frac{\partial}{\partial x_{11}}& \ldots&\frac{\partial}{\partial x_{1n}}\\ \vdots&\vdots&\vdots\\ \frac{\partial}{\partial x_{n1}}&\ldots&\frac{\partial}{\partial x_{nn}}\end{array} \right].\]
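For example, for \(n=d=2\), writing \(D_{ij}:=D^{l}_{x_{i},x_{j}}=\sum_{k}x_{ik}\,\partial/\partial x_{jk}\) as shorthand for the polarization operators (so that \(e_{i,j}\) acts as \(D_{ij}\)), Proposition 1 specializes to the classical identity of operators on \(\mathbb{C}[M_{2,2}]\):
\[(D_{11}+1)D_{22}-D_{21}D_{12}=\det\left[\begin{array}{cc}x_{11}&x_{12}\\ x_{21}&x_{22}\end{array}\right]\ \det\left[\begin{array}{cc}\frac{\partial}{\partial x_{11}}&\frac{\partial}{\partial x_{12}}\\ \frac{\partial}{\partial x_{21}}&\frac{\partial}{\partial x_{22}}\end{array}\right].\]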
From [9], we recall that the determinant element \({\bf H}_{n}^{(n)}\) can be written as the (one row) _Capelli-Deruyts bitableau_\([n\ldots 21|12\ldots n]\) ([5], see also [8], [26]).
**Proposition 2**.: _The element_
\[{\bf H}_{n}^{(n)}=\mathbf{\mathit{cdet}}\left(\begin{array}{cccc}e _{1,1}+(n-1)&e_{1,2}&\ldots&e_{1,n}\\ e_{2,1}&e_{2,2}+(n-2)&\ldots&e_{2,n}\\ \vdots&\vdots&\vdots&\\ e_{n,1}&e_{n,2}&\ldots&e_{n,n}\end{array}\right)\in{\bf U}(gl(n))\]
_equals the one row Capelli-Deruyts bitableau (see, e.g. Subsection 9.6 below)_
\[[n\ldots 21|12\ldots n]={\mathfrak{p}}\left(e_{n,\alpha}\cdots e_{2,\alpha}e_{ 1,\alpha}\cdot e_{\alpha,1}e_{\alpha,2}\cdots e_{\alpha,n}\right),\]
_where \({\mathfrak{p}}\) denotes the Capelli devirtualization epimorphism (see, e.g. Subsection 9.5 below)._
From eq. (2) and Proposition 2, it follows:
**Proposition 3**.: _We have:_
1. _Let_ \(v_{\mu}\) _be a_ \(gl(n,\mathbb{C})\)_-highest weight vector of weight_ \(\mu=(\mu_{1}\geq\mu_{2}\geq\ldots\geq\mu_{n}),\) _with_ \(\mu_{i}\in\mathbb{N}\) _for every_ \(i=1,2,\ldots,n.\) _Then_ \(v_{\mu}\) _is an_ eigenvector _of the action of_ \({\bf H}_{n}^{(n)}\) _with_ eigenvalue_:_ \[(\mu_{1}+n-1)(\mu_{2}+n-2)\cdots\mu_{n}.\] _In symbols,_ \[{\bf H}_{n}^{(n)}\cdot v_{\mu}=\left((\mu_{1}+n-1)(\mu_{2}+n-2)\cdots\mu_{n} \right)\ v_{\mu}.\]
2. _The element_ \({\bf H}_{n}^{(n)}\) _is_ central _in the enveloping algebra_ \({\bf U}(gl(n)).\)__
## 3 The Capelli-Deruyts bitableaux in \({\bf U}(gl(n))\)
We generalize the _one row_ Capelli bitableau \({\bf H}_{n}^{(n)}=[n\ldots 21|12\ldots n]\) to arbitrary shapes (partitions)
\[\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{p}),\qquad\lambda_{ i}\in\mathbb{Z}^{+}.\]
### Capelli-Deruyts bitableaux \(\mathbf{K}^{\lambda}\) of shape \(\lambda\).
Given a partition (shape) \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{p})\), we recall that the _Deruyts tableau_ of shape \(\lambda\) is the Young tableau
\[Der_{\lambda}=(\underline{\lambda_{1}},\underline{\lambda_{2}},\ldots, \underline{\lambda_{p}}) \tag{4}\]
and the _reverse Deruyts tableau_ of shape \(\lambda\) is the Young tableau
\[Der^{*}_{\lambda}=(\underline{\lambda_{1}}^{*},\underline{\lambda_{2}}^{*}, \ldots,\underline{\lambda_{p}}^{*}),\]
where
\[\underline{\lambda_{i}}=1\ 2\ \cdots\ \lambda_{i}\]
and
\[\underline{\lambda_{i}}^{*}=\lambda_{i}\ \cdots\ 2\ 1,\]
for every \(i=1,2,\ldots,p\).
The _Capelli-Deruyts bitableau_\(\mathbf{K}^{\lambda}\) is the Capelli bitableau in \(\mathbf{U}(gl(n))\), \(n\geq\lambda_{1}\):
\[\mathbf{K}^{\lambda}=[Der^{*}_{\lambda}|Der_{\lambda}]=\mathfrak{p}\big{(}e_{ Der^{*}_{\lambda}C_{\lambda}}\cdot e_{C_{\lambda}Der_{\lambda}}\big{)},\]
where \(\mathfrak{p}\) denotes the Capelli devirtualization epimorphism and \(e_{Der^{*}_{\lambda}C_{\lambda}}\), \(e_{C_{\lambda}Der_{\lambda}}\) are _bitableau monomials_ (see, e.g. Subsection 9.6, eq. (9.6)).
**Example 1**.: _Let \(\lambda=(3,2,2)\). Then_
\[\mathbf{K}^{(3,2,2)}=\left[\begin{array}{ccc}3&2&1\\ 2&1\\ 2&1\end{array}\right|\begin{array}{ccc}1&2&3\\ 1&2\\ 1&2\end{array}\right]=\\ =\mathfrak{p}\big{(}e_{3\alpha_{1}}e_{2\alpha_{1}}e_{1\alpha_{1}}e_{2 \alpha_{2}}e_{1\alpha_{2}}e_{2\alpha_{3}}e_{1\alpha_{3}}\cdot e_{\alpha_{1}1}e _{\alpha_{1}2}e_{\alpha_{1}3}e_{\alpha_{2}1}e_{\alpha_{2}2}e_{\alpha_{3}1}e_{ \alpha_{3}2}\big{)}\in\mathbf{U}(gl(n)),\quad n\geq 3,\]
_where \(\alpha_{1},\alpha_{2},\alpha_{3}\) are (arbitrary, distinct) positive virtual symbols._
**Remark 1**.: _A Young tableau_
\[T=\left[\begin{array}{ccccc}x_{11}&x_{12}&\cdots&\cdots&\cdots&x_{1\lambda _{1}}\\ x_{21}&x_{22}&\cdots&\cdots&\cdots&x_{2\lambda_{2}}\\ \vdots&&&&\\ x_{i1}&x_{i2}&\cdots&\cdots&x_{i\lambda_{i}}\\ \vdots&&&&\\ x_{p1}&x_{p2}&\cdots&\cdots&x_{p\lambda_{p}}\end{array}\right],\ x_{ij}\in X, \tag{5}\]
of shape \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{p})\) over the set \(X\) is said to be of Deruyts type whenever_
\[\{x_{i1},\ x_{i2},\ldots,\ x_{i\lambda_{i}}\}\subseteq\{x_{i-1\ 1},\ x_{i-1\ 2},\ldots,\ x_{i-1\ \lambda_{i-1}}\},\]
_for \(i=2,\ldots,p\)._
_Clearly, any tableau of Deruyts type (5) can be regarded as a Deruyts tableau (4), by suitably renaming and reordering the entries._
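For instance, the tableau with rows \(x\ y\ z\), \(x\ z\), \(z\) (over \(X=\{x,y,z\}\)) is of Deruyts type: renaming \(z\mapsto 1\), \(x\mapsto 2\), \(y\mapsto 3\) and reordering each row increasingly turns it into \(Der_{(3,2,1)}\).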
### The Capelli-Deruyts bitableaux \(\mathbf{K_{n}^{p}}\) of rectangular shape \(\lambda=n^{p}\)
Given any positive integer \(p\), we define the _rectangular Capelli-Deruyts bitableau_, with \(p\) rows of length \(\lambda_{1}=\lambda_{2}=\cdots=\lambda_{p}=n\):
\[\mathbf{K_{n}^{p}}=\left[\begin{array}{cccccc}n&n-1&\ldots&3&2&1\\ n&n-1&\ldots&3&2&1\\ &&\ldots&&&\\ n&n-1&\ldots&3&2&1\end{array}\right|\left.\begin{array}{cccccc}1&2&3&\ldots&n-1&n\\ 1&2&3&\ldots&n-1&n\\ &&\ldots&&&\\ 1&2&3&\ldots&n-1&n\end{array}\right]\in\mathbf{U}(gl(n)).\]
From Proposition 26, we infer:
**Proposition 4**.: _The elements \(\mathbf{K_{n}^{p}}\) are central in \(\mathbf{U}(gl(n))\)._
Set, by definition, \(\mathbf{K_{n}^{0}}=1.\)
## 4 The hook eigenvalue Theorem for Capelli-Deruyts bitableaux
Any rectangular Capelli-Deruyts bitableau \(\mathbf{K_{n}^{p}}\) behaves well on \(gl(n,\mathbb{C})\)-highest weight vectors (compare with Proposition 3, item 1).
**Theorem 1**.: _(_**The hook coefficient lemma**_)_
_Let \(v_{\mu}\) be a highest weight vector of weight \(\mu=(\mu_{1}\geq\mu_{2}\geq\ldots\geq\mu_{n}),\) with \(\mu_{i}\in\mathbb{N}\) for every \(i=1,2,\ldots,n.\) Then \(v_{\mu}\) is an eigenvector of the action of \(\mathbf{K_{n}^{p}}\) with eigenvalue the (signed) product of hook numbers in the Ferrers diagram of the partition \(\mu\):_
\[(-1)^{\binom{p}{2}n}\ \left(\prod_{j=1}^{p}\ (\mu_{1}-j+n)(\mu_{2}-j+n-1) \cdots(\mu_{n}-j+1)\right).\]
_In symbols,_
\[{\bf K}_{\bf n}^{\bf p}\cdot v_{\mu}=(-1)^{\binom{p}{2}n}\ \left(\prod_{j=1}^{p}\ (\mu_{1}-j+n)(\mu_{2}-j+n-1)\cdots(\mu_{n}-j+1)\right)\ v_{\mu}.\]
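For instance, for \(n=p=2\) and \(\mu=(2,2)\), the eigenvalue equals
\[(-1)^{\binom{2}{2}\cdot 2}\ \big{(}(2-1+2)(2-1+1)\big{)}\big{(}(2-2+2)(2-2+1)\big{)}=(3\cdot 2)(2\cdot 1)=12,\]
which is indeed the product of the hook numbers \(3,2,2,1\) of the Ferrers diagram of \(\mu=(2,2)\).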
Theorem 1 generalizes to arbitrary Capelli-Deruyts bitableaux \({\bf K}^{\lambda}\) of shape \(\lambda\) as follows:
**Proposition 5**.: _Let \(v_{\mu}\) be a highest weight vector of weight \(\mu=(\mu_{1}\geq\mu_{2}\geq\ldots\geq\mu_{n}),\) with \(\mu_{i}\in\mathbb{N}\) for every \(i=1,2,\ldots,n.\) Let \(\lambda=(\lambda_{1}\geq\cdots\geq\lambda_{p})\) be a partition (shape). Then_
\[{\bf K}^{\lambda}\cdot v_{\mu}=\ (-1)^{\lambda_{p}(\lambda_{p-1}+ \cdots+\lambda_{1})+\lambda_{p-1}(\lambda_{p-2}+\cdots+\lambda_{1})+\cdots+ \lambda_{2}\lambda_{1}}\ \times\\ \times\left(\prod_{i=1}^{p}\ (\mu_{1}-i+\lambda_{i})(\mu_{2}-i+ \lambda_{i}-1)\cdots(\mu_{\lambda_{i}}-i+1)\right)\ v_{\mu}.\]
## 5 The factorization Theorem for Capelli-Deruyts bitableaux
Let \(J=\{j_{1}<j_{2}<\cdots<j_{k}\}\subseteq\underline{n}=\{1,2,\ldots,n\}\). With a slight abuse of notation, we write \(\underline{J}\) for the increasing word \(\underline{J}=j_{1}j_{2}\cdots j_{k}\) and \(\underline{J}^{*}\) for the decreasing word \(\underline{J}^{*}=j_{k}\cdots j_{2}j_{1}\).
Given a partition \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{p})\), set \(|\lambda|=\lambda_{1}+\lambda_{2}+\cdots+\lambda_{p}\).
We have
\[{\bf K}^{\lambda}=\left[\begin{array}{c}\underline{\lambda_{1}}^{*}\\ \underline{\lambda_{2}}^{*}\\ \vdots\\ \underline{\lambda_{p}}^{*}\end{array}\right|\left.\begin{array}{c}\underline{\lambda_{1}}\\ \underline{\lambda_{2}}\\ \vdots\\ \underline{\lambda_{p}}\end{array}\right]\]
and, consistently, we write, for \(J\subseteq M\),
\[\left[\begin{array}{c}{\bf K}^{\lambda}\\ J\end{array}\right]=\left[\begin{array}{c}\underline{\lambda_{1}}^{*}\\ \vdots\\ \underline{\lambda_{p}}^{*}\\ \underline{J}^{*}\end{array}\right|\left.\begin{array}{c}\underline{\lambda_{1}}\\ \vdots\\ \underline{\lambda_{p}}\\ \underline{J}\end{array}\right],\ \ \ [J]=[\underline{J}^{*}|\underline{J}].\]
**Theorem 2**.: (**The row insertion theorem**) _Let \(m\leq\lambda_{p}\). Given \(M\subseteq\underline{\lambda_{p}}\), \(|M|=m\), we have_
\[[M^{*}|M]\ \ {\bf K}^{\lambda}=\sum_{k=0}^{m}\ \langle p\rangle_{m-k}\ \sum_{J;\ J \subseteq M;\ |J|=k}\ (-1)^{|\lambda|k}\left[\begin{array}{c}{\bf K}^{\lambda}\\ J\end{array}\right],\]
_where \(\left\langle p\right\rangle_{j}\) denotes the raising factorial_
\[\left\langle p\right\rangle_{j}=p(p+1)\cdots(p+j-1).\]
**Theorem 3**.: (**The expansion theorem**) _Let \(m\leq\lambda_{p}\). Given \(M\subseteq\underline{\lambda_{p}}\), \(|M|=m\), we have_
\[(-1)^{|\lambda|m}\ \left[\begin{array}{c}\mathbf{K}^{\lambda}\\ M\end{array}\right]=\sum_{k=0}^{m}\ \ (-1)^{m-k}\left(p\right)_{m-k}\ \sum_{J;\ J\subseteq M;\ |J|=k}\ [ \underline{J}^{*}|J]\ \mathbf{K}^{\lambda},\]
_where \((p)_{j}\) denotes the falling factorial_
\[(p)_{j}=p(p-1)\cdots(p-j+1).\]
Proof.: By Theorem 2,
\[\sum_{k=0}^{m}\ (-1)^{m-k}\left(p\right)_{m-k}\ \sum_{J;\ J\subseteq M;\ |J|=k}\ [J]\ \mathbf{K}^{\lambda}=\] \[=\sum_{k=0}^{m}\ (-1)^{m-k}\left(p\right)_{m-k}\ \sum_{J;\ J\subseteq M;\ |J|=k}\ \sum_{i=0}^{k}\ \left\langle p\right\rangle_{k-i}\ \sum_{I;\ I\subseteq J;\ |I|=i}\ (-1)^{|\lambda|i}\left[\begin{array}{c}\mathbf{K}^{\lambda}\\ I\end{array}\right]=\] \[=\sum_{i=0}^{m}\ \sum_{k=i}^{m}\ \sum_{I;\ I\subseteq M;\ |I|=i}\ \Big{(}\sum_{J;\ M\supseteq J\supseteq I;\ |J|=k}(-1)^{m-k}\left(p\right)_{m-k}\ \left\langle p\right\rangle_{k-i}\Big{)}\ (-1)^{|\lambda|i}\left[\begin{array}{c}\mathbf{K}^{\lambda}\\ I\end{array}\right]=\] \[=\sum_{i=0}^{m}\ \sum_{I;\ I\subseteq M;\ |I|=i}\ \Big{(}(m-i)!\sum_{k=i}^{m}\ (-1)^{m-k}\binom{p}{m-k}\binom{p+k-i-1}{k-i}\Big{)}\ (-1)^{|\lambda|i}\left[\begin{array}{c}\mathbf{K}^{\lambda}\\ I\end{array}\right]=\] \[=\sum_{i=0}^{m}\ \sum_{I;\ I\subseteq M;\ |I|=i}\ \big{(}(m-i)!\ \delta_{m,i}\big{)}\ (-1)^{|\lambda|i}\left[\begin{array}{c}\mathbf{K}^{\lambda}\\ I\end{array}\right]=(-1)^{|\lambda|m}\left[\begin{array}{c}\mathbf{K}^{\lambda}\\ M\end{array}\right].\]

Here we used that the number of \(J\) with \(M\supseteq J\supseteq I\) and \(|J|=k\) is \(\binom{m-i}{k-i}\), that \(\binom{m-i}{k-i}\,(m-k)!\,(k-i)!=(m-i)!\), and that \[\sum_{k=i}^{m}\ (-1)^{m-k}\binom{p}{m-k}\binom{p+k-i-1}{k-i}=\delta_{m,i},\] the left-hand side being the coefficient of \(z^{m-i}\) in \((1-z)^{p}(1-z)^{-p}=1\).
**Example 2**.:
1. _We have_ \[[21|12]\left[\begin{array}{ccc}3&2&1\\ 2&1&\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&\end{array}\right.\right]=6\left[\begin{array}{ccc}3&2&1\\ 2&1&\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&\end{array}\right.\right]-2\left[\begin{array}{ccc}3&2&1\\ 2&1&\\ 1&&\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&\\ 1&&\end{array}\right.\right]\] \[-2\left[\begin{array}{ccc}3&2&1\\ 2&1&\\ 2&&\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&\\ 2&&\end{array}\right.\right]+\left[\begin{array}{ccc}3&2&1\\ 2&1&\\ 2&1&\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&\\ 1&2&\end{array}\right.\right].\]
2. _We have_ \[\left[\begin{array}{ccc}3&2&1\\ 2&1&\\ 2&1&\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&\\ 1&2&\end{array}\right.\right]=2\left[\begin{array}{ccc}3&2&1\\ 2&1&\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&\end{array}\right.\right]-2\left[1|1\right]\left[\begin{array}{ccc}3&2&1\\ 2&1&\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&\end{array}\right.\right]\] \[-2\left[2|2\right]\left[\begin{array}{ccc}3&2&1\\ 2&1&\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&\end{array}\right.\right]+\left[2\ 1|1\ 2\right]\left[\begin{array}{ccc}3&2&1\\ 2&1&\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&\end{array}\right.\right].\]
## 6 The center \(\zeta(n)\) of \({\bf U}(gl(n))\)
### The Capelli generators of the center \(\zeta(n)\) of \({\bf U}(gl(n))\)
In the enveloping algebra \({\bf U}(gl(n))\), fix any increasing \(k\)-tuple of integers \(1\leq i_{1}<\cdots<i_{k}\leq n\).
We recall that the column determinant
\[{\bf cdet}\left(\begin{array}{cccc}e_{i_{1},i_{1}}+(k-1)&e_{i_{1},i_{2}}& \ldots&e_{i_{1},i_{k}}\\ e_{i_{2},i_{1}}&e_{i_{2},i_{2}}+(k-2)&\ldots&e_{i_{2},i_{k}}\\ \vdots&\vdots&\vdots&\\ e_{i_{k},i_{1}}&e_{i_{k},i_{2}}&\ldots&e_{i_{k},i_{k}}\end{array}\right)\in{ \bf U}(gl(n))\]
equals the _one-row_ Capelli-Deruyts bitableau
\[[i_{k}i_{k-1}\cdots i_{1}|i_{1}\cdots i_{k-1}i_{k}]={\mathfrak{p}}\left(e_{i_ {k}\alpha}e_{i_{k-1}\alpha}\cdots e_{i_{1}\alpha}e_{\alpha i_{1}}\cdots e_{ \alpha i_{k-1}}e_{\alpha i_{k}}\right)\in{\bf U}(gl(n))\]
(see, e.g. [9]).
Consider the \(k\)-th _Capelli element_
\[{\bf H}_{n}^{(k)}=\ \sum_{1\leq i_{1}<\cdots<i_{k}\leq n}\ {\bf cdet}\left(\begin{array}{cccc}e_{i_{1},i_{1}}+(k-1)&e_{i_{1},i_{2}}& \ldots&e_{i_{1},i_{k}}\\ e_{i_{2},i_{1}}&e_{i_{2},i_{2}}+(k-2)&\ldots&e_{i_{2},i_{k}}\\ \vdots&\vdots&\vdots&\\ e_{i_{k},i_{1}}&e_{i_{k},i_{2}}&\ldots&e_{i_{k},i_{k}}\end{array}\right)\]
Clearly, we have
\[{\bf H}_{n}^{(k)}=\sum_{1\leq i_{1}<\cdots<i_{k}\leq n}[i_{k}\cdots i_{2}i_{1}|i_{ 1}i_{2}\cdots i_{k}]. \tag{6}\]
We recall the following fundamental result, proved by Capelli in two papers ([12], [13]) with misleading titles.
**Proposition 6**.: \((\)**Capelli**, **1893**\()\) _Let \(\zeta(n)\) denote the center of \({\bf U}(gl(n))\). We have:_
* _The elements_ \({\bf H}_{n}^{(k)}\)_,_ \(k=1,2,\ldots,n\) _belong to the center_ \(\zeta(n)\)_._
* _The subalgebra_ \(\zeta(n)\) _of_ \({\bf U}(gl(n))\) _is the polynomial algebra_ \[\zeta(n)\ =\ \mathbb{C}[{\bf H}_{n}^{(1)},{\bf H}_{n}^{(2)},\ldots,{\bf H}_{n}^{(n)}],\] _where_ \[{\bf H}_{n}^{(1)},{\bf H}_{n}^{(2)},\ldots,{\bf H}_{n}^{(n)}\] _is a set of algebraically independent generators of_ \(\zeta(n)\)_._
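For instance, for \(n=2\) one has \(\mathbf{H}_{2}^{(1)}=e_{1,1}+e_{2,2}\) and \(\mathbf{H}_{2}^{(2)}=(e_{1,1}+1)e_{2,2}-e_{2,1}e_{1,2}\), and \(\zeta(2)=\mathbb{C}[\mathbf{H}_{2}^{(1)},\mathbf{H}_{2}^{(2)}]\) is a polynomial algebra in these two algebraically independent central elements.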
### The factorization Theorem for rectangular Capelli-Deruyts bitableaux \({\bf K}_{\bf n}^{\bf p}\)
The crucial result in this section is that Capelli-Deruyts bitableaux \({\bf K}_{\bf n}^{\bf p}\) of _rectangular_ shape \(\lambda=n^{p}\) expand into _commutative_ polynomials in the Capelli elements \({\bf H}_{n}^{(j)}\), with explicit coefficients.
The next result was announced, without proof, in [3]. By eq. (6), it is a special case of Theorem 3.
**Corollary 1**.: (**Expansion Theorem**)
_Let \(p\in\mathbb{N}\) and set \({\bf H}_{n}^{(0)}={\bf 1},\) by definition. The following identity in \(\zeta(n)\) holds:_
\[{\bf K}_{\bf n}^{\bf p}=(-1)^{n(p-1)}\ {\bf C}_{n}(p-1)\ {\bf K}_{\bf n}^{{\bf p }-{\bf 1}},\]
_where, given \(p\in\mathbb{N}\),_
\[{\bf C}_{n}(p-1)=\sum_{j=0}^{n}\ (-1)^{n-j}(p-1)_{n-j}\ {\bf H}_{n}^{(j)}. \tag{7}\]
_where_
\[(m)_{k}=m(m-1)\cdots(m-k+1),\ m,k\in\mathbb{N}\]
_denotes the falling factorial coefficient._
If \(p=1\), the above identity collapses to
\[{\bf K_{n}^{1}}={\bf H}_{n}^{(n)}={\bf C}_{n}(0).\]
Notice that the linear relations (7), expressing \({\bf C}_{n}(0),{\bf C}_{n}(1),\ldots,{\bf C}_{n}(n-1)\) in terms of the Capelli elements \({\bf H}_{n}^{(j)}\), yield a nonsingular triangular coefficient matrix.
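For instance, for \(n=2\) relations (7) read \({\bf C}_{2}(0)={\bf H}_{2}^{(2)}\) and \({\bf C}_{2}(1)={\bf H}_{2}^{(2)}-{\bf H}_{2}^{(1)}\), whence \({\bf H}_{2}^{(2)}={\bf C}_{2}(0)\) and \({\bf H}_{2}^{(1)}={\bf C}_{2}(0)-{\bf C}_{2}(1)\).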
**Corollary 2**.: _The subalgebra \(\zeta(n)\) of \({\bf U}(gl(n))\) is the polynomial algebra_
\[\zeta(n)\ =\ {\mathbb{C}}[{\bf C}_{n}(0),{\bf C}_{n}(1),\ldots,{\bf C}_{n}(n-1)],\]
_where_
\[{\bf C}_{n}(0),{\bf C}_{n}(1),\ldots,{\bf C}_{n}(n-1)\]
_is a set of algebraically independent generators of \(\zeta(n)\)._
**Corollary 3**.: _The rectangular Capelli-Deruyts bitableau \({\bf K_{n}^{p}}\) equals the commutative polynomial in the Capelli generators:_
\[{\bf K_{n}^{p}}=(-1)^{n\binom{p}{2}}\ {\bf C}_{n}(p-1)\ \cdots\ {\bf C}_{n}(1)\ {\bf C}_{n}(0).\]
**Example 3**.: _Let \(n=3\), \(p=2\). Then_
\[{\bf K_{3}^{2}}=\left[\begin{array}{cc}3\ 2\ 1\\ 3\ 2\ 1\end{array}\left|\begin{array}{cc}1\ 2\ 3\\ 1\ 2\ 3\end{array}\right.\right]=\ -\ {\bf C}_{3}(1)\ {\bf C}_{3}(0)=\left({\bf H }_{3}^{(2)}-{\bf H}_{3}^{(3)}\right){\bf H}_{3}^{(3)}.\]
### The Harish-Chandra isomorphism and the algebra \(\Lambda^{*}(n)\) of shifted symmetric polynomials
In this subsection we follow A. Okounkov and G. Olshanski [33].
As in the classical context of the algebra \(\Lambda(n)\) of symmetric polynomials in \(n\) variables \(x_{1},x_{2},\ldots,x_{n}\), the algebra \(\Lambda^{*}(n)\) of _shifted symmetric polynomials_ is an algebra of polynomials \(p(x_{1},x_{2},\ldots,x_{n})\) but the ordinary symmetry is replaced by the _shifted symmetry_:
\[f(x_{1},\ldots,x_{i},x_{i+1},\ldots,x_{n})=f(x_{1},\ldots,x_{i+1}-1,x_{i}+1, \ldots,x_{n}),\]
for \(i=1,2,\ldots,n-1\).
The _shifted elementary symmetric polynomials_ are the elements of \(\Lambda^{*}(n)\)
* for every \(k\in{\mathbb{Z}}^{+}\), \[{\bf e}_{k}^{*}(x_{1},x_{2},\ldots,x_{n})=\sum_{1\leq i_{1}<i_{2}<\cdots<i_{k}\leq n}\ (x_{i_{1}}+k-1)(x_{i_{2}}+k-2)\cdots(x_{i_{k}}),\]
* \(\mathbf{e}_{0}^{*}(x_{1},x_{2},\ldots,x_{n})=\mathbf{1}\).
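For instance, for \(n=2\) one has \(\mathbf{e}_{1}^{*}(x_{1},x_{2})=x_{1}+x_{2}\) and \(\mathbf{e}_{2}^{*}(x_{1},x_{2})=(x_{1}+1)x_{2}\); the latter is indeed shifted symmetric, since \(((x_{2}-1)+1)(x_{1}+1)=(x_{1}+1)x_{2}\).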
The _Harish-Chandra isomorphism_ is the algebra isomorphism
\[\chi_{n}:\zeta(n)\longrightarrow\Lambda^{*}(n),\hskip 28.452756ptA\mapsto\chi_{n}(A),\]
\(\chi_{n}(A)\) being the shifted symmetric polynomial such that, for every highest weight module \(V_{\mu}\), the evaluation \(\chi_{n}(A)(\mu_{1},\mu_{2},\ldots,\mu_{n})\) equals the eigenvalue of \(A\in\zeta(n)\) in \(V_{\mu}\) ([33], Proposition 2.1).
### The Harish-Chandra isomorphism interpretation of Theorem 1 and Theorem 3
Notice that
\[\chi_{n}(\mathbf{H}_{n}^{(r)})=\mathbf{e}_{r}^{*}(x_{1},x_{2},\ldots,x_{n})\in \Lambda^{*}(n),\]
for every \(r=1,2,\ldots,n\).
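For instance, for \(n=2\) and \(r=2\): by Proposition 3, \(\mathbf{H}_{2}^{(2)}\) acts on a highest weight vector of weight \(\mu=(\mu_{1}\geq\mu_{2})\) with eigenvalue \((\mu_{1}+1)\mu_{2}=\mathbf{e}_{2}^{*}(\mu_{1},\mu_{2})\), in agreement with the above.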
Furthermore, from Theorem 1 it follows
**Corollary 4**.: \[\chi_{n}(\mathbf{K}_{\mathbf{n}}^{\mathbf{p}})=(-1)^{\binom{p}{2}n}\left(\prod_{j=1}^{p}\;(x_{1}-j+n)(x_{2}-j+n-1)\cdots(x_{n}-j+1)\right).\]
By Corollary 1, we have
\[\chi_{n}(\mathbf{K}_{\mathbf{n}}^{\mathbf{p+1}})=\ (-1)^{np}\ \chi_{n}(\mathbf{C}_{n}(p))\ \chi_{n}(\mathbf{K}_{\mathbf{n}}^{\mathbf{p}}),\]
which, combined with Corollary 4, implies
**Proposition 7**.: _For every \(p\in\mathbb{N}\),_
\[\chi_{n}(\mathbf{C}_{n}(p))=(x_{1}-p+n-1)(x_{2}-p+n-2)\cdots(x_{n}-p).\]
**Proposition 8**.: _The set_
\[\chi_{n}(\mathbf{C}_{n}(0)),\ \chi_{n}(\mathbf{C}_{n}(1)),\ \ldots\,\ \chi_{n}( \mathbf{C}_{n}(n-1))\]
_is a system of algebraically independent generators of the ring \(\Lambda^{*}(n)\) of shifted symmetric polynomials in the variables \(x_{1},x_{2},\ldots,x_{n}\)._
Given \(p\in\mathbb{N}\), consider the column determinant
\[\mathbf{H}_{n}(p)=\mathbf{cdet}\left(\begin{array}{cccc}e_{1,1}-p+(n-1)&e_{1,2 }&\ldots&e_{1,n}\\ e_{2,1}&e_{2,2}-p+(n-2)&\ldots&e_{2,n}\\ \vdots&\vdots&\vdots&\\ e_{n,1}&e_{n,2}&\ldots&e_{n,n}-p\end{array}\right). \tag{8}\]
We recall a standard result (for an elementary proof see e.g. [41]):
**Proposition 9**.: _For every \(p\in\mathbb{N}\), the element_
\[\mathbf{H}_{n}(p)=\boldsymbol{cdet}[e_{h,k}+\delta_{hk}(-p+n-h)]_{h,k=1,\ldots,n}\in\mathbf{U}(gl(n)).\]
_is central. In symbols, \(\mathbf{H}_{n}(p)\in\zeta(n)\)._
Equation (8), Proposition 9 and Proposition 7 imply
\[\chi_{n}(\mathbf{H}_{n}(p))=(x_{1}-p+n-1)(x_{2}-p+n-2)\cdots(x_{n}-p)=\chi_{n}( \mathbf{C}_{n}(p)).\]
Hence, we get the well-known identity (see, e.g. [27]):
**Corollary 5**.: _For every \(p\in\mathbb{N}\), we have_
\[\mathbf{H}_{n}(p) =\boldsymbol{cdet}[e_{h,k}+\delta_{hk}(-p+n-h)]_{h,k=1,\ldots,n}\] \[=\sum_{j=0}^{n}\;(-1)^{n-j}(p)_{n-j}\;\mathbf{H}_{n}^{(j)}= \mathbf{C}_{n}(p).\]
**Corollary 6**.: _The subalgebra \(\zeta(n)\) of \(\mathbf{U}(gl(n))\) is the polynomial algebra_
\[\zeta(n)\ =\ \mathbb{C}[\mathbf{H}_{n}(0),\mathbf{H}_{n}(1),\ldots,\mathbf{H}_{ n}(n-1)],\]
_where_
\[\mathbf{H}_{n}(0),\mathbf{H}_{n}(1),\ldots,\mathbf{H}_{n}(n-1)\]
_is a set of algebraically independent generators of \(\zeta(n)\)._
**Corollary 7**.: _The rectangular Capelli-Deruyts bitableau \(\mathbf{K}_{\mathbf{n}}^{\mathbf{p}}\) equals the product of column determinants:_
\[\mathbf{K}_{\mathbf{n}}^{\mathbf{p}}=(-1)^{n\binom{p}{2}}\;\mathbf{H}_{n}(p- 1)\;\cdots\;\mathbf{H}_{n}(1)\;\mathbf{H}_{n}(0).\]
**Example 4**.: _Let \(n=3\), \(p=2\). Then_
\[\mathbf{K_{3}^{2}}=\left[\begin{array}{ccc}3&2&1\\ 3&2&1\end{array}\left|\begin{array}{ccc}1&2&3\\ 1&2&3\end{array}\right.\right]=-\ \mathbf{H}_{3}(1)\ \mathbf{H}_{3}(0)=\\ =-\boldsymbol{cdet}\left(\begin{array}{ccc}e_{1,1}+1&e_{1,2}&e_{1,3 }\\ e_{2,1}&e_{2,2}&e_{2,3}\\ e_{3,1}&e_{3,2}&e_{3,3}-1\end{array}\right)\ \boldsymbol{cdet}\left(\begin{array}{ cccc}e_{1,1}+2&e_{1,2}&e_{1,3}\\ e_{2,1}&e_{2,2}+1&e_{2,3}\\ e_{3,1}&e_{3,2}&e_{3,3}\end{array}\right).\]
Corollaries 3 and 7 generalize to Capelli-Deruyts bitableaux \(\mathbf{K}^{\lambda}\) of arbitrary shape \(\lambda\). Theorem 3 implies:
**Proposition 10**.: _Let \(n\in\mathbb{Z}\), \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{p})\), \(\lambda_{1}\leq n\). Set \(\lambda^{\prime}=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{p-1})\). Then_
\[\mathbf{K}^{\lambda}=(-1)^{\lambda_{p}(\lambda_{p-1}+\cdots+\lambda_{1})}\ \mathbf{C}_{\lambda_{p}}(p-1)\ \mathbf{K}^{\lambda^{\prime}},\]
_where_
\[\mathbf{C}_{\lambda_{p}}(p-1)=\sum_{j=0}^{\lambda_{p}}\ (-1)^{\lambda_{p}-j}\ (p- 1)_{\lambda_{p}-j}\ \mathbf{H}_{\lambda_{p}}^{(j)}.\]
**Corollary 8**.: _Let \(n\in\mathbb{Z}\), \(\lambda=(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{p})\), \(\lambda_{1}\leq n\). For \(i=1,2,\ldots,p\), set_
\[\mathbf{C}_{\lambda_{i}}(i-1)=\sum_{j=0}^{\lambda_{i}}\ (-1)^{\lambda_{i}-j}\ (i-1)_{ \lambda_{i}-j}\ \mathbf{H}_{\lambda_{i}}^{(j)}.\]
_Then,_
1. _The element_ \(\mathbf{C}_{\lambda_{i}}(i-1)\) _is_ central _in the enveloping algebra_ \(\mathbf{U}(gl(\lambda_{i}))\)_, for_ \(i=1,2,\ldots,p\)_._
2. _The Capelli-Deruyts bitableau_ \(\mathbf{K}^{\lambda}\) _equals the polynomial in the_ Capelli elements__\(\mathbf{H}_{\lambda_{i}}^{(j)}\)_:_ \[\mathbf{K}^{\lambda}=(-1)^{\lambda_{p}(\lambda_{p-1}+\cdots+\lambda_{1})+ \cdots+\lambda_{2}\lambda_{1}}\ \mathbf{C}_{\lambda_{p}}(p-1)\cdots\mathbf{C}_{ \lambda_{2}}(1)\ \mathbf{C}_{\lambda_{1}}(0).\]
**Example 5**.: _Let \(n=3\), \(\lambda=(3,2)\) and let_
\[{\bf K}^{(3,2)}=\ \left[\begin{array}{cc}3&2&1\\ 2&1\end{array}\right|\begin{array}{cc}1&2&3\\ 1&2\end{array}\right].\]
_Then,_
\[{\bf K}^{(3,2)}=\ {\bf C}_{2}(1)\ {\bf C}_{3}(0)=\left({\bf H}_{2}^{(2)}-{\bf H }_{2}^{(1)}\right){\bf H}_{3}^{(3)}.\]
For \(i=1,2,\ldots,p\), consider the center \(\zeta(\lambda_{i})\) of \({\bf U}(gl(\lambda_{i}))\) and the Harish-Chandra isomorphisms
\[\chi_{\lambda_{i}}:\zeta(\lambda_{i})\longrightarrow\Lambda^{*}(\lambda_{i}).\]
Proposition 5 and Proposition 10 imply:
\[\chi_{\lambda_{i}}\big{(}{\bf C}_{\lambda_{i}}(i-1)\big{)}=\ (x_{1}-i+ \lambda_{i})(x_{2}-i+\lambda_{i}-1)\cdots(x_{\lambda_{i}}-i+1). \tag{9}\]
Proposition 9 implies that the element
\[{\bf H}_{\lambda_{i}}(i-1)={\bf cdet}\left[e_{h,k}+\delta_{hk}(\lambda_{i}-i -h+1)\right]_{h,k=1,\ldots,\lambda_{i}}\in{\bf U}(gl(\lambda_{i})).\]
is central in the enveloping algebra \({\bf U}(gl(\lambda_{i}))\). In symbols, \({\bf H}_{\lambda_{i}}(i-1)\in\zeta(\lambda_{i})\).
Clearly,
\[\chi_{\lambda_{i}}\big{(}{\bf H}_{\lambda_{i}}(i-1)\big{)}=\ (x_{1}-i+ \lambda_{i})(x_{2}-i+\lambda_{i}-1)\cdots(x_{\lambda_{i}}-i+1),\]
and, therefore, from eq. (9), we have
**Corollary 9**.: \({\bf H}_{\lambda_{i}}(i-1)={\bf C}_{\lambda_{i}}(i-1).\)__
From Corollary 8, we have
**Corollary 10**.: _The Capelli-Deruyts bitableau \({\bf K}^{\lambda}\) equals the product of column determinants:_
\[{\bf K}^{\lambda}=(-1)^{\lambda_{p}(\lambda_{p-1}+\cdots+\lambda_{1})+\cdots+ \lambda_{2}\lambda_{1}}\ {\bf H}_{\lambda_{p}}(p-1)\cdots{\bf H}_{\lambda_{2}}(1)\ {\bf H}_{\lambda_{1}}(0).\]
**Example 6**.: _We have_
\[{\bf K}^{(3,2)}=\left[\begin{array}{ccc}3&2&1\\ 2&1&\end{array}\right|\left.\begin{array}{ccc}1&2&3\\ 1&2&\end{array}\right]={\bf H}_{2}(1)\ {\bf H}_{3}(0)=\\ ={\boldsymbol{cdet}}\left(\begin{array}{cc}e_{1,1}&e_{1,2}\\ e_{2,1}&e_{2,2}-1\end{array}\right)\ {\boldsymbol{cdet}}\left(\begin{array}{ccc}e_{1,1}+2&e_{1,2}&e_{1,3}\\ e_{2,1}&e_{2,2}+1&e_{2,3}\\ e_{3,1}&e_{3,2}&e_{3,3}\end{array}\right).\]
### Polynomial identities
Let \(t\) be a variable and consider the polynomial
\[{\bf H}_{n}(t)={\bf cdet}\left(\begin{array}{cccc}e_{1,1}-t+(n-1)&e_{1,2}& \ldots&e_{1,n}\\ e_{2,1}&e_{2,2}-t+(n-2)&\ldots&e_{2,n}\\ \vdots&\vdots&\vdots&\\ e_{n,1}&e_{n,2}&\ldots&e_{n,n}-t\end{array}\right)=\]
\[={\bf cdet}\left[e_{i,j}+\delta_{ij}(-t+n-i)\right]_{i,j=1,\ldots,n}\]
with coefficients in \({\bf U}(gl(n))\).
**Corollary 11**.: _(see, e.g. [41]) In the polynomial algebra \(\zeta(n)[t]\), the following identity holds:_
\[{\bf H}_{n}(t)=\sum_{j=0}^{n}\;(-1)^{n-j}\;{\bf H}_{n}^{(j)}\;(t)_{n-j},\]
_where, for every \(k\in\mathbb{N}\), \((t)_{k}=t(t-1)\cdots(t-k+1)\) denotes the \(k-\)th falling factorial polynomial._
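For \(n=2\) the identity can be verified by hand:
\[\mathbf{H}_{2}(t)=(e_{1,1}-t+1)(e_{2,2}-t)-e_{2,1}e_{1,2}=(t)_{2}-\mathbf{H}_{2}^{(1)}\,(t)_{1}+\mathbf{H}_{2}^{(2)},\]
since \(\mathbf{H}_{2}^{(1)}=e_{1,1}+e_{2,2}\) and \(\mathbf{H}_{2}^{(2)}=(e_{1,1}+1)e_{2,2}-e_{2,1}e_{1,2}\).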
**Corollary 12**.: _In the polynomial algebra \(\Lambda^{*}(n)[t]\), the following identity holds:_
\[(x_{1}-t+n-1)(x_{2}-t+n-2)\cdots(x_{n}-t)=\sum_{j=0}^{n}\;(-1)^{n-j}\;{\bf e} _{j}^{*}(x_{1},x_{2},\ldots,x_{n})\;(t)_{n-j}.\]
Following Molev [28], Chapter 7 (see also Howe and Umeda [22]), consider the "Capelli determinant"
\[{\cal C}_{n}(s)={\bf cdet}\left(\begin{array}{cccc}e_{1,1}+s&e_{1,2}&\ldots &e_{1,n}\\ e_{2,1}&e_{2,2}+s-1&\ldots&e_{2,n}\\ \vdots&\vdots&\vdots&\\ e_{n,1}&e_{n,2}&\ldots&e_{n,n}+s-(n-1)\end{array}\right)=\]
\[={\bf cdet}\left[e_{i,j}+\delta_{ij}(s-i+1)\right]_{i,j=1,\ldots,n},\]
regarded as a polynomial in the variable \(s\).
By the formal (column) Laplace rule, the coefficients \({\cal C}_{n}^{(h)}\in{\bf U}(gl(n))\) in the expansion
\[{\cal C}_{n}(s)=s^{n}+{\cal C}_{n}^{(1)}s^{n-1}+{\cal C}_{n}^{(2)}s^{n-2}+ \ldots+{\cal C}_{n}^{(n)},\]
are the sums of the minors:
\[{\cal C}_{n}^{(h)}=\sum_{1\leq i_{1}<i_{2}<\ldots<i_{h}\leq n}\ {\cal M}_{i_{1},i_{2}, \ldots,i_{h}},\]
where \({\cal M}_{i_{1},i_{2},\ldots,i_{h}}\) denotes the column determinant of the submatrix of the matrix \({\cal C}_{n}(0)\) obtained by selecting the rows and the columns with indices \(i_{1}<i_{2}<\ldots<i_{h}\).
Since \({\cal C}_{n}(s)={\bf H}_{n}(-s+(n-1))\), from Corollary 11 it follows:
**Corollary 13**.: \[{\cal C}_{n}(s)=\sum_{j=0}^{n}\ (-1)^{n-j}(-s+(n-1))_{n-j}\ {\bf H}_{n}^{(j)}.\]
**Corollary 14**.: _We have:_
* _The elements_ \({\cal C}_{n}^{(h)},\ h=1,2,\ldots,n\) _are central and provide a system of algebraically independent generators of_ \(\zeta(n)\)_._
* \(\chi_{n}({\cal C}_{n}^{(h)})=\bar{\bf e}_{h}(x_{1},x_{2},\ldots,x_{n})={\bf e} _{h}(x_{1},x_{2}-1,\ldots,x_{n}-(n-1)),\) _where_ \({\bf e}_{h}\) _denotes the_ \(h-\)_th elementary symmetric polynomial._
### The shaped Capelli central elements \({\bf K}_{\lambda}(n)\)
Given a partition \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{p})\), \(\lambda_{1}\leq n\), consider the _shaped Capelli elements_ (see [9])
\[{\bf K}_{\lambda}(n)=\sum_{S}\ \mathfrak{p}\big{(}e_{S,C_{\lambda}^{*}}\cdot e_{C_{\lambda}^{*},S}\big{)}=\sum_{S}\ [S|S]\in{\bf U}(gl(n)),\]
where the sum is extended to all row-increasing tableaux \(S\), \(sh(S)=\lambda\).
Notice that the elements \({\bf K}_{\lambda}(n)\) are _radically different_ both from the elements \({\bf H}_{\lambda}(n)={\bf H}_{\lambda_{1}}(n)\cdots{\bf H}_{\lambda_{p}}(n)\) and from the elements \({\bf K}^{\lambda}\).
Since the adjoint representation acts by derivation, we have
\[ad(e_{ij})\big{(}\sum_{S}\ e_{S,C_{\lambda}^{*}}\cdot e_{C_{\lambda}^{*},S} \big{)}=0,\]
for every \(e_{ij}\in gl(n)\); then, from Proposition 26, it follows
**Proposition 11**.: _The elements \({\bf K}_{\lambda}(n)\) are central in \({\bf U}(gl(n))\)._
Let \(\mathbf{\zeta}(n)^{(m)}\) be the \(m\)-th filtration element of the center \(\mathbf{\zeta}(n)\) of \({\bf U}(gl(n))\).
Clearly, \({\bf K}_{\lambda}(n),{\bf H}_{\lambda}(n)\in\mathbf{\zeta}(n)^{(m)}\) if and only if \(m\geq|\lambda|\).
**Proposition 12**.: \[\mathbf{K}_{\lambda}(n)=\pm\mathbf{H}_{\lambda}(n)+\sum\ c_{\lambda,\mu}\mathbf{F} _{\mu}(n),\]
_where \(\mathbf{F}_{\mu}(n)\in\boldsymbol{\zeta}(n)^{(m)}\) for some \(m<|\lambda|\)._
Proof.: Immediate from Corollary 16.
Therefore, the central elements \(\mathbf{K}_{\lambda}(n)\), \(|\lambda|\leq m\) are linearly independent in \(\boldsymbol{\zeta}(n)^{(m)}\), and the next result follows at once.
**Proposition 13**.: _The set_
\[\big{\{}\mathbf{K}_{\lambda}(n);\lambda_{1}\leq n\ \big{\}}\]
_is a linear basis of the center \(\boldsymbol{\zeta}(n)\)._
Let \(\mathcal{K}\) be the _Koszul equivariant isomorphism_[9]
\[\mathcal{K}:\mathbf{U}(gl(n))\to\mathbb{C}[M_{n,n}],\] \[\mathcal{K}:[S|S]\mapsto(S|S). \tag{10}\]
Clearly, the Koszul map \(\mathcal{K}\) induces, by restriction, an isomorphism from the center \(\boldsymbol{\zeta}(n)\) of \(\mathbf{U}(gl(n))\) to the algebra \(\mathbb{C}[M_{n,n}]^{ad_{gl(n)}}\) of \(ad_{gl(n)}-\)invariants in \(\mathbb{C}[M_{n,n}]\).
Consider the polynomial
\[\mathbf{h}_{k}(n) =\sum_{1\leq i_{1}<\cdots<i_{k}\leq n}\ (i_{k}\cdots i_{2}i_{1}|i_{1}i_{2}\cdots i_{k})\] \[=\sum_{1\leq i_{1}<\cdots<i_{k}\leq n}\mathbf{det}\left(\begin{array} []{ccc}(i_{1}|i_{1})&\ldots&(i_{1}|i_{k})\\ \vdots&&\vdots\\ (i_{k}|i_{1})&\ldots&(i_{k}|i_{k})\end{array}\right)\in\mathbb{C}[M_{n,n}].\]
Clearly, \(\mathbf{h}_{k}(n)\in\mathbb{C}[M_{n,n}]^{ad_{gl(n)}}\).
Notice that the polynomials \(\mathbf{h}_{k}(n)\) appear as coefficients (in \(\mathbb{C}[M_{n,n}]\)) of the characteristic polynomial:
\[P_{M_{n,n}}(t)=det\big{(}tI-M_{n,n}\big{)}=t^{n}+\sum_{i=1}^{n}\ (-1)^{i}\ \mathbf{h}_{i}(n)\ t^{n-i}.\]
From (10), we have
**Proposition 14**.: \[\mathcal{K}\big{(}\mathbf{K}_{\lambda}(n)\big{)}=(-1)^{\binom{|\lambda|}{2}} \ \mathbf{h}_{\lambda_{1}}(n)\mathbf{h}_{\lambda_{2}}(n)\cdots\mathbf{h}_{ \lambda_{p}}(n),\quad|\lambda|=\sum_{i}\ \lambda_{i}.\]
Proposition 13 implies (is actually equivalent to) the well-known theorem for the algebra of invariants \(\mathbb{C}[M_{n,n}]^{ad_{gl(n)}}\):
**Proposition 15**.: \[\mathbb{C}[M_{n,n}]^{ad_{gl(n)}}=\mathbb{C}\big{[}\mathbf{h}_{1}(n),\mathbf{h}_{ 2}(n),\ldots,\mathbf{h}_{n}(n)\big{]}.\]
_Moreover, the \(\mathbf{h}_{k}(n)\)'s are algebraically independent._
Proposition 15 is usually stated in terms of the algebra \(\mathbb{C}[M_{n,n}]^{GL(n)}=\mathbb{C}[M_{n,n}]^{ad_{gl(n)}}\), where \(\mathbb{C}[M_{n,n}]^{GL(n)}\) is the subalgebra of invariants with respect to the _conjugation action_ of the general linear group \(GL(n)\) on \(\mathbb{C}[M_{n,n}]\) (see, e.g. [36]).
## 7 Proof of Theorem 2
### A commutation identity for enveloping algebras of Lie superalgebras
Let \((L=L_{0}\oplus L_{1},[\,\ ])\) be a _Lie superalgebra_ over \(\mathbb{C}\) (see, e.g. [23], [39]), where \([\,\ ]\) denotes the _superbracket_ bilinear operation.
Given \(a\in L\), consider the linear operator \(T_{a}\) from \(U(L)\) to itself defined by setting
\[T_{a}(\mathbf{N})=a\ \mathbf{N}-(-1)^{|a||\mathbf{N}|}\mathbf{N}\ a,\]
for every \(\mathbf{N}\in U(L)\), \(\mathbb{Z}_{2}\)-homogeneous of degree \(|\mathbf{N}|\).
We recall that \(T_{a}\) is the unique (left) superderivation of \(U(L)\), \(\mathbb{Z}_{2}\)-homogeneous of degree \(|a|\), such that
\[T_{a}(b)=[a,b],\]
for every \(b\in L\).
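In particular, on a product of two elements \(b,c\in L\) the superderivation property reads
\[T_{a}(bc)=[a,b]\,c+(-1)^{|a||b|}\,b\,[a,c].\]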
Furthermore, given \(a,b\in L=L_{0}\oplus L_{1}\), from (super) skew-symmetry and the (super) Jacobi identity, it follows:
\[T_{a}\circ T_{b}-(-1)^{|a||b|}T_{b}\circ T_{a}=T_{[a,b]}.\]
The Lie superalgebra representation
\[\begin{array}{c}Ad_{L}:L=L_{0}\oplus L_{1}\to End_{\mathbb{C}}\big{[}\mathbf{U}(L)\big{]}=End_{\mathbb{C}}\big{[}\mathbf{U}(L)\big{]}_{0}\oplus End_{\mathbb{C}}\big{[}\mathbf{U}(L)\big{]}_{1}\\ \\ a\mapsto T_{a}\end{array}\]
is the _adjoint representation_ of \(U(L)\) on itself.
**Proposition 16**.: \[a_{i_{1}}a_{i_{2}}\cdots a_{i_{m}}\omega=\omega a_{i_{1}}a_{i_{2}} \cdots a_{i_{m}}(-1)^{|\omega|(|a_{i_{1}}|+|a_{i_{2}}|+\cdots+|a_{i_{m}}|)}+\\ +\sum_{k=1}^{m}\ \sum_{\sigma(1)<\cdots<\sigma(k);\sigma(k+1)<\cdots< \sigma(m)}\ \big{(}(T_{a_{i_{\sigma(1)}}}\ldots T_{a_{i_{\sigma(k)}}}(\omega)\big{)}\ a_{i_{ \sigma(k+1)}}\cdots a_{i_{\sigma(m)}}\times\\ \times sgn(a_{i_{\sigma(1)}}\ldots a_{i_{\sigma(k)}};a_{i_{\sigma( k+1)}}\cdots a_{i_{\sigma(m)}})\ \big{(}-1\big{)}^{|\omega|(|a_{i_{\sigma(k+1)}}|+\cdots+|a_{i_{\sigma(m)}}|)} \big{)}.\]
Proof.: We argue by induction on \(m\). By the induction hypothesis,
\[a_{i_{1}}(a_{i_{2}}\cdots a_{i_{m}})\omega=a_{i_{1}}\omega a_{i_ {2}}\cdots a_{i_{m}}(-1)^{|\omega|(|a_{i_{2}}|+\cdots+|a_{i_{m}}|)}+\\ +a_{i_{1}}\sum_{h=2}^{m}\ \sum_{\tau(2)<\cdots<\tau(h);\tau(h+1)< \cdots<\tau(m)}\big{(}T_{a_{i_{\tau(2)}}}\ldots T_{a_{i_{\tau(h)}}}(\omega)a_{ i_{\tau(h+1)}}\cdots a_{i_{\tau(m)}}\times\\ \times sgn(a_{i_{\tau(2)}}\cdots a_{i_{\tau(h)}};a_{i_{\tau(h+1)}} \cdots a_{i_{\tau(m)}})(-1)^{|\omega|(|a_{i_{\tau(h+1)}}|+\cdots+|a_{i_{\tau(m )}}|)}\big{)}=\\ =\omega a_{i_{1}}a_{i_{2}}\cdots a_{i_{m}}(-1)^{|\omega|(|a_{i_{1 }}|+|a_{i_{2}}|+\cdots+|a_{i_{m}}|)}+T_{a_{i_{1}}}(\omega)a_{i_{2}}\cdots a_{i_ {m}}(-1)^{|\omega|(|a_{i_{2}}|+\cdots+|a_{i_{m}}|)}+\\ +\sum_{h=2}^{m}\ \sum_{\tau(2)<\cdots<\tau(h);\tau(h+1)<\cdots<\tau(m)} \big{(}T_{a_{i_{1}}}T_{a_{\tau(2)}}\cdots T_{a_{\tau(h)}}(\omega)a_{\tau(h+1) }\cdots a_{i_{\tau(m)}}\times\\ \times sgn(a_{\tau(2)}\cdots a_{\tau(h)};a_{\tau(h+1)}\cdots a_{ \tau(m)})(-1)^{|\omega|(|a_{\tau(h+1)}|+\cdots+|a_{i_{\tau(m)}}|)}+\\ +T_{a_{\tau(2)}}\cdots T_{a_{\tau(h)}}(\omega)a_{i_{1}}a_{\tau(h+ 1)}\cdots a_{i_{\tau(m)}}\times\\ (-1)^{|a_{i_{1}}|(|\omega|+|a_{\tau(2)}|+\cdots+|a_{i_{\tau(m)}}|) }\times sgn(a_{\tau(2)}\cdots a_{\tau(h)};a_{\tau(h+1)}\cdots a_{\tau(m)})(-1) ^{|\omega|(|a_{\tau(h+1)}|+\cdots+|a_{i_{\tau(m)}}|)},\]
where
\[(-1)^{|a_{i_{1}}|(|\omega|+|a_{i_{\tau(2)}}|+\cdots+|a_{i_{\tau( m)}}|)+|\omega|(|a_{\tau(h+1)}|+\cdots+|a_{i_{\tau(m)}}|)}\times sgn(a_{i_{ \tau(2)}}\cdots a_{i_{\tau(m)}};a_{i_{\tau(h+1)}}\cdots a_{i_{\tau(m)}})=\\ =sgn(a_{i_{\tau(2)}}\cdots a_{i_{\tau(h)}};a_{i_{1}}a_{i_{\tau(h+ 1)}}\cdots a_{i_{\tau(m)}})(-1)^{|\omega|(|a_{i_{1}}|+|a_{i_{\tau(h+1)}}+ \cdots+|a_{i_{\tau(m)}}|)}.\]
Then, the assertion follows.
In the Sweedler notation of the _supersymmetric_ superbialgebra \(Super(L)\), Proposition 16 can be stated in the following compact form:
**Proposition 17**.: _Let_
\[\alpha=a_{i_{1}}a_{i_{2}}\cdots a_{i_{m}}.\]
_Then_
\[\alpha\omega=\sum_{(\alpha)}\ T_{\alpha_{(1)}}(\omega)\alpha_{(2)}(-1)^{| \omega||\alpha_{(2)}|}.\]
Proof.: Let
\[\alpha=a_{i_{1}}a_{i_{2}}\cdots a_{i_{m}}.\]
Then, the coproduct (in the Sweedler notation)
\[\Delta(\alpha)=\sum_{(\alpha)}\ \alpha_{(1)}\otimes\alpha_{(2)}\]
equals
\[\sum_{k=0}^{m}\ \sum_{\sigma(1)<\cdots<\sigma(k);\sigma(k+1)<\cdots<\sigma(m)} \ \bigl{(}a_{i_{\sigma(1)}}\ldots a_{i_{\sigma(k)}}\otimes a_{i_{\sigma(k+1)}} \cdots a_{i_{\sigma(m)}}\times\]
\[\times sgn\bigl{(}a_{i_{\sigma(1)}}\cdots a_{i_{\sigma(k)}};a_{i_{\sigma(k+1)}} \cdots a_{i_{\sigma(m)}}\bigr{)}\bigr{)}.\]
Furthermore
**Lemma 1**.: _Let \(T_{\alpha}=T_{a_{1}}T_{a_{2}}\cdots T_{a_{m}}\). Then_
\[T_{\alpha}(\omega_{1}\cdot\omega_{2})=\sum_{(\alpha)}\ T_{\alpha_{(1)}}( \omega_{1})T_{\alpha_{(2)}}(\omega_{2})(-1)^{|\alpha_{(2)}||\omega_{1}|}.\]
### Some preliminary remarks and definitions
#### 7.2.1 The virtual algebra and the Capelli devirtualization epimorphism
Given a vector space \(V\) of dimension \(n\), we will regard it as a subspace of a \(\mathbb{Z}_{2}-\)graded vector space \(V_{0}\oplus V_{1}\), where \(V_{1}=V.\) The vector space \(V_{0}\) (we assume that \(dim(V_{0})=m\) is "sufficiently large") is called the _positive virtual (auxiliary) vector space_ and \(V\) is called the _(negative) proper vector space_.
Let \(\mathcal{A}_{0}=\{\alpha_{1},\ldots,\alpha_{m_{0}}\}\), \(\mathcal{L}=\{1,2,\ldots,n\}\) denote _fixed bases_ of \(V_{0}\) and \(V=V_{1}\), respectively; therefore \(|\alpha_{s}|=0\in\mathbb{Z}_{2}\), and \(|i|=1\in\mathbb{Z}_{2}\).
Let
\[\{e_{a,b};a,b\in\mathcal{A}_{0}\cup\mathcal{L}\},\qquad|e_{a,b}|=|a|+|b|\in \mathbb{Z}_{2}\]
be the standard \(\mathbb{Z}_{2}-\)homogeneous basis of the Lie superalgebra \(gl(m|n)\) provided by the elementary matrices. The elements \(e_{a,b}\in gl(m|n)\) are \(\mathbb{Z}_{2}-\)homogeneous of \(\mathbb{Z}_{2}-\)degree \(|e_{a,b}|=|a|+|b|\).
The superbracket of the Lie superalgebra \(gl(m|n)\) has the following explicit form:
\[[e_{a,b},e_{c,d}]=\delta_{bc}\ e_{a,d}-(-1)^{(|a|+|b|)(|c|+|d|)}\delta_{ad}\ e_{c,b},\]
\(a,b,c,d\in{\cal A}_{0}\cup{\cal L}\).
In the following, the elements of the sets \({\cal A}_{0},{\cal L}\) will be called _positive virtual symbols_ and _negative proper symbols_, respectively.
The inclusion \(V\subset V_{0}\oplus V_{1}\) induces a natural embedding of the ordinary general linear Lie algebra \(gl(n)=gl(0|n)\) of \(V\) into the _auxiliary_ general linear Lie _superalgebra_\(gl(m|n)\) of \(V_{0}\oplus V_{1}\) (see, e.g. [23], [39]) and, hence, a natural embedding \({\bf U}(gl(n))\subset{\bf U}(gl(m|n))\).
In the following, we will systematically refer to the _Capelli devirtualization epimorphism_
\[{\mathfrak{p}}:Virt(m,n)\twoheadrightarrow{\bf U}(gl(0|n))={\bf U}(gl(n)),\]
where \(Virt(m,n)\) is the _virtual subalgebra_ of \({\bf U}(gl(m|n))\).
For definitions and details, we refer the reader to Subsection 9.5.
#### 7.2.2 A more readable notation
In the following, we will adopt the more readable notation:
* We write \(\{a|b\}\) for the elements \(e_{a,b}\) of the standard basis of \(gl(m|n)\).
* Given two words \(I=i_{1}\ i_{2}\ \cdots\ i_{p}\), \(J=j_{1}\ j_{2}\ \cdots\ j_{p}\), with \(i_{h},j_{h}\in{\cal L}\) and a virtual symbol \(\alpha\), we write \[\{J|\alpha\}=\{j_{1}\ j_{2}\ \cdots\ j_{p}|\alpha\},\quad\{\alpha|I\}=\{ \alpha|i_{1}\ i_{2}\ \cdots\ i_{p}\}\] in place of \[e_{j_{1},\alpha}e_{j_{2},\alpha}\cdots e_{j_{p},\alpha},\quad e_{\alpha,i_{1}} e_{\alpha,i_{2}}\cdots e_{\alpha,i_{p}},\] respectively.
In this notation, given a pair of Young tableaux
\[S=(w_{1},w_{2},\ldots,w_{p}),\quad T=(\overline{w}_{1},\overline{w}_{2},\ldots,\overline{w}_{p}),\qquad sh(S)=sh(T)=\lambda,\]
the _Capelli bitableau_
\[[S|T]={\mathfrak{p}}\big{(}e_{SC_{\lambda}}\cdot e_{C_{\lambda}T}\big{)}\in{ \bf U}(gl(n))\]
is
\[[S|T]={\mathfrak{p}}\big{(}{\bf P}_{S}\cdot{\bf P}_{T}\big{)},\]
where
\[{\bf P}_{S}=\{w_{1}|\beta_{1}\}\{w_{2}|\beta_{2}\}\cdots\{w_{p}|\beta_{p}\}, \qquad{\bf P}_{T}=\{\beta_{1}|\overline{w}_{1}\}\{\beta_{2}|\overline{w}_{2}\} \cdots\{\beta_{p}|\overline{w}_{p}\}.\]
Furthermore, for the adjoint representation
\[Ad_{gl(m|n)}:gl(m|n)\to End_{\mathbb{C}}\big{[}{\bf U}(gl(m|n))\big{]}\]
we write
* \(T_{i\alpha},\ T_{\alpha i}\) in place of \(T_{e_{i,\alpha}},\ T_{e_{\alpha,i}}\).
* \(T_{I\alpha},\ T_{\alpha I}\) in place of \(T_{i_{1},\alpha}T_{i_{2},\alpha}\cdots T_{i_{p},\alpha},\quad T_{\alpha,i_{1}} T_{\alpha,i_{2}}\cdots T_{\alpha,i_{p}}\), respectively.
#### 7.2.3 The coproduct in \(\Lambda(V)=\Lambda({\cal L})\), Sweedler notation and _split notation_
Given a word \(I=i_{1}\ i_{2}\ \cdots\ i_{m},\ i_{t}\in{\cal L}\) in \(\Lambda(V)=\Lambda({\cal L})\), and a natural integer \(k,\quad k=0,1,\cdots,m\), consider the homogeneous component
\[\Delta_{k,m-k}:\Lambda({\cal L})\rightarrow\Lambda({\cal L})_{k}\otimes \Lambda({\cal L})_{m-k}\]
of the coproduct
\[\Delta:\Lambda({\cal L})\rightarrow\Lambda({\cal L})\otimes\Lambda({\cal L}).\]
Given a permutation \(\sigma\) with
\[\sigma(1)<\cdots<\sigma(k),\quad\sigma(k+1)<\cdots<\sigma(m),\]
and the two subwords
\[I_{(1)}=i_{\sigma(1)}\ \ \cdots\ i_{\sigma(k)},\quad I_{(2)}=i_{\sigma(k+1)} \ \ \cdots\ i_{\sigma(m)}\]
we call the pair \((I_{(1)},I_{(2)})\) a _split_ of \(I\) of step \((k,m-k)\) of signature \(sgn(I;I_{(1)},I_{(2)})=sgn(\sigma)\). Clearly, \(I=sgn(I;I_{(1)},I_{(2)})\ I_{(1)}I_{(2)}\).
We denote by \({\bf S}(I;k,m-k)\) the set of all splits of \(I\) of step \((k,m-k)\).
Then, the coproduct component
\[\Delta_{k,m-k}(I)=\sum_{(I)_{k,m-k}}\ I_{(1)}\otimes I_{(2)}\]
can be explicitly written as
\[\Delta_{k,m-k}(I)=\sum_{(I_{(1)},I_{(2)})\in{\bf S}(I;k,m-k)}sgn(I;I_{(1)},I_{ (2)})\ I_{(1)}\otimes I_{(2)}.\]
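For instance, for \(I=1\ 2\ 3\) and \(k=1\):
\[\Delta_{1,2}(1\ 2\ 3)=1\otimes 2\ 3\ -\ 2\otimes 1\ 3\ +\ 3\otimes 1\ 2.\]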
### Some lemmas
Consider the Capelli bitableau
\[[S|T]=\mathfrak{p}\big{(}\mathbf{P}_{S}\cdot\mathbf{P}_{T}\big{)}\]
as in Subsection 7.2.2.
From Proposition 17, we derive the following pair of Lemmas.
**Lemma 2**.: _Let \(I=i_{1}\ i_{2}\cdots\ i_{m}\), \(J=j_{1}\ j_{2}\ \cdots\ j_{m}\), \(m\leq\lambda_{p}\). Then_
\[\{J|\alpha\}\{\alpha|I\}\ \mathbf{P}_{S}\]
_equals_
\[\{J|\alpha\}\ \sum_{k=0}^{m}\ \sum_{(I)_{k,m-k}}\ T_{\alpha\ I_{(1)}}\big{(} \mathbf{P}_{S}\big{)}\{\alpha|I_{(2)}\}(-1)^{|\mathbf{P}_{S}|(m-k)}.\]
Since
\[\mathfrak{p}\big{(}\{J|\alpha\}\{\alpha|I\}\ \mathbf{P}_{S}\cdot\mathbf{P}_{T} \big{)}=[J|I]\ [S|T],\]
**Lemma 3**.: _We have_
\[[J|I]\ [S|T]=\ (-1)^{(|\mathbf{P}_{T}|+k)(m-k)}\times\\ \times\mathfrak{p}\big{(}\sum_{k=0}^{m}\ \sum_{(I)_{k,m-k}}\ \sum_{(J)_{k,m-k}}\ T_{J_{(1)} \alpha}T_{\alpha I_{(1)}}\big{(}\mathbf{P}_{S}\big{)}\ \{J_{(2)}|\alpha\}\ \mathbf{P}_{T}\ \{\alpha|I_{(2)}\}\big{)}. \tag{11}\]
Proof.: We have
\[\{J|\alpha\}\{\alpha|I\}\ \mathbf{P}_{S}\ \mathbf{P}_{T}=\\ =\{J|\alpha\}\ \sum_{k=0}^{m}\ \sum_{(I)_{k,m-k}}\ T_{\alpha\ I_{(1)}} \big{(}\mathbf{P}_{S}\big{)}\ \{\alpha|I_{(2)}\}\ \mathbf{P}_{T}\ (-1)^{|\mathbf{P}_{S}|(m-k)}=\\ =\sum_{k=0}^{m}\ \sum_{(I)_{k,m-k}}\{J|\alpha\}\ T_{\alpha\ I_{(1)}} \big{(}\mathbf{P}_{S}\big{)}\ \{\alpha|I_{(2)}\}\ \mathbf{P}_{T}\ (-1)^{|\mathbf{P}_{S}|(m-k)}=\\ =\sum_{k=0}^{m}\ \sum_{(I)_{k,m-k}}\ \left(\sum_{h=0}^{m}\sum_{(J)_{h,m -h}}\ T_{J_{(1)}\alpha}\big{(}T_{\alpha\ I_{(1)}}\big{(}\mathbf{P}_{S}\big{)} \big{)}\{J_{(2)}|\alpha\}\ (-1)^{(|\mathbf{P}_{S}|+h)(m-h)}\right)\times\\ \times\{\alpha|I_{(2)}\}\ \mathbf{P}_{T}\ (-1)^{|\mathbf{P}_{S}|(m-k)}.\]
Now, if \(h<k\), then \(m-h>m-k\) and, hence,
\[\sum_{(I)_{k,m-k}}\ \left(\sum_{(J)_{h,m-h}}T_{J_{(1)}\alpha}\big{(}T_{ \alpha\ I_{(1)}}\big{(}\boldsymbol{P}_{S}\big{)}\big{)}\{J_{(2)}|\alpha\}\ (-1)^{(|\boldsymbol{P}_{S}|+h)(m-h)}\right)\times\\ \times\{\alpha|I_{(2)}\}\ \boldsymbol{P}_{T}\ (-1)^{|\boldsymbol{P}_{S}|(m-k)}\]
is an _irregular element_, since the \(\{J_{(2)}|\alpha\}\{\alpha|I_{(2)}\}\) are irregular monomials; so, its image with respect to the Capelli epimorphism \(\mathfrak{p}\) equals zero.
If \(h>k\), then,
\[T_{J_{(1)}\alpha}\big{(}T_{\alpha\ I_{(1)}}\big{(}\boldsymbol{P}_{S}\big{)} \big{)}=0.\]
and, hence,
\[\sum_{(I)_{k,m-k}}\ \left(\sum_{(J)_{h,m-h}}T_{J_{(1)}\alpha} \big{(}T_{\alpha\ I_{(1)}}\big{(}\boldsymbol{P}_{S}\big{)}\big{)}\{J_{(2)}| \alpha\}\ (-1)^{(|\boldsymbol{P}_{S}|+h)(m-h)}\right)\times\\ \times\{\alpha|I_{(2)}\}\ \boldsymbol{P}_{T}\ (-1)^{|\boldsymbol{P}_{S}|(m-k)}=0.\]
Then,
\[[J|I]\ [S|T]= \ (-1)^{(|\boldsymbol{P}_{S}|+k)(m-k)}(-1)^{|\boldsymbol{P}_{S}|(m -k)}\times\] \[\times\mathfrak{p}\big{(}\sum_{k=0}^{m}\ \sum_{(I)_{k,m-k}}\ \sum_{(J)_{k,m-k}}T_{J_{(1)}\alpha}T_{\alpha I_{(1)}}\big{(} \boldsymbol{P}_{S}\big{)}\ \{J_{(2)}|\alpha\}\ \{\alpha|I_{(2)}\}\ \boldsymbol{P}_{T}\big{)}\] \[= \ (-1)^{(|\boldsymbol{P}_{T}|+k)(m-k)}\times\] \[\times\mathfrak{p}\big{(}\sum_{k=0}^{m}\ \sum_{(I)_{k,m-k}}\ \sum_{(J)_{k,m-k}}T_{J_{(1)}\alpha}T_{\alpha I_{(1)}}\big{(} \boldsymbol{P}_{S}\big{)}\ \{J_{(2)}|\alpha\}\ \boldsymbol{P}_{T}\ \{\alpha|I_{(2)}\}\big{)}.\]
**Corollary 15**.: _Let \(m\leq\lambda_{p}\). Then_
\[[J|I]\ [S|T]=\pm\left[\begin{array}{c|c}S\\ J\end{array}\right|\ \begin{array}{c}T\\ I\end{array}\right]+\sum c_{m,\lambda}\ \mathbf{G}_{m,\lambda},\]
_where_
\[[J|I]\ [S|T],\quad\left[\begin{array}{c|c}S\\ J\end{array}\right|\ \begin{array}{c}T\\ I\end{array}\right]\notin\mathbf{U}(gl(n))^{(N)}\quad\text{whenever}\quad N<m+|\lambda|,\]
_and_
\[\mathbf{G}_{m,\lambda}\in\mathbf{U}(gl(n))^{(N)}\quad\text{for some}\quad N<m+|\lambda|.\]
**Corollary 16**.: _Let \(m\leq\lambda_{p}\). Then_
\[[S|T]=\pm\ [\omega_{1}|\overline{\omega}_{1}]\ [\omega_{2}|\overline{\omega}_{2} ]\cdots[\omega_{p}|\overline{\omega}_{p}]+\sum d_{\lambda}\ {\bf F}_{\lambda},\]
_where_
\[[S|T],\ [\omega_{1}|\overline{\omega}_{1}]\ [\omega_{2}|\overline{\omega}_{2}]\cdots[\omega_{p}|\overline{\omega}_{p}]\notin{\bf U}(gl(n))^{(N)}\quad\text{whenever}\quad N<|\lambda|,\]
_and_
\[{\bf F}_{\lambda}\in{\bf U}(gl(n))^{(N)}\quad\text{for some}\quad N<|\lambda|.\]
We specialize the previous results to Capelli-Deruyts bitableaux \({\bf K}^{\lambda}\).
Let
\[{\bf M}^{*}=\{\underline{\lambda_{1}}^{*}|\beta_{1}\}\cdots\{\underline{ \lambda_{p}}^{*}|\beta_{p}\},\quad{\bf M}=\{\beta_{1}|\underline{\lambda_{1}} \}\cdots\{\beta_{p}|\underline{\lambda_{p}}\},\]
where \(\lambda=(\lambda_{1}\geq\cdots\geq\lambda_{p})\) and \(|{\bf M}^{*}|=|{\bf M}|=|\lambda|=\lambda_{1}+\cdots+\lambda_{p}\in\mathbb{Z}_ {2}\).
Given an increasing word \(W=h_{1}\ h_{2}\ \cdots\ h_{p}\) on \({\cal L}=\{1,2,\ldots,n\}\), denote by \(W^{*}\) its _reverse_ word, that is:
\[W^{*}=h_{p}\ \cdots\ h_{2}\ h_{1}.\]
Let \(I=1\ 2\ \cdots\ m\), \(I^{*}=m\ m-1\ \cdots\ 1\), \(m\leq\lambda_{p}\).
In this notation
\[{\bf K}^{\lambda}={\mathfrak{p}}\big{(}{\bf M}^{*}\cdot{\bf M}\big{)}\]
and
\[[I^{*}|I]\ {\bf K}^{\lambda}={\mathfrak{p}}\big{(}\ \{I^{*}|\alpha\}\{ \alpha|I\}\ {\bf M}^{*}\cdot{\bf M}\ \big{)}.\]
We apply Lemma 3 to the element \([I^{*}|I]\ {\bf K}^{\lambda}\). As we shall see, the double sum
\[\sum_{(I^{*})_{k,m-k}}\ \sum_{(I)_{k,m-k}}\]
in eq. (11) reduces to a single sum
\[\sum_{(I)_{k,m-k}}\]
since the only splits \(I^{*}_{(1)},\ I^{*}_{(2)}\) in \((I^{*})_{k,m-k}\) that give rise to nonzero summands are those for
\[I^{*}_{(1)}=(I_{(1)})^{*}\quad\mbox{and}\quad I^{*}_{(2)}=(I_{(2)})^{*},\]
where \((I_{(1)})^{*},\ (I_{(2)})^{*}\) are the reverse words of \(I_{(1)}\) and \(I_{(2)}\), respectively.
**Lemma 4**.: _The element_
\[[I^{*}|I]\ {\bf K}^{\lambda}={\mathfrak{p}}\big{(}\{I^{*}|\alpha\}\{\alpha|I\}\ {\bf M}^{*}\cdot{\bf M}\big{)}\]
_equals_
\[\sum_{k=0}^{m}\qquad(-1)^{(|{\bf M}|+k)(m-k)}\sum_{(I)_{k,m-k}}\qquad{\mathfrak{ p}}\big{(}T_{(I_{(1)})^{*}\alpha}\big{(}T_{\alpha\ I_{(1)}}\big{(}{\bf M}^{*} \big{)}\big{)}\{(I_{(2)})^{*}|\alpha\}{\bf M}\{\alpha|I_{(2)}\}\big{)}.\]
Proof.: From Lemma 3, we have
\[{\mathfrak{p}}\big{(}\{I^{*}|\alpha\}\{\alpha|I\}\ {\bf M}^{*} \cdot{\bf M}\big{)}=\sum_{k=0}^{m}\ (-1)^{(|{\bf M}|+k)(m-k)}\\ \big{(}\sum_{(I)_{k,m-k}}\ \sum_{(I^{*})_{k,m-k}}\ {\mathfrak{p}} \big{(}T_{I^{*}_{(1)}\alpha}\big{(}T_{\alpha\ I_{(1)}}\big{(}{\bf M}^{*} \big{)}\big{)}\ \{(I^{*}_{(2)}|\alpha\}\ {\bf M}\ \{\alpha|I_{(2)}\}\ )\big{)}.\]
Let \(k=0,1,\ldots,m\) and examine the element
\[\sum_{(I)_{k,m-k}}\ \sum_{(I^{*})_{k,m-k}}T_{I^{*}_{(1)}\alpha} \big{(}T_{\alpha\ I_{(1)}}\big{(}{\bf M}^{*}\big{)}\big{)}\{I^{*}_{(2)}|\alpha \}\ {\bf M}\{\alpha|I_{(2)}\}=\\ =\sum_{(I)_{k,m-k}}\ \sum_{(I^{*})_{k,m-k}}T_{I^{*}_{(1)}\alpha} \big{(}T_{\alpha\ I_{(1)}}\big{(}\{\underline{\lambda_{1}}^{*}|\beta_{1}\} \cdots\{\underline{\lambda_{p}}^{*}|\beta_{p}\}\big{)}\big{)}\{(I_{(2)})^{*}| \alpha\}\ {\bf M}\{\alpha|I_{(2)}\}.\]
If \(i\in I_{(2)}\), then \(i\notin I_{(1)}\). Hence, all the variables
\[\{i|\beta_{q}\}\qquad q=1,2,\ldots,p\]
appear in
\[T_{\alpha,I_{(1)}}\big{(}\{\underline{\lambda_{1}}^{*}|\beta_{1}\}\cdots\{ \underline{\lambda_{p}}^{*}|\beta_{p}\}\big{)},\]
for every \(q=1,2,\ldots,p\).
Assume that \(i\notin I^{*}_{(2)}\); then \(i\in I^{*}_{(1)}\). Hence, \(\exists\ \underline{q}\in\{1,2,\ldots,p\}\) such that the variable
\[\{i|\beta_{\underline{q}}\}\]
is _created_ by the action of
\[T_{I^{*}_{(1)}\alpha}\]
on
\[T_{\alpha,I_{(1)}}\big{(}\{\underline{\lambda_{1}}^{*}|\beta_{1}\}\cdots\{ \underline{\lambda_{p}}^{*}|\beta_{p}\}\big{)}\qquad(*).\]
Then \((*)\) contains two occurrences of \(\{i|\beta_{\underline{q}}\}\) and, hence, _equals zero_. Therefore
\[T_{I_{(1)}^{*}\alpha}T_{\alpha,I_{(1)}}\big{(}\{\underline{\lambda_{1}}^{*}| \beta_{1}\}\cdots\{\underline{\lambda_{p}}^{*}|\beta_{p}\}\big{)}\neq 0\]
implies
\[i\in I_{(2)}\implies i\in I_{(2)}^{*}.\]
Since \(I_{(2)}\) and \(I_{(2)}^{*}\) are words of the same length \(m-k\), this implies that the only _nonzero_ summands (with respect to the action of the Capelli epimorphism \(\mathfrak{p}\)) in
\[\sum_{(I)_{k,m-k}}\ \sum_{(I^{*})_{k,m-k}}\mathfrak{p}\big{(}T_{I_{(1)}^{*} \alpha}\big{(}T_{\alpha\ I_{(1)}}\big{(}\mathbf{M}^{*}\big{)}\big{)}\{I_{(2)}^ {*}|\alpha\}\ \mathbf{M}\{\alpha|I_{(2)}\}\big{)}\]
are for \(I_{(1)}^{*}=(I_{(1)})^{*}\) and \(I_{(2)}^{*}=(I_{(2)})^{*}\), that is
\[\mathfrak{p}\big{(}T_{(I_{(1)})^{*}\alpha}\big{(}T_{\alpha\ I_{(1)}}\big{(} \mathbf{M}^{*}\big{)}\big{)}\ \{(I_{(2)})^{*}|\alpha\}\ \mathbf{M}\{\alpha|I_{(2)}\}\big{)}.\]
Let us examine the expression
\[\sum_{(I)_{k.m-k}}(-1)^{k(m-k)}\ T_{(I_{(1)})^{*}\alpha}\big{(}T_{\alpha\ I_{ (1)}}\big{(}\mathbf{M}^{*}\big{)}\big{)}\ \{(I_{(2)})^{*}|\alpha\}\{\alpha|I_{(2)}\}. \tag{12}\]
in the notation of _splits_.
**Corollary 17**.: _The expression (12) equals_
\[\sum_{(A,B)\in S(I;k,m-k)}\ T_{A^{*}\alpha}\big{(}T_{\alpha A}\big{(}\mathbf{M }^{*}\big{)}\big{)}\ \{B^{*}|\alpha\}\{\alpha|B\}.\]
Proof.: In the notation of _splits_, the expression (12) equals
\[(-1)^{k(m-k)}\ \sum_{(A,B)\in S(I;k,m-k)}T_{A^{*}\alpha}\big{(}T_{ \alpha\ A}\big{(}\mathbf{M}^{*}\big{)}\big{)}\ \{B^{*}|\alpha\}\{\alpha|B\}\times\\ sgn(I;A,B)sgn(I^{*};A^{*},B^{*}).\]
We have
\[(-1)^{k(m-k)}sgn(I;A,B)sgn(I^{*};A^{*},B^{*})=\\ =(-1)^{k(m-k)}(-1)^{k(m-k)}sgn(I;A,B)sgn(I^{*};B^{*},A^{*}).\]
But \(sgn(I;A,B)\ sgn(I^{*};B^{*},A^{*})=1\), and the assertion follows.
Given \((A,B)\in S(I;k,m-k)\), let \(A=a_{1}a_{2}\cdots a_{k}\), \(\{a_{1}<a_{2}<\cdots<a_{k}\}\subseteq\{1,2,\ldots,m\}\) and recall
\[\mathbf{M}^{*}=\{\underline{\lambda_{1}}^{*}|\beta_{1}\}\cdots\{\underline{\lambda_{p}}^{*}|\beta_{p}\};\]
we examine the element
\[T_{A^{*}\alpha}T_{\alpha\ A}\big{(}\mathbf{M}^{*}\big{)}. \tag{13}\]
**Lemma 5**.: _We have_
\[T_{A^{*}\alpha}T_{\alpha\ A}\big{(}\mathbf{M}^{*}\big{)}=\langle\ p\rangle_{k }\ \{\underline{\lambda_{1}}^{*}|\beta_{1}\}\cdots\{\underline{\lambda_{p}}^{*}| \beta_{p}\}=\ \langle\ p\rangle_{k}\ \mathbf{M}^{*},\]
_where_
\[\langle\ p\rangle_{k}=p(p+1)\cdots(p+k-1)\]
_is the raising factorial coefficient._
Proof.: By skew-symmetry, a simple computation shows that (13) equals
\[\sum_{h_{1}+\cdots+h_{p}=k}\ \sum_{(A_{1},\ldots,A_{p})\in S(A;h_{1},\ldots,h _{p})}\ T_{(A_{1})^{*}\alpha}T_{\alpha A_{1}}\big{(}\{\underline{\lambda_{1}}^ {*}|\beta_{1}\}\big{)}\cdots T_{(A_{p})^{*}\alpha}T_{\alpha A_{p}}\big{(}\{ \underline{\lambda_{p}}^{*}|\beta_{p}\}\big{)}. \tag{14}\]
We examine the value of
\[T_{C^{*}\alpha}T_{\alpha C}\big{(}\{\underline{q}^{*}|\beta\}\big{)}\]
for \(C=c_{1}c_{2}\cdots c_{h}\), \(\{c_{1}<c_{2}<\ldots<c_{h}\}\subseteq\{1,2,\ldots q\}\).
Clearly
\[\{\underline{q}^{*}|\beta\}=\{\underline{q}|\beta\}(-1)^{\binom{q}{2}},\]
and a simple computation shows that
\[T_{C^{*}\alpha}T_{\alpha C}\big{(}\{\underline{q}|\beta\}\big{)}=h!\ \{ \underline{q}|\beta\}.\]
Indeed, we have
\[T_{\alpha C}\big{(}\{\underline{q}|\beta\}\big{)} =T_{c_{1}\alpha}\cdots T_{c_{h}\alpha}\big{(}\{1|\beta\}\cdots\{ q|\beta\}\big{)}\] \[=\{1|\beta\}\cdots\widehat{\{c_{1}|\beta\}}\{\alpha|\beta\} \cdots\widehat{\{c_{h}|\beta\}}\{\alpha|\beta\}\cdots\{q|\beta\}\ (-1)^{c_{h}-1+\cdots+c_{1}-1}\] \[=\{\alpha|\beta\}^{h}\{1|\beta\}\cdots\widehat{\{c_{1}|\beta\}} \cdots\widehat{\{c_{h}|\beta\}}\cdots\{q|\beta\}\ (-1)^{c_{h}-1+\cdots+c_{1}-1};\]
now,
\[T_{C\alpha}T_{\alpha C}\big{(}\{\underline{q}|\beta\}\big{)} =T_{c_{h}\alpha}\cdots T_{c_{1}\alpha}\big{(}\{\alpha|\beta\}^{h} \{1|\beta\}\cdots\widehat{\{c_{1}|\beta\}}\cdots\big{)}(-1)^{c_{h}-1+\cdots+c_ {1}-1}\] \[=h!\{c_{h}|\beta\}\cdots\{c_{1}|\beta\}\cdots\widehat{\{c_{1}| \beta\}}\cdots\widehat{\{c_{h}|\beta\}}(-1)^{c_{h}-1+\cdots+c_{1}-1}\] \[=h!\{1|\beta\}\cdots\{q|\beta\}=h!\{\underline{q}|\beta\}.\]
Then,
\[T_{C^{*}\alpha}T_{\alpha C}\big{(}\{\underline{q}^{*}|\beta\}\big{)}=(-1)^{ \binom{q}{2}}T_{C^{*}\alpha}T_{\alpha C}\big{(}\{\underline{q}|\beta\}\big{)}=(- 1)^{\binom{q}{2}}\ h!\ \{\underline{q}|\beta\}=h!\ \{\underline{q}^{*}|\beta\}.\]
Hence, (14) equals
\[\sum_{(h_{1},\ldots,h_{p});h_{1}+\cdots+h_{p}=k}\ \sum_{(A_{1}, \ldots,A_{p})\in S(A;h_{1},\ldots,h_{p})}\ h_{1}!\cdots h_{p}!\ \big{(}\{\underline{\lambda_{1}}^{*}|\beta_{1}\} \cdots\{\underline{\lambda_{p}}^{*}|\beta_{p}\}\big{)}=\] \[=\sum_{h_{1}+\cdots+h_{p}=k}\ \frac{k!}{h_{1}!\cdots h_{p}!}\ h_{1}! \cdots h_{p}!\ \big{(}\{\underline{\lambda_{1}}^{*}|\beta_{1}\}\cdots\{ \underline{\lambda_{p}}^{*}|\beta_{p}\}\big{)}\]
that equals
\[\binom{p+k-1}{k}\ k!\ \big{(}\{\underline{\lambda_{1}}^{*}|\beta_{1}\}\cdots\{\underline{\lambda_{p}}^{*}|\beta_{p}\}\big{)}=\langle\ p\rangle_{k}\ \mathbf{M}^{*},\]
since the number of \(p\)-tuples \((h_{1},\ldots,h_{p})\) of non-negative integers with \(h_{1}+\cdots+h_{p}=k\) is \(\binom{p+k-1}{k}\), and \(k!\,\binom{p+k-1}{k}=p(p+1)\cdots(p+k-1)=\langle\ p\rangle_{k}\). This proves the assertion.
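The combinatorial identity used in this last step - that summing \(\frac{k!}{h_{1}!\cdots h_{p}!}\,h_{1}!\cdots h_{p}!\) over all weak compositions of \(k\) into \(p\) parts yields the raising factorial \(\langle\ p\rangle_{k}\) - is easy to confirm numerically. The following small script is ours; it is only a sanity check of the count and not part of the original argument.

```python
from math import factorial

def rising_factorial(p, k):
    # <p>_k = p (p + 1) ... (p + k - 1)
    out = 1
    for i in range(k):
        out *= p + i
    return out

def weak_compositions(k, p):
    # all tuples (h_1, ..., h_p) of non-negative integers with h_1 + ... + h_p = k
    if p == 1:
        yield (k,)
        return
    for h in range(k + 1):
        for rest in weak_compositions(k - h, p - 1):
            yield (h,) + rest

def coefficient(p, k):
    # the coefficient produced by (14): multinomial(k; h_1, ..., h_p) * h_1! ... h_p!
    total = 0
    for hs in weak_compositions(k, p):
        multinomial = factorial(k)
        prod_h = 1
        for h in hs:
            multinomial //= factorial(h)
            prod_h *= factorial(h)
        total += multinomial * prod_h
    return total

for p in range(1, 6):
    for k in range(0, 6):
        assert coefficient(p, k) == rising_factorial(p, k)
print("coefficient identity of Lemma 5 confirmed for small p and k")
```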
## Proof of Theorem 1
Proof.: Recall that
\[v_{\mu}=(Der_{\tilde{\mu}}|Der_{\tilde{\mu}}^{P}),\]
where \((Der_{\tilde{\mu}}|Der_{\tilde{\mu}}^{P})\) is the Young bitableau (see, e.g. Subsection 9.7 below)
\[\left(\begin{array}{cccc|cccc}1&2&\cdots&\cdots&\tilde{\mu}_{1}\\ &&&&\\ 1&2&\cdots&\tilde{\mu}_{2}&\\ \cdots&\cdots&\cdots&\\ 1&2&\tilde{\mu}_{q}&\end{array}\right|\begin{array}{cccc|cccc}1&2&\cdots& \cdots&\tilde{\mu}_{1}\\ &&&&\\ 1&2&\cdots&\tilde{\mu}_{2}&\\ \cdots&\cdots&\cdots&\\ 1&2&\tilde{\mu}_{q}&\end{array}\right)\]
in the polynomial algebra \(\mathbb{C}[M_{n,d}]\).
Set
\[e_{Der_{n^{p}}^{*},Coder_{n^{p}}}\ =\ e_{n\alpha_{1}}\cdots e_{1\alpha_{1}} \cdots\cdots e_{n\alpha_{p-1}}\cdots e_{1\alpha_{p-1}}e_{n\alpha_{p}}\cdots e_{ 1\alpha_{p}}.\]
Set
\[e_{Coder_{n^{p}},Der_{n^{p}}}\ =\ e_{\alpha_{1}1}\cdots e_{\alpha_{1}n}\cdots \cdots e_{\alpha_{p-1}1}\cdots e_{\alpha_{p-1}n}e_{\alpha_{p}1}\cdots e_{ \alpha_{p}n}.\]
Since
\[\mathbf{K_{n}^{p}}=\mathfrak{p}\big{(}e_{Der_{n^{p}}^{*},Coder_{n^{p}}}\ e_{ Coder_{n^{p}},Der_{n^{p}}}\big{)},\]
the action of \(\mathbf{K_{n}^{p}}\) on \(v_{\mu}=(Der_{\tilde{\mu}}|Der_{\tilde{\mu}}^{P})\) is the same as the action of
\[e_{Der_{n^{p}}^{*},Coder_{n^{p}}}\ e_{Coder_{n^{p}},Der_{n^{p}}}.\]
We follow [37] (see Proposition 5).
Now, if \(\mu_{n}=0\), then
\[e_{\alpha_{p}n}\cdot(Der_{\tilde{\mu}}|Der_{\tilde{\mu}}^{P})\]
is zero.
In the following, we limit ourselves to writing only the left parts of the Young bitableaux involved.
If \(\mu_{n}\geq 1\), then
\[e_{\alpha_{p}n}\cdot(Der_{\tilde{\mu}}|Der_{\tilde{\mu}}^{P})\]
equals
\[(-1)^{n-1}\left(\begin{array}{ccccc}1&2&\cdots&n-1&\alpha_{p}\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&n-1&n\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&n-1&n\\ 1&2&\cdots&\cdots\\ \cdots&\cdots&\cdots\end{array}\right)+\cdots+(-1)^{n(\mu_{n}-1)+n-1}\left( \begin{array}{ccccc}1&2&\cdots&n-1&n\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&n-1&n\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&n-1&\alpha_{p}\\ 1&2&\cdots&\cdots\\ \cdots&\cdots\end{array}\right), \tag{15}\]
by Proposition 30.
A simple sign computation shows that (15) equals
\[(-1)^{n-1}\ \mu_{n}(-1)^{n-1}\left(\begin{array}{ccccc}1&2&\cdots&n-1&\alpha_{p} \\ \cdots&\cdots&\cdots\\ 1&2&\cdots&n-1&n\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&n-1&n\\ 1&2&\cdots&\cdots\\ \cdots&\cdots&\cdots\end{array}\right).\]
Now, again by Proposition 30 and simple computation, we have:
\[e_{\alpha_{p}n-1}\cdot\left(\begin{array}{ccccc}1&2&\cdots&n-1&\alpha_{p}\\ 1&2&\cdots&\cdots&n\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&\cdots\\ \cdots&\cdots&\cdots\end{array}\right)=\]
\[=(-1)^{n-2}\left(\begin{array}{ccccc}1&2&\cdots&\alpha_{p}&\alpha_{p}\\ 1&2&\cdots&\cdots&n\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&\cdots\\ \cdots&\cdots&\cdots\end{array}\right)+\]
\[+\sum_{i=2}^{\mu_{n}}\;(-1)^{(n-1)+(i-2)n+(n-2)}\left(\begin{array}{ccccc}1& 2&\cdots&n-1&\alpha_{p}\\ 1&2&\cdots&n-1&n\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&\alpha_{p}&n\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&\cdots\\ 1&2&\cdots&n-1\\ \end{array}\right)+\]
\[+\sum_{i=\mu_{n}+1}^{\mu_{n-1}}\;(-1)^{(n-1)+(\mu_{n}-1)n+(i-\mu_{n}-1)(n-1)+ (n-2)}\left(\begin{array}{ccccc}1&2&\cdots&n-1&\alpha_{p}\\ 1&2&\cdots&n-1&n\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&\cdots&n\\ 1&2&\cdots&n-1\\ \cdots&\cdots&\cdots\\ 1&2&\cdots&n-1\\ \end{array}\right),\]
where the tableaux in the two sums are those with the second occurrence of \(\alpha_{p}\) in the \(i\)-th row.
By the _Straightening Law_ of Grosshans, Rota and Stein ([20], Proposition 10, see also
[2], Thm. 8.1), each summand in the two sums equals
\[(-1)^{n-2}\frac{1}{2}\left(\begin{array}{cccccc}1&2&\cdots&\alpha_{p}&\alpha_{p }\\ 1&2&\cdots&n-1&n\\ \cdots&\cdots&\cdots&&&\\ 1&2&\cdots&\cdots&n\\ 1&2&\cdots&&&\\ \cdots&\cdots&\cdots&&&\end{array}\right)\]
and, hence,
\[e_{\alpha_{p}n-1}\cdot\left(\begin{array}{cccccc}1&2&\cdots&n-1&\alpha_{p}\\ 1&2&\cdots&\cdots&n\\ \cdots&\cdots&\cdots&&&\\ 1&2&\cdots&\cdots&&&\\ \cdots&\cdots&\cdots&&&\end{array}\right)=(-1)^{n-2}\frac{(\mu_{n-1}+1)}{2} \left(\begin{array}{cccccc}1&2&\cdots&\alpha_{p}&\alpha_{p}\\ 1&2&\cdots&n-1&n\\ \cdots&\cdots&\cdots&&&\\ 1&2&\cdots&&&\\ \cdots&\cdots&\cdots&&&\end{array}\right).\]
By iterating this argument, we obtain:
\[e_{\alpha_{p}j}\cdot\big{(}\frac{1}{(n-j)!}\left(\begin{array}{cccccc}1&2&\cdots&j&\alpha_{p}^{n-j}\\ 1&2&\cdots&j&\cdots&n\\ \cdots&\cdots&\cdots&&&\\ 1&2&\cdots&j&\cdots&n\\ \cdots&\cdots&&&\end{array}\right)\big{)}=(-1)^{j-1}\ \frac{(\mu_{j}+n-j)}{(n-j+1)!}\left(\begin{array}{cccccc}1&2&\cdots&j-1&\alpha_{p}^{n-j+1}\\ 1&2&\cdots&j&\cdots&n\\ \cdots&\cdots&\cdots&&&\\ 1&2&\cdots&j&\cdots&n\\ \cdots&\cdots&&&\end{array}\right).\]
By iterating this procedure,
\[e_{\alpha_{p}1}\cdots e_{\alpha_{p}n}\cdot(Der_{\tilde{\mu}}|Der_{\tilde{\mu} }^{P})=\\ =\frac{(-1)^{\binom{n}{2}}}{n!}\ (\mu_{1}+n-1)(\mu_{2}+n-2)\cdots\mu_{n} \left(\begin{array}{cccccc}\alpha_{p}&\alpha_{p}&\cdots&\alpha_{p}\\ 1&2&\cdots&n\\ \cdots&\cdots&\cdots&&&\\ 1&2&\cdots&&&\\ \cdots&\cdots&&&\end{array}\right).\]
Iterating the same computation for \(\alpha_{p-1},\ldots,\alpha_{1}\), we then obtain
\[e_{Coder_{n^{p}},Der_{n^{p}}}\cdot(Der_{\hat{\mu}}|Der_{\hat{\mu}}^{P})=\]
\[=\left(\prod_{i=0}^{p-1}\,(\mu_{1}-i+n-1)\cdots(\mu_{n}-i)\right)\ \frac{(-1)^{\binom{n}{2}p}}{(n!)^{p}}\ \left(\begin{array}{cccc} \alpha_{p}&\alpha_{p}&\cdots&\alpha_{p}\\ \alpha_{p-1}&\alpha_{p-1}&\cdots&\alpha_{p-1}\\ \cdots&\cdots&\cdots&\\ \alpha_{1}&\alpha_{1}&\cdots&\alpha_{1}\\ 1&2&\cdots&\\ \cdots&\cdots&\end{array}\right)=\]
\[=\left(\prod_{i=0}^{p-1}\,(\mu_{1}-i+n-1)\cdots(\mu_{n}-i)\right)\ \frac{(-1)^{\binom{n}{2}p+\binom{p}{2}n}}{(n!)^{p}}\ \left(\begin{array}{cccc} \alpha_{1}&\alpha_{1}&\cdots&\alpha_{1}\\ \cdots&\cdots&\cdots&\\ \alpha_{p-1}&\alpha_{p-1}&\cdots&\alpha_{p-1}\\ \alpha_{p}&\alpha_{p}&\cdots&\alpha_{p}\\ 1&2&\cdots&\\ \cdots&\end{array}\right).\]
Since
\[e_{Der_{n^{p}}^{*},Coder_{n^{p}}}\cdot\left(\begin{array}{cccc} \alpha_{1}&\alpha_{1}&\cdots&\alpha_{1}\\ \cdots&\cdots&\cdots&\\ \alpha_{p-1}&\alpha_{p-1}&\cdots&\alpha_{p-1}\\ \alpha_{p}&\alpha_{p}&\cdots&\alpha_{p}\\ 1&2&\cdots&\\ \cdots&\end{array}\right)=\ (-1)^{\binom{n}{2}p}(n!)^{p}\ (Der_{\hat{\mu}}|Der_{\hat{\mu}}^{P})=\] \[=\mathbf{K_{n}^{p}}(v_{\mu})=\mathbf{K_{n}^{p}}\cdot(Der_{\hat{ \mu}}|Der_{\hat{\mu}}^{P})=e_{Der_{n^{p}}^{*},Coder_{n^{p}}}\ e_{Coder_{n^{p}}, Der_{n^{p}}}\cdot(Der_{\hat{\mu}}|Der_{\hat{\mu}}^{P})=\] \[=\left(\prod_{i=0}^{p-1}\,(\mu_{1}-i+n-1)\cdots(\mu_{n}-i)\right) \ \frac{(-1)^{\binom{n}{2}p}}{(n!)^{p}}\ (-1)^{\binom{p}{2}n}\times\] \[\qquad\times e_{Der_{n^{p}}^{*},Coder_{n^{p}}}\cdot\left(\left( \begin{array}{cccc}\alpha_{1}&\alpha_{1}&\cdots&\alpha_{1}\\ \cdots&\cdots&\cdots&\\ \alpha_{p-1}&\alpha_{p-1}&\cdots&\alpha_{p-1}\\ \alpha_{p}&\alpha_{p}&\cdots&\alpha_{p}\\ 1&2&\cdots&\\ \cdots&\end{array}\right)\right)=\] \[=\left(\prod_{i=0}^{p-1}\,(\mu_{1}-i+n-1)\cdots(\mu_{n}-i) \right)(-1)^{\binom{p}{2}n}(Der_{\hat{\mu}}|Der_{\hat{\mu}}^{P}).\]
Notice that, if \(\mu_{n}<p\), then \(\mathbf{K_{n}^{p}}(v_{\mu})=0\).
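The product formula above is easy to evaluate for concrete highest weights. The following snippet is our own illustration; it only evaluates the scalar appearing in the last display and is not part of the proof.

```python
def capelli_eigenvalue(mu, p):
    # the scalar prod_{i=0}^{p-1} (mu_1 - i + n - 1)(mu_2 - i + n - 2) ... (mu_n - i)
    # by which K_n^p acts on the highest weight vector v_mu
    n = len(mu)
    value = 1
    for i in range(p):
        for j in range(n):                 # j = 0, ..., n-1 corresponds to mu_{j+1}
            value *= mu[j] - i + (n - 1 - j)
    return value

print(capelli_eigenvalue([3, 2, 2], p=2))  # non-zero, since mu_3 = 2 >= p = 2
print(capelli_eigenvalue([3, 2, 1], p=2))  # 0, since mu_3 = 1 < p = 2
```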
## Appendix. A glimpse on the superalgebraic method of virtual variables
In this section, we summarize the main features of the superalgebraic method of virtual variables. We follow [8] and [9].
### The general linear Lie super algebra \(gl(m|n)\)
Given a vector space \(V\) of dimension \(n,\) we will regard it as a subspace of a \(\mathbb{Z}_{2}-\)graded vector space \(V_{0}\oplus V_{1},\) where \(V_{1}=V.\) The vector space \(V_{0}\) (we assume that \(dim(V_{0})=m\) is "sufficiently large") is called the _positive virtual (auxiliary) vector space_ and \(V\) is called the _(negative) proper vector space_.
The inclusion \(V\subset V_{0}\oplus V_{1}\) induces a natural embedding of the ordinary general linear Lie algebra \(gl(n)\) of \(V=V_{1}\) into the _auxiliary_ general linear Lie _superalgebra_\(gl(m|n)\) of \(V_{0}\oplus V_{1}\) (see, e.g. [23], [39]).
Let \({\cal A}_{0}=\{\alpha_{1},\ldots,\alpha_{m_{0}}\},\)\({\cal L}=\{x_{1},x_{2},\ldots,x_{n}\}\) denote _fixed bases_ of \(V_{0}\) and \(V=V_{1},\) respectively; therefore \(|\alpha_{s}|=0\in\mathbb{Z}_{2},\) and \(|i|=1\in\mathbb{Z}_{2}.\)
Let
\[\{e_{a,b};a,b\in{\cal A}_{0}\cup{\cal L}\},\qquad|e_{a,b}|=|a|+|b|\in\mathbb{Z} _{2}\]
be the standard \(\mathbb{Z}_{2}-\)homogeneous basis of the Lie superalgebra \(gl(m|n)\) provided by the elementary matrices. The elements \(e_{a,b}\in gl(m|n)\) are \(\mathbb{Z}_{2}-\)homogeneous of \(\mathbb{Z}_{2}-\)degree \(|e_{a,b}|=|a|+|b|.\)
The superbracket of the Lie superalgebra \(gl(m|n)\) has the following explicit form:
\[[e_{a,b},e_{c,d}]=\delta_{bc}\ e_{a,d}-(-1)^{(|a|+|b|)(|c|+|d|)}\delta_{ad}\ e_{c,b},\]
\(a,b,c,d\in{\cal A}_{0}\cup{\cal L}.\)
For the sake of readability, we will frequently write \({\cal L}=\{1,2,\ldots,n\}\) in place of \({\cal L}=\{x_{1},x_{2},\ldots,x_{n}\}.\)
The elements of the sets \({\cal A}_{0},{\cal L}\) are called _positive virtual symbols_ and _negative proper symbols_, respectively.
### The supersymmetric algebra \(\mathbb{C}[M_{m|n,d}]\)
For the sake of readability, given \(n,d\in\mathbb{Z}^{+},\)\(n\leq d,\) we write
\[M_{n,d}=\left[(i|j)\right]_{i=1,\ldots,n,j=1,\ldots,d}=\left(\begin{array}{ ccc}(1|1)&\ldots&(1|d)\\ \vdots&&\vdots\\ (n|1)&\ldots&(n|d)\end{array}\right)\]
in place of
\[M_{n,d}=\left[x_{ij}\right]_{i=1,\ldots,n;j=1,\ldots,d}=\left[\begin{array}{ccc} x_{11}&\ldots&x_{1d}\\ x_{21}&\ldots&x_{2d}\\ \vdots&&\vdots\\ x_{n1}&\ldots&x_{nd}\end{array}\right].\]
(compare with eq. (3)) and, consistently,
\[\mathbb{C}[M_{n,d}]=\mathbb{C}[(i|j)]_{i=1,\ldots,n,j=1,\ldots,d}\]
in place of
\[\mathbb{C}[M_{n,d}]=\mathbb{C}[x_{ij}]_{i=1,\ldots,n,j=1,\ldots,d}\]
for the polynomial algebra in the (commutative) entries \((i|j)\) of the matrix \(M_{n,d}\).
We regard the commutative algebra \(\mathbb{C}[M_{n,d}]\) as a subalgebra of the _"auxiliary" supersymmetric algebra_
\[\mathbb{C}[M_{m|n,d}]\]
generated by the (\(\mathbb{Z}_{2}\)-graded) variables
\[(a|j),\quad a\in\mathcal{A}_{0}\cup\mathcal{L},\quad j\in\mathcal{P}=\{j=1, \ldots,d;|j|=1\in\mathbb{Z}_{2}\},\]
with \(|(a|j)|=|a|+|j|\in\mathbb{Z}_{2}\), subject to the commutation relations:
\[(a|h)(b|k)=(-1)^{|(a|h)||(b|k)|}\ (b|k)(a|h).\]
In plain words, \(\mathbb{C}[M_{m|n,d}]\) is the free supersymmetric algebra
\[\mathbb{C}\big{[}(\alpha_{s}|j),(i|j)\big{]}\]
generated by the (\(\mathbb{Z}_{2}\)-graded) variables \((\alpha_{s}|j),(i|j)\), \(j=1,2,\ldots,d\), where all the variables commute with each other, with the exception of pairs of variables \((\alpha_{s}|j),(\alpha_{t}|j)\) that skew-commute:
\[(\alpha_{s}|j)(\alpha_{t}|j)=-(\alpha_{t}|j)(\alpha_{s}|j).\]
In the standard notation of multilinear algebra, we have:
\[\mathbb{C}[M_{m|n,d}]\cong\Lambda\big{[}V_{0}\otimes P_{d}\big{]}\otimes \operatorname{Sym}\big{[}V_{1}\otimes P_{d}\big{]}\]
where \(P_{d}=(P_{d})_{1}\) denotes the trivially \(\mathbb{Z}_{2}-\)graded vector space with distinguished basis \(\mathcal{P}=\{j=1,\ldots,d;|j|=1\in\mathbb{Z}_{2}\}\).
### Left superderivations and left superpolarizations
A _left superderivation_\(D^{l}\) (\(\mathbb{Z}_{2}-\)homogeneous of degree \(|D^{l}|\)) (see, e.g. [39], [23]) on \(\mathbb{C}[M_{m|n,d}]\) is an element of the superalgebra \(End_{\mathbb{C}}[\mathbb{C}[M_{m|n,d}]]\) that satisfies the "Leibniz rule"
\[D^{l}(\mathbf{p}\cdot\mathbf{q})=D^{l}(\mathbf{p})\cdot\mathbf{q}+(-1)^{|D^{l }||\mathbf{p}|}\mathbf{p}\cdot D^{l}(\mathbf{q}),\]
for every \(\mathbb{Z}_{2}-\)homogeneous element \(\mathbf{p}\in\mathbb{C}[M_{m|n,d}]\) of degree \(|\mathbf{p}|\).
Given two symbols \(a,b\in\mathcal{A}_{0}\cup\mathcal{L}\), the _left superpolarization_\(D^{l}_{a,b}\) of \(b\) to \(a\) is the unique _left_ superderivation of \(\mathbb{C}[M_{m|n,d}]\) of \(\mathbb{Z}_{2}-\)degree \(|D^{l}_{a,b}|=|a|+|b|\in\mathbb{Z}_{2}\) such that
\[D^{l}_{a,b}\left((c|j)\right)=\delta_{bc}\ (a|j),\ c\in\mathcal{A}_{0}\cup\mathcal{L},\ j=1,\ldots,d.\]
Informally, we say that the operator \(D^{l}_{a,b}\)_annihilates_ the symbol \(b\) and _creates_ the symbol \(a\).
### The superalgebra \(\mathbb{C}[M_{m|n,d}]\) as a \(\mathbf{U}(gl(m|n))\)-module
Since
\[D^{l}_{a,b}D^{l}_{c,d}-(-1)^{(|a|+|b|)(|c|+|d|)}D^{l}_{c,d}D^{l}_{a,b}=\delta_ {b,c}D^{l}_{a,d}-(-1)^{(|a|+|b|)(|c|+|d|)}\delta_{a,d}D^{l}_{c,b},\]
the map
\[e_{a,b}\mapsto D^{l}_{a,b},\qquad a,b\in\mathcal{A}_{0}\cup\mathcal{L}\]
is a Lie superalgebra morphism from \(gl(m|n)\) to \(End_{\mathbb{C}}\big{[}\mathbb{C}[M_{m|n,d}]\big{]}\) and, hence, it uniquely defines a representation:
\[\varrho:\mathbf{U}(gl(m|n))\to End_{\mathbb{C}}[\mathbb{C}[M_{m|n,d}]],\]
where \(\mathbf{U}(gl(m|n))\) is the enveloping superalgebra of \(gl(m|n)\).
In the following, we always regard the superalgebra \(\mathbb{C}[M_{m|n,d}]\) as a \(\mathbf{U}(gl(m|n))-\)supermodule, with respect to the action induced by the representation \(\varrho\):
\[e_{a,b}\cdot\mathbf{p}=D^{l}_{a,b}(\mathbf{p}),\]
for every \(\mathbf{p}\in\mathbb{C}[M_{m|n,d}]\).
We recall that the \(\mathbf{U}(gl(m|n))-\)module \(\mathbb{C}[M_{m|n,d}]\) is a semisimple module, whose simple submodules are - up to isomorphism - _Schur supermodules_ (see, e.g. [4], [5], [2]; for a more traditional presentation, see also [15]).
Clearly, \(\mathbf{U}(gl(0|n))=\mathbf{U}(gl(n))\) is a subalgebra of \(\mathbf{U}(gl(m|n))\) and the subalgebra \(\mathbb{C}[M_{n,d}]\) is a \(\mathbf{U}(gl(n))-\)submodule of \(\mathbb{C}[M_{m|n,d}]\).
### The virtual algebra \(Virt(m,n)\) and the virtual presentations of elements in \({\bf U}(gl(n))\)
We say that a product
\[e_{a_{m},b_{m}}\cdots e_{a_{1},b_{1}}\in{\bf U}(gl(m|n)),\quad a_{i},b_{i}\in{ \cal A}_{0}\cup{\cal L},\ i=1,\ldots,m\]
is an _irregular expression_ whenever there exists a right subword
\[e_{a_{i},b_{i}}\cdots e_{a_{2},b_{2}}e_{a_{1},b_{1}},\]
\(i\leq m\) and a virtual symbol \(\gamma\in{\cal A}_{0}\) such that
\[\#\{j;b_{j}=\gamma,j\leq i\}>\#\{j;a_{j}=\gamma,j<i\}.\]
The meaning of an irregular expression in terms of the action of \({\bf U}(gl(m|n))\) by left superpolarization on the algebra \({\mathbb{C}}[M_{m|n,d}]\) is that there exists a virtual symbol \(\gamma\) and a right subsequence in which the symbol \(\gamma\) is _annihilated_ more times than it was already _created_ and, therefore, the action of an irregular expression on the algebra \({\mathbb{C}}[M_{n,d}]\) is _zero_.
**Example 7**.: _Let \(\gamma\in{\cal A}_{0}\) and \(x_{i},x_{j}\in{\cal L}.\) The product_
\[e_{\gamma,x_{j}}e_{x_{i},\gamma}e_{x_{j},\gamma}e_{\gamma,x_{i}}\]
_is an irregular expression._
Let \({\bf Irr}\) be the _left ideal_ of \({\bf U}(gl(m|n))\) generated by the set of irregular expressions.
**Proposition 19**.: _The superpolarization action of any element of \({\bf Irr}\) on the subalgebra \({\mathbb{C}}[M_{n,d}]\subset{\mathbb{C}}[M_{m|n,d}]\) - via the representation \(\varrho\) - is identically zero._
**Proposition 20**.: _The sum \({\bf U}(gl(0|n))+{\bf Irr}\) is a direct sum of vector subspaces of \({\bf U}(gl(m|n)).\)_
**Proposition 21**.: _The direct sum vector subspace \({\bf U}(gl(0|n))\oplus{\bf Irr}\) is a subalgebra of \({\bf U}(gl(m|n)).\)_
The subalgebra
\[Virt(m,n)={\bf U}(gl(0|n))\oplus{\bf Irr}\subset{\bf U}(gl(m|n)).\]
is called the _virtual algebra_.
**Proposition 22**.: _The left ideal \({\bf Irr}\) of \({\bf U}(gl(m|n))\) is a two sided ideal of \(Virt(m,n)\)._
The _Capelli devirtualization epimorphism_ is the surjection
\[{\mathfrak{p}}:Virt(m,n)={\bf U}(gl(0|n))\oplus{\bf Irr}\twoheadrightarrow{\bf U }(gl(0|n))={\bf U}(gl(n))\]
with \(Ker({\mathfrak{p}})={\bf Irr}\).
Any element \({\bf M}\in Virt(m,n)\) defines an element \({\bf m}\in{\bf U}(gl(n))\) - via the map \({\mathfrak{p}}\) - and \({\bf M}\) is called a _virtual presentation_ of \({\bf m}\).
Furthermore,
**Proposition 23**.: _The subalgebra \({\mathbb{C}}[M_{n,d}]\subset{\mathbb{C}}[M_{m|n,d}]\) is invariant with respect to the action of the subalgebra \(Virt(m,n).\)_
**Proposition 24**.: _For every element \({\bf m}\in{\bf U}(gl(n))\), the action of \({\bf m}\) on the subalgebra \({\mathbb{C}}[M_{n,d}]\) is the same of the action of any of its virtual presentation \({\bf M}\in Virt(m,n).\) In symbols,_
\[if\quad{\mathfrak{p}}({\bf M})={\bf m}\quad then\quad{\bf m}\cdot{\bf P}={\bf M }\cdot{\bf P},\quad for\ every\ {\bf P}\in{\mathbb{C}}[M_{n,d}].\]
Since the map \({\mathfrak{p}}\) is a surjection, any element \({\bf m}\in{\bf U}(gl(n))\) admits several virtual presentations. In the sequel, we even take virtual presentations as the _definition_ of special elements in \({\bf U}(gl(n))\), and this method will turn out to be quite effective.
The superalgebra \({\bf U}(gl(m|n))\) is a Lie module with respect to the adjoint representation \(Ad_{gl(m|n)}\). Since \(gl(n)=gl(0|n)\) is a Lie subalgebra of \(gl(m|n)\), then \({\bf U}(gl(m|n))\) is a \(gl(n)-\)module with respect to the adjoint action \(Ad_{gl(n)}\) of \(gl(n)\).
**Proposition 25**.: _The virtual algebra \(Virt(m,n)\) is a submodule of \({\bf U}(gl(m|n))\) with respect to the adjoint action \(Ad_{gl(n)}\) of \(gl(n)\)._
**Proposition 26**.: _The Capelli epimorphism_
\[{\mathfrak{p}}:Virt(m,n)\twoheadrightarrow{\bf U}(gl(n))\]
_is an \(Ad_{gl(n)}-\)equivariant map._
**Corollary 18**.: _The epimorphism \({\mathfrak{p}}\) maps any \(Ad_{gl(n)}-\)invariant element \({\bf m}\in Virt(m,n)\) to a central element of \({\bf U}(gl(n))\)._
_Balanced monomials_ are elements of the algebra \({\bf U}(gl(m|n))\) of the form:
* \(e_{i_{1},\gamma_{p_{1}}}\cdots e_{i_{k},\gamma_{p_{k}}}\cdot e_{\gamma_{p_{1} },j_{1}}\cdots e_{\gamma_{p_{k}},j_{k}},\)
* \(e_{i_{1},\theta_{q_{1}}}\cdots e_{i_{k},\theta_{q_{k}}}\cdot e_{\theta_{q_{1}}, \gamma_{p_{1}}}\cdots e_{\theta_{q_{k}},\gamma_{p_{k}}}\cdot e_{\gamma_{p_{1}},j_ {1}}\cdots e_{\gamma_{p_{k}},j_{k}}\),
* and so on,
where \(i_{1},\ldots,i_{k},j_{1},\ldots,j_{k}\in L\), i.e., the \(i_{1},\ldots,i_{k},j_{1},\ldots,j_{k}\) are \(k\) proper (negative) symbols, and the \(\gamma_{p_{1}},\ldots,\gamma_{p_{k}},\ldots,\theta_{q_{1}},\ldots,\theta_{q_{k}},\ldots\) are virtual symbols. In plain words, a balanced monomial is a product of two or more factors where the rightmost one _annihilates_ (by superpolarization) the \(k\) proper symbols \(j_{1},\ldots,j_{k}\) and _creates_ (by superpolarization) some virtual symbols; the leftmost one _annihilates_ all the virtual symbols and _creates_ the \(k\) proper symbols \(i_{1},\ldots,i_{k}\); between these two factors, there might be further factors that annihilate and create virtual symbols only.
**Proposition 27**.: _Every balanced monomial belongs to \(Virt(m,n)\). Hence, the Capelli epimorphism \(\mathfrak{p}\) maps balanced monomials to elements of \(\mathbf{U}(gl(n))\)._
### Bitableaux monomials and Capelli bitableaux in \(\mathbf{U}(gl(n))\)
We will introduce two classes of remarkable elements of the enveloping algebra \(\mathbf{U}(gl(n))\), that we call _bitableaux monomials_, _Capelli bitableaux_, respectively.
Let \(\lambda\vdash h\) be a partition, and label the boxes of its Ferrers diagram with the numbers \(1,2,\ldots,h\) in the following way:
\[\begin{array}{ccccc}1&2&\cdots&\cdots&\lambda_{1}\\ \lambda_{1}+1&\lambda_{1}+2&\cdots&\lambda_{1}+\lambda_{2}\\ \cdots&\cdots&\cdots&\\ \cdots&\cdots&h\end{array}.\]
A _Young tableau_\(T\) of shape \(\lambda\) over the alphabet \(\mathcal{A}=\mathcal{A}_{0}\cup\mathcal{L}\) is a map \(T:\underline{h}=\{1,2,\ldots,h\}\rightarrow\mathcal{A}\); the element \(T(i)\) is the symbol in the cell \(i\) of the tableau \(T\).
The sequences
\[\begin{array}{l}T(1)T(2)\cdots T(\lambda_{1}),\\ T(\lambda_{1}+1)T(\lambda_{1}+2)\cdots T(\lambda_{1}+\lambda_{2}),\\ \cdots\cdots\end{array}\]
are called the _row words_ of the Young tableau \(T\).
We will also denote a Young tableau by its sequence of rows words, that is \(T=(\omega_{1},\omega_{2},\ldots,\omega_{p})\). Furthermore, the _word of the tableau_\(T\) is the concatenation
\[w(T)=\omega_{1}\omega_{2}\cdots\omega_{p}.\]
The _content_ of a tableau \(T\) is the function \(c_{T}:\mathcal{A}\rightarrow\mathbb{N}\),
\[c_{T}(a)=\sharp\{i\in\underline{h};\ T(i)=a\}.\]
Given a shape/partition \(\lambda\), we assume that \(|{\cal A}_{0}|=m\geq\widetilde{\lambda}_{1}\), where \(\widetilde{\lambda}\) denotes the conjugate shape/partition of \(\lambda\). Let us denote by \(\alpha_{1},\ldots,\alpha_{p}\in{\cal A}_{0}\) an _arbitrary_ family of _distinct positive symbols_. Set
\[C^{*}_{\lambda}=\left(\begin{array}{c}\alpha_{1}\ldots\ldots\ldots\alpha_{1} \\ \alpha_{2}\ldots\ldots\alpha_{2}\\ \ldots\ldots\\ \alpha_{p}\ldots\alpha_{p}\end{array}\right). \tag{16}\]
The tableaux of kind (16) are called _virtual Coderuyts tableaux_ of shape \(\lambda\).
Let \(S\) and \(T\) be two Young tableaux of same shape \(\lambda\vdash h\) on the alphabet \({\cal A}_{0}\cup{\cal L}\):
\[S=\left(\begin{array}{c}z_{i_{1}}\ldots\ldots\ldots z_{i_{\lambda_{1}}}\\ z_{j_{1}}\ldots\ldots z_{j_{\lambda_{2}}}\\ \ldots\ldots\\ z_{s_{1}}\ldots z_{s_{\lambda_{p}}}\end{array}\right),\qquad T=\left( \begin{array}{c}z_{h_{1}}\ldots\ldots\ldots z_{h_{\lambda_{1}}}\\ z_{k_{1}}\ldots\ldots z_{k_{\lambda_{2}}}\\ \ldots\ldots\\ z_{t_{1}}\ldots z_{t_{\lambda_{p}}}\end{array}\right).\]
To the pair \((S,T)\), we associate the _bitableau monomial_:
\[e_{S,T}=e_{z_{i_{1}},z_{h_{1}}}\cdots e_{z_{i_{\lambda_{1}}},z_{h_{\lambda_{1} }}}e_{z_{j_{1}},z_{k_{1}}}\cdots e_{z_{j_{\lambda_{2}}},z_{k_{\lambda_{2}}}} \cdots\cdots e_{z_{s_{1}},z_{t_{1}}}\cdots e_{z_{s_{\lambda_{p}}},z_{t_{ \lambda_{p}}}}\]
in \({\bf U}(gl(m|n))\).
Given a pair of Young tableaux \(S,T\) of the same shape \(\lambda\) on the proper alphabet \(L\), consider the elements
\[e_{S,C^{*}_{\lambda}}\;e_{C^{*}_{\lambda},T}\in{\bf U}(gl(m|n)).\]
Since these elements are _balanced monomials_ in \({\bf U}(gl(m|n))\), then they belong to the _virtual subalgebra_\(Virt(m,n)\).
Hence, we can consider their images in \({\bf U}(gl(n))\) with respect to the Capelli epimorphism \(\mathfrak{p}\).
We set
\[\mathfrak{p}\Big{(}e_{S,C^{*}_{\lambda}}\;e_{C^{*}_{\lambda},T}\Big{)}=[S|T] \in{\bf U}(gl(n)), \tag{17}\]
and call the element \([S|T]\) a _Capelli bitableau_.
The elements defined in (17) do not depend on the choice of the virtual Coderuyts tableau \(C^{*}_{\lambda}\).
### Biproducts and bitableaux in \(\mathbb{C}[M_{m|n,d}]\)
Embed the algebra
\[\mathbb{C}[M_{m|n,d}]=\mathbb{C}[(\alpha_{s}|j),(i|j)]\]
into the (supersymmetric) algebra \(\mathbb{C}[(\alpha_{s}|j),(i|j),(\gamma|j)]\) generated by the (\(\mathbb{Z}_{2}\)-graded) variables \((\alpha_{s}|j),(i|j),(\gamma|j),\,j=1,2,\ldots,d\), where
\[|(\gamma|j)|=1\in\mathbb{Z}_{2}\ \ for\ every\ j=1,2,\ldots,d,\]
and denote by \(D^{l}_{z_{i},\gamma}\) the superpolarization of \(\gamma\) to \(z_{i}\).
Let \(\omega=z_{1}z_{2}\cdots z_{p}\) be a word on \(\mathcal{A}_{0}\cup\mathcal{L}\), and \(\varpi=j_{t_{1}}j_{t_{2}}\cdots j_{t_{q}}\) a word on the alphabet \(P=\{1,2,\ldots,d\}\). The _biproduct_
\[(\omega|\varpi)=(z_{1}z_{2}\cdots z_{p}|j_{t_{1}}j_{t_{2}}\cdots j_{t_{q}})\]
is the element
\[D^{l}_{z_{1},\gamma}D^{l}_{z_{2},\gamma}\cdots D^{l}_{z_{p},\gamma}\Big{(}(\gamma|j_{t_{1}})(\gamma|j_{t_{2}})\cdots(\gamma|j_{t_{q}})\Big{)}\in\mathbb{C}[M_{m|n,d}]\]
if \(p=q\) and is set to be zero otherwise.
**Claim 1**.: _The biproduct \((\omega|\varpi)=(z_{1}z_{2}\cdots z_{p}|j_{t_{1}}j_{t_{2}}\cdots j_{t_{q}})\) is supersymmetric in the \(z\)'s and skew-symmetric in the \(j\)'s. In symbols_
1. \((z_{1}z_{2}\cdots z_{i}z_{i+1}\cdots z_{p}|j_{t_{1}}j_{t_{2}}\cdots j_{t_{q}})=\)__ \[(-1)^{|z_{i}||z_{i+1}|}(z_{1}z_{2}\cdots z_{i+1}z_{i}\cdots z_{p}|j_{t_{1}}j_{t_ {2}}\cdots j_{t_{q}})\]
2. \((z_{1}z_{2}\cdots z_{i}z_{i+1}\cdots z_{p}|j_{t_{1}}j_{t_{2}}\cdots j_{t_{i}}j _{t_{i+1}}\cdots j_{t_{q}})=\)__ \[-\left(z_{1}z_{2}\cdots z_{i}z_{i+1}\cdots z_{p}|j_{t_{1}}\cdots j_{t_{i+1}} j_{t_{i}}\cdots j_{t_{q}}\right).\]
**Proposition 28**.: _(Laplace expansions) We have_
1. \((\omega_{1}\omega_{2}|\varpi)=\Sigma_{(\varpi)}\ (-1)^{|\varpi_{(1)}|| \omega_{2}|}\ (\omega_{1}|\varpi_{(1)})(\omega_{2}|\varpi_{(2)}).\)__
2. \((\omega|\varpi_{1}\varpi_{2})=\Sigma_{(\omega)}\ (-1)^{|\varpi_{1}||\omega_{(2)}|}\ (\omega_{(1)}|\varpi_{1})(\omega_{(2)}|\varpi_{2}).\)
_where_
\[\triangle(\varpi)=\Sigma_{(\varpi)}\ \varpi_{(1)}\otimes\varpi_{(2)},\ \ \ \triangle(\omega)=\Sigma_{(\omega)}\ \omega_{(1)}\otimes\omega_{(2)}\]
_denote the coproducts in the Sweedler notation of the elements \(\varpi\) and \(\omega\) in the free exterior Hopf algebra generated by \(j=1,2,\ldots,d\) and in the supersymmetric Hopf algebra of \(W\) (see, e.g. [2]), respectively._
Let \(\omega=i_{1}i_{2}\cdots i_{p}\), \(\varpi=j_{1}j_{2}\cdots j_{p}\) be words on the negative alphabet \({\cal L}=\{1,2,\ldots,n\}\) and on the negative alphabet \({\cal P}=\{1,2,\ldots,d\}\), respectively.
From Proposition 28, we infer
**Corollary 19**.: _The biproduct of the two words \(\omega\) and \(\varpi\)_
\[(\omega|\varpi)=(i_{1}i_{2}\cdots i_{p}|j_{1}j_{2}\cdots j_{p})\]
_is the signed minor:_
\[(\omega|\varpi)=(-1)^{\binom{p}{2}}\ det\Big{(}\ (i_{r}|j_{s})\ \Big{)}_{r,s=1,2, \ldots,p}\in{\mathbb{C}}[M_{n,d}].\]
Following the notation introduced in the previous sections, let
\[Super[V_{0}\oplus V_{1}]=Sym[V_{0}]\otimes\Lambda[V_{1}]\]
denote the _(super)symmetric_ algebra of the space
\[V_{0}\oplus V_{1}\]
(see, e.g. [39]).
By multilinearity, the algebra \(Super[V_{0}\oplus V_{1}]\) is the same as the superalgebra \(Super[{\cal A}_{0}\cup{\cal L}]\) generated by the "variables"
\[\alpha_{1},\ldots,\alpha_{m_{0}}\in{\cal A}_{0},\quad 1,\ldots,n\in L,\]
modulo the congruences
\[zz^{\prime}=(-1)^{|z||z^{\prime}|}z^{\prime}z,\quad z,z^{\prime}\in{\cal A}_{0 }\cup{\cal L}.\]
Let \(d^{l}_{z,z^{\prime}}\) denote the (left)polarization operator of \(z^{\prime}\) to \(z\) on
\[Super[W]=Super[{\cal A}_{0}\cup{\cal L}],\]
that is the unique superderivation of \({\mathbb{Z}}_{2}\)-degree
\[|z|+|z^{\prime}|\in{\mathbb{Z}}_{2}\]
such that
\[d^{l}_{z,z^{\prime}}(z^{\prime\prime})=\delta_{z^{\prime},z^{\prime\prime}} \cdot z,\]
for every \(z,z^{\prime},z^{\prime\prime}\in{\cal A}_{0}\cup{\cal L}\).
Clearly, the map
\[e_{z,z^{\prime}}\to d^{l}_{z,z^{\prime}}\]
is a Lie superalgebra map and, therefore, induces a structure of
\[gl(m|n)-module\]
on \(Super[{\cal A}_{0}\cup{\cal L}]=Super[V_{0}\oplus V_{1}]\).
**Proposition 29**.: _Let \(\varpi=j_{t_{1}}j_{t_{2}}\cdots j_{t_{q}}\) be a word on \(P=\{1,2,\ldots,d\}\). The map_
\[\Phi_{\varpi}:\omega\mapsto(\omega|\varpi),\]
\(\omega\) _any word on \(\mathcal{A}_{0}\cup\mathcal{L}\), uniquely defines a \(gl(m|n)-\)equivariant linear operator_
\[\Phi_{\varpi}:Super[\mathcal{A}_{0}\cup\mathcal{L}]\rightarrow\mathbb{C}[M_{m |n,d}],\]
_that is_
\[\Phi_{\varpi}\big{(}e_{z,z^{\prime}}\cdot\omega\big{)}=\Phi_{\varpi}\big{(}d^{ l}_{z,z^{\prime}}(\omega)\big{)}=D^{l}_{z,z^{\prime}}\big{(}(\omega|\varpi) \big{)}=e_{z,z^{\prime}}\cdot(\omega|\varpi),\]
_for every \(z,z^{\prime}\in\mathcal{A}_{0}\cup\mathcal{L}\)._
With a slight abuse of notation, we will write the identity in Proposition 29 in the form
\[D^{l}_{z,z^{\prime}}\big{(}(\omega|\varpi)\big{)}=(d^{l}_{z,z^{\prime}}(\omega )|\varpi). \tag{18}\]
Let \(S=(\omega_{1},\omega_{2},\ldots,\omega_{p})\) and \(T=(\varpi_{1},\varpi_{2},\ldots,\varpi_{p})\) be Young tableaux on \(\mathcal{A}_{0}\cup\mathcal{L}\) and \(P=\{1,2,\ldots,d\}\) of shapes \(\lambda\) and \(\mu\), respectively.
If \(\lambda=\mu\), the _Young bitableau_\((S|T)\) is the element of \(\mathbb{C}[M_{m|n,d}]\) defined as follows:
\[(S|T)=\left(\begin{array}{c}\omega_{1}\\ \omega_{2}\\ \vdots\\ \omega_{p}\end{array}\,\middle|\,\begin{array}{c}\varpi_{1}\\ \varpi_{2}\\ \vdots\\ \varpi_{p}\end{array}\right)=\pm\ (\omega_{1}|\varpi_{1})(\omega_{2}|\varpi_{2})\cdots(\omega_{p}|\varpi_{p}),\]
where
\[\pm=(-1)^{|\omega_{2}||\varpi_{1}|+|\omega_{3}|(|\varpi_{1}|+|\varpi_{2}|)+ \cdots+|\omega_{p}|(|\varpi_{1}|+|\varpi_{2}|+\cdots+|\varpi_{p-1}|)}.\]
If \(\lambda\neq\mu\), the _Young bitableau_\((S|T)\) is set to be zero.
By naturally extending the slight abuse of notation (18), the action of any polarization on bitableaux can be explicitly described:
**Proposition 30**.: _Let \(z,z^{\prime}\in\mathcal{A}_{0}\cup\mathcal{L}\), and let \(S=(\omega_{1},\ldots,\omega_{p})\), \(T=(\varpi_{1},\ldots,\varpi_{p})\). We have the following identity:_
\[e_{z,z^{\prime}}\cdot(S\,|\,T)\ =\ D^{l}_{z,z^{\prime}}\ \big{(}\left(\begin{array}{c}\omega_{1}\\ \omega_{2}\\ \vdots\\ \omega_{p}\end{array}\,\middle|\,\begin{array}{c}\varpi_{1}\\ \varpi_{2}\\ \vdots\\ \varpi_{p}\end{array}\right)\big{)}\] \[=\ \sum_{s=1}^{p}\ (-1)^{(|z|+|z^{\prime}|)\epsilon_{s}}\ \left(\begin{array}{c}\omega_{1}\\ \omega_{2}\\ \vdots\\ d^{l}_{z,z^{\prime}}(\omega_{s})\\ \vdots\\ \omega_{p}\end{array}\,\middle|\,\begin{array}{c}\varpi_{1}\\ \varpi_{2}\\ \vdots\\ \varpi_{s}\\ \vdots\\ \varpi_{p}\end{array}\right),\]
_where_
\[\epsilon_{1}=1,\quad\epsilon_{s}=|\omega_{1}|+\cdots+|\omega_{s-1}|,\quad s=2, \ldots,p.\]
**Example 8**.: _Let \(\alpha_{i}\in\mathcal{A}_{0}\), \(1,2,3,4\in L\), \(|D_{\alpha_{i},2}|=1\). Then_
\[e_{\alpha_{i},2}\cdot\left(\begin{array}{cc}1&3&2\\ 2&3\\ 4&2\end{array}\left|\begin{array}{cc}1&2&3\\ 2&3\\ 3&1\end{array}\right.\right)=D^{l}_{\alpha_{i},2}\ \left(\begin{array}{cc}1&3&2\\ 2&3\\ 4&2\end{array}\left|\begin{array}{cc}1&2&3\\ 2&3\\ 3&1\end{array}\right.\right)=\]
\[=\left(\begin{array}{cc}1&3&\alpha_{i}\\ 2&3\\ 4&2\end{array}\left|\begin{array}{cc}1&2&3\\ 2&3\\ 3&1\end{array}\right.\right)-\left(\begin{array}{cc}1&3&2\\ \alpha_{i}&3\\ 4&2\end{array}\left|\begin{array}{cc}1&2&3\\ 2&3\\ 3&1\end{array}\right.\right)+\left(\begin{array}{cc}1&3&2\\ 2&3\\ 4&\alpha_{i}\end{array}\left|\begin{array}{cc}1&2&3\\ 2&3\\ 3&1\end{array}\right.\right).\]
|
2304.07906 | Sidon sets, sum-free sets and linear codes | Finding the maximum size of a Sidon set in $\mathbb{F}_2^t$ is of research
interest for more than 40 years. In order to tackle this problem we recall a
one-to-one correspondence between sum-free Sidon sets and linear codes with
minimum distance greater or equal 5. Our main contribution about codes is a new
non-existence result for linear codes with minimum distance 5 based on a
sharpening of the Johnson bound. This gives, on the Sidon set side, an
improvement of the general upper bound for the maximum size of a Sidon set.
Additionally, we characterise maximal Sidon sets, that are those Sidon sets
which can not be extended by adding elements without loosing the Sidon
property, up to dimension 6 and give all possible sizes for dimension 7 and 8
determined by computer calculations. | Ingo Czerwinski, Alexander Pott | 2023-04-16T22:02:57Z | http://arxiv.org/abs/2304.07906v3 | # Sidon sets, sum-free sets and linear codes
###### Abstract
Finding the maximum size of a Sidon set in \(\mathbb{F}_{2}^{t}\) has been of research interest for more than 40 years. In order to tackle this problem we recall a one-to-one correspondence between sum-free Sidon sets and linear codes with minimum distance greater than or equal to 5. Our main contribution about codes is a new non-existence result for linear codes with minimum distance 5 based on a sharpening of the Johnson bound. This gives, on the Sidon set side, an improvement of the general upper bound for the maximum size of a Sidon set. Additionally, we characterise maximal Sidon sets, that is, those Sidon sets which cannot be extended by adding elements without losing the Sidon property, up to dimension 6, and give all possible sizes for dimensions 7 and 8 determined by computer calculations.
**Keywords** Sidon set, sum-free set, maximum size, linear binary code, codes bound.
**Mathematics Subject Classification (2020)** 11B13, 94B05, 94B65
## 1 Introduction
In the early 1930s Sidon introduced \(B_{2}\)-sequences of positive integers in connection with his work on Fourier analysis [15], [16]. Later, Babai and Sos [1] generalised the definition of \(B_{2}\)-sequences to arbitrary groups and called them Sidon sets. In this work, we focus only on Sidon sets in \(\mathbb{F}_{2}^{t}\), the \(t\)-dimensional vector space over the binary field \(\mathbb{F}_{2}\).
**Definition 1.1**.: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\). \(M\) is called Sidon if \(m_{1}+m_{2}\neq m_{3}+m_{4}\) for all pairwise distinct \(m_{1},m_{2},m_{3},m_{4}\in M\)._
Since the definition of Sidon sets is based on sums, we introduce the following notation: Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\). For any \(k\geq 2\) we call
\[\mathcal{S}_{k}(M)=\{m_{1}+\cdots+m_{k}:m_{1},\ldots,m_{k}\in M\}\]
the _\(k\)-sums_ of \(M\) and
\[\mathcal{S}_{k}^{*}(M)=\{m_{1}+\cdots+m_{k}:m_{1},\ldots,m_{k}\in M\text{ pairwise distinct}\}\]
the _\(k\)-star-sums_ of \(M\). In this paper, only 2, 3 and 4-(star)-sums are considered. The characteristic 2 of \(\mathbb{F}_{2}^{t}\) leads to the following frequently used properties:
1. \(\mathcal{S}_{2}(M)=\mathcal{S}_{2}^{*}(M)\cup\{0\}\)_;_
2. \(\mathcal{S}_{3}(M)=\mathcal{S}_{3}^{*}(M)\cup M\)_;_
3. \(\mathcal{S}_{4}(M)=\mathcal{S}_{4}^{*}(M)\cup\mathcal{S}_{2}^{*}(M)\cup\{0\}\)_._
We recall also, that the _(Hamming) weight_ of a vector is the number of its non-zero entries. The weight of a vector is of interest when considering the sums of the elements of a set containing the standard basis.
The Sidon property of \(M\) can be characterised in terms of 2-star-sums and 3-star-sums by the following equivalent statements:
1. \(M\) _is Sidon;_
2. \(|\mathcal{S}_{2}^{*}(M)|=\binom{|M|}{2}\)_;_
3. \(\mathcal{S}_{3}^{*}(M)\cap M=\emptyset\)_._
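The equivalent conditions above are easy to test by machine. The following sketch is ours and not part of the paper: it identifies \(\mathbb{F}_{2}^{t}\) with the integers \(0,\ldots,2^{t}-1\), so that vector addition becomes bitwise XOR, and checks conditions (ii) and (iii) for a given finite set of integers.

```python
from itertools import combinations
from math import comb

def two_star_sums(M):
    # S_2^*(M): sums of two distinct elements (addition in F_2^t is XOR)
    return {a ^ b for a, b in combinations(M, 2)}

def three_star_sums(M):
    # S_3^*(M): sums of three pairwise distinct elements
    return {a ^ b ^ c for a, b, c in combinations(M, 3)}

def is_sidon(M):
    M = set(M)
    cond_ii = len(two_star_sums(M)) == comb(len(M), 2)
    cond_iii = three_star_sums(M).isdisjoint(M)
    assert cond_ii == cond_iii        # conditions (ii) and (iii) always agree
    return cond_ii

print(is_sidon({0, 1, 2, 4, 8, 15}))  # True
print(is_sidon({0, 1, 2, 3}))         # False, since 0 + 3 = 1 + 2
```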
Now that we have briefly introduced Sidon sets, in Section 2 we will begin to discuss the fundamental problem of their maximum size. For this we study maximal Sidon sets, which are Sidon sets that cannot be extended by adding new elements without losing the Sidon property. Section 3 introduces sum-free sets which are used to recall a one-to-one correspondence between additive structures and linear codes in Section 4. In Section 5 we give a new non-existence result for linear codes with minimum distance 5 based on a sharpening of the Johnson bound, which, on the Sidon set side, gives an improvement of the general upper bound for the maximum size of a Sidon set (Theorem 5.3).
We note that several statements in Section 2 and 4 have also been investigated in connection with almost perfect nonlinear (APN) functions, which are special types of Sidon sets. For instance, Proposition 2.2 and 2.4 can be found, in APN language, in [3], our Proposition 4.1 is related to Theorem 5 in [4], and Theorem 2.10 is related to Proposition 4 in [4].
Another combinatorial problem and its connections to linear codes, which contains Sidon sets as a special case, is studied in [17].
## 2 Maximal Sidon sets
The fundamental problem about a Sidon set is the question about its maximum size, which was already discussed about 40 years ago by Babai and Sos [1]. We will denote by \(s_{max}(\mathbb{F}_{2}^{t})\) the maximum size of a Sidon set in \(\mathbb{F}_{2}^{t}\). An upper bound arises directly from the fact that all 2-star-sums of \(M\) have to be distinct and non-zero, hence
\[\binom{|M|}{2}=\frac{|M|\left(|M|-1\right)}{2}\leq\left|\mathbb{F}_{2}^{t} \setminus\left\{0\right\}\right|.\]
This bound is still the best known upper bound and was translated by Carlet and Mesnager [5] into an explicit form:
\[s_{max}(\mathbb{F}_{2}^{t})\leq\left\lfloor\frac{1+\sqrt{2^{t+3}-7}}{2}\right \rfloor.\]
In the next Proposition we rewrite this bound slightly and call it from now on the _trivial upper bound_ for the maximum size of a Sidon set. Later in Theorem 5.3 we will improve this trivial upper bound.
**Proposition 2.1**.: _For any \(t\), an upper bound for the maximum size of a Sidon set in \(\mathbb{F}_{2}^{t}\) is given by_
\[s_{max}(\mathbb{F}_{2}^{t})\leq\begin{cases}2^{\frac{t+1}{2}}&\text{ for $t$ odd},\\ \left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor&\text{ for $t$ even}.\end{cases} \tag{1}\]
Proof.: Let \(M\subseteq\mathbb{F}_{2}^{t}\) be a Sidon set. Then the sums \(m_{1}+m_{2}\) are distinct and non-zero for all distinct \(m_{1},m_{2}\in M\) by the Sidon definition. Therefore
\[\binom{|M|}{2}=\frac{|M|\left(|M|-1\right)}{2}\leq\left|\mathbb{F}_{2}^{t} \setminus\left\{0\right\}\right|.\]
When \(t\) is odd, \(|M|=2^{\frac{t+1}{2}}\) fulfils this inequality, but \(|M|=2^{\frac{t+1}{2}}+1\) does not.
Let \(t\) be even. We show that \(\left\lfloor\frac{1+\sqrt{2^{t+3}-7}}{2}\right\rfloor=\left\lfloor\sqrt{2^{t+ 1}}+0.5\right\rfloor\). Of course, we have \(\frac{1+\sqrt{2^{t+3}-7}}{2}\ \leq\ \sqrt{2^{t+1}}+0.5\). Assume that there exists \(k\in\mathbb{N}\) such that
\[\frac{1+\sqrt{2^{t+3}-7}}{2}\ <\ k\ \leq\ \sqrt{2^{t+1}}+0.5,\]
which is equivalent to
\[2^{t+1}-\frac{7}{4}\ <\ (k-\frac{1}{2})^{2}\ \leq\ 2^{t+1}\]
and
\[2^{t+1}-2\;<\;k(k-1)\;\leq\;2^{t+1}-\frac{1}{4}.\]
But this contradicts the fact that \(k(k-1)\) is even.
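For concrete values of \(t\), the case distinction of Proposition 2.1 and the closed form of Carlet and Mesnager can be compared directly. The snippet below is ours; `isqrt` keeps all computations in exact integer arithmetic.

```python
from math import isqrt

def bound_closed_form(t):
    # floor((1 + sqrt(2^(t+3) - 7)) / 2), computed exactly
    return (1 + isqrt(2 ** (t + 3) - 7)) // 2

def bound_cases(t):
    # the case distinction of Proposition 2.1
    if t % 2 == 1:
        return 2 ** ((t + 1) // 2)
    x = 2 ** (t + 1)
    s = isqrt(x)
    # floor(sqrt(x) + 0.5): round sqrt(x) to the nearest integer, exactly
    return s + 1 if 4 * x >= (2 * s + 1) ** 2 else s

for t in range(1, 40):
    assert bound_closed_form(t) == bound_cases(t)
print([bound_cases(t) for t in range(1, 11)])
```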
After looking at an upper bound for the maximum size of a Sidon set, it is natural to ask whether every Sidon set can be extended to a Sidon set of maximum size by adding elements. Already in dimension 6 this is not true, as we will see later in Proposition 2.7.
Therefore, we now concentrate on the question of how a Sidon set can be extended by adding elements without losing the Sidon property. Before we state the next Proposition from [13], recall the following notation for sets \(A\), \(B\) and \(C\): \(A\,\dot{\cup}\,B=C\) means \(A\cup B=C\) and \(A\cap B=\emptyset\).
**Proposition 2.2** ([13]).: _Let \(M\) be a Sidon set in \(\mathbb{F}_{2}^{t}\) and \(g\in\mathbb{F}_{2}^{t}\setminus M\). Then \(M\cup\{g\}\) is Sidon if and only if \(g\in\mathbb{F}_{2}^{t}\setminus\mathcal{S}_{3}(M)=\mathbb{F}_{2}^{t}\setminus \big{(}\mathcal{S}_{3}^{*}(M)\cup M\big{)}\)._
Proof.: Let \(M\subseteq\mathbb{F}_{2}^{t}\) be a Sidon set. From \(g\in\mathbb{F}_{2}^{t}\setminus M\) follows that \(M\cup\{g\}\) is not Sidon if and only if there exist \(m,m_{1},m_{2}\in M\) pairwise distinct such that \(g+m=m_{1}+m_{2}\) which is equivalent to \(g=m+m_{1}+m_{2}\). Hence \(g\in\mathcal{S}_{3}^{*}(M)\subseteq\mathcal{S}_{3}^{*}(M)\cup M=\mathcal{S}_ {3}(M)\).
Those Sidon sets which can not be extended by adding elements without loosing the Sidon property are of particular interest:
**Definition 2.3**.: _A Sidon set \(M\subseteq\mathbb{F}_{2}^{t}\) is called maximal if \(M=S\) for every Sidon set \(S\) with \(M\subseteq S\subseteq\mathbb{F}_{2}^{t}\)._
Proposition 2.2 helps us to characterise maximal Sidon sets via their 3-star-sums and 3-sums.
**Proposition 2.4** ([13]).: _Let \(M\) be a Sidon set in \(\mathbb{F}_{2}^{t}\). Then the following statements are equivalent:_
* \(M\) _is maximal;_
* \(\mathcal{S}_{3}(M)=\mathbb{F}_{2}^{t}\)_;_
* \(\mathcal{S}_{3}^{*}(M)\,\dot{\cup}\,M=\mathbb{F}_{2}^{t}\) _(that means:_ \(\mathcal{S}_{3}^{*}(M)\cup M=\mathbb{F}_{2}^{t}\) _and_ \(\mathcal{S}_{3}^{*}(M)\cap M=\emptyset\)_)._
The property of being Sidon and of being maximal Sidon is invariant under the action of the affine group.
**Proposition 2.5**.: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\) and \(T\colon\mathbb{F}_{2}^{t}\to\mathbb{F}_{2}^{t}\) be an affine permutation. Then_
1. \(M\) _is Sidon if and only if_ \(T(M)\) _is Sidon; and_
2. \(M\) _is maximal Sidon if and only if_ \(T(M)\) _is maximal Sidon._
Proof.: Let \(T=L+a\) where \(L\colon\mathbb{F}_{2}^{t}\to\mathbb{F}_{2}^{t}\) is a linear permutation and \(a\in\mathbb{F}_{2}^{t}\). Since \(T\) is affine and bijective, it follows that \(|T(M)|=|M|\),
\[\big{|}\mathcal{S}_{2}^{*}\big{(}T(M)\big{)}\big{|} =\big{|}L\big{(}\mathcal{S}_{2}^{*}(M)\big{)}\big{|}=|\mathcal{S}_ {2}^{*}(M)|=\binom{|M|}{2},\] \[\big{|}\mathcal{S}_{3}^{*}\big{(}T(M)\big{)}\big{|} =\big{|}T\big{(}\mathcal{S}_{3}^{*}(M)\big{)}\big{|}=|\mathcal{S}_ {3}^{*}(M)|=\big{|}\mathbb{F}_{2}^{t}\setminus M\big{|}\,,\]
and therefore (a) and (b).
After introducing maximal Sidon sets we briefly mention the well-known fact that the graph of an APN function is a Sidon set. In [13] it is shown that the graph of the classical APN example \(x^{3}\) is a maximal Sidon set. And in [3], APN functions whose graphs are maximal Sidon sets are discussed in general. Recently, some maximal Sidon sets (first introduced in [5], [6]), which are larger than graphs of APN functions, are discussed in [12].
In order to characterise maximal Sidon sets in small dimensions, we recall some basic properties of subsets of \(\mathbb{F}_{2}^{t}\).
**Lemma 2.6**.: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\) and let \(e_{1},\ldots,e_{t}\) be the standard basis of \(\mathbb{F}_{2}^{t}\)._
1. _If_ \(\dim\langle M\rangle=t\) _and_ \(|M|\geq t+1\)_, then there exists an affine permutation_ \(T\colon\mathbb{F}_{2}^{t}\to\mathbb{F}_{2}^{t}\) _such that_ \[\{0,e_{1},\ldots,e_{t}\}\subseteq T(M).\]
2. _If_ \(\{0,e_{1},\ldots,e_{t}\}\subseteq M\) _and if there is an element_ \(m\) _of_ \(M\) _with weight_ \(w\geq 2\)_, then there exists a linear permutation_ \(L\colon\mathbb{F}_{2}^{t}\to\mathbb{F}_{2}^{t}\) _such that_ \[\{0,e_{1},\ldots,e_{t},e_{1}+e_{2}+\cdots+e_{w}\}\subseteq L(M).\]
Using Proposition 2.5 and Lemma 2.6 we are able to characterise all maximal Sidon sets up to dimension \(6\).
**Proposition 2.7**.: _Let \(M\) be a maximal Sidon set of \(\mathbb{F}_{2}^{t}\) and let \(e_{1},\ldots,e_{t}\) be the standard basis of \(\mathbb{F}_{2}^{t}\). Then there exists an affine permutation \(T\colon\mathbb{F}_{2}^{t}\to\mathbb{F}_{2}^{t}\) such that \(T(M)\) is equal to_
_t=1:_: \(M_{1}=\{0,e_{1}\}=\mathbb{F}_{2}^{1}\) _with_ \(|M_{1}|=2\)_._
_t=2:_: \(M_{2}=\{0,e_{1},e_{2}\}=\mathbb{F}_{2}^{2}\setminus\{e_{1}+e_{2}\}\) _with_ \(|M_{2}|=3\)_._
_t=3:_: \(M_{3}=\{0,e_{1},e_{2},e_{3}\}\) _with_ \(|M_{3}|=4\)_._
_t=4:_: \(M_{4}=\{0,e_{1},e_{2},e_{3},e_{4},e_{1}+e_{2}+e_{3}+e_{4}\}\) _with_ \(|M_{4}|=6\)_._
_t=5:_: \(M_{5}=\{0,e_{1},e_{2},e_{3},e_{4},e_{5},e_{1}+e_{2}+e_{3}+e_{4}\}\) _with_ \(|M_{5}|=7\)_._
_t=6:_: \(M_{6a}=\{0,e_{1},e_{2},e_{3},e_{4},e_{5},e_{6},e_{1}+e_{2}+e_{3}+e_{4},e_{1}+e_{ 2}+e_{5}+e_{6}\}\) _or_ \(M_{6b}=\{0,e_{1},e_{2},e_{3},e_{4},e_{5},e_{6},e_{1}+e_{2}+e_{3}+e_{4}+e_{5}+e_{ 6}\}\) _with_ \(|M_{6a}|=9>|M_{6b}|=8\)_._
Proof.: Let \(M\subseteq\mathbb{F}_{2}^{t}\) be maximal Sidon, hence \(\mathcal{S}_{3}(M)=\mathbb{F}_{2}^{t}\). Because of Proposition 2.5 and Lemma 2.6 (a) we may assume without loss of generality that \(\{0,e_{1},\ldots,e_{t}\}\subseteq M\) and therefore \(M\) contains no elements of weight \(2\) or \(3\). Hence \(\mathbb{F}_{2}^{t}\setminus\mathcal{S}_{3}(M)\) consists of vectors of weight \(\geq 4\) and the cases up to \(t=4\) are shown. A case distinction of the maximal weight for vectors in \(M\) leads, together with Lemma 2.6 (b) and straightforward calculations, to the remaining cases:
t=5: \(M_{5}=\{0,e_{1},e_{2},e_{3},e_{4},e_{5},e_{1}+e_{2}+e_{3}+e_{4}\}\) or \(M_{5}^{\prime}=\{0,e_{1},e_{2},e_{3},e_{4},e_{5},e_{1}+e_{2}+e_{3}+e_{4}+e_{5}\}\) with \(|M_{5}|=|M_{5}^{\prime}|=7\).
t=6: \(M_{6a1}=\{0,e_{1},e_{2},e_{3},e_{4},e_{5},e_{6},e_{1}+e_{2}+e_{3}+e_{4},e_{i_{1}}+e_{i_{2}}+e_{5}+e_{6}\}\), \(M_{6a2}=\{0,e_{1},e_{2},e_{3},e_{4},e_{5},e_{6},e_{1}+e_{2}+e_{3}+e_{4}+e_{5},e_{j_{1}}+e_{j_{2}}+e_{j_{3}}+e_{6}\}\) or \(M_{6b}=\{0,e_{1},e_{2},e_{3},e_{4},e_{5},e_{6},e_{1}+e_{2}+e_{3}+e_{4}+e_{5}+e_{6}\}\) with distinct \(i_{1},i_{2}\in\{1,2,3,4\}\), pairwise distinct \(j_{1},j_{2},j_{3}\in\{1,2,3,4,5\}\) and \(|M_{6a1}|=|M_{6a2}|=9>|M_{6b}|=8\).
Now we show that some of the cases above can be transformed into each other via affine transformations. The affine transformation \(T_{5}=L_{5}+e_{5}\) on \(\mathbb{F}_{2}^{5}\) fulfils \(T_{5}(M_{5}^{\prime})=M_{5}\) where the used linear transformation \(L_{5}\colon\mathbb{F}_{2}^{5}\to\mathbb{F}_{2}^{5}\) is defined by:
\[e_{1}\mapsto e_{1}+e_{5}\quad e_{2}\mapsto e_{2}+e_{5}\quad e_{3}\mapsto e_{3} +e_{5}\quad e_{4}\mapsto e_{4}+e_{5}\quad e_{5}\mapsto e_{5}.\]
Simple permutations of the standard basis result in affine transformations \(T_{61},T_{62}\colon\mathbb{F}_{2}^{6}\to\mathbb{F}_{2}^{6}\) such that
\[T_{61}(M_{6a1})=M_{6a}\text{ and }\] \[T_{62}(M_{6a2})=M_{6a}^{\prime}=\{0,e_{1},e_{2},e_{3},e_{4},e_{ 5},e_{6},e_{1}+e_{2}+e_{3}+e_{4}+e_{5},\] \[e_{1}+e_{2}+e_{5}+e_{6}\}.\]
Extending the definition of \(L_{5}\) by \(e_{6}\mapsto e_{6}+e_{5}\) leads to a linear transformation \(L_{6}\colon\mathbb{F}_{2}^{6}\to\mathbb{F}_{2}^{6}\) and results in an affine transformation \(T_{6}=L_{6}+e_{5}\) on \(\mathbb{F}_{2}^{6}\) which fulfils \(T_{6}(M_{6a}^{\prime})=M_{6a}\).
For dimensions 7 and 8 we were not able to classify all maximal Sidon sets, but we determined all possible sizes of maximal Sidon sets by computer calculations.
**Proposition 2.8**.: _Let \(M\) be a maximal Sidon set of \(\mathbb{F}_{2}^{t}\). If \(t=7\), then \(|M|=12\) and if \(t=8\), then \(|M|\in\{15,16,18\}\)._
Examples of maximal Sidon sets with the sizes from Proposition 2.8 can be found in Table 1. We use the standard integer representation of vectors in \(\mathbb{F}_{2}^{t}\): the integer \(\sum_{i=0}^{t-1}a_{i}2^{i}\) in 2-adic representation "is" the vector \((a_{0},\ldots,a_{t-1})\).
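With this integer encoding (addition in \(\mathbb{F}_{2}^{t}\) becomes bitwise XOR), the examples of Table 1 can be verified mechanically via Proposition 2.4. The following sketch is ours and is not part of the paper; the set \(M_{4}\) of Proposition 2.7 is included as a small test case.

```python
from itertools import combinations

def is_sidon(M):
    # all sums of two distinct elements must be distinct (and automatically non-zero)
    sums = [a ^ b for a, b in combinations(M, 2)]
    return len(sums) == len(set(sums))

def is_maximal_sidon(M, t):
    # Proposition 2.4: a Sidon set M is maximal iff S_3(M) = F_2^t
    if not is_sidon(M):
        return False
    three_sums = {a ^ b ^ c for a in M for b in M for c in M}
    return three_sums == set(range(2 ** t))

M4 = {0, 1, 2, 4, 8, 15}                            # M_4 of Proposition 2.7
print(is_maximal_sidon(M4, 4))                      # True

M7 = {0, 1, 2, 4, 8, 16, 32, 64, 15, 60, 101, 87}   # size-12 example of Table 1
print(is_maximal_sidon(M7, 7))
```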
**Remark 2.9**.: _Here are some details about the computer calculations used in Proposition 2.8:_
1. _The algorithm is based on Proposition_ 2.2_: if_ \(M\subseteq\mathbb{F}_{2}^{t}\) _is Sidon and_ \(g\in\mathbb{F}_{2}^{t}\setminus\mathcal{S}_{3}(M)\)_, then_ \(M\cup\{g\}\) _is Sidon. A short code sketch of this extension step is given after this remark._
2. _Because of Proposition_ 2.5 _and Lemma_ 2.6 _(a) we may assume without loss of generality that_ \(0,e_{1},\ldots,e_{t}\) _is contained in any maximal Sidon set in_ \(\mathbb{F}_{2}^{t}\)_._
3. _For dimension 7, the assumption from (b) is sufficient to complete the calculations after 12 seconds. As a result, we get 524160 maximal Sidon sets of size 12 containing_ \(0,e_{1},\ldots,e_{7}\)_._
4. _For dimension 8, the assumption from (b) is still not sufficient to complete the calculations. We divided the calculation into 5 subtasks with the help of Lemma_ 2.6 _(b): let_ \(M\) _be a maximal Sidon set containing_ \(0,e_{1},\ldots,e_{t}\)_. Then the maximal weight of all elements of_ \(M\) _is either 4,5,6,7 or 8 and we can assume without loss of generality, because of the Sidon property, that_ 1. _for the maximal weight 4,_ \(e_{1}+e_{2}+e_{3}+e_{4}\) _is contained in_ \(M\) _and the weight of all other elements is at most 4;_
\begin{table}
\begin{tabular}{l l l} \hline \hline \(t\) & \(|M|\) & \(M\) \\ \hline
7 & 12 & \(\{0,1,2,4,8,16,32,64,15,60,101,87\}\) \\
8 & 15 & \(\{0,1,2,4,8,16,32,64,128,29,58,116,135,223,236\}\) \\
8 & 16 & \(\{0,1,2,4,8,16,32,64,128,29,58,116,232,205,135,222\}\) \\
8 & 18 & \(\{0,1,2,4,8,16,32,64,128,29,58,116,232,205,135,254,91,171\}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of maximal Sidon sets \(M\) in \(\mathbb{F}_{2}^{t}\) of all possible sizes for dimension \(t=7\) and \(t=8\).
2. _for the maximal weight 5,_ \(e_{1}+e_{2}+e_{3}+e_{4}+e_{5}\) _is contained in_ \(M\) _and the weight of all other elements is at most 5;_
3. _for the maximal weight 6,_ \(e_{1}+e_{2}+\cdots+e_{5}+e_{6}\) _is contained in_ \(M\) _and the weight of all other elements is at most 6;_
4. _for the maximal weight 7,_ \(e_{1}+e_{2}+\cdots+e_{6}+e_{7}\) _is contained in_ \(M\) _and the weight of all other elements is at most 6;_
5. _for the maximal weight 8,_ \(e_{1}+e_{2}+\cdots+e_{7}+e_{8}\) _is contained in_ \(M\) _and the weight of all other elements is at most 5._
_Task (5) was the most time-consuming and took about 14 days without parallelisation._
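The extension step mentioned in item (a) of the remark can be sketched as follows. This is our simplified illustration only: it ignores the symmetry reductions of items (b)-(d), always adds the smallest admissible element, and is practical only for small \(t\).

```python
def extend_to_maximal_sidon(M, t):
    # Greedily extend a Sidon set M (integers, addition = XOR) by elements outside
    # S_3(M), following Proposition 2.2, until S_3(M) = F_2^t (Proposition 2.4).
    M = set(M)
    while True:
        three_sums = {a ^ b ^ c for a in M for b in M for c in M}
        candidates = set(range(2 ** t)) - three_sums
        if not candidates:
            return M                          # M is now a maximal Sidon set
        M.add(min(candidates))

t = 5
start = {0} | {1 << i for i in range(t)}      # {0, e_1, ..., e_t}
print(sorted(extend_to_maximal_sidon(start, t)))
# [0, 1, 2, 4, 8, 15, 16], i.e. the set M_5 of Proposition 2.7
```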
We close this section with a result on the 4-sums of Sidon sets. As seen before in Proposition 2.4, a maximal Sidon set can be characterised via its 3-sums, i.e. a Sidon set \(M\) is maximal if and only if \(\mathcal{S}_{3}(M)=\mathbb{F}_{2}^{t}\). For sufficiently large Sidon sets we obtain the following result about the 4-sums of Sidon sets. It is used later in Proposition 4.3 in connection with linear codes to indicate the covering radius of the code associated with a sum-free Sidon set.
**Theorem 2.10**.: _Let \(M\) be a Sidon set of \(\mathbb{F}_{2}^{t}\). If \(|M|>s_{\text{max}}(\mathbb{F}_{2}^{t-1})\), then_
1. \(\mathcal{S}_{4}(M)=\mathbb{F}_{2}^{t}\) _and_
2. \(\dim\langle M\rangle=\dim\langle\mathcal{S}_{2}^{*}(M)\rangle=t\)_._
Proof.: We only prove (a) as (b) is a direct consequence of it.
Assume that there exists an \(a\in\mathbb{F}_{2}^{t}\setminus\mathcal{S}_{4}(M)\). Let \(b\in\mathbb{F}_{2}^{t}\setminus a^{\perp}\) and \(f_{b}\colon\mathbb{F}_{2}^{t}\to\mathbb{F}_{2}\) be defined by \(g\mapsto b\cdot g\), where \(\cdot\) denotes the standard inner product, and \({}^{\perp}\) denotes the orthogonal space with respect to this inner product. We now consider the mapping \(T\colon\mathbb{F}_{2}^{t}\to\mathbb{F}_{2}^{t}\) defined by \(g\mapsto g+f_{b}(g)\cdot a\). From \(b\cdot a=1\), it follows that \(\mathbb{F}_{2}^{t}=b^{\perp}\cup(a+b^{\perp})\) and \(T(g)=\begin{cases}g&\text{ for }g\in b^{\perp},\\ g+a&\text{ for }g\in a+b^{\perp},\end{cases}\) hence \(T(\mathbb{F}_{2}^{t})\subseteq b^{\perp}\). Assuming \(T(m_{1})=T(m_{2})\) for distinct \(m_{1},m_{2}\in M\) would lead to \(m_{1}+m_{2}=a\), but this contradicts \(a\notin\mathcal{S}_{4}(M)=\mathcal{S}_{4}^{*}(M)\cup\mathcal{S}_{2}(M)\). Hence \(|T(M)|=|M|\). Assuming \(T(m_{1})+T(m_{2})=T(m_{3})+T(m_{4})\) for pairwise distinct \(m_{1},m_{2},m_{3},m_{4}\in M\) would lead, because of the Sidon property of \(M\), to \(m_{1}+m_{2}+m_{3}+m_{4}=a\), but this contradicts \(a\notin\mathcal{S}_{4}^{*}(M)\). Hence \(T(M)\) is Sidon. But now we have found a Sidon set \(T(M)\subseteq b^{\perp}\) with \(|T(M)|>s_{\text{max}}(\mathbb{F}_{2}^{t-1})\), a contradiction.
Note that the opposite direction of Theorem 2.10 is, in general, not true. For instance, \(M=\{0,e_{1},\ldots,e_{t}\}\subseteq\mathbb{F}_{2}^{t}\) is a Sidon set with \(\dim\langle M\rangle=\dim\langle\mathcal{S}_{2}^{*}(M)\rangle=t\) but \(\mathcal{S}_{4}(M)\subsetneq\mathbb{F}_{2}^{t}\) and \(|M|\leq s_{\text{max}}(\mathbb{F}_{2}^{t-1})\) for \(t\geq 5\).
## 3 Sum-free sets
In this section we introduce sum-free sets, give some basic properties and show that it is equivalent to discuss the maximum size of a sum-free Sidon set instead of a Sidon set. Then, in the next section, we recall a one-to-one correspondence between sum-free Sidon sets and linear codes with a minimum distance greater than or equal to \(5\), which gives us the possibility to translate the question about the maximum size of a Sidon set into a question about linear codes with certain properties.
**Definition 3.1**.: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\). \(M\) is called sum-free if \(m_{1}+m_{2}\neq m_{3}\) for all \(m_{1},m_{2},m_{3}\in M\)._
By definition, \(0\) is never contained in a sum-free set. We give some basic properties: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\)._
1. \(M\) _is sum-free if and only if_ \(\mathcal{S}_{2}(M)\cap M=\emptyset\)_._
2. _If_ \(M\) _is sum-free, then_ \(|M|\leq 2^{t-1}\)_._
3. \(M\) _is sum-free and_ \(|M|=2^{t-1}\) _if and only if_ \(M=H+a\) _for a hyperplane_ \(H\) _of_ \(\mathbb{F}_{2}^{t}\) _(which is a linear subspace of dimension_ \(t-1\)_) and_ \(a\in\mathbb{F}_{2}^{t}\setminus H\)_._
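Property (c) is easy to observe experimentally in the integer encoding of \(\mathbb{F}_{2}^{t}\) (addition = XOR). The following small check is ours, for the coset \(H+a\) with \(H=\langle e_{1},\ldots,e_{t-1}\rangle\) and \(a=e_{t}\):

```python
t = 6
H = set(range(2 ** (t - 1)))        # the hyperplane spanned by e_1, ..., e_{t-1}
a = 2 ** (t - 1)                    # a = e_t lies outside H
M = {h ^ a for h in H}              # the coset H + a

# sum-free: the XOR of two elements of M has top bit 0, hence never lies in M
assert all((x ^ y) not in M for x in M for y in M)
print(len(M) == 2 ** (t - 1))       # True: M attains the bound of property (b)
```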
More on sum-free sets can be found in the survey papers of Green and Ruzsa [10] as well as Tao and Vu [19].
The Sidon property and the maximal Sidon property are invariant under the action of the affine group. This is not true, in general, for the property of being sum-free. But being sum-free is still invariant under the action of the general linear group:
**Proposition 3.2**.: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\), \(L\colon\mathbb{F}_{2}^{t}\to\mathbb{F}_{2}^{t}\) be a linear permutation and \(a\in\mathbb{F}_{2}^{t}\). Then_
1. \(M\) _is sum-free if and only if_ \(L(M)\) _is sum-free._
2. \(M+a\) _is sum-free if and only if_ \(a\in\mathbb{F}_{2}^{t}\setminus\mathcal{S}_{3}(M)=\mathbb{F}_{2}^{t}\setminus \big{(}\mathcal{S}_{3}^{*}(M)\cup M\big{)}\)_._
Proof.: (a) follows directly from the definition of sum-free and from the linearity and bijectivity of \(L\). In order to show (b) we assume that \(M+a\) is not sum-free. Hence there exist \(m_{1},m_{2},m_{3}\in M\) such that \(m_{1}+a+m_{2}+a=m_{3}+a\), thus \(m_{1}+m_{2}+m_{3}=a\). But this is equivalent to \(a\in\mathcal{S}_{3}(M)=(\mathcal{S}_{3}^{*}(M)\cup M)\) and (b) is shown.
The next Proposition is about extending sum-free sets and sum-free Sidon sets by adding elements.
**Proposition 3.3**.: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\) and \(g\in\mathbb{F}_{2}^{t}\setminus M\)._
1. _Let_ \(M\) _be sum-free. Then_ \(M\cup\{g\}\) _is sum-free if and only if_ \(g\in\mathbb{F}_{2}^{t}\setminus\mathcal{S}_{2}(M)\)_._
2. _Let_ \(M\) _be sum-free Sidon. Then_ \(M\cup\{g\}\) _is sum-free Sidon if and only if_ \(g\in\mathbb{F}_{2}^{t}\setminus\big{(}\mathcal{S}_{3}(M)\cup\mathcal{S}_{2}( M)\big{)}\)_._
3. _Let_ \(M\) _be sum-free Sidon. Then_ \(M\cup\{g\}\) _is Sidon and not sum-free if and only if_ \(g\in\mathcal{S}_{2}(M)\setminus\mathcal{S}_{3}(M)\)_._
Proof.: Let \(M\) be sum-free and \(g\in\mathbb{F}_{2}^{t}\setminus M\). We assume that \(M\cup\{g\}\) is not sum-free. Hence there exist \(m_{1},m_{2}\in M\cup\{g\}\) such that \(m_{1}+m_{2}=g\). But this is equivalent to either \(g\in\mathcal{S}_{2}^{*}(M)\) or \(g=0\), hence \(g\in\mathcal{S}_{2}(M)=\mathcal{S}_{2}^{*}(M)\cup\{0\}\) and (a) is shown. Cases (b) and (c) follow from (a) and Proposition 2.2.
The case when \(0\) is contained in a Sidon set is of special interest.
**Proposition 3.4**.: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\)._
1. _If_ \(M\) _is sum-free Sidon, then_ \(M\cup\{0\}\) _is Sidon and not sum-free._
2. _If_ \(M\) _is Sidon and_ \(0\in M\)_, then_ \(M\setminus\{0\}\) _is sum-free Sidon._
Proof.: (a) is a direct consequence of Proposition 3.3 (c) due to \(0\in\mathcal{S}_{2}(M)\setminus\mathcal{S}_{3}(M)\).
Now we consider (b). Since every subset of a Sidon set is Sidon, it remains to show that \(M\setminus\{0\}\) is sum-free. We assume that \(M\setminus\{0\}\) is not sum-free. Hence, there exist \(m_{1},m_{2},m_{3}\in M\setminus\{0\}\) such that \(m_{1}+m_{2}=m_{3}\). Since \(0\notin M\setminus\{0\}\) it follows that \(m_{1},m_{2},m_{3}\) are pairwise distinct. But then, we found \(m_{1},m_{2},m_{3},0\in M\) pairwise distinct such that \(m_{1}+m_{2}=m_{3}+0\) which contradicts \(M\) Sidon and (b) is shown.
Therefore, the problem of finding the maximum size of a sum-free Sidon set is equivalent to the problem of finding the maximum size of a Sidon set.
**Proposition 3.5**.: _Let \(\text{sfs}_{\text{max}}(\mathbb{F}_{2}^{t})\) denote the maximum size of a sum-free Sidon set in \(\mathbb{F}_{2}^{t}\). Then_
\[s_{\text{max}}(\mathbb{F}_{2}^{t})=\text{sfs}_{\text{max}}(\mathbb{F}_{2}^{t} )+1.\]
## 4 Linear Codes
A (binary) _linear code_\(\mathcal{C}\) of _length_\(n\) and _dimension_\(k\) is a \(k\) dimensional vector subspace \(\mathcal{C}\) in \(\mathbb{F}_{2}^{n}\). Such a code \(\mathcal{C}\) is called an \([n,k]\)-code and \(c\in\mathcal{C}\) is called a _code word_ of \(\mathcal{C}\). We consider all vectors to be row vectors.
If the _minimum distance_ of \(\mathcal{C}\) is \(d\), that is, the minimum number of non-zero entries among all non-zero code words of \(\mathcal{C}\), then \(\mathcal{C}\) is called an \([n,k,d]\)-code.
A _parity check matrix_ of an \([n,k]\)-code \(\mathcal{C}\) is an \((n-k)\times n\) matrix \(\mathcal{H}\) such that \(\mathcal{C}\) is equal to the kernel of \(\mathcal{H}\), i.e. \(\mathcal{C}=\{v\in\mathbb{F}_{2}^{n}:\mathcal{H}\cdot v^{\intercal}=0\}\). We note that the rank of \(\mathcal{H}\) is \(n-k\).
The _covering radius_ of an \([n,k]\)-code \(\mathcal{C}\) with parity check matrix \(\mathcal{H}\) is the smallest integer \(R\) such that every binary column vector with \(n-k\) entries can be written as the sum of at most \(R\) columns of \(\mathcal{H}\).
We recall a fruitful one-to-one correspondence between additive structures and linear codes; see [7]. It translates additive properties of a subset \(M\) of \(\mathbb{F}_{2}^{t}\) into properties of an associated code of length \(|M|\), such as minimum distance or covering radius, and vice versa. Independently from us, [12] also discussed this one-to-one correspondence, with a focus on Sidon sets. A similar discussion appears in [4] for the specific case of graphs of APN functions, which are Sidon sets.
When formulating the correspondence we will see that we need an ordering of the elements of \(\mathbb{F}_{2}^{t}\). Therefore, for the rest of this section, we assume that \(\mathbb{F}_{2}^{t}\) is endowed with an ordering; everything that follows is independent of this ordering.
Additionally, our purpose is to read information about a given set \(M\) from its associated code. However, this is not possible if the associated code is trivial, that is, of dimension \(0\) or \(|M|\). So we formulate the correspondence in such a way that we never obtain a trivial associated code from a given set \(M\).
The _one-to-one correspondence_
\[\left\{\begin{array}{c}M\subseteq\mathbb{F}_{2}^{t}\setminus\{0\}\\ \text{with }|M|\geq t+1\end{array}\right\}\;\longleftrightarrow\;\left\{\begin{array}{c}[n,k,d]\text{-code }\mathcal{C}\\ \text{with }n-1\geq k\geq 1\text{ and }d\geq 3\end{array}\right\}\] \[M\;\longmapsto\;\mathcal{C}_{M}\] \[M_{\mathcal{C}}\;\longleftarrow\;\mathcal{C}\]
is defined as follows:
Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\setminus\{0\}\) with \(|M|\geq t+1\). We define the _associated matrix_ \(\mathcal{M}_{M}\) of \(M\) as the \(t\times|M|\) matrix whose columns are the vectors of \(M\), i.e.
\[\mathcal{M}_{M}=(m^{\intercal})_{m\in M}\]
and the _associated code_ \(\mathcal{C}_{M}\) of \(M\) as the kernel of this matrix, i.e.
\[\mathcal{C}_{M}=\{v\in\mathbb{F}_{2}^{|M|}:\mathcal{M}_{M}\cdot v^{\intercal}= 0\}.\]
If the rank of \(\mathcal{M}_{M}\) is \(t\), then it is a parity check matrix of the associated code \(\mathcal{C}_{M}\).
The dimension of \(\mathcal{C}_{M}\) is never \(0\) because of \(|M|\geq t+1\), and never \(|M|\) because \(0\notin M\).
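As an illustration of the construction, the following Python sketch (ours; the coordinate convention and function names are assumptions of the example) builds the associated matrix \(\mathcal{M}_{M}\) for a set of integer-encoded elements and computes the dimension \(k=|M|-\operatorname{rank}(\mathcal{M}_{M})\) of the associated code over \(\mathbb{F}_{2}\).

```python
# Illustrative sketch: M is a set of integers encoding elements of F_2^t.

def associated_matrix(M, t):
    """The t x |M| binary matrix whose columns are the vectors of M
    (any fixed ordering of M will do)."""
    cols = sorted(M)
    return [[(m >> (t - 1 - i)) & 1 for m in cols] for i in range(t)]

def rank_gf2(rows):
    """Rank over F_2 via Gaussian elimination on rows packed as integers."""
    width = len(rows[0])
    packed = [int("".join(map(str, r)), 2) for r in rows]
    rank = 0
    for bit in reversed(range(width)):
        pivot = next((r for r in packed if (r >> bit) & 1), None)
        if pivot is None:
            continue
        packed.remove(pivot)
        packed = [r ^ pivot if (r >> bit) & 1 else r for r in packed]
        rank += 1
    return rank

def associated_code_dimension(M, t):
    """k = |M| - rank(M_M); M_M is a parity check matrix of C_M iff the rank is t."""
    return len(M) - rank_gf2(associated_matrix(M, t))
```

In particular, `associated_code_dimension(M, t) == len(M) - t` holds exactly when \(\dim\langle M\rangle=t\), matching the third property listed below.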
Some basic properties of the associated codes are the following:
_Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\setminus\{0\}\) with \(\left|M\right|\geq t+1\) and let \(\mathcal{C}_{M}\) be its associated \(\left[\left|M\right|,k,d\right]\)-code. Then_
1. \(\left|M\right|>t\geq 2\)_;_
2. \(\left|M\right|-1\geq k\geq\left|M\right|-t\geq 1\)_;_
3. \(k=\left|M\right|-t\) _if and only if_ \(\dim\langle M\rangle=t\)_;_
4. \(d\geq 3\)_, as no column is 0 and no column appears twice._
The columns of a parity check matrix of an \([n,k,d]\)-code \(\mathcal{C}\) with \(n-1\geq k\geq 1\) and \(d\geq 3\) form a subset \(M_{\mathcal{C}}\) of \(\mathbb{F}_{2}^{t}\setminus\{0\}\) with \(t=n-k\) and \(\left|M_{\mathcal{C}}\right|=n\geq t+1=n-k+1\), which we call the _associated set_ of \(\mathcal{C}\).
It should be noted that the associated set \(M_{\mathcal{C}}\) of a code \(\mathcal{C}\) is not unique, just like the parity check matrix of a code is not unique.
As an example of the one-to-one correspondence we give the following proposition (Proposition 2.1 of [7]), with the proof appended for the convenience of the reader.
**Proposition 4.1**.: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\setminus\{0\}\) with \(\left|M\right|\geq t+1\) and let \(\mathcal{C}_{M}\) be its associated \(\left[\left|M\right|,k,d\right]\)-code. Then_
1. \(M\) _is sum-free if and only if_ \(d\geq 4\)_;_
2. \(M\) _is sum-free Sidon if and only if_ \(d\geq 5\)_._
Proof.: The minimum distance of an associated code is at least 3.
1. \(M\) is sum-free if and only if the equation \[m_{1}+m_{2}+m_{3}=0\] has no solution for pairwise distinct \(m_{1},m_{2},m_{3}\in M\). This is equivalent to: \(\mathcal{C}_{M}\) has no code words of weight 3.
2. \(M\) is sum-free Sidon if and only if neither of the equations \[\begin{cases}m_{1}+m_{2}+m_{3}=0\\ m_{1}+m_{2}+m_{3}+m_{4}=0\end{cases}\] has a solution in pairwise distinct elements of \(M\). This is equivalent to: \(\mathcal{C}_{M}\) has no code words of weight 3 or 4.
Part (a) of Proposition 4.1 is also included in [8]. For consequences in the APN setting, see [4].
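Proposition 4.1 can also be checked directly on small examples: a weight-\(w\) codeword of \(\mathcal{C}_{M}\) corresponds to \(w\) pairwise distinct columns, i.e. elements of \(M\), summing to \(0\). The following Python sketch (ours; the names are illustrative) performs this brute-force test without constructing \(\mathcal{C}_{M}\) explicitly.

```python
# Illustrative sketch: elements of M are integers, XOR is addition in F_2^t.
from functools import reduce
from itertools import combinations

def has_vanishing_sum(M, w):
    """True iff some w pairwise distinct elements of M sum (XOR) to 0,
    i.e. the associated code C_M has a codeword of weight w."""
    return any(reduce(lambda a, b: a ^ b, combo) == 0
               for combo in combinations(set(M), w))

def min_distance_at_least_4(M):   # Proposition 4.1(a): M is sum-free
    return not has_vanishing_sum(M, 3)

def min_distance_at_least_5(M):   # Proposition 4.1(b): M is sum-free Sidon
    return not (has_vanishing_sum(M, 3) or has_vanishing_sum(M, 4))
```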
The following theorem gives further details on the one-to-one correspondence, with a focus on Sidon sets.
**Theorem 4.2**.: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\setminus\{0\}\) with \(|M|\geq t+1\) and let \(\mathcal{C}_{M}\) be its associated \([|M|\,,k,d]\)-code._
1. _If_ \(M\) _is sum-free and if_ \(|M|\geq s_{\text{max}}(\mathbb{F}_{2}^{t})\)_, then_ \(d=4\) _and_ \(M\) _is not Sidon._
2. _If_ \(M\) _is sum-free Sidon and if_ \(|M|\geq s_{\text{max}}(\mathbb{F}_{2}^{t-1})\)_, then_ \(k=|M|-t\) _and_ \(\mathcal{M}_{M}\) _is a parity check matrix of_ \(\mathcal{C}_{M}\)_._
3. _If_ \(M\) _is sum-free Sidon and if_ \(|M|\geq s_{\text{max}}(\mathbb{F}_{2}^{t-1})+1\)_, then_ \(d=5\)_._
Proof.:
1. Let \(M\subseteq\mathbb{F}_{2}^{t}\setminus\{0\}\) be sum-free. Then \(d\geq 4\) from Proposition 4.1 (a). If \(|M|\geq s_{\text{max}}(\mathbb{F}_{2}^{t})\) then \(|M\mathbin{\dot\cup}\{0\}|>s_{\text{max}}(\mathbb{F}_{2}^{t})\) and therefore neither \(M\mathbin{\dot\cup}\{0\}\) nor \(M\) is Sidon, hence \(d=4\) due to Proposition 4.1 (b).
2. If \(M\subseteq\mathbb{F}_{2}^{t}\setminus\{0\}\) is sum-free Sidon and if \(|M|\geq s_{\text{max}}(\mathbb{F}_{2}^{t-1})\) then \(M\mathbin{\dot\cup}\{0\}\) is still Sidon and \(|M\mathbin{\dot\cup}\{0\}|>s_{\text{max}}(\mathbb{F}_{2}^{t-1})\). From Theorem 2.10 (b) it follows that \(\dim\langle M\rangle=t\) and therefore \(k=|M|-t\).
3. Since \(M\) is sum-free Sidon, we have \(d\geq 5\), and due to \(|M|\geq s_{\text{max}}(\mathbb{F}_{2}^{t-1})+1\) and (b), the matrix \(\mathcal{H}_{M}:=\mathcal{M}_{M}\) is a parity check matrix of \(\mathcal{C}_{M}\). Assume that \(d\geq 6\). Then \(\mathcal{PU}(\mathcal{C}_{M})\), the puncturing of \(\mathcal{C}_{M}\) (remove one column and one row of \(\mathcal{H}_{M}\)), is an \([|M|-1,|M|-t,d^{\prime}]\)-code with \(d^{\prime}\geq 5\). Therefore the columns of the check matrix \(\mathcal{H}_{\mathcal{PU}(\mathcal{C}_{M})}\) of \(\mathcal{PU}(\mathcal{C}_{M})\) form a sum-free Sidon set \(M^{\prime}\subseteq\mathbb{F}_{2}^{t^{\prime}}\) with \(|M^{\prime}|=|M|-1\) and \(t^{\prime}=t-1\). From \(|M|\geq s_{\text{max}}(\mathbb{F}_{2}^{t-1})+1\) it follows that \(|M^{\prime}|=|M|-1\geq s_{\text{max}}(\mathbb{F}_{2}^{t-1})\), so by (a) the set \(M^{\prime}\subseteq\mathbb{F}_{2}^{t-1}\) is not Sidon, a contradiction. Hence \(d=5\).
Another interesting connection between a code property and a Sidon property is the following:
**Theorem 4.3**.: _Let \(M\) be a subset of \(\mathbb{F}_{2}^{t}\setminus\{0\}\) with \(|M|\geq t+1\) and let \(\mathcal{C}_{M}\) be its associated \([|M|\,,k,d]\)-code with covering radius \(R\)._
1. _If_ \(M\) _is sum-free Sidon and_ \(|M|\geq s_{\text{max}}(\mathbb{F}_{2}^{t-1})\)_, then_ \(R=3\) _or_ \(R=4\)_._
2. \(M\) _is maximal sum-free Sidon (that means we cannot extend it to a larger sum-free Sidon set by adding elements) if and only if_ \(R=3\)_._
Proof.:
1. \(M\) is Sidon and therefore \(|\mathcal{S}_{2}^{*}(M)|=\binom{|M|}{2}\). From \(|M|\geq t+1\) follows that \(|M|>t\geq 2\). But then \(|\mathcal{S}_{2}^{*}(M)|=\binom{|M|}{2}<2^{t}\) and \(R\geq 3\). From Theorem 2.10 (a) follows \(R\leq 4\).
2. This is a direct consequence of Proposition 2.4.
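For small parameters, both quantities in Theorem 4.3 can be computed by brute force. The sketch below (ours; it assumes the elements of \(M\) span \(\mathbb{F}_{2}^{t}\), and the names are illustrative) determines the smallest \(R\) such that every vector of \(\mathbb{F}_{2}^{t}\) is a sum of at most \(R\) elements of \(M\), and uses part (b) as a maximality test.

```python
# Illustrative sketch: covering radius of C_M via breadth-first search over F_2^t,
# with elements of M encoded as integers and XOR as addition.

def covering_radius(M, t):
    """Smallest R such that every vector of F_2^t is a sum of at most R columns of
    the associated matrix; returns None if the columns do not span F_2^t."""
    reached, frontier, R = {0}, {0}, 0
    while len(reached) < 2 ** t:
        frontier = {v ^ m for v in frontier for m in M} - reached
        if not frontier:
            return None
        reached |= frontier
        R += 1
    return R

def is_maximal_sum_free_sidon(M, t):
    """Theorem 4.3(b): a sum-free Sidon set M is maximal iff R = 3, i.e. iff
    S_2(M) and S_3(M) together already cover F_2^t (cf. Proposition 3.3(b))."""
    return covering_radius(M, t) == 3
```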
We close this section by discussing the best possible minimum distance of a code with given length \(n\) and dimension \(k\). Therefore we define
\[d_{max}(n,k)=\max\{d:\text{there exists an }[n,k,d]\text{-code}\}\]
as the _maximal minimum distance_ of a code with given length \(n\) and dimension \(k\). It is one of the main parameters of _optimal_ codes and is frequently listed as a matrix \((d_{max}(n,k))_{n,k}\), for example in Grassl's codes table [9] ([http://codetables.de](http://codetables.de)) or the codes table of the MinT project from Schurer and Schmid [14] ([http://mint.sbg.ac.at/](http://mint.sbg.ac.at/)). Now we translate Theorem 4.2 into properties of the subdiagonals of \((d_{max}(n,k))_{n,k}\), namely the entries \((d_{max}(n,n-t))_{n}\) for a fixed \(t\).
**Proposition 4.4**.: _Let \(n,t\in\mathbb{N}\) with \(n>t\geq 2\). Then_
1. \(d_{max}(n,n-t)=3\) _if and only if_ \(2^{t-1}<n<2^{t}\)_;_
2. \(d_{max}(n,n-t)=4\) _if and only if_ \(s_{max}(\mathbb{F}_{2}^{t})\leq n\leq 2^{t-1}\)_;_
3. \(d_{max}(n,n-t)=5\) _if and only if_ \(s_{max}(\mathbb{F}_{2}^{t-1})<n<s_{max}(\mathbb{F}_{2}^{t})\)_;_
4. \(d_{max}(n,n-t)\geq 6\) _if and only if_ \(n\leq s_{max}(\mathbb{F}_{2}^{t-1})\)_._
Proof.: From our correspondence it follows that every \(M\subseteq\mathbb{F}_{2}^{t}\setminus\{0\}\) with \(|M|\geq t+1\) gives rise to an associated \(\left[\left|M\right|,k,d\right]\)-code with \(d\geq 3\).
1. If \(|M|>2^{t-1}\), then \(\dim\langle M\rangle=t\) and \(k=|M|-t\), but \(M\) cannot be sum-free anymore. Thus \(d=3\) and (a) is shown.
2. Let \(M=H+a\) with a hyperplane \(H\) of \(\mathbb{F}_{2}^{t}\) (which is a linear subspace of dimension \(t-1\)) and \(a\in\mathbb{F}_{2}^{t}\setminus H\). Then \(M\) is sum-free and \(d=4\) from Theorem 4.2 (a). Additionally, \(\dim\langle M\rangle=t\) and \(k=|M|-t\). Now removing elements from \(M\) while keeping \(\dim\langle M\rangle=t\) leads, together with Theorem 4.2 (a), to (b).
3. This is a direct consequence of Theorem 4.2 (b) and (c).
4. This follows from (a), (b) and (c).
## 5 Non-existence results
Due to the importance of non-existence statements for Sidon sets as well as for linear codes, we reformulate Proposition 4.4.
**Corollary 5.1**.: _Let \(n,t\in\mathbb{N}\) with \(n>t\geq 2\). Then the following statements are equivalent:_
1. _There is no_ \([n,n-t,5]\) _code._
2. _There is no Sidon set_ \(M\subseteq\mathbb{F}_{2}^{t}\) _of size_ \(n+1\)_._
3. \(s_{max}(\mathbb{F}_{2}^{t})\leq n\)_._
The following result from Brouwer and Tolhuizen [2] is achieved by a sharpening of the Johnson bound. It improves the trivial upper bound (1) for odd dimension.
**Theorem 5.2** ([2]).: _There is no \([n,n-t,5]\) code for \(n=2^{(t+1)/2}-2\), hence_
\[s_{max}(\mathbb{F}_{2}^{t})\leq 2^{(t+1)/2}-2\]
_for \(t\) odd with \(t\geq 7\)._
With arguments similar to those used by Brouwer and Tolhuizen, we are able to generalise this result to arbitrary \(t\geq 6\) and thereby further improve the trivial upper bound (1). This also improves a recent bound given by Tait and Won (Theorem 5.1 of [18]).
**Theorem 5.3**.: _Let \(t\geq 6\), and write \(\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-4=3a+b\) with \(a\in\mathbb{Z}_{\geq 0}\), \(b\in\{0,1,2\}\), \(\varepsilon=\sqrt{2^{t+1}}+0.5-\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor\in [0,1)\) and_
\[\lambda_{a,b,\varepsilon}=\begin{cases}1&\text{for $a$ odd and $b=0$},\\ 2&\text{for $a$ odd, $b=1$ and $0\leq\varepsilon\leq 1-\frac{1}{2^{(t-4)/2}}$},\\ 1&\text{for $a$ odd, $b=1$ and $1-\frac{1}{2^{(t-4)/2}}<\varepsilon<1$},\\ 2&\text{for $a$ odd and $b=2$},\\ 2&\text{for $a$ even, $b=0$ and $0\leq\varepsilon\leq 0.5$},\\ 1&\text{for $a$ even, $b=0$ and $0.5<\varepsilon<1$},\\ 2&\text{for $a$ even, $b=1$ and $0\leq\varepsilon\leq 1-\frac{1}{2^{(t-5)/2}}$},\\ 1&\text{for $a$ even, $b=1$ and $1-\frac{1}{2^{(t-5)/2}}<\varepsilon\leq 1-\frac{1}{2^{(t+7)/2}} $},\\ 0&\text{for $a$ even, $b=1$ and $1-\frac{1}{2^{(t+7)/2}}<\varepsilon<1$},\\ 0&\text{for $a$ even and $b=2$}.\end{cases}\]
_Then there is no \([n_{t},n_{t}-t,5]\) code for \(n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-\lambda_{a,b,\varepsilon}\) and therefore_
\[s_{max}(\mathbb{F}_{2}^{t})\leq\begin{cases}2^{\frac{t+1}{2}}-2&\text{ for $t$ odd},\\ \left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-\lambda_{a,b,\varepsilon}&\text{ for $t$ even}.\end{cases} \tag{2}\]
Proof.: Because of Corollary 5.1 it is sufficient to show the non-existence of an \([n_{t},n_{t}-t,5]\) code.
Let us recall some arguments from the proof of Theorem 5.2 in [2]. Let \(C\) be an arbitrary (linear or non-linear) code of length \(n\), with minimum distance \(5\), and where, on the average, each codeword is at distance \(5\) from \(a_{5}\) other codewords. The Johnson upper bound (Theorem 1 of [11]) states
\[|C|\leq\frac{2^{n}}{s} \tag{3}\]
for
\[s=1+n+\frac{n(n-1)}{2}+\frac{1}{\lfloor n/3\rfloor}\Bigl{(}\binom{n}{3}-10 \cdot a_{5}\Bigr{)}\]
and the term \(10\cdot a_{5}\) can be estimated by
\[10\cdot a_{5}\leq\left\lfloor\frac{n-2}{3}\right\rfloor\cdot\binom{n}{2}.\]
Brouwer and Tolhuizen sharpened the estimate of \(10\cdot a_{5}\) for linear codes in the following way. Let \(C\) be additionally linear and let \(n-2=3\cdot a+b\) with \(a\in\mathbb{Z}_{\geq 0}\), \(b\in\{0,1,2\}\), then
\[10\cdot a_{5}\leq\begin{cases}a\cdot\binom{n}{2}&\text{ for $a$ odd and $b=0$},\\ (a-1)\cdot\binom{n}{2}&\text{ for $a$ odd and $b\in\{1,2\}$},\\ (a-1)\cdot\binom{n}{2}+\frac{\binom{n}{b}\cdot\binom{b+2}{2}}{\binom{b+2}{b}}& \text{ for $a$ even}.\end{cases} \tag{4}\]
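The resulting lower bound on \(s\) is easy to evaluate numerically. The following Python sketch is our own illustration (integer division is used, which only weakens the bound, and the names are ours); it combines (3) and (4) into a non-existence test for \([n,n-t,5]\) codes.

```python
# Illustrative sketch of the sharpened Johnson-type bound: s_lower_bound(n) follows
# (4) and the definition of s; rules_out_code(n, t) checks s > 2^t, which together
# with (3) excludes an [n, n-t, 5] code.
from math import comb

def s_lower_bound(n):
    a, b = divmod(n - 2, 3)
    if a % 2 == 1:                                    # a odd
        bound_10a5 = a * comb(n, 2) if b == 0 else (a - 1) * comb(n, 2)
    else:                                             # a even
        bound_10a5 = (a - 1) * comb(n, 2) + comb(n, b) * comb(b + 2, 2) // comb(b + 2, b)
    return 1 + n + comb(n, 2) + (comb(n, 3) - bound_10a5) // (n // 3)

def rules_out_code(n, t):
    """True if the bound forces s > 2^t, so no [n, n-t, 5] code can exist."""
    return s_lower_bound(n) > 2 ** t

# For example, rules_out_code(360, 16) returns True, in line with Corollary 5.4 below.
```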
Our purpose is to show that if \(C\) is an \([n_{t},n_{t}-t,5]\) code, then \(2s>2^{t+1}\), or equivalently \(s>2^{t}\). But this contradicts \(s\leq 2^{t}\), which follows from (3) since \(|C|=2^{n_{t}-t}\).
Let us therefore distinguish \(6\) cases depending on whether \(a\) is odd or even and whether \(b\) equals \(2\), \(1\) or \(0\).
**Case \(a\) odd:** If \(\mathbf{b=2}\), then \(a-1=\frac{n-7}{3}\). From (4) follows
\[10\cdot a_{5}\leq(a-1)\cdot\binom{n}{2}=\frac{n(n-1)(n-7)}{6}\]
and therefore
\[s\geq 1+n+\frac{n(n-1)}{2}+\frac{5}{2}n\]
which is equivalent to
\[2s\geq(n+3)^{2}-7. \tag{5}\]
Now we set \(\lambda_{a,b,\varepsilon}=2\), \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-2=\sqrt{2^{t+1}}-\frac{3}{2 }-\varepsilon\) with \(\varepsilon\in[0,1)\) and \(k=n-t\). Hence
\[2s\geq(n_{t}+3)^{2}-7 =(\sqrt{2^{t+1}}+\frac{1}{2}+1-\varepsilon)^{2}-7\] \[>2^{t+1}+\sqrt{2^{t+1}}+\frac{1}{4}-7\] \[>2^{t+1}\]
when \(t\geq 5\), and this contradicts (3).
If \(\mathbf{b}=\mathbf{1}\), then \(a-1=\frac{n-6}{3}\). From (4) follows
\[10\cdot a_{5}\leq(a-1)\cdot\binom{n}{2}=\frac{n(n-1)(n-6)}{6}\]
and therefore
\[s\geq 1+n+\frac{n(n-1)}{2}+2(n-1)\]
which is equivalent to
\[2s\geq(n+\frac{5}{2})^{2}-\frac{33}{4}. \tag{6}\]
Now we set \(\lambda_{a,b,\varepsilon}=2\), \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-2=\sqrt{2^{t+1}}-\frac{3} {2}-\varepsilon\) with \(\varepsilon\in[0,1)\), \(k=n-t+1\) and want to show that
\[2s\geq(\sqrt{2^{t+1}}+1-\varepsilon)^{2}-\frac{33}{4}>2^{t+1}.\]
This is true if \(0\leq\varepsilon\leq 1-\frac{1}{2^{(t-4)/2}}\) and \(t\geq 5\) because setting \(\varepsilon=1-\frac{1}{2^{(t-4)/2}}\) leads to
\[2s\geq(\sqrt{2^{t+1}}+\frac{1}{2^{(t-4)/2}})^{2}-\frac{33}{4} =2^{t+1}+\frac{2\sqrt{2^{t+1}}}{2^{(t-4)/2}}+\frac{1}{2^{t-4}}-\frac{33}{4}\] \[=2^{t+1}+8\sqrt{2}+\frac{1}{2^{t-4}}-\frac{33}{4}\] \[>2^{t+1}\]
which contradicts (3). If \(1-\frac{1}{2^{(t-4)/2}}<\varepsilon<1\) we set \(\lambda_{a,b,\varepsilon}=1\), \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-1\), \(k=n-t\) and are now in the case \(a\) odd and \(b=2\). Putting these values into (5) leads to
\[2s\geq(n_{t}+3)^{2}-7>2^{t+1}\]
which contradicts (3).
If \(\mathbf{b}=\mathbf{0}\), then \(a=\frac{n-2}{3}\). From (4) follows
\[10\cdot a_{5}\leq a\cdot\binom{n}{2}=\frac{n(n-1)(n-2)}{6}\]
and therefore
\[s\geq 1+n+\frac{n(n-1)}{2}\]
which is equivalent to
\[2s\geq(n+\frac{1}{2})^{2}+\frac{7}{4}. \tag{7}\]
But setting \(\lambda_{a,b,\varepsilon}=2\), \(n=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-2\) and \(k=n-t\) does not contradict (3). Therefore we set \(\lambda_{a,b,\varepsilon}=1\), \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-1\), \(k=n_{t}-t\) and are now in the case \(a\) odd and \(b=1\). Putting these values into (6) leads to
\[2s\geq(n_{t}+\frac{5}{2})^{2}-\frac{33}{4}>2^{t+1}\]
for \(t\geq 3\) which contradicts (3).
**Case \(a\) even:** If \(\mathbf{b}=\mathbf{2}\), then \(a=\frac{n-4}{3}\). From (4) follows
\[10\cdot a_{5}\leq\frac{n(n-1)(n-4)}{6}\]
and therefore
\[s \geq 1+n+\frac{n(n-1)}{2}+\frac{3}{n-1}\left(\binom{n}{3}-\frac{n( n-1)(n-4)}{6}\right)\] \[=1+n+\frac{n(n-1)}{2}+n\]
which is equivalent to
\[2s\geq(n+\frac{3}{2})^{2}-\frac{1}{4}. \tag{8}\]
But setting \(\lambda_{a,b,\varepsilon}=2\), \(n=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-2\) and \(k=n-t\) does not contradict (3). Therefore we set \(\lambda_{a,b,\varepsilon}=1\), \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-1\), \(k=n_{t}-t\) and are
now in the case \(a\) odd and \(b=0\). But again, putting these values into (7) does not contradict (3). Hence we set \(\lambda_{a,b,\varepsilon}=0\), \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor\), \(k=n-t\) and are now in the case \(a\) odd and \(b=1\). Now, putting these values into (6) contradicts (3).
If \(\mathbf{b}=\mathbf{1}\), then \(a=\frac{n-3}{3}=\frac{n}{3}-1\). From (4) follows
\[10\cdot a_{5}\leq\frac{n(n-1)(n-6)}{6}+n=\frac{n(n-3)(n-4)}{6}\]
and therefore
\[s \geq 1+n+\frac{n(n-1)}{2}+\frac{3}{n}\left(\binom{n}{3}-\frac{n(n- 3)(n-4)}{6}\right)\] \[=1+n+\frac{n(n-1)}{2}+2n-5\]
which is equivalent to
\[2s\geq(n+\frac{5}{2})^{2}-\frac{57}{4}. \tag{9}\]
Now we set \(\lambda_{a,b,\varepsilon}=2\), \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-2=\sqrt{2^{t+1}}-\frac{3}{2}-\varepsilon\) with \(\varepsilon\in[0,1)\), \(k=n-t\) and want to show that
\[2s\geq(\sqrt{2^{t+1}}+1-\varepsilon)^{2}-\frac{57}{4}>2^{t+1}.\]
This is true if \(0\leq\varepsilon\leq 1-\frac{1}{2^{(t-5)/2}}\) and \(t\geq 6\) because setting \(\varepsilon=1-\frac{1}{2^{(t-5)/2}}\) leads to
\[2s\geq(n_{t}+\frac{5}{2})^{2}-\frac{57}{4} =(\sqrt{2^{t+1}}+\frac{1}{2^{(t-5)/2}})^{2}-\frac{57}{4}\] \[=2^{t+1}+\frac{2\sqrt{2^{t+1}}}{2^{(t-5)/2}}+\frac{1}{2^{t-5}}- \frac{57}{4}\] \[=2^{t+1}+16+\frac{1}{2^{t-5}}-\frac{57}{4}\] \[>2^{t+1}\]
which contradicts (3). If \(1-\frac{1}{2^{(t-5)/2}}<\varepsilon<1\) we try setting \(\lambda_{a,b,\varepsilon}=1\), \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-1\), \(k=n-t\) and are now in the case \(a\) even and \(b=2\). Putting these values into (8) we want to show that
\[2s\geq(n_{t}+\frac{3}{2})^{2}-\frac{1}{4}>2^{t+1}.\]
This is true if \(\varepsilon\leq 1-\frac{1}{2^{(t+7)/2}}\) because setting \(\varepsilon=1-\frac{1}{2^{(t+7)/2}}\) leads to
\[2s\geq(n_{t}+\frac{3}{2})^{2}-\frac{1}{4} =(\sqrt{2^{t+1}}+\frac{1}{2^{(t+7)/2}})^{2}-\frac{1}{4}\] \[=2^{t+1}+\frac{2\sqrt{2^{t+1}}}{2^{(t+7)/2}}+\frac{1}{2^{t+7}}- \frac{1}{4}\] \[=2^{t+1}+\frac{1}{4}+\frac{1}{2^{t+7}}-\frac{1}{4}\] \[>2^{t+1}\]
which contradicts (3). Therefore we set \(\lambda_{a,b,\varepsilon}=0\), \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor\), \(k=n-t\) if \(1-\frac{1}{2^{(t+7)/2}}<\varepsilon<1\) and are now in the case \(a\) odd and \(b=0\). Putting these values into (7) contradicts (3).
If \(\mathbf{b}=\mathbf{0}\), then \(a=\frac{n-2}{3}\). From (4) follows
\[10\cdot a_{5}\leq\frac{n(n-1)(n-5)}{6}+1\]
and therefore
\[s \geq 1+n+\frac{n(n-1)}{2}+\frac{3}{n-2}\left(\binom{n}{3}-\frac{n( n-1)(n-5)}{6}-1\right)\] \[=1+n+\frac{n(n-1)}{2}+\frac{3n+3}{2}\]
which is equivalent to
\[2s\geq(n+2)^{2}+1.\]
But setting \(\lambda_{a,b,\varepsilon}=2\), \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-2\) and \(k=n-t\) only contradicts (3) if \(0\leq\varepsilon\leq\frac{1}{2}\). If \(\frac{1}{2}<\varepsilon<1\) we set \(n=n_{t}=\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor-1\), \(k=n_{t}-t\) and are now in the case \(a\) even and \(b=1\). Putting these values into (9) leads to
\[2s\geq(n_{t}+\frac{5}{2})^{2}-\frac{57}{4}>2^{t+1}\]
for \(t\geq 5\) which contradicts (3).
On the codes side, Theorem 5.3 improves, for even \(t\geq 16\), several entries in the codes table of the MinT project [14] ([http://mint.sbg.ac.at/](http://mint.sbg.ac.at/)). Some examples are listed in Corollary 5.4. In [14], the maximal minimum distance for these parameters was listed as 4 or 5, but now we know that it is 4:
**Corollary 5.4**.: _The maximal minimum distance of a linear code with the following parameters \([n,k]\) is 4:_
\[[360,344] [723,705] [1446,1426]\] \[[2895,2873] [5791,5767], [5792,5768] [11583,11557]\]
Proof.: For \(t\in\{16,18,20,22,24,26\}\) we follow Theorem 5.3, calculate \(a\), \(b\) and \(\varepsilon\), and obtain \(\lambda_{a,b,\varepsilon}\), \(n\) and \(k\) as in the following table:
\[\begin{array}{lcccccc}\hline t&\left\lfloor\sqrt{2^{t+1}}+0.5\right\rfloor&a&b&\varepsilon&\lambda_{a,b,\varepsilon}&[n,k]\\ \hline 16&362&119&1&\approx 0.538<0.984&2&[360,344]\\ 18&724&240&0&\approx 0.577>0.500&1&[723,705]\\ 20&1448&481&1&\approx 0.654<0.996&2&[1446,1426]\\ 22&2896&964&0&\approx 0.809>0.500&1&[2895,2873]\\ 24&5793&1929&2&\approx 0.118<0.999&2&[5791,5767]\text{ and }[5792,5768]\\ 26&11585&3860&1&\approx 0.737<0.999&2&[11583,11557]\\ \hline\end{array}\]
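The table entries can be recomputed with a few lines of Python; the sketch below (ours) evaluates the case distinction of Theorem 5.3 for a given \(t\), assuming double-precision arithmetic is accurate enough for the listed values of \(t\).

```python
# Illustrative sketch: compute a, b, epsilon, lambda_{a,b,eps} and the excluded
# code parameters [n_t, n_t - t] from Theorem 5.3.
import math

def excluded_parameters(t):
    x = math.sqrt(2 ** (t + 1)) + 0.5
    f = math.floor(x)
    eps = x - f
    a, b = divmod(f - 4, 3)
    if a % 2 == 1:                       # a odd
        if b == 0:
            lam = 1
        elif b == 1:
            lam = 2 if eps <= 1 - 2 ** (-(t - 4) / 2) else 1
        else:                            # b == 2
            lam = 2
    else:                                # a even
        if b == 0:
            lam = 2 if eps <= 0.5 else 1
        elif b == 1:
            if eps <= 1 - 2 ** (-(t - 5) / 2):
                lam = 2
            elif eps <= 1 - 2 ** (-(t + 7) / 2):
                lam = 1
            else:
                lam = 0
        else:                            # b == 2
            lam = 0
    n = f - lam
    return a, b, eps, lam, (n, n - t)

# For example, excluded_parameters(16) gives (119, 1, 0.538..., 2, (360, 344)),
# matching the first row of the table.
```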
## 6 Conclusion
We finish by giving Table 2 about the maximum size of Sidon sets and related bounds/constructions in small dimensions. The codes bound mentioned in this table arises from Proposition 4.4 (b) and Grassl's codes table [9] ([http://codetables.de](http://codetables.de)). Similarly, the codes constructions come from Proposition 4.4 (c) and Grassl's codes table.
For example, take column \(t=12\) from Table 2: The calculation of the trivial bound (1) leads to \(s_{max}(\mathbb{F}_{2}^{t})\leq 91\) and that of the new bound (2) from Theorem 5.3 leads to \(s_{max}(\mathbb{F}_{2}^{t})\leq 90\) (\(a=29\), \(b=0\), \(\varepsilon\approx 0.009\) and \(\lambda_{a,b,\varepsilon}=1\)).
Proposition 4.4 (b) gives us the codes bound, that is the smallest \(n\), such that \(d_{max}(n,n-12)=4\). A look at Grassl's codes table leads to \(s_{max}(\mathbb{F}_{2}^{t})\leq n=89\), since \(d_{max}(89,77)=4\) but \(d_{max}(88,76)=4\) or \(5\).
The codes construction uses Proposition 4.4 (c) in the following way: Finding the largest \(n\) such that \(d_{max}(n,n-12)\geq 5\) gives a sum-free Sidon set, and adding \(0\) leads to \(n+1\), which is the size of the largest known Sidon set. Again, Grassl's codes table leads to \(n=65\) and so \(s_{max}(\mathbb{F}_{2}^{t})\geq 66\), since \(d_{max}(65,53)=5\) but \(d_{max}(66,54)=4\) or \(5\). |
2307.01827 | Deconstructing Data Reconstruction: Multiclass, Weight Decay and General
Losses | Memorization of training data is an active research area, yet our
understanding of the inner workings of neural networks is still in its infancy.
Recently, Haim et al. (2022) proposed a scheme to reconstruct training samples
from multilayer perceptron binary classifiers, effectively demonstrating that a
large portion of training samples are encoded in the parameters of such
networks. In this work, we extend their findings in several directions,
including reconstruction from multiclass and convolutional neural networks. We
derive a more general reconstruction scheme which is applicable to a wider
range of loss functions such as regression losses. Moreover, we study the
various factors that contribute to networks' susceptibility to such
reconstruction schemes. Intriguingly, we observe that using weight decay during
training increases reconstructability both in terms of quantity and quality.
Additionally, we examine the influence of the number of neurons relative to the
number of training samples on the reconstructability. Code:
https://github.com/gonbuzaglo/decoreco | Gon Buzaglo, Niv Haim, Gilad Yehudai, Gal Vardi, Yakir Oz, Yaniv Nikankin, Michal Irani | 2023-07-04T17:09:49Z | http://arxiv.org/abs/2307.01827v2 | # Deconstructing Data Reconstruction:
###### Abstract
Memorization of training data is an active research area, yet our understanding of the inner workings of neural networks is still in its infancy. Recently, Haim et al. (2022) proposed a scheme to reconstruct training samples from multilayer perceptron binary classifiers, effectively demonstrating that a large portion of training samples are encoded in the parameters of such networks. In this work, we extend their findings in several directions, including reconstruction from multiclass and convolutional neural networks. We derive a more general reconstruction scheme which is applicable to a wider range of loss functions such as regression losses. Moreover, we study the various factors that contribute to networks' susceptibility to such reconstruction schemes. Intriguingly, we observe that using weight decay during training increases reconstructability both in terms of quantity and quality. Additionally, we examine the influence of the number of neurons relative to the number of training samples on the reconstructability.
## 1 Introduction
Neural networks are known to memorize training data despite their ability to generalize well to unseen test data (Zhang et al., 2021; Feldman, 2020). This phenomenon was observed in both supervised settings (Haim et al., 2022; Balle et al., 2022; Loo et al., 2023; Loo et al., 2023) and in generative models (Carlini et al., 2019, 2021, 2023). These works shed an interesting light on generalization, memorization and explainability of neural networks, while also posing a potential privacy risk.
Current reconstruction schemes from trained neural networks are still very limited and often rely on unrealistic assumptions, or operate within restricted settings. For instance, Balle et al. (2022) propose a reconstruction scheme based on the assumption of having complete knowledge of the training set, except for a single sample. Loo et al. (2023) suggest a scheme which operates under the NTK regime (Jacot et al., 2018), and assumes knowledge of the full set of parameters at initialization. Reconstruction schemes for unsupervised settings are specifically tailored for generative models and are not applicable for classifiers or other supervised tasks.
Recently, Haim et al. (2022) proposed a reconstruction scheme from feed-forward neural networks under logistic or exponential loss for binary classification tasks. Their scheme requires only knowledge of the trained parameters, and relies on theoretical results about the implicit bias of neural networks towards solutions of the maximum margin problem (Lyu and Li, 2019; Ji and Telgarsky, 2020). Namely, neural networks are biased toward KKT points of the max-margin problem (see Theorem 3.1). By utilizing the set of conditions that KKT points satisfy, they devise a novel loss function that allows for reconstruction of actual training samples. They demonstrate reconstruction from models trained on common image datasets (CIFAR10 (Krizhevsky et al., 2009) and MNIST (LeCun et al., 2010)).
In this work, we expand the scope of neural networks for which we have evidence of successful sample memorization, by demonstrating sample reconstruction. Our contributions are as follows:
* We extend the reconstruction scheme of Haim et al. (2022) to a multiclass setting (Fig. 1). This extension utilizes the implicit bias result from Lyu and Li (2019) to multiclass training. We analyse the effects of the number of classes on reconstructability, and show that models become more susceptible to sample reconstruction as the number of classes increases.
* We devise a reconstruction scheme that applies for general loss functions, assuming that the model is trained with weight decay. We demonstrate reconstruction from models trained on regression losses.
* We investigate the effects of weight decay and show that for certain values, weight decay increases the vulnerability of models to sample reconstruction. Specifically, it allows us to reconstruct training samples from a convolutional network, while Haim et al. (2022) only handled MLPs.
* We analyse the intricate relation between the number of samples and the number of parameters in the trained model, and their effect on reconstructability. We also demonstrate successful reconstruction from a model trained on \(5\),\(000\) samples, surpassing previous results that focused on models trained on up to \(1\),\(000\) samples.
## 2 Related Works
Memorization and Samples Reconstruction.There is no consensus on the definition of the term "memorization" and different works study this from different perspectives. In ML theory, memorization usually refers to label (or, model's output) memorization (Zhang et al., 2016; Arpit et al., 2017; Feldman, 2020; Feldman and Zhang, 2020; Brown et al., 2021), namely, fitting the training set. Memorization in the _input_ domain is harder to show, because in order to demonstrate its occurrence one has to reconstruct samples from the model. Balle et al. (2022) demonstrated reconstruction of one training sample, assuming knowledge of all other training samples and Haim et al. (2022) demonstrated reconstruction of a substantial portion of training samples from a neural network classifier. Loo et al. (2023) extend their work to networks trained under the NTK regime (Jacot et al., 2018) and explore the relationship to dataset distillation. Several works have also studied memorization and samples reconstruction in generative models like autoencoders (Erhan et al., 2009; Radhakrishnan et al., 2018), large language models (Carlini et al., 2021, 2019, 2022) and diffusion-based image generators (Carlini et al., 2023; Sompalli et al., 2022; Gandikota et al., 2023; Kumari et al., 2023).
Inverting Classifiers.Optimizing a model's input as to minimize a class score is the common approach for neural network visualization (Mahendran and Vedaldi, 2015). It usually involves using input regularization (Mordvintsev et al., 2015; Ghiasi et al., 2022), GAN prior (Nguyen et al., 2016, 2017) or knowledge of batch statistics (Yin et al., 2020). Fredrikson et al. (2015), Yang et al. (2019) showed reconstruction of training samples using similar approach, however these methods are limited to classifiers trained with only a few samples per class. Reconstruction from a federated-learning setup (Zhu et al., 2019; He et al., 2019; Hitaj et al., 2017; Geiping et al., 2020; Yin et al., 2021; Huang et al., 2021; Wen et al., 2022) involve attacks that assume knowledge of training samples' gradients (see also Wang et al. (2023) for a theoretical analysis). In this work we do not assume any knowledge on the training data and do not use any priors other than assuming bounded inputs.
## 3 Preliminaries
In this section, we provide an overview of the fundamental concepts and techniques required to understand the remainder of the paper, focusing on the fundamentals laid by Haim et al. (2022) for reconstructing training samples from trained neural networks.
Theoretical Framework.Haim et al. (2022) builds on the theory of implicit bias of gradient descent. Neural networks are commonly trained using gradient methods, and when large enough, they are expected to fit the training data well. However, it is empirically known that these models converge to solutions that also generalize well to unseen data, despite the risk of overfitting. Several
works pointed to this "_implicit bias_" of gradient methods as a possible explanation. Soudry et al. (2018) showed that linear classifiers trained with gradient descent on the logistic loss converge to the same solution as that of a hard-SVM, meaning that they maximize the margins. This result was later extended to non-linear and homogeneous neural networks by Lyu and Li (2019); Ji and Telgarsky (2020):
**Theorem 3.1** (Paraphrased from Lyu and Li (2019); Ji and Telgarsky (2020)): _Let \(\Phi(\mathbf{\theta};\cdot)\) be a homogeneous 2 ReLU neural network. Consider minimizing the logistic loss over a binary classification dataset \(\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) using gradient flow. Assume that there exists time \(t_{0}\) where the network classifies all the samples correctly. Then, gradient flow converges in direction to a first order stationary point (KKT point) of the following maximum-margin problem:_
Footnote 2: A classifier \(\Phi\) is homogeneous w.r.t. \(\mathbf{\theta}\) if there exists \(L\in\mathbb{R}\) s.t. \(\forall c\in\mathbb{R},\mathbf{x}:\Phi(\mathbf{x};c\mathbf{\theta})=c^{L}\Phi(\mathbf{x};\mathbf{\theta})\)
\[\min_{\mathbf{\theta}}\frac{1}{2}\left\|\mathbf{\theta}\right\|^{2}\quad\text{ s.t. }\quad\forall i\in[n]\;\;y_{i}\Phi(\mathbf{\theta};\mathbf{x}_{i})\geq 1\;. \tag{1}\]
A KKT point of Eq. (1) is characterized by the following set of conditions:
\[\forall j\in[p],\;\;\mathbf{\theta}_{j}-\sum_{i=1}^{n}\lambda_{i} \nabla_{\mathbf{\theta}_{j}}\left[y_{i}\Phi(\mathbf{\theta};\mathbf{x}_{i})\right]=0 \text{(stationarity)} \tag{2}\] \[\forall i\in[n],\;\;y_{i}\Phi(\mathbf{\theta};\mathbf{x}_{i})\geq 1 \text{(primal feasibility)}\] (3) \[\forall i\in[n],\;\;\;\lambda_{i}\geq 0 \text{(dual feasibility)}\] (4) \[\forall i\in[n],\;\;\lambda_{i}=0\text{ if }y_{i}\Phi(\mathbf{\theta}; \mathbf{x}_{i})\neq 1 \text{(complementary slackness)} \tag{5}\]
Reconstruction Algorithm.Haim et al. (2022) demonstrated reconstructing samples from the training set of such classifiers by devising a reconstruction loss. Given a trained classifier \(\Phi(\mathbf{x};\mathbf{\theta})\), they initialize a set of \(\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{m}\) and \(\{\lambda_{i}\}_{i=1}^{m}\), and optimize \(\mathbf{x}_{i},\lambda_{i}\) to minimize the following loss function:
\[L=\underbrace{\left\|\mathbf{\theta}-\sum_{i=1}^{m}\lambda_{i}\nabla_{\mathbf{\theta}}\left[y_{i}\Phi(\mathbf{\theta};\mathbf{x}_{i})\right]\right\|}_{L_{\text{uniformity}}}+\underbrace{\sum_{i=1}^{m}\max\left\{-\lambda_{i},-\lambda_{\text{min}}\right\}}_{L_{\lambda}}+L_{\text{prior}} \tag{6}\]
Where \(L_{\text{prior}}\) is simply bounding each pixel value at \([-1,1]\)3. The number of training samples \(n\) is unknown. However, setting \(m>2n\), where \(\{y_{i}\}\) are set in a balanced manner allows reconstructing samples with any label distribution. The \(\mathbf{x}_{i}\)'s are initialized from the Gaussian distribution \(\mathcal{N}(0,\sigma_{x}^{2}\mathbb{I})\), and \(\lambda_{\text{min}},\sigma_{x}\) are hyperparameters. We note that the homogeneity condition from Theorem 3.1 is not necessarily a practical limitation of this reconstruction scheme, as already in Haim et al. (2022) they show reconstructions from a non-homogeneous network.
Footnote 3: Formally: \(L_{\text{prior}}=\sum_{i=1}^{m}\sum_{k=1}^{d}\max\{\max\{\mathbf{x}_{i,k}-1, 0\},\max\{-\mathbf{x}_{i,k}-1,0\}\}\).
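A minimal PyTorch sketch of this objective is given below; it is our own illustration rather than the authors' reference implementation, and the variable names, the \(\lambda_{\text{min}}\) default and the reduction choices are assumptions of the example.

```python
# Illustrative sketch of the reconstruction loss in Eq. (6).
import torch

def reconstruction_loss(model, x, y, lam, lam_min=0.01):
    """x: (m, d) candidate inputs (optimized), y: (m,) fixed labels in {-1, +1},
    lam: (m,) trainable dual variables; model is the trained binary classifier Phi
    (assumed to output one score per input)."""
    params = list(model.parameters())
    theta = torch.nn.utils.parameters_to_vector(params)

    # sum_i lam_i * grad_theta [ y_i * Phi(theta; x_i) ], using linearity of the gradient
    scores = model(x).squeeze(-1)
    grads = torch.autograd.grad((lam * y * scores).sum(), params, create_graph=True)
    grad_vec = torch.cat([g.reshape(-1) for g in grads])

    L_stationarity = (theta - grad_vec).norm()
    L_lambda = torch.maximum(-lam, torch.full_like(lam, -lam_min)).sum()
    L_prior = torch.clamp(x.abs() - 1.0, min=0.0).sum()   # keep pixels inside [-1, 1]
    return L_stationarity + L_lambda + L_prior
```

Both `x` and `lam` are then updated with a gradient-based optimizer, typically from several random initializations and hyperparameter choices, producing the candidate set described next.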
Analysing and Summarizing Results.The optimization in Eq. (6) is executed \(k\) times (for different hyperparameters) and results in \(km\) outputs (\(\{\hat{\mathbf{x}}_{i}\}_{i=1}^{km}\)) that we term _candidates_, as they are candidates to be reconstructed training samples. To quantify the success of the reconstruction process, each training sample is matched with its nearest-neighbour from the \(km\) candidates. The "quality" of reconstruction is then measured using SSIM Wang et al. (2004) (see full details in Appendix A.1).
An important corollary of the set of KKT conditions Eqs. (2) to (5) is that the parameters of the trained model only depend on gradients of samples that are closest to the decision boundary, the so-called "margin-samples" (see end of section 3.2 in Haim et al. (2022)). Therefore, a good visual summary for analysing reconstruction from a trained model is by plotting the reconstruction quality (SSIM) against the distance from the decision boundary (\(|\Phi(\mathbf{x}_{i})|\)). We also utilize such visualizations.
Assessing the Quality of Reconstructed Samples.Determining whether a candidate is a correct match for some training sample is as hard as finding a good image similarity metric. No synthetic metric such as SSIM or L2 norm can perfectly align with human perception. Perceptual similarity
metrics (e.g., LPIPS (Zhang et al., 2018)) build on top of pre-trained classifiers trained on Imagenet (Deng et al., 2009), and are not effective for the image resolution in this work (up to 32x32 pixels). We have observed heuristically that candidates with SSIM score higher than about \(0.4\) are indeed visually similar to their nearest neighbor training sample. Hence, in this work we say that a certain candidate is a _"good reconstruction"_ if its SSIM score with its nearest neighbor is at least \(0.4\). Also see discussion in Appendix A.2.
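The matching step itself is straightforward; the Python sketch below is our own illustration of it, assuming a recent scikit-image (for the `channel_axis` argument) and images scaled to \([0,1]\).

```python
# Illustrative sketch: match every training image to its most similar candidate
# by SSIM and keep only matches above the 0.4 threshold used in this work.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def match_reconstructions(train_imgs, candidates, threshold=0.4):
    """train_imgs: (n, H, W, C) array in [0, 1]; candidates: (km, H, W, C) array."""
    matches = []
    for i, img in enumerate(train_imgs):
        scores = [ssim(img, cand, channel_axis=-1, data_range=1.0)
                  for cand in candidates]
        j = int(np.argmax(scores))
        if scores[j] >= threshold:
            matches.append((i, j, float(scores[j])))
    return matches
```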
## 4 Reconstructing Data from Multi-Class Classifiers
We demonstrate that training set reconstruction can be extended to multi-class classification tasks.
### Theory
The extension of the implicit bias of homogeneous neural networks to the multi-class settings is discussed in Lyu and Li (2019) (Appendix G): Let \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\subseteq\mathbb{R}^{d}\times[C]\) be a multi-class classification training set where \(C\in\mathbb{N}\) is any number of classes, and \([C]=\{1,\dots,C\}\). Let \(\Phi(\boldsymbol{\theta};\cdot):\mathbb{R}^{d}\rightarrow\mathbb{R}^{C}\) be a homogeneous neural network parameterized by \(\boldsymbol{\theta}\in\mathbb{R}^{p}\). We denote the \(j\)-th output of \(\Phi\) on an input \(\mathbf{x}\) as \(\Phi_{j}(\boldsymbol{\theta};\mathbf{x})\in\mathbb{R}\). Consider minimizing the standard cross-entropy loss and assume that after some number of iterations the model correctly classifies all the training examples. Then, gradient flow will converge to a KKT point of the following maximum-margin problem:
\[\min_{\boldsymbol{\theta}}\frac{1}{2}\left\|\boldsymbol{\theta}\right\|^{2} \quad\text{s.t.}\quad\Phi_{y_{i}}(\boldsymbol{\theta};\mathbf{x}_{i})-\Phi_{j} (\boldsymbol{\theta};\mathbf{x}_{i})\geq 1\ \ \forall i\in[n],\forall j\in[C]\setminus\{y_{i}\}\quad. \tag{7}\]
A KKT point of the above optimization problem is characterized by the following set of conditions:
\[\boldsymbol{\theta}-\sum_{i=1}^{n}\sum_{j\neq y_{i}}^{c}\lambda_{i,j}\nabla_{\boldsymbol{\theta}}(\Phi_{y_{i}}(\boldsymbol{\theta};\mathbf{x}_{i })-\Phi_{j}(\boldsymbol{\theta};\mathbf{x}_{i}))=\boldsymbol{0} \tag{8}\] \[\forall i\in[n],\forall j\in[C]\setminus\{y_{i}\}:\ \Phi_{y_{i}}( \boldsymbol{\theta};\mathbf{x}_{i})-\Phi_{j}(\boldsymbol{\theta};\mathbf{x}_{i })\geq 1\] (9) \[\forall i\in[n],\forall j\in[C]\setminus\{y_{i}\}:\ \lambda_{i,j}\geq 0\] (10) \[\forall i\in[n],\forall j\in[C]\setminus\{y_{i}\}:\ \lambda_{i,j}=0\text{ if }\Phi_{y_{i}}( \boldsymbol{\theta};\mathbf{x}_{i})-\Phi_{j}(\boldsymbol{\theta};\mathbf{x}_{i })\neq 1 \tag{11}\]
A straightforward extension of a reconstruction loss for a multi-class model that converged to the conditions above would be to minimize the norm of the left-hand-side (LHS) of condition Eq. (8)
Figure 1: Reconstructed training samples from a multi-class MLP classifier that was trained on \(500\) CIFAR10 images. Each column corresponds to one class and shows the \(10\) training samples (_red_) that were best reconstructed from this class, along with their reconstructed result (_blue_).
(namely, optimize over \(\{\mathbf{x}_{i}\}_{i=1}^{m}\) and \(\{\lambda_{i,j}\}_{i\in[m],j\in[C]\setminus y_{i}}\) where \(m\) is a hyperparameter). However, this straightforward extension failed to successfully reconstruct samples. We therefore propose the following equivalent formulation.
Note that from Eqs. (9) and (11), most \(\lambda_{i,j}\) zero out: the distance of a sample \(\mathbf{x}_{i}\) to its nearest decision boundary, \(\Phi_{y_{i}}-\max_{j\neq y_{i}}\Phi_{j}\), is usually achieved for a single class \(j\) and therefore (from Eq. (11)) in this case at most one \(\lambda_{i,j}\) will be non-zero. For some samples \(\mathbf{x}_{i}\) it is also possible that all \(\lambda_{i,j}\) will vanish. Following this observation, we define the following loss that only considers the distance from the decision boundary:
\[L_{\text{multiclass}}(\mathbf{x}_{1},...,\mathbf{x}_{m},\lambda_{1},..., \lambda_{m})=\left\|\boldsymbol{\theta}-\sum_{i=1}^{m}\lambda_{i}\;\nabla_{ \boldsymbol{\theta}}[\Phi_{y_{i}}(\mathbf{x}_{i};\boldsymbol{\theta})-\max_{ j\neq y_{i}}\Phi_{j}(\mathbf{x}_{i};\boldsymbol{\theta})]\right\|_{2}^{2} \tag{12}\]
Eq. (12) implicitly includes Eq. (11) into the summation in Eq. (8), thereby significantly reducing the number of summands and simplifying the overall optimization problem. While the straightforward extension failed to successfully reconstruct samples, solving Eq. (12) enables reconstruction from multiclass models (see Fig. 1 and results below).
We also use the same \(L_{\lambda}\) and \(L_{\text{prior}}\) as in Eq. (6), and set \(\{y_{i}\}\) in a balanced manner (uniformly on all classes). While setting \(m=C\cdot n\) allows reconstructing any label distribution, in our experiments we focus on models trained on balanced training sets, and use \(m=2n\) which works sufficiently well. An intuitive way to understand the extension of the binary reconstruction loss Eq. (6) to multi-class reconstruction Eq. (12) is that the only difference is the definition of the _distance to nearest boundary_, which is the term inside the square brackets in both equations.
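In code, the main change relative to the binary case is how the distance to the nearest boundary is computed. The PyTorch sketch below is our own illustration (the \(L_{\lambda}\) and \(L_{\text{prior}}\) terms are omitted for brevity, and the names are ours).

```python
# Illustrative sketch of the multi-class objective in Eq. (12).
import torch
import torch.nn.functional as F

def multiclass_reconstruction_loss(model, x, y, lam):
    """x: (m, d) candidates, y: (m,) class indices, lam: (m,) dual variables."""
    params = list(model.parameters())
    theta = torch.nn.utils.parameters_to_vector(params)

    logits = model(x)                                             # shape (m, C)
    target = logits.gather(1, y.unsqueeze(1)).squeeze(1)          # Phi_{y_i}
    mask = F.one_hot(y, num_classes=logits.size(1)).bool()
    runner_up = logits.masked_fill(mask, float('-inf')).max(dim=1).values
    margin = target - runner_up                                   # distance to nearest boundary

    grads = torch.autograd.grad((lam * margin).sum(), params, create_graph=True)
    grad_vec = torch.cat([g.reshape(-1) for g in grads])
    return (theta - grad_vec).pow(2).sum()                        # squared norm as in Eq. (12)
```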
### Results
We compare between reconstruction from binary classifiers, as studied in Haim et al. (2022), and reconstruction from multi-class classifiers by using the novel loss function Eq. (12). We conduct the following experiment: we train an MLP classifier with architecture \(D\)-\(1000\)-\(1000\)-\(C\) on samples from the CIFAR10 (Krizhevsky et al., 2009) dataset. The model is trained to minimize the cross-entropy loss with full-batch gradient descent, once with two classes (\(250\) samples per class) and once for the full \(10\) classes (\(50\) samples per class). Both models train on the same amount of samples (\(500\)). The test set accuracy of the models is \(77\%/32\%\) respectively, which is far from random (\(50\%/10\%\) resp.). See implementation details in Appendix B.
To quantify the quality of our reconstructed samples, for each sample in the original training set we search for its nearest neighbour in the reconstructed images and measure the similarity using SSIM (Wang et al., 2004) (higher SSIM means better reconstruction). In Fig. 2 we plot the quality of reconstruction (in terms of SSIM) against the distance of the sample from the decision boundary \(\Phi_{y_{i}}(\mathbf{x}_{i};\boldsymbol{\theta})-\max_{j\neq y_{i}}\Phi_{j}( \mathbf{x}_{i};\boldsymbol{\theta})\). As seen, a multi-class classifier yields much more samples that are vulnerable to being reconstructed.
Figure 2: Multi-class classifiers are more vulnerable to training-set reconstruction. For a training set of size \(500\), a multi-class model (_left_) yields \(101\) reconstructed samples with good quality (SSIM\(>\)\(0.4\)), compared to \(40\) in a binary classification model (_right_).
We examine the relation between the ability to reconstruct from a model and the number of classes on which it was trained. Comparing two models trained on different numbers of classes is not straightforward, since we want to isolate the effect of the number of classes from the size of the dataset (it was observed by Haim et al. (2022) that the number of reconstructed samples decreases as the total size of the training set increases). We therefore train models on training sets with varying numbers of classes (\(C\in\{2,3,4,5,10\}\)) and varying numbers of samples per class (\(1,5,10,50\)). The results are visualized in Fig. 3(a). As seen, for models with the same number of samples per class, the ability to reconstruct _increases_ with the number of classes, even though the total size of the training set is larger. This further validates our hypothesis that the more classes, the more samples are vulnerable to reconstruction (also see Appendix C).
Another way to validate this hypothesis is by showing the dependency between the number of classes and the number of "good" reconstructions (SSIM\(>\)\(0.4\)), shown in Fig. 3(b). As can be seen, training on multiple classes yields more samples that are vulnerable to reconstruction. An intuitive explanation is that multi-class classifiers have more "margin-samples". Since margin-samples are more vulnerable to reconstruction, this results in more samples being reconstructed from the model.
## 5 Data Reconstruction with General Loss Functions
We demonstrate that data reconstruction can be generalized to a larger family of loss functions. Haim et al. (2022) and Section 4 only considered a reconstruction scheme based on the implicit bias of gradient methods under the cross-entropy loss. For other loss functions, such as the square loss, a precise characterization of the implicit bias in nonlinear networks does not exist [23]. Hence, we establish a reconstruction scheme for networks trained with explicit regularization, i.e., with weight decay. We show that as long as the training involves a weight-decay term, we can derive a reconstruction objective that is very similar to the previous objectives in Eqs. (6) and (12).
### Theory
Let \(\ell(\Phi(\mathbf{x}_{i};\boldsymbol{\theta}),y_{i})\) be a loss function that gets as input the predicted output of the model \(\Phi\) (parametrized by \(\boldsymbol{\theta}\)) on an input sample \(\mathbf{x}_{i}\), and its corresponding label \(y_{i}\). The total regularized loss:
\[\mathcal{L}=\sum_{i=1}^{n}\ell(\Phi(\mathbf{x}_{i};\boldsymbol{\theta}),y_{i}) +\lambda_{\text{WD}}\frac{1}{2}\|\boldsymbol{\theta}\|^{2}\quad. \tag{13}\]
Figure 3: Evaluating the effect of multiple classes on the ability to reconstruct. We show reconstructions from models trained with different numbers of classes and different numbers of samples per class. As seen, multiple classes result in more reconstructed samples.
Assuming convergence (\(\nabla_{\mathbf{\theta}}\mathcal{L}=0\)), the parameters should satisfy the following :
\[\mathbf{\theta}-\sum_{i=1}^{n}\ell_{i}^{\prime}\,\nabla_{\mathbf{\theta}}\Phi(\mathbf{x}_ {i};\mathbf{\theta})=0 \tag{14}\]
where \(\ell_{i}^{\prime}=\frac{1}{\lambda_{WD}}\frac{\partial\ell(\Phi(\mathbf{x}_{i} ;\mathbf{\theta}),y_{i})}{\partial\Phi(\mathbf{x}_{i};\mathbf{\theta})}\). This form (which is similar to the condition in Eq. (2)), allows us to define a generalized reconstruction loss for models trained with a weight-decay term:
\[L_{rec}(\mathbf{x}_{1},...,\mathbf{x}_{m},\lambda_{1},...,\lambda_{m})=\|\mathbf{\theta}-\sum_{i=1}^{m}\lambda_{i}\nabla_{\mathbf{\theta}}\Phi(\mathbf{x}_{i};\mathbf{\theta})\|_{2}^{2} \tag{15}\]
As before, we also include the same \(L_{\text{prior}}\) as in Section 3. It is straightforward to see that \(L_{rec}\) is a generalization of the reconstruction loss in Eq. (6) (\(y_{i}\) could be incorporated into the \(\lambda_{i}\) term).
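A sketch of this loss-agnostic objective is given below (our illustration; as in Eq. (15), the labels do not appear and each \(\lambda_{i}\) is free to absorb the sign and magnitude of the scaled loss derivative \(\ell_{i}^{\prime}\)).

```python
# Illustrative PyTorch sketch of the generalized objective in Eq. (15);
# the pixel prior L_prior is omitted for brevity.
import torch

def general_reconstruction_loss(model, x, lam):
    """x: (m, d) candidates, lam: (m,) trainable coefficients (may be negative)."""
    params = list(model.parameters())
    theta = torch.nn.utils.parameters_to_vector(params)
    outputs = model(x).squeeze(-1)                    # Phi(x_i; theta)
    grads = torch.autograd.grad((lam * outputs).sum(), params, create_graph=True)
    grad_vec = torch.cat([g.reshape(-1) for g in grads])
    return (theta - grad_vec).pow(2).sum()
```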
### Results and Analysis
We validate the above theoretical analysis by demonstrating reconstruction from models trained on other losses than the ones shown in Section 4 and Haim et al. (2022). We use the same dataset as in the classification tasks - images from CIFAR10 dataset with binary labels of \(\{-1,1\}\). The only difference is replacing the classification binary cross-entropy loss with regression losses (e.g., MSE).
In classification tasks, we analyzed the results by plotting the reconstruction quality (SSIM) against the sample's distance from the decision boundary (see Section 3). This showed that reconstruction is only feasible for margin-samples. However, in regression tasks, margin and decision boundary lack specific meaning. We propose an alternative analysis approach - note that smaller distance from the margin results in higher loss for binary cross-entropy. Intuitively, margin-samples are the most challenging to classify (as reflected by the loss function). Therefore, for regression tasks, we analyze the results by plotting the reconstruction quality against the loss (per training sample).
In Fig. 4 we show results for reconstructions from models trained with MSE, \(L_{2.5}\) loss (\(\ell=|\Phi(\mathbf{x};\mathbf{\theta})-y|^{p}\) for \(p\)=\(2\),\(2.5\) respectively) and Huber loss (Huber, 1992). The reconstruction scheme in Eq. (15) is the same for all cases, and is invariant to the loss used during training. Fig. 4 highlights two important observations: first, the reconstruction scheme in Eq. (15) succeeds in reconstructing large portions of the training set from models trained with regression losses, as noted from the high quality (SSIM) of the samples. Second, by plotting quality against the loss, one sees that "challenging" samples (with high loss) are easier to reconstruct. Also note that the analysis works for classification losses, namely BCE with or without weight-decay (see Fig. 4). For more results see Appendix D.
Figure 4: **Reconstruction from general losses** (column) for various training set sizes (row), using Eq. (15). “Harder” samples (with higher loss) are easier to reconstruct.
## 6 On the Different Factors that Affect Reconstructability
Our goal is to gain a deeper understanding of the factors behind models' vulnerability to reconstruction schemes. In this section, we present several analyses that shed light on several important factors.
### The Role of Weight Decay in Data Reconstruction
Haim et al. (2022) assumed MLP models whose first fully-connected layer was initialized with small (non-standard) weights. Models with standard initialization (e.g., He et al. (2015), Glorot and Bengio (2010)) did not yield reconstructed samples; the MLPs reconstructed in Haim et al. (2022) were instead initialized with an extremely small variance in the first layer. Setting out to better understand this limitation, we observed that incorporating weight-decay during training not only enables sample reconstruction in models with standard initialization, but often increases the reconstructability of training samples.
In Fig. 5(a)-(b) we show the number of good reconstructions for different choices of the weight-decay value (\(\lambda_{\text{WD}}\)), for MLP classifiers trained on \(C\)=\(2\),\(10\) classes and \(50\) samples per class (Fig. 5(a) and (b), respectively). We add two baselines trained _without_ weight-decay: a model trained with standard initialization (black) and a model with a small-initialized first layer (red). Note how for some values of weight-decay the reconstructability is _significantly higher_ than what was observed for models with non-standard initialization. By examining the training samples' distance to the boundary, one observes that using weight-decay results in more margin-samples, which are empirically more vulnerable to reconstruction (see full details in Appendix E).
Reconstruction from Convolutional Neural Networks (CNNs). CNNs adhere to the assumptions of Theorem 3.1, yet Haim et al. (2022) failed to apply their reconstruction scheme Eq. (6) to CNNs. We observe that incorporating weight-decay during training (using standard initialization) enables sample reconstruction. In Fig. 6 we show an example of reconstruction from a binary classifier whose first layer is a Conv-layer with kernel size \(3\) and \(32\) output channels, followed by two fully connected layers (Conv(\(k\)=\(3\),\(C_{\text{out}}\)=\(32\))-\(1000\)-\(1\)). The weight-decay term is \(\lambda_{\text{WD}}\)=\(0.001\) (the training setup is similar to that of the MLPs). In Fig. 5(c) we show the reconstructability for the same CNN model trained with other values of \(\lambda_{\text{WD}}\). Note how the weight-decay term plays a similar role in the CNN as in the MLP case. See full details in Appendix F.
### The Effect of the Number of Parameters and Samples on Reconstructability
Haim et al. (2022) observed that models trained on fewer samples are more susceptible to reconstruction in terms of both quantity and quality. In this section, we delve deeper into this phenomenon,
Figure 5: Using weight-decay during training increases vulnerability to sample reconstruction
Figure 6: **Reconstruction from CNN.** Training samples (red) and their best reconstructions (blue)
focusing on the intricate relationship between the number of parameters in the trained model and the number of training samples. We conduct the following experiment:
We train \(3\)-layer MLPs with architecture \(D\)-\(W\)-\(W\)-1 on \(N\) training samples from binary CIFAR10 (animals vs. vehicles), where \(W\in\{5,10,50,100,500,1000\}\) and \(N\in\{10,50,100,300,500\}\). We conduct the experiment for both classification and regression losses, with BCE and MSE loss respectively. Generalization error is \(23\%\)-\(31\%\) for BCE (classification) and \(0.69\)-\(0.88\) for MSE (regression), compared to \(50\%\)/\(0.97\) for similar models with random weights.
We reconstruct each model using Eq. (15) and record the number of good reconstructions. The results are shown in Fig. 7. Note that as \(W/N\) increases, our reconstruction scheme is capable of reconstructing more samples, and vice versa. For example, consider the case when \(N\)=\(10\). To successfully reconstruct the entire training set, it is sufficient for \(W\) to be greater than \(50\)/\(10\) (for MSE/BCE). However, when \(N\)=\(500\), even larger models (with larger \(W\)) can only reconstruct up to \(8\%\) of the samples.
Lastly, we reconstruct from a model with \(W\)=\(10\),\(000\), trained on \(N\)=\(5\),\(000\) samples (\(5\) times larger than any previous model). While there is some degradation in the quality of the reconstructions compared to models trained on fewer samples, it is evident that our scheme can still reconstruct some of the training samples. For full results see Appendix G.
## 7 Conclusions and Future Work
We present improved reconstruction methods and conduct a comprehensive analysis of the reconstruction method proposed by Haim et al. (2022). Particularly, we extend their reconstruction scheme to a multi-class setting and devise a novel reconstruction scheme for general loss functions, allowing reconstruction in a regression setting (e.g., MSE loss). We examine various factors influencing reconstructability. We shed light on the role of weight decay in samples memorization, allowing for sample reconstruction from convolutional neural networks. Lastly, we examine the intricate relationship between the number of parameters, the number of samples, and the vulnerability of the model to reconstruction schemes. We acknowledge that our reconstruction method raises concerns regarding privacy. We consider it crucial to present such methodologies as they encourage researchers to study the potential hazards associated with training neural networks. Additionally, it allows for the development of protective measures aimed at preventing the leakage of sensitive information.
All of the above extend our knowledge and understanding of how memorization works in certain neural networks. This opens up several possibilities for future research including extending our reconstruction scheme to practical models (e.g., ResNets), exploring reconstruction from models trained on larger datasets or different data types (e.g., text, time-series, tabular data), analyzing the impact of optimization methods and architectural choices on reconstructability, and developing privacy schemes to protect vulnerable samples from reconstruction attacks.
Figure 7: **Effect of the number of neurons and number of training samples on reconstructability.** We train \(3\)-layer MLPs with architecture \(D\)-\(W\)-\(W\)-1 on \(N\) training samples from binary CIFAR10 (animals vs. vehicles), using MSE (_left_) or BCE (_right_) loss. At each cell we report the number of good reconstructions (SSIM>\(0.4\)), in both absolute numbers and as a percentage relative to \(N\). |
2308.00364 | Fountain -- an intelligent contextual assistant combining knowledge
representation and language models for manufacturing risk identification | Deviations from the approved design or processes during mass production can
lead to unforeseen risks. However, these changes are sometimes necessary due to
changes in the product design characteristics or an adaptation in the
manufacturing process. A major challenge is to identify these risks early in
the workflow so that failures leading to warranty claims can be avoided. We
developed Fountain as a contextual assistant integrated in the deviation
management workflow that helps in identifying the risks based on the
description of the existing design and process criteria and the proposed
deviation. In the manufacturing context, it is important that the assistant
provides recommendations that are explainable and consistent. We achieve this
through a combination of the following two components 1) language models
finetuned for domain specific semantic similarity and, 2) knowledge
representation in the form of a property graph derived from the bill of
materials, Failure Modes and Effect Analysis (FMEA) and prior failures reported
by customers. Here, we present the nuances of selecting and adapting pretrained
language models for an engineering domain, continuous model updates based on
user interaction with the contextual assistant and creating the causal chain
for explainable recommendations based on the knowledge representation.
Additionally, we demonstrate that the model adaptation is feasible using
moderate computational infrastructure already available to most engineering
teams in manufacturing organizations and inference can be performed on standard
CPU only instances for integration with existing applications making these
methods easily deployable. | Saurabh Kumar, Daniel Fuchs, Klaus Spindler | 2023-08-01T08:12:43Z | http://arxiv.org/abs/2308.00364v1 | Fountain - an intelligent contextual assistant combining knowledge representation and language models for manufacturing risk identification
###### Abstract
Deviations from the approved design or processes during mass production can lead to unforeseen risks. However, these changes are sometimes necessary due to changes in the product design characteristics or an adaptation in the manufacturing process. A major challenge is to identify these risks early in the workflow so that failures leading to warranty claims can be avoided. We developed Fountain as a contextual assistant integrated in the deviation management workflow that helps in identifying the risks based on the description of the existing design and process criteria and the proposed deviation. In the manufacturing context, it is important that the assistant provides recommendations that are explainable and consistent. We achieve this through a combination of the following two components 1) language models finetuned for domain specific semantic similarity and, 2) knowledge representation in the form of a property graph derived from the bill of materials, Failure Modes and Effect Analysis (FMEA) and prior failures reported by customers. Here, we present the nuances of selecting and adapting pretrained language models for an engineering domain, continuous model updates based on user interaction with the contextual assistant and creating the causal chain for explainable recommendations based on the knowledge representation. Additionally, we demonstrate that the model adaptation is feasible using moderate computational infrastructure already available to most engineering teams in manufacturing organizations and inference can be performed on standard CPU only instances for integration with existing applications making these methods easily deployable.
contextual assistant, knowledge representation, language models, semantic similarity, explainable AI
## 1 Introduction
In the automotive domain, it is common practice to specify and approve all the product design and production process characteristics before the start of mass manufacturing. Component suppliers acquire approval from the Original Equipment Manufacturer for these specifications and are expected to keep them unchanged during the mass production phase. However, there are scenarios when a product design characteristic (e.g. the material used, dimension or tolerances) or a production process (e.g. machines used, production sequence, or tooling) have to be changed after the mass production has started. Scenarios like change in the supplier of a subcomponent, unavailability of a material or need to add a manual production or inspection process are not uncommon making such changes unavoidable. Any such change can lead to unforeseen risks related to product quality and lead to warranty claims. These claims can sometimes be very expensive. Hence it is important to identify any risks originating from such changes very early during the workflow. Processes for tracking and documenting changes are common practice and are often implemented within quality control software applications. However, the processes rely heavily on the ability of the individuals responsible for requesting and approving these changes in the production plant to identify and mitigate the risks. This is exacerbated by the need to quickly find solutions and implement the changes and the variance in experience and knowhow across production plants spread all around the world. Hence, a solution that considers these human factors (Godwin & Ebiefung, 1999) to retain a simple process without compromising on the quality of risk assessment is required.
This is a scenario where an intelligent contextual assistant (Dhiman, Wachter, Fellmann, & Rocker, 2022), integrated within the quality control process can assist the change requestor in identifying the risks very early in the workflow. However, the development of such an assistance system faces two major challenges - 1) having the domain knowledge that enables the identification of risks related to design or process changes, and 2) ability to map the textual description of change to the correct risks in the presence of domain specific terms.
Here, we demonstrate how we integrated such a contextual assistant within the quality control process related to deviation management within our organization. As shown in Fig. 1, the deviation management application is a web application deployed in the cloud that enables documentation, approval and tracking of every manufacturing deviation.
As shown in Fig 2, the deviation requestor is required to provide the information related to the impacted part or assembly, a textual description of the current design or process and a textual description of the requested deviation.
The goal of the contextual assistant is to help the deviation requestor in identification of the risks and checking whether the risk evaluation sufficiently covers all the risks. As shown in Fig. 3, the contextual assistance is required to use all this information and provide help with risk identification.
Fig. 1: The deviation management application for the quality control process
Fig. 3: The contextual assistance for risk identification provided by fountain
Fig. 2: The data to be provided for the deviation request
## 2 Methods
To achieve the goals of the contextual assistant, two main components are required.
1. A representation of the domain knowledge (Van Harmelen, Lifschitz, Porter, & eds, 2008) capturing causal relations between product and process characteristics and the failures that could emanate from changes in these characteristics
2. A language model (Bengio, Ducharme, & Vincent, 2000) finetuned for the domain and the task of determining and ranking the semantic similarity between the text related to deviations entered by the user and the product and process characteristics.
Apart from these, a simple preprocessing component has been utilized to account for domain specific use of certain terms and abbreviations that cannot be easily captured by the language model due to lack of model finetuning samples and lack of variance within the available samples. A simple example in the domain presented here is the term 'cat'. The term 'cat' in our domain is used to refer to a catalyst and not to an animal as would be identified by any pretrained language model. Such terms have been identified by collecting frequently used terms referring to part names and mapping them to the names as represented in the engineering Bill of Materials (BOM) for the product. For this example, a deviation text containing 'cat' would be assigned to a part representing a 'catalyst'.
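A minimal sketch of such a preprocessing step is shown below; apart from the 'cat' example above, the mapping entries and function names are placeholders rather than the actual production term list.

```python
import re

# Hypothetical mapping of frequently used shop-floor terms and abbreviations to
# the part names used in the engineering BOM; only the 'cat' -> 'catalyst'
# entry comes from the example above, the rest would be collected from users.
DOMAIN_TERM_MAP = {
    "cat": "catalyst",
    # "abbreviation": "BOM part name", ...
}

def normalize_deviation_text(text: str, term_map=DOMAIN_TERM_MAP) -> str:
    """Replace domain abbreviations with their BOM part names before the text
    is passed to the language model."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        return term_map.get(word.lower(), word)
    return re.sub(r"\b\w+\b", replace, text)

# Example: a deviation text containing 'cat' is mapped to the 'catalyst' part.
print(normalize_deviation_text("Cat housing supplier changed"))
# -> "catalyst housing supplier changed"
```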
### The domain knowledge representation
It is difficult to separate the task of domain knowledge representation from its intended usage. For the purpose of developing Fountain, we avoided attempting to develop a general-purpose knowledge representation for the products covering the entire design, manufacturing, sourcing and assembly of all subcomponents and their variants. Instead, we decided to reuse the high-quality sources of information that are available for almost all products and subcomponents and that can be incrementally added. Our goal has been to design a method that can easily scale across different types of products across the automotive domain and possibly generalize it to other domains like aerospace and medical devices. Since our goal is to identify potential failures arising from changes in product design, we focus on creating our knowledge representation from standard information sources that can enable this.
Failure Modes and Effect Analysis (FMEA) is a very good source of information linking parts, processes, failures, causes, and detection and prevention mechanisms (Teng & Ho, 1996) and has been applied in many industries (Wu, Liu, & Nie, 2021). In the automotive industry, it is common practice to perform an FMEA for every new product or change to existing products or processes. In our context, we had access to two different types of FMEAs that are extensively used within our organization - Design FMEAs and Process FMEAs. We extracted and preprocessed 1193 Design FMEAs and 565 Process FMEAs. The preprocessing eliminated duplicates involving the same part/process - failure - cause chains.
The BOM provides us a hierarchical representation of all the subcomponents that together create a final product. It is common practice across industries to maintain the BOM in a Product Lifecycle Management (PLM) system. The BOM hierarchies extracted from the PLM system serve as a representation of all parts and their relationships in the final product variants and is an important component of our knowledge representation.
The D-FMEA provides the relationship between product design characteristics and failures that could emanate from the design criteria not being fulfilled. As shown in fig.4, each part in the product can be linked to one or more failure modes that can have one or more causes and effects respectively. As mentioned previously, the goal of our representation is to enable failure identification based on the textual description of the existing definition and the deviation and to provide explainability for all the recommendations using the causal chains in the representations.
The representation has been instantiated as a labelled property graph (Robinson, Webber, & Eifrem, 2015) (Angles, 2018). A labelled property graph consists of nodes (having properties in the form of key-value pairs) and relationships. The relationships are labelled and directed and can have properties. There are several commercial and open-source frameworks that enable creation of property graphs. We have used the open-source framework _redisgraph_(Pieter, et al., 2019). It provides a simple containerized deployment and suits our cloud deployment scenario without relying on a proprietary managed service. _Cypher_(Francis, et al., 2018) has been used for graph querying and the query parameters can be dynamically generated depending on the contextual assistance scenario. Fig. 5 shows a sample subset of the property graph for one part (represented by the black circle) in the BOM.
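As an illustration of how the causal chains can be queried, the sketch below shows a Cypher query of the kind that could retrieve the part - failure mode - cause/effect chain for a given part, executed here through redis-py's RedisGraph interface. The node labels, relationship types, property names and graph name are assumptions for illustration and do not necessarily match the schema used in production; the client call may also differ depending on the RedisGraph client version.

```python
from redis import Redis  # assumes the RedisGraph module is loaded on the server

# Hypothetical schema: (:Part)-[:HAS_FAILURE_MODE]->(:FailureMode),
# (:FailureMode)-[:CAUSED_BY]->(:Cause), (:FailureMode)-[:LEADS_TO]->(:Effect).
CAUSAL_CHAIN_QUERY = """
MATCH (p:Part {number: $part_number})-[:HAS_FAILURE_MODE]->(fm:FailureMode)
OPTIONAL MATCH (fm)-[:CAUSED_BY]->(c:Cause)
OPTIONAL MATCH (fm)-[:LEADS_TO]->(e:Effect)
RETURN p.name, fm.description, c.description, e.description
"""

r = Redis(host="localhost", port=6379)
graph = r.graph("fmea")  # graph holding the BOM/FMEA representation (name assumed)
result = graph.query(CAUSAL_CHAIN_QUERY, {"part_number": "A123"})
for row in result.result_set:
    print(row)  # one causal chain per row, used to explain a recommendation
```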
Figure 4: The D-FMEA concepts and their relationships
Figure 5: A subset of the graph showing the links between a single part (black dot), the failure modes (blue circles) and other concepts
To achieve the required contextual assistance, the FMEA representation instances have to be dynamically linked to the Deviation and Warranty Claim instances as shown in fig.6. Our two-layered approach to knowledge representation, with a layer of _Concepts_ and a second layer of instantiation, helps us achieve an overall representation that is popularly referred to as a knowledge graph (Barrasa, Hodler, & Webber, 2021) (Noy, et al., 2019) (Ji, Pan, Cambria, Marttinen, & Philip, 2021). Dynamically adding and linking instances from other domains related to product quality, like Lessons Learnt and the 8D methodology, can follow the same method and is explained later in the section on possible extensions.
To dynamically create the links from the concepts related to Deviations and Warranty Claims to the FMEA concepts, as shown by the dotted lines in fig.6, it is required to determine the semantic similarity between the textual inputs related to deviations and warranty claims and the FMEA Causes, Effects and Detection Mechanisms. This is achieved using a domain-adapted language model as explained in section 2.2.
### The domain adapted language model
The advent of deep learning (LeCun, Bengio, & Hinton, 2015) in general, and the attention mechanism (Bahdanau, Cho, & Bengio, 2015) and the transformer architecture (Vaswani, et al., 2017) in particular, has led to tremendous advancements in natural language processing. BERT (Devlin, Chang, Lee, & Toutanova, 2018), RoBERTa (Liu, et al., 2019), XLNet (Yang, et al., 2019) and MPNet (Kaitao, Tan, Qin, Lu, & Liu, 2020) demonstrated that unsupervised pretraining followed by supervised fine tuning can enable large language models to achieve significant performance improvements on several benchmarks like STS-B (Daniel, Diab, Agirre, Lopez-Gazpio, & Specia, 2017), MNLI (Williams, Nangia, & Bowman, 2018) and MRPC (Dolan & Brockett, 2005). This has been followed by both creation of smaller models like DistilBERT (Sanh, Debut, Chaumond, & Wolf, 2019) and MiniLM (Wang, et al., 2020) achieving similar performance using distillation and much larger models with billions of parameters like GPT3 (Brown, et al., 2020), Gopher (Rae, et al., 2021) and BLOOM (Scao, et al., 2022) to achieve superior performance across a range of tasks.
The objective of dynamically linking deviation and warranty claim text to the FMEA text as mentioned in section 2.1 is an example of a semantic textual similarity task. The following two sentences that are
Figure 6: Dynamic addition and linking of nodes related to Deviation and Warranty Claims as shown by the dotted lines
very closely related in this context but sharing no common keywords demonstrate why semantics is so critical for this task:
i) _Presence of sharp areas lead to injury_ and
ii) _Cone not safe to handle_.
These provide a good example to demonstrate why a method based on BM25 (Robertson & Zaragoza, 2009) that is used in several information retrieval solutions would provide inadequate results in such a scenario. Language models like RoBERTa or MPNet provide the feasibility to effectively handle such semantic similarity tasks. However, the biggest challenge in both scenarios is the need to simultaneously feed both sentences to the model in order to obtain the similarity score. This is computationally very expensive for an information retrieval task. There have been several attempts to generate sentence or paragraph level embeddings (Kiros, et al., 2015) (Conneau, Kiela, Schwenk, Barrault, & Bordes, 2017) (Cer, et al., 2018) (Reimers & Gurevych, 2019) with increasingly improved performance on the Semantic Textual Similarity (STS) tasks. This enables generation and storage of embeddings for a large corpus and calculating semantic similarity against a query for any information retrieval task. The approach from (Reimers & Gurevych, 2019) based on the modification of pretrained BERT and RoBERTa models using a Siamese network with a pooling layer on top significantly reduces the computational needs for training such a model. We evaluated the feasibility of using such a model with domain adaptation for the needs of our contextual assistance. Despite the current trend towards extremely large models requiring special GPU clusters, one of our goals has been to evaluate methods that require low computational costs during inference (execution on CPU only compute nodes in our Kubernetes clusters) and only moderate training costs (e.g. single GPU workstations). The low computational costs during inference enables us to create a highly responsive assistance feature as user is not expected to wait for the recommendations to show up. Another important motivation for our approach is to provide all engineering design and quality conformance teams the feasibility to train and test the models on standard compute infrastructure already available to them once we provide the base software modules, opening the possibility for many further applications without the need for GPU clusters to be allocated. We expect that this would make adapting the methods proposed by us significantly easier within manufacturing organizations of all sizes. Once the benefit of the assistance feature is proven, use of larger models and GPU clusters for better performance becomes a simpler task with just the need for scaling computational power with sufficient justification for the costs.
One of the important steps before performing any domain adaptation is to evaluate whether the pretrained models can already provide sufficient performance as required for the domain. A small set of common failure modes in our domain (and in general in a product manufacturing domain) can be used to demonstrate the feasibility of using these models in a semantic similarity task. To demonstrate this, we present here the results using two groups of sentences as shown in Table 1. The sentences have been chosen to be generic and easily understandable and can be replicated across multiple products. Some sentences have been intentionally added that do not describe a failure in order to understand the limitations of the models for our application where users often describe why a change would not lead to a failure. This check has been used as an indicator of model suitability and not as a validation method. For validations a larger labelled dataset was later used.
| ID | Sentence Group 1 | ID | Sentence Group 2 |
|---|---|---|---|
| S1_1 | Durability requirements not fulfilled | S2_1 | Assembly fails |
| S1_2 | Traceability requirements not fulfilled | S2_2 | Assembly fails before defined life |
| S1_3 | Temperature requirements not fulfilled | S2_3 | Welding joint cracked |
| S1_4 | Acoustic requirements not fulfilled | S2_4 | Radiant noise due to vibration |
| S1_5 | Mounting requirements not fulfilled | S2_5 | Thermal constraints on surrounding parts |
| S1_6 | Leakage requirements not fulfilled | S2_6 | Rust appears after a period of time |
| S1_7 | Connection requirements not fulfilled | S2_7 | Diameter of cone is too small and requires rework |
| S1_8 | Visual requirements not fulfilled | S2_8 | Thermal load is within limits |
| S1_9 | Weight requirements not fulfilled | S2_9 | Reduced flow noticed |
| S1_10 | Flow requirements not fulfilled | S2_10 | No impact on flow due to substitute part |

Table 1: Sentence groups used to evaluate embedding quality on semantic similarity task
To quickly observe the performance of models on a domain-specific semantic similarity task, the checks mentioned in Table 2 can be used.
Using such a minimalistic check as the first step provides a quick qualitative measure of the usability of the models for a domain specific semantic similarity task as in the case of linking deviations to failure causes and the failure modes. If these checks do not provide the expected results, it is unlikely that a larger validation data set would demonstrate the suitability of the models.
Embeddings were generated, and semantic similarity was calculated using cosine similarity for the sentence pairs. We used pretrained models based on RoBERTa, DistilBERT and MPNet that have been finetuned on close to 1 billion sentence pairs from multiple datasets, as mentioned in the respective model cards at the Hugging Face model hub (mentioned in appendix 1), according to the same architecture as mentioned in (Reimers and Gurevych, 2019). The results are shown in fig 7.
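Such a pairwise check can be reproduced in a few lines with the sentence-transformers library; the sketch below uses one publicly available pretrained model purely as an example and a small subset of the sentences from Table 1.

```python
from sentence_transformers import SentenceTransformer, util

# Any pretrained sentence-embedding model from the Hugging Face hub can be
# plugged in here; "all-mpnet-base-v2" is used purely as an example.
model = SentenceTransformer("all-mpnet-base-v2")

group1 = ["Durability requirements not fulfilled",
          "Temperature requirements not fulfilled",
          "Flow requirements not fulfilled"]
group2 = ["Assembly fails before defined life",
          "Thermal load is within limits",
          "No impact on flow due to substitute part"]

emb1 = model.encode(group1, convert_to_tensor=True)
emb2 = model.encode(group2, convert_to_tensor=True)

# Cosine similarity matrix between the two sentence groups (rows: group 1).
scores = util.cos_sim(emb1, emb2)
for i, s1 in enumerate(group1):
    for j, s2 in enumerate(group2):
        print(f"{s1!r} <-> {s2!r}: {scores[i][j]:.2f}")
```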
As can be seen, for all models, the highest cosine similarity is for the sentence pair {S1_3, S2_8}, contrary to the expectation that the semantic similarity should be very low for this pair, as mentioned in Table 2. This is also the case for multiple other sentence pairs like {S1_10, S2_10}, where the actual semantic similarity is expected to be very low. This poses a major constraint for the usage of the models in our domain. Another observation is that the scores are too close to each other, making it difficult to use thresholds to separate applicable and not applicable pairs. For example, separating {S1_4, S2_4} and {S1_4, S2_5} using the similarity scores would not be possible even though the sentence pair {S1_4, S2_5} is not relevant. This makes the need for domain adaptation obvious.
As the first step in domain adaptation, we continued pretraining of the base model on domain data. The benefits of this, especially in a low-resource setting, have been shown in (Gururangan et al., 2020) and (Zhu et al., 2021). RoBERTa-large has been chosen as the base model and the continued pretraining was performed only for the masked language modelling task. This was followed by fine-tuning on the concatenated SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) datasets and evaluation on the STS tasks as proposed by (Reimers and Gurevych, 2019).
| Sentence pairs with expected high similarity | Sentence pairs with expected low similarity |
|---|---|
| {S1_1, S2_2}, {S1_1, S2_6} | {S1_3, S2_8} |
| {S1_3, S2_5} | {S1_1, S2_8} |
| {S1_4, S2_4} | {S1_10, S2_10} |
| {S1_10, S2_9} | {S1_8, S2_2}, {S1_8, S2_4}, {S1_8, S2_5}, {S1_8, S2_9} |

Table 2: Quick qualitative check for the model performance on domain specific semantic similarity task
Figure 7: Cosine similarity for the sentence pairs in the sentence groups in Table 1
This was followed by finetuning using a domain-specific labelled dataset. To overcome the limitation of the models previously evaluated in handling negation, as demonstrated above in the examples with sentence pairs {S1_3, S2_8} and {S1_10, S2_10}, a small set of sentences with negation was added to the finetuning dataset. The results for the semantic similarity score calculations using the model at different stages for the sentence groups in Table 1 can be seen in fig 8. Fig 8(a) represents the performance of the model based on RoBERTa-large as in fig 7. Fig 8(b) shows the results with domain pretraining followed by only finetuning using the SNLI and MultiNLI datasets. Fig 8(c) shows the results when additional finetuning using domain specific data is performed.
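A sketch of this last finetuning step, using the sentence-transformers training interface with a cosine-similarity regression objective, is given below; the model path, hyperparameters and label values are illustrative assumptions, and the example pairs are taken from Tables 2 and 3 rather than from the actual labelled domain data.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the model obtained after continued pretraining and NLI finetuning
# (path is a placeholder).
model = SentenceTransformer("models/roberta-large-domain-nli")

# Illustrative labelled pairs: 1.0 = semantically related, 0.0 = unrelated.
# Negated pairs are deliberately labelled as unrelated.
train_examples = [
    InputExample(texts=["Durability requirements not fulfilled",
                        "Rust appears after a period of time"], label=1.0),
    InputExample(texts=["Flow requirements not fulfilled",
                        "No impact on flow due to substitute part"], label=0.0),
    InputExample(texts=["Flow is as expected in design",
                        "Reduced flow noticed"], label=0.0),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)],
          epochs=4, warmup_steps=100)
model.save("models/roberta-large-domain-st")
```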
The results in fig 8 demonstrate the advantages of continued pretraining of the base model followed by finetuning on the domain dataset after the model has been finetuned on the publicly available dataset. The large difference in the cosine similarity scores between similar and dissimilar concepts makes it feasible to use thresholds for the separation of relevant sentence pairs. The model is also effectively able to deal with negation. To emphasize the need for handling negation and demonstrate the ability of the final model to do this, another small set of sentence groups is presented in Table 3. These sentence pairs provide an additional quick qualitative measure of the usability of the models in the presence of negation.
Fig 9 shows the efficacy of the model in dealing with similarity and negation together. The low cosine similarity score for pairs {S3_2, S4_1}, {S3_4, S4_3} and {S3_8, S4_7} demonstrate the ability to handle negation. The important result to notice is that the high semantic similarities are still retained in the
| ID | Sentence Group 1 | ID | Sentence Group 2 |
|---|---|---|---|
| S3_1 | Durability requirements not fulfilled | S4_1 | Assembly fails before defined life |
| S3_2 | Durability requirements satisfied | S4_2 | Welding joint cracked |
| S3_3 | Acoustic requirements not fulfilled | S4_3 | Radiant noise due to vibration |
| S3_4 | Acoustic requirements are met | S4_4 | Thermal constraints on surrounding parts |
| S3_5 | Leakage requirements not fulfilled | S4_5 | Sufficient sealing available |
| S3_6 | No leakage problems observed | S4_6 | Rust appears after a period of time |
| S3_7 | Flow requirements not fulfilled | S4_7 | Reduced flow noticed |
| S3_8 | Flow is as expected in design | S4_8 | No impact on flow due to substitute part |

Table 3: Sentence groups to highlight the significance of the ability to handle negation
Fig 8: (a) Model based on RoBERTa-large available on Hugging Face model hub trained using the methods in (Reimers & Gurevych, 2019) on approximately 1 billion sentence pairs, (b) Model based on RoBERTa-large with continued pretraining using domain data followed by finetuning on SNLI and MultiNLI, (c) The model mentioned in (b) with additional finetuning on domain data including negation
presence of negation as the cosine similarity scores are high for the pairs {S3_6, S4_5}, {S3_8, S4_5} and {S3_8, S4_8}.
### Assistance by combining the domain representation and the language model
As mentioned in section 1, the goal of the assistance feature is to enable the users in identifying the possible failures when they initiate the workflow for a manufacturing deviation. The language model is used to identify the failure causes that are available in the domain representation based on the semantic textual similarity to the user's deviation text. This is then used to create links between the deviation and the Failure Modes for the relevant parts as shown in fig 6. Additionally, this is used to identify the past warranty claims that could have a relationship to the failures and causes identified for the particular deviation. The user has the possibility to select the failures and the warranty claims that she/he considers relevant for the particular deviation as shown in fig 10. The user feedback is used as further data for model finetuning and performance evaluation in the live system.
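A simplified sketch of this linking step is given below: the deviation text is compared against the cause descriptions stored in the knowledge representation, and causes above a similarity threshold are proposed as candidate links. The threshold value, model path and helper names are assumptions for illustration.

```python
from sentence_transformers import SentenceTransformer, util

# Domain-adapted model from the finetuning step (path is a placeholder).
model = SentenceTransformer("models/roberta-large-domain-st")

def suggest_failure_causes(deviation_text, cause_texts, threshold=0.6):
    """Rank the FMEA cause descriptions by semantic similarity to the deviation
    text and keep those above the (assumed) threshold."""
    dev_emb = model.encode(deviation_text, convert_to_tensor=True)
    cause_emb = model.encode(cause_texts, convert_to_tensor=True)
    scores = util.cos_sim(dev_emb, cause_emb)[0]
    ranked = sorted(zip(cause_texts, scores.tolist()), key=lambda x: -x[1])
    return [(text, score) for text, score in ranked if score >= threshold]

# The selected causes would then be linked to the deviation node in the
# property graph (the dotted relationships in fig 6) and shown to the user.
```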
Figure 10: The assistance feature provided by Fountain integrated within the deviation management application.
Figure 9: (a) Model based on RoBERTa-large available on Hugging Face model hub trained using the methods in (Reimers & Gurevych, 2019) on approximately 1 billion sentence pairs, (b) Model based on RoBERTa-large with continued pretraining using domain data followed by finetuning on SNLI and MultiNLI, (c) The model mentioned in (b) with additional finetuning on domain data including negation
The users have the possibility to look at the details of the causes which could possibly lead to the respective failures as a consequence of this deviation as shown in fig 11. This provides explainability for the recommendations and helps the user in analyzing whether the failure can occur or not.
The like (thumbs up) and dislike (thumbs down) features were developed to perform anonymized user tests for the deployed application. Based on requests by the users during trials, this was additionally extended so that a considered failure risk can be added to the risk evaluation text, with the user having the option to state why this risk has already been considered and is not relevant, so that deviation approval can be obtained, as shown in fig 12.
## 3 User study results
One of the major challenges is to validate the quality of recommendations for such an assistance system when it is integrated into another application. Unlike an ecommerce application, the number of users is small and a random selection of a small user subset who can preview and validate such an application is not feasible. Not every user has the same level of domain knowledge and expertise to validate the quality of recommendations. Another constraint is the low number of deviations that are created. The assistance system has to undergo detailed validation using a small user base and a small set of sample deviations before it can be rolled out to all the users across multiple regions and production plants. Hence, we performed tests in two stages - 1) with a set of three voluntary expert users who provided detailed results regarding the recommendations for multiple deviations and 2) anonymized tests where the results were calculated just based on like and dislike buttons with another set of expert users.
It is important to understand that not every recommendation for each deviation will always be fully applicable. This is very difficult, if at all possible, to achieve for a complex domain. Hence, we asked the expert users to rate each recommendation item as applicable or not applicable and obtained the following summary statistics - 1) number of deviations evaluated, 2) number of recommendations where
Figure 11: Causes for the shown failures can be seen to help with the analysis
Figure 12: The Failure text added to the risk evaluation when user selects it from the recommendation.
The user can add the justification text below it for further approval process.
all recommendation items were considered applicable, 3) number of recommendations where some recommendation items were considered applicable and some not applicable, and 4) number of recommendations where no recommendation items were considered useful. The result is shown in Table 4.
The results in Table 4 demonstrate that for a majority of the deviations created, at least some of the recommendations from the assistance system are found useful and applicable by the users. Additionally, there are deviations where all recommendations from the assistance system are found useful by the end users. This speaks for the high specificity of the system, which is important for such an application, and demonstrates the effectiveness of the language model adaptation.
Additionally, the next stage of anonymized tests demonstrated that 34 recommendations were considered useful and 20 not useful. This is slightly inferior to the above results, but further evaluation of the data showed that some recommendations marked as not useful contradict each other. Since the data has been anonymized in a way that it cannot be traced back to users, it is not feasible to evaluate whether the same user provided this feedback while experimenting with the system. We consider this a limitation of this approach to anonymized feedback collection and will attempt to implement a different approach, without compromising user privacy, in the next user study and in the productive live system.
## 4 Extensions and future work
The results demonstrate the effectiveness of using an intelligent assistant for such an application and the acceptance of such a system by end users. In the near future, we intend to extend this method to problem-solving methods like the Eight Disciplines of Problem Solving (8D) and continuous improvement methods like Kaizen, Lessons Learnt and Best Practices. This would involve adapting the knowledge representation to additionally link data from these applications and reusing the domain-adapted language model. Additionally, we want to enable multiple teams to train and evaluate their own domain-specific language models for other engineering and manufacturing applications and benefit from our approach.
## 5 Acknowledgements
We had excellent support from our colleagues from the group Total Customer Satisfaction and Quality at FORVIA in evaluating the effectiveness of the methods and results. This close cooperation enabled us to iteratively improve the user experience and the quality of recommendations. We would like to thank our colleague Vincent Besancon for preparing the deployment infrastructure in Azure Kubernetes Service and the model training infrastructure in Azure ML. Our colleague Ashwini Thanigaveulu deserves credit for creating the scripts to extract and clean the FMEA data and our student intern Stephan Wolfgang Heinicke for extracting and translating the data related to deviations and warranty claims. We would also like to thank our colleague Brahmeshwar Reddy who made the required UI adaptations in the deviation management application to enable the assistance feature. Our colleague Daniel Saeuberlich deserves the credit for the name Fountain to depict the source of knowledge to quench the thirst of multiple application domains. He also helped in evaluating the correctness of the extracted FMEA data.
## Statements and declarations
This work has been carried out in the department Artificial Intelligence Technologies at FORVIA Clean Mobility, Augsburg, Germany. The FMEA, Deviation and Warranty Claim data are proprietary data.
| User | Deviations evaluated | All recommendations useful | Both useful and non-useful recommendations | No useful recommendations |
|---|---|---|---|---|
| 1 | 59 | 29 | 22 | 8 |
| 2 | 11 | 4 | 3 | 4 |
| 3 | 7 | 0 | 4 | 3 |

Table 4: Result of recommendation evaluation by selected expert users
## Author contributions
Saurabh Kumar created the concept of combining the knowledge representation in the form of property graph capturing the cause-effect relations with a language model to provide the assistance feature. He did the model training and finetuning required for domain adaptation of the language model. He also wrote the code for the microservices that constitute the application (Fountain) deployed in Azure Kubernetes Service and the MLOps pipelines and visualizations for training and evaluating performance of the different models.
Daniel Fuchs provided the domain knowledge and performed the labelling required for supervised finetuning. He created the list of domain specific synonyms and provided descriptions of domain specific terms. He also performed evaluations of the recommendation quality before user tests.
Klaus Spindler provided the ideas and suggestions for effectiveness of the cause-effect chains and the usefulness in the manufacturing and product design context. He performed multiple reviews for the application.
|
2306.08294 | Artificial Neural Networks and Guided Gene Expression Programming to
Predict Wall Pressure Spectra Beneath Turbulent Boundary Layers | This study evaluates the efficacy of two machine learning (ML) techniques,
namely artificial neural networks (ANN) and gene expression programming (GEP)
that use data-driven modeling to predict wall pressure spectra (WPS) underneath
turbulent boundary layers. Different datasets of WPS from experiments and
high-fidelity numerical simulations covering a wide range of pressure gradients
and Reynolds numbers are considered. For both ML methods, an optimal
hyperparameter environment is identified that yields accurate predictions. ANN
is observed to be faster and more accurate than GEP with an order of magnitude
lower training time and logarithmic mean squared error ($lMSE$), despite a
higher memory consumption. Novel training schemes are devised to address the
shortcomings of GEP. These include (a) ANN-assisted GEP to reduce the noise in
the training data, (b) exploiting the low and high-frequency trends to guide
the GEP search, and (c) a stepped training strategy where the chromosomes are
first trained on the canonical datasets followed by the datasets with complex
features. When compared to the baseline scheme, these training strategies
accelerated convergence and resulted in models with superior accuracy ($\approx
30\%$ reduction in the median $lMSE$) and higher reliability ($\approx 75\%$
reduction in the spread of $lMSE$ in the interquartile range). The final GEP
models captured the complex trends of WPS across varying flow conditions and
pressure gradients, surpassing the accuracy of Goody's model. | Nachiketa Narayan Kurhade, Nagabhushana Rao Vadlamani, Akash Haridas | 2023-06-14T07:04:11Z | http://arxiv.org/abs/2306.08294v1 | Artificial Neural Networks and Guided Gene Expression Programming to Predict Wall Pressure Spectra Beneath Turbulent Boundary Layers
###### Abstract
This study evaluates the efficacy of two machine learning (ML) techniques, namely artificial neural networks (ANN) and gene expression programming (GEP) that use data-driven modeling to predict wall pressure spectra (WPS) underneath turbulent boundary layers. Different datasets of WPS from experiments and high-fidelity numerical simulations covering a wide range of pressure gradients and Reynolds numbers are considered. For both ML methods, an optimal hyperparameter environment is identified that yields accurate predictions. ANN is observed to be faster and more accurate than GEP with an order of magnitude lower training time and logarithmic mean squared error (\(lMSE\)), despite a higher memory consumption. Novel training schemes are devised to address the shortcomings of GEP. These include (a) ANN-assisted GEP to reduce the noise in the training data, (b) exploiting the low and high-frequency trends to guide the GEP search, and (c) a stepped training strategy where the chromosomes are first trained on the canonical datasets followed by the datasets with complex features. When compared to the baseline scheme, these training strategies accelerated convergence and resulted in models with superior accuracy (\(\approx 30\%\) reduction in the median \(lMSE\)) and higher reliability (\(\approx 75\%\) reduction in the spread of \(lMSE\) in the interquartile range). The final GEP models captured the complex trends of WPS across varying flow conditions and pressure gradients, surpassing the accuracy of Goody's model.
+
Footnote †: preprint: Accelerated GEP to Predict Wall Pressure Spectra beneath Turbulent Boundary Layers
## I Introduction
Turbulent boundary layers (TBLs) developing over the surfaces induce wall-pressure fluctuations. These dynamic loads are highly undesirable as they increase structural vibrations and noise. Fatigue failure due to aeroacoustic loads is a critical problem in wind turbine and gas turbine blades [1; 2; 3; 4; 5]. Aerodynamic noise due to attached or separated TBLs contributes significantly towards medium and high-frequency sound pressure levels (SPL) of the cabin [6; 7]. Acoustic loads due to wall-pressure fluctuations are much more severe over the skin panels of high-speed vehicles. The amplitude and frequency of these loads can be significantly higher in certain regions which can potentially damage the vehicle.
Estimating the wall-pressure fluctuations beneath TBLs is hence crucial to facilitate the structural and aerodynamic design process. These fluctuations can either be estimated from high-fidelity eddy-resolving simulations or can be directly measured from experiments. Both these approaches are prohibitively expensive. Although eddy-resolving methods like Direct Numerical Simulations (DNS) [8; 9; 10] and Large Eddy Simulations (LES) [11; 12; 13] can accurately quantify the spatiotemporal variation of acoustic loads, they are computationally expensive to cater to the highly iterative design process. On the other hand, most of the experiments in the literature are confined to measuring wall pressure spectra (WPS) of TBLs developing over simple geometries like a flat plate subjected to zero, favorable, or adverse pressure gradients [14; 15; 16; 17] (ZPG, FPG, and APG).
One of the widely used approaches to overcome these limitations is to couple the empirical models of WPS with the mean boundary layer characteristics obtained from the Reynolds Averaged Navier-Stokes (RANS) simulations to predict acoustic loads [19]. These models rely on scaling laws based on the inner and outer regions of the boundary layer to estimate the sound pressure levels. Several empirical models for WPS were developed based on the theoretical foundations of Lighthill [20; 21] and Kraichnan [22]. Blake [23] integrates the Poisson equation for pressure fluctuation beneath a TBL. As noted by Grasso _et al._[24], the theoretical model involves repeated integration in multiple dimensions, and the computational cost increases exponentially with the number of dimensions. An alternate approach involves correlating the non-dimensionalized SPL distribution with boundary layer parameters like thickness, wall-shear stress, freestream velocity, etc. This approach, as reflected in the works of Chase [25] and Howe [26] has an evolutionary advantage as the subsequent models continued to modify the same baseline empirical structure to account for newer observations. Goody [27] further modified the Chase-Howe model to include Reynolds number effects and capture the frequency dependence. Goody's model is designed to predict the WPS of TBLs under ZPG with a \(\omega^{2}\) growth at low frequencies (in line with Kraichnan-Phillips' theorem), \(\omega^{-1}\) decay in the inertial range (in line with Bradshaw prediction [28]) and \(\omega^{-5}\) decay at the high-frequencies (in line with Blake's observations [23]). Subsequent works of Kamruzzaman _et al._[29], Rozenberg _et al._[19], Hu [30] and Lee [31] extended Goody's model to account for pressure gradients. Ritos _et al._[32] recognised that the existing models fall short in predicting the WPS beneath supersonic and hypersonic flat plate TBLs under ZPG conditions and modified Goody's model to account for the compressibility effects. Wind tunnel experiments of Thomson and Rocha [33] measured wall pressure fluctuations of TBL on a flat plate under FPG. They further tuned the parameters of the universal spectrum model (which is inspired by Goody's model) to incorporate FPG effects and improve predictions at high-frequencies. Thomson and Rocha [34] compare the performance of Goody's model on both the wind
tunnel data and flight test data. They highlight that the model accurately predicts the WPS of the former while underpredicting the latter, and accordingly updated Goody's model to accurately fit the flight test data. Accounting for the different flow conditions and pressure gradients, Fritsch _et al._[35] developed a semi-empirical WPS model using numerical optimization algorithms. The predictions are shown to be sensitive to the values of the constants involved and the optimized set is bound to evolve with an ever-evolving dataset. All these empirical models, however, were developed based on the respective datasets and hence their accuracy suffers when employed for 'extrapolated' flow conditions.
With increasing computational power and the databases of wall pressure spectra data published in public domains, robust wall pressure spectra models can be developed by exploiting the entire database. Additionally, one can identify the range of TBL parameters covered by these databases [36]. These insights can then be used by modern machine learning (ML) algorithms to solve this regression problem. Haridas and Vadlamani [37] and Dominique _et al._[36] used artificial neural networks (ANN) for this purpose and obtained better predictions than any of the empirical models discussed above. The study conducted by Haridas and Vadlamani [37] used subsonic and supersonic datasets to train their ANN model. This approach also enabled Dominique _et al._ to quantify the confidence in their predictions and identify the input space where more data will be helpful in improving the predictions. ANN being a universal function approximator [38] is indeed best suited for the job as it can be retrained with little effort on new datasets. The downside with ANN is twofold: (a) It results in a highly non-linear function that is difficult to translate to a conventional functional form and (b) It provides little insight into the underlying physics of the problem [37]. On the other hand, Gene expression programming (GEP), a symbolic machine learning algorithm, operates on the input variables through generations to evolve into an analytical expression that tries to approximate the provided data. Dominique _et al._[39] showed the capabilities of this technique to predict WPS models that are close to the established empirical correlations in addition to discovering new dependencies that were not considered earlier. GEP however suffers from convergence issues. Also, for a given set of input features, the algorithm produces different analytical expressions, although their structures and predictions can be very similar.
The objective of the present work is multifold. We first compare the strengths and weaknesses of ANN and GEP (with an optimized hyperparameter environment) against the same input data, assessing the time taken to converge and the computational resources utilized. Subsequently, we propose strategies (like ANN-assisted training incorporating physical insights) to accelerate the training of the GEP algorithm and mitigate its convergence issues. Finally, we demonstrate the efficacy of the training strategies and the robustness of the models predicted using the guided GEP approach.
The manuscript is structured as follows: We introduce the regression problem and the relevant WPS models in Section II. Section III presents a brief description and analysis of the input dataset and a short overview of ANN and GEP. We present the optimum hyperparameter environment and compare the computational efficacy of ANN and GEP in Section IV. Strategies to accelerate GEP training to mitigate the convergence issues are addressed in Section V. Lastly, Section VI concludes the key findings of the study.
## II Characterizing Turbulent Boundary Layer and Wall Pressure Spectrum
As mentioned in the introduction, the empirical models of WPS rely on the mean boundary layer characteristics to predict acoustic loads. All TBLs have a negligible pressure gradient in the wall-normal direction (\(\partial_{y}p\approx 0\)) but can be subjected to a considerable gradient (\(\partial_{x}p\)) in the streamwise direction. Figure 1 sketches a typical velocity profile \(U(y)\) at a distance \(x\) downstream of the leading edge for a general case where the equilibrium velocity at the boundary layer edge changes with \(x\) as \(U_{e}(x)\). The local boundary layer thickness (\(\delta(x)\)) is usually defined as the wall-normal distance at which \(U=0.99U_{e}(x)\). In this study, following Dominique _et al._, we use an alternate definition of boundary layer thickness based on the pseudo-velocity [9], \(U^{*}(x,y)=-\int_{0}^{y}\Omega_{z}(x,\xi)\,d\xi\), to account for the effect of pressure gradients on the outer flow velocity. Here, \(\Omega_{z}\) is the spanwise component of the mean vorticity \(\Omega=\nabla\times\overline{U}\), \(\overline{U}\) being the mean velocity vector, and \(\xi\) is the distance in the wall-normal direction.
At the same point \(x\) downstream of the leading edge, a probe can be placed on the wall to record the pressure fluctuations due to TBL. The unsteady pressure signals \(p(\tau)\) can be decomposed into its mean and fluctuating components as follows:
\[p(x,\tau)=\overline{p}(x,\tau)+p^{\prime}(x,\tau) \tag{1}\]
The point-wise power spectral density (PSD) \(\Phi_{pp}\) of these pressure fluctuations is obtained from the following equation:
\[\Phi_{pp}(x,\omega)=\int_{-\infty}^{+\infty}R(x,\tau)e^{-i\omega\tau}\,d\tau \tag{2}\]
Here, \(R\) is the auto-correlation function defined as \(R(x,\tau)=\int_{0}^{\infty}p^{\prime}(x,t)p^{\prime}(x,t+\tau)\,dt\), \(\Phi_{pp}(x,\omega)\) is the PSD obtained from
Figure 1: Typical flat plate boundary layer
the fast Fourier transform of \(R\), \(\omega\) is the angular frequency and \(i=\sqrt{-1}\). Several empirical models in the literature express the empirical correlation for \(\Phi_{pp}\) in terms of the mean boundary layer parameters:
\[\Phi_{pp}(\omega)=f(\omega,\delta,\delta^{*},\theta,U_{e},\rho,\nu,\tau_{w}, \partial_{x}p,c,\Pi) \tag{3}\]
Where \(\delta^{*}=\int_{0}^{\delta}(1-U(x,y)/U_{e})\,dy\) is the displacement thickness, \(\theta=\int_{0}^{\delta}U(x,y)/U_{e}(1-U(x,y)/U_{e})\,dy\) represents the momentum thickness of the TBL, and \(\tau_{w}\) is the wall shear stress at location \(x\). \(\rho\), \(\nu\), and \(c\) represent the density, kinematic viscosity, and speed of sound in the freestream respectively. A TBL typically consists of four distinct regimes in the wall-normal direction: viscous sublayer closer to the wall, intermediate buffer layer, logarithmic region influenced by large turbulent scales, and the wake region. The deviation of the velocity profile from the log law in the wake region is characterized by the wake strength parameter [40], \(\Pi\). The number of independent variables in Eq. 3 can be reduced using the Buckingham-Pi theorem. Following the work of Dominique _et al._[36], Eq. 3 is rewritten in terms of the non-dimensionalized variables as:
\[\widetilde{\Phi}_{pp}=f(\widetilde{\omega},\Delta,H,M,\Pi,C_{f},R_{T},\beta) \tag{4}\]
Where,
\[\text{Dimensionless PSD:}\;\widetilde{\Phi}_{pp}=(\Phi_{pp}U_{e})/(\tau_{w}^{2}\delta)\]
\[\text{Dimensionless angular frequency:}\;\widetilde{\omega}=(\omega\delta)/U_{e}\]
\[\text{Zagarola-Smits's parameter:}\;\Delta=\delta/\delta^{*}\]
\[\text{Shape factor:}\;H=\delta^{*}/\theta\]
\[\text{Mach number:}\;M=U_{e}/c\]
\[\text{Wake strength parameter:}\;\Pi\]
\[\text{Friction coefficient:}\;C_{f}=\tau_{w}/(\rho U_{e}^{2})\]
\[\text{Outer-to-inner-layer timescale ratio:}\;R_{T}=(\delta/U_{e})/(\nu/U_{\tau}^{2})\]
\[\text{Clauser parameter:}\;\beta=(\theta/(\rho U_{\tau}^{2}))\partial_{x}p\]
### Goody's Model
One of the earliest empirical models, the Chase-Howe model [25; 26], proposed a correlation of PSD (\(\Phi_{pp}\)) as a sole function of angular frequency \(\omega\). It however fails to predict the high frequency \(\widetilde{\omega}^{-5}\) drop observed in the wall pressure spectra of TBLs under ZPG. Goody [27] proposed the following changes to the Chase-Howe model: (a) Since the largest coherent structures in TBL are of the order of boundary layer thickness, \(\delta^{*}\) is replaced with \(\delta\) based scaling (b) Ratio of outer-layer to inner-layer timescales, \(R_{T}\) (a different form of Reynolds number) is incorporated into the formulation to control the extent of inertial subrange. Goody's model, given in Eq. 5, accurately predicts ZPG turbulent boundary layers that are homogeneous in the spanwise direction. Hence it is considered as the baseline semi-empirical model although it does not account for pressure gradients.
\[\frac{\Phi_{pp}U_{e}}{\tau_{w}^{2}\delta}=\frac{3(\omega\delta/U_{e})^{2}}{[( \omega\delta/U_{e})^{0.75}+0.5]^{3.7}+[1.1R_{T}^{-0.57}(\omega\delta/U_{e})]^{ 7}} \tag{5}\]
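Eq. 5 translates directly into a short function; the following NumPy sketch evaluates Goody's model as written above, with the frequency range and \(R_{T}\) value in the example call chosen purely for illustration.

```python
import numpy as np

def goody_psd(omega_tilde, R_T):
    """Dimensionless wall pressure spectrum Phi_pp*U_e/(tau_w^2*delta) from
    Goody's model (Eq. 5), as a function of omega_tilde = omega*delta/U_e
    and the outer-to-inner-layer timescale ratio R_T."""
    num = 3.0 * omega_tilde**2
    den = (omega_tilde**0.75 + 0.5)**3.7 + (1.1 * R_T**-0.57 * omega_tilde)**7
    return num / den

# Example: spectrum over four decades of reduced frequency at R_T = 100.
w = np.logspace(-2, 2, 200)
phi = goody_psd(w, R_T=100.0)
```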
### Dominique's GEP Model
Dominique _et al._[39] trained the gene expression programming algorithm on the datasets described in Section III.1 to come up with the following WPS model:
\[\widetilde{\Phi}_{pp}=\frac{\left(5.41+C_{f}\left(\beta+1\right)^{5.41}\right)\widetilde{\omega}}{\widetilde{\omega}^{2}+\widetilde{\omega}+(\beta+1)M+(\widetilde{\omega}+3.6)\frac{\widetilde{\omega}^{4.76}}{C_{f}R_{T}^{5.83}}} \tag{6}\]
Although derived from scratch, Dominique's GEP model reasonably captures the frequency dependencies. Their model shows a \(\widetilde{\omega}^{1}\) dependence at low frequencies, in contrast to the \(\widetilde{\omega}^{2}\) dependence proposed by Kraichnan [22]. This is considered a strong trait for modeling realistic scenarios using GEP where the data does not always agree with the theoretical models.
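For completeness, Eq. 6 can be evaluated in the same way; the sketch below implements Dominique's GEP model as written above, with illustrative parameter values in the example call.

```python
import numpy as np

def dominique_gep_psd(omega_tilde, C_f, beta, M, R_T):
    """Dimensionless wall pressure spectrum from the GEP model of
    Dominique et al. (Eq. 6)."""
    num = (5.41 + C_f * (beta + 1.0)**5.41) * omega_tilde
    den = (omega_tilde**2 + omega_tilde + (beta + 1.0) * M
           + (omega_tilde + 3.6) * omega_tilde**4.76 / (C_f * R_T**5.83))
    return num / den

# Example: mild adverse pressure gradient case (parameter values are illustrative).
w = np.logspace(-2, 2, 200)
phi = dominique_gep_psd(w, C_f=0.003, beta=2.0, M=0.2, R_T=50.0)
```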
## III Methodology
### Data Collection
The dataset used in the present work stems from the study by Dominique _et al._[36] (also referred to as the von Karman Institute (VKI) team henceforth). It consists of the variation of \(\widetilde{\Phi}_{pp}\) with \(\widetilde{\omega}\) at subsonic Mach numbers, in addition to the mean TBL characteristics (reported in Eq. 4), collected from the following sources:
* Experiments of Salze _et al._[14] on a flat plate under ZPG, APG, and FPG.
* High-fidelity computations of Deuse and Sandberg [9], Hao _et al._[10] and, Christophe _et al._[12] on the configuration of the flow over a controlled diffusion (CD) airfoil, where the TBL spectra under APG and mild FPG are recorded at different stations.
Figure 2 plots all the 117 datasets, each of which has 500 logarithmically spaced points resampled from the experiments listed above. A machine learning algorithm is otherwise oblivious to the different experiments from which the dataset has been extracted. Rather, it solves the regression problem modeled using Eq. 4 treating the input as a set of independent data points with respective features and labels.
### Data analysis
A closer look at Fig. 2 shows the dominance of the APG dataset over the FPG and ZPG. It is essential to design ML
algorithms to handle this skewness, else it treats the underrepresented cases as outliers and the model predictions suffer. A common practice to balance out skewed databases is to identify similarity clusters and assign the respective weights. One of the widely used methods involves clustering based on the Euclidean distance, \(d=\sqrt{\sum_{i=1}^{n}(P_{i}-Q_{i})^{2}}\) between two vectors (say \(\overline{P}\) and \(\overline{Q}\)) which represent the independent flow parameters listed in Eq. 3.
In the current study, we demonstrate the similarity by comparing the trends of \(\widetilde{\Phi}_{pp}\) vs \(\widetilde{\omega}\) across datasets. A typical WPS, shown in Fig. 3, comprises three distinct regimes: a low-frequency region where \(\widetilde{\Phi}_{pp}\) rises monotonically with \(\widetilde{\omega}\) to reach its peak value, an inertial subrange at mid-frequencies (typically noticeable at high Reynolds numbers) and a \(\widetilde{\omega}^{-5}\) roll-off at higher frequencies due to dissipation. However, not all the datasets in Fig. 2 show this exact trend. The range of frequencies encompassing each of the regimes also differs across datasets. Hence, the following slope vector \(\overline{m}\) is extracted for each dataset of length \(n\):
\[m_{i}=\frac{\widetilde{\Phi}_{pp,i+1}(dB)-\widetilde{\Phi}_{pp,i}(dB)}{ \widetilde{\omega}_{i+1}-\widetilde{\omega}_{i}},i=[1,n-1] \tag{7}\]
It should be noted that the raw WPS dataset is noisy and hence should be smoothed out to extract meaningful trends. The present work used the predictions of an ANN model for the same. Since all datasets have an equal number of points, \(\overline{m}\) also captures the extent of respective regimes. We can now compute the cosine similarity, which results in a value between [-1,1], to quantify the closeness between two slope vectors as follows:
\[CS=cos(\overline{m}_{i},\overline{m}_{j})=\frac{\overline{m}_{i}\cdot\overline {m}_{j}}{||\overline{m}_{i}||\times||\overline{m}_{j}||},(i,j)=[1,117] \tag{8}\]
In a 3D space, \(CS=1\) represents similar vectors, \(CS=0\) indicates orthogonal vectors and \(CS=-1\) represents vectors pointing in opposite directions. In a multidimensional space such as ours, cosine similarity merely indicates how similar two trends are, ranging from identical to very dissimilar. Figure 4 plots the heatmap of the resulting cosine similarity matrix. \(CS\approx 1\) within the boxes aligned along the diagonal indicates that the trends within a given experiment are similar. Bands of \(CS\approx 1\) (highlighted using solid arrows) represent similar WPS trends observed across the experiments with different flow conditions. In contrast, bands of \(CS\approx-1\) (highlighted using dashed arrows) represent dissimilar trends. For example, the inset plots in Fig. 4 illustrate the similarity of the WPS extracted at points A, B, and C. It is apparent that the WPS at B and A are similar with \(CS\approx 0.92\), while the WPS at C is dissimilar to that of A with \(CS\approx-0.54\). This similarity matrix helps in identifying clusters that can be used to assign weights to the datasets and potentially improve model predictions. This aspect is further discussed in Section IV.2.
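The construction of this similarity matrix follows directly from Eqs. 7 and 8; a minimal NumPy sketch is given below, assuming the smoothed spectra are available as arrays with the same number of points in each dataset.

```python
import numpy as np

def slope_vector(omega_tilde, phi_db):
    """Slope vector of Eq. 7 for one (smoothed) spectrum given in dB."""
    return np.diff(phi_db) / np.diff(omega_tilde)

def cosine_similarity_matrix(spectra):
    """Pairwise cosine similarity (Eq. 8) between the slope vectors of all
    datasets; `spectra` is a list of (omega_tilde, phi_db) pairs, each with
    the same number of points."""
    slopes = np.array([slope_vector(w, p) for w, p in spectra])
    unit = slopes / np.linalg.norm(slopes, axis=1, keepdims=True)
    return unit @ unit.T  # entries lie in [-1, 1]
```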
### Artificial neural networks
Artificial neural networks (ANNs) are powerful function approximators that are commonly used to fit non-linear data using supervised learning. A brief description of ANNs is given here and readers can refer to the works of Goodfellow _et al._[43] for further details. Figure 5 illustrates a typical ANN architecture comprising an input layer (which can be normalized for improved scaling of data), several hidden layers (to accurately
Figure 3: Low and high-frequency trends in TBL wall pressure spectrum, black dashed lines as eye guides
Figure 2: Wall pressure spectra collected by the von Karman Institute(VKI) team rescaled to Goody’s scales
capture the complexity of the non-linear function that fits the data), and an output layer. The hidden layers consist of neurons. Each neuron generates a weighted sum \(\Sigma\) from its inputs (\(\overline{w}\cdot\overline{x}\)) with respective biases \(b\). A predefined activation function (\(\sigma\), usually a non-linear function) operates on this weighted sum \(\Sigma\) to produce an output \(y\). The bias is a constant value that is added to the weighted sum of the inputs to shift the activation function. It is mathematically expressed as \(y=\sigma\Sigma=\sigma(\overline{w}\cdot\overline{x}+b)\). The error between the expected output and the prediction is minimized by computing the gradient of the objective function with respect to the weights and biases of each neuron through backpropagation [44]. This helps in adjusting the weights and biases of the individual neurons, resulting in a potentially improved model.
The current study considers the data used by the VKI team and hence the optimal hyperparameters from their study have been used to build the ANN model. The neural network has three hidden layers of ten neurons each in a fully-connected feed-forward fashion. These hidden layers are preceded by a normalization layer. In this layer, the input vector \(\overline{x}=[\widehat{\mathbf{\omega}},\Delta,H,Ma,\Pi,C_{f},R_{T},\beta]\) from Eq. 4 is normalized using the mean and standard deviation of each feature to accelerate the learning process [45]. The output layer of the model is made up of a single neuron that predicts the value, \(y=10log_{10}(\Phi_{pp}U_{e}/\tau_{w}^{2}\delta)\). The training is carried out using
the Nadam optimizer [46] and the SELU activation function [47] at a learning rate of 0.0001 with a batch size of 32 randomly chosen samples. The dataset is randomly split into training data and validation data with an 80:20 split. Logarithmic mean squared error (_lMSE_), defined in Eq. 9, is used as the objective function. Training of the ANN is driven by the training loss, while early stopping is identified by monitoring the validation loss.
Figure 4: Cosine similarity matrix of WPS trends observed in collected datasets. Christophe: 1-78, Salze: 79-88, Deuse: 89-101, Hao: 102-117
Figure 5: Example of an Artificial neural network build
\[\textit{lMSE}=\frac{1}{N}\sum_{i=1}^{N}W_{i}(10log_{10}(y_{i}^{expected})-10log_{10}(y_{i}^{predicted}))^{2} \tag{9}\]
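As a concrete illustration of the architecture and training setup described above, a minimal sketch is given below assuming a TensorFlow/Keras implementation; the framework choice and all names are assumptions for illustration and do not reproduce the authors' code.
```python
import tensorflow as tf

def build_wps_ann(X_train):
    """Normalization layer + three hidden layers of ten SELU neurons + one linear output
    neuron that predicts y = 10*log10(Phi_pp*U_e/(tau_w^2*delta))."""
    norm = tf.keras.layers.Normalization()
    norm.adapt(X_train)  # per-feature mean and standard deviation of the input vector of Eq. 4
    model = tf.keras.Sequential([
        norm,
        tf.keras.layers.Dense(10, activation="selu"),
        tf.keras.layers.Dense(10, activation="selu"),
        tf.keras.layers.Dense(10, activation="selu"),
        tf.keras.layers.Dense(1),
    ])
    # Since the target is already the dB value, a sample-weighted MSE on it plays the role
    # of the weighted lMSE of Eq. 9 (the weights W_i are supplied to fit as sample_weight).
    model.compile(optimizer=tf.keras.optimizers.Nadam(learning_rate=1e-4), loss="mse")
    return model
```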
### Gene expression programming (GEP)
Like a genetic algorithm, GEP discovers mathematical expressions that best represent a dataset. We present a brief summary of the method here; readers can refer to the article by Candida Ferreira [48] for a detailed discussion on GEP. Figure 6 and Table 1 illustrate the mainstream GEP algorithm, the entities, and the operations involved. The algorithm starts with constructing several fixed-length strings called genes using combinations of the functions and terminals. Part of such a gene (the ORF) is interpreted as an expression tree (ET), which is a graphical representation of a mathematical expression. Chromosomes (strings with multiple genes) with linking functions are used to improve upon the complexity captured by the mathematical expression derived from a single gene. The fitness of these chromosomes is then evaluated based on their ability to match the predictions of a given dataset. Chromosomes with the desired fitness are retained, reproduced, and modified through genetic operators to result in different expressions altogether. Well-performing combinations are thus retained and mutated over multiple generations until one with the desired fitness is discovered.
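To make the gene-to-expression mapping concrete, the following Python sketch decodes a Karva-notation gene into an expression tree and evaluates it; the function set, symbols, and the example gene are hypothetical and much simpler than the setup used in this work.
```python
import math
import operator

# Function set: symbol -> (arity, implementation). Terminal symbols carry no arity.
FUNCS = {'+': (2, operator.add), '-': (2, operator.sub),
         '*': (2, operator.mul), '/': (2, operator.truediv),
         'Q': (1, math.sqrt)}

def eval_gene(gene, terminals):
    """Decode a Karva-notation gene (a level-order string) into an expression tree
    and evaluate it; only the open reading frame (ORF) of the gene is consumed."""
    root = {'sym': gene[0], 'kids': []}
    frontier, pos = [root], 1
    while frontier:                      # build the tree one level at a time
        nxt = []
        for node in frontier:
            arity = FUNCS[node['sym']][0] if node['sym'] in FUNCS else 0
            for _ in range(arity):
                child = {'sym': gene[pos], 'kids': []}
                pos += 1
                node['kids'].append(child)
                nxt.append(child)
        frontier = nxt

    def value(node):                     # recursive evaluation of the expression tree
        if node['sym'] in FUNCS:
            return FUNCS[node['sym']][1](*[value(k) for k in node['kids']])
        return terminals[node['sym']]

    return value(root)

# The gene "*+aabab" decodes to (a + b) * a; with a = 2 and b = 3 it evaluates to 10,
# and the trailing unused symbols form the tail outside the ORF.
print(eval_gene("*+aabab", {'a': 2.0, 'b': 3.0}))
```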
\begin{table}
\begin{tabular}{l|c}
**Term** & **Description** \\ \hline Functional set (F) & Set of operators that a GEP inferred mathematical expression can be built out of. \\ \hline Terminal set (T) & Set of variables that a GEP inferred mathematical expression can be built out of. \\ \hline Gene & Fixed length string that is a combination of members from the functional and the terminal set. \\ \hline Head & Part of the gene that is a combination of entities from the functional set and the terminal set. \\ \hline Tail & Part of the gene made exclusively out of the terminal set to ensure that enough terminals are available to result in a valid mathematical expression. \\ \hline Open reading frame (ORF) & Part of the gene that can be read into an expression tree. \\ \hline Expression tree (ET) & Plane tree structure with each node as a member of the functional set and each leaf as a member of the terminal set that can be read into a mathematical expression. \\ \hline Chromosome/Individual & Fixed length string that consists multiple genes. \\ \hline Sub Expression Tree (Sub - ET) & Expression tree that is derived from a gene present in a chromosome. \\ \hline Prescribed Linking function & Function prescribed at the start of GEP training that combines Sub - ETs to produce the final mathematical expression from a chromosome. \\ \hline Fitness & Performance of GEP individuals quantified through objective functions. \\ \hline Selection & Process of selecting parents for the next generation from a pool of individuals while discarding the rest. \\ \hline Reproduction & Process of filling up the population back to its original volume by deriving daughters from parent individuals. \\ \hline Genetic operators & Operators that genetically modify chromosomes. \\ \hline Evolution & Process of performing selection and reproduction on a population while retaining the desired genetic characteristics identified through fitness. \\ \hline Generation & Pool of individuals that undergoes fitness evaluation at any point of time through GEP evolution. \\ \hline \end{tabular}
\end{table}
Table 1: GEP Terminology
Figure 6: (a) GEP algorithm flowchart with least-square based local optimization. (b) Entities involved in GEP
\[Fit=100\times\sqrt{\alpha\frac{lMSE}{A_{log}^{2}}+(1-\alpha)\frac{MSE}{A_{lin}^{2}}} \tag{10}\]
where,
\[MSE=\frac{1}{N}\sum_{i=1}^{N}W_{i}(y_{i}^{expected}-y_{i}^{predicted})^{2} \tag{11}\]
Here, \(A_{log}\) and \(A_{lin}\) represent the maximum amplitude of training points in the respective scales. \(\alpha\) is a weight assigned to the objectives, which is set to 0.5 thus giving equal weight to both contributions. The idea to have such a multi-objective function is derived from the works of Dominique _et al._[39]. They postulated that while the _MSE_ penalizes the error in the amplitude of the spectrum, the _lMSE_ captures error in the trends at low and high frequencies effectively.
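A minimal NumPy sketch of this fitness measure (Eqs. 9-11) is given below; treating \(A_{log}\) and \(A_{lin}\) as the maximum target amplitudes in dB and linear scale is an illustrative assumption, as are the variable names.
```python
import numpy as np

def fit_metric(y_true, y_pred, W, alpha=0.5):
    """Multi-objective fitness of Eq. 10: weighted lMSE (Eq. 9) and MSE (Eq. 11),
    each normalized by the squared maximum amplitude in its own scale."""
    lmse = np.mean(W * (10*np.log10(y_true) - 10*np.log10(y_pred))**2)
    mse = np.mean(W * (y_true - y_pred)**2)
    A_log = np.max(np.abs(10*np.log10(y_true)))
    A_lin = np.max(np.abs(y_true))
    return 100.0 * np.sqrt(alpha*lmse/A_log**2 + (1.0 - alpha)*mse/A_lin**2)
```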
A set of experiments was conducted for 100 individuals of different head lengths [3, 4, 5, 6], over 1001 generations, and with an optimization probability of 0.005 (every generation is optimized, but very few individuals are optimized in each generation). Figure 8 plots the errors in exponent predictions (defined as \(|Exponent_{expected}-Exponent_{observed}|\)) against \(Fit\). It is evident that the \(R_{T}\) and high-frequency errors tend to drop with a decrease in \(Fit\), implying a closer agreement with Goody's model. In almost all the cases, the \(R_{T}\) error has dropped below 0.5 for _Fit_\(<\) 5 and below 1.0 for _Fit_\(<\) 6. Similar behavior is observed for the high-frequency error, where the error drops below 1 for _Fit_\(<\) 6. However, it is evident that the low-frequency \(\tilde{\omega}^{2}\) errors are quite unpredictable, although the errors are \(\leq\) 1.0 for all the cases. This unpredictable behavior is attributed to the high noise at low frequencies, a feature of the input data that is observable in Fig. 7 (a). As pointed out by Dominique _et al._[39], _Fit_ by definition is a cumulative error criterion. Hence, a lower value of _Fit_ for a solution does not guarantee local closeness to the parent function (Goody's model) in small individual subdomains.
Figure 8: Exponent error observed in the GEP solution while predicting Goody’s model
Figure 7: Training dataset for hyperparameter study of GEP (a) with added noise (b) without any noise, along with a typical GEP solution observed at h = 4, 10001 generations, and \(P_{opt}=0.1\)
The value of _Fit_ with the noisy data reported in Fig. 8 is \(\gtrsim 4\). Subsequent studies were hence carried out with clean data (see Fig. 7 (b)) to determine if the GEP predictions improved. Figure 9 (a) shows a scatter plot with varying radius and color to demonstrate the effect of the hyperparameter setup (head length, local optimization probability, generations) on the model performance (\(log(Fit)\)). Here, the color and radius of the circle represent the \(log(Fit)\) and the number of generations evolved in each trial, respectively. The following key inferences can be drawn from the figure:
* As expected, improved values of _Fit_ (below 1.0) can be achieved with clean data when compared to noisy data.
* _Fit_ drops with increasing head length, although with larger head lengths there is a risk of making the model too complex. Note that the _Fit_ values were still higher with increasing head lengths (up to 10) with 1001 generations (not shown in the plot). However, with further optimization, _Fit_ is observed to drop below 1 for h = 10 and 3001 generations (refer to marker A in Figure 9 (a)). This is because a higher number of generations allow for more mutations, thereby improving the _Fit_. Decreasing the head length also demands a larger number of generations which translates to more time spent in model evolution. Local optimization can circumvent this issue by minimizing the divergence within the population generated by the added power feature using the local least squares optimizer discussed in Section III.4.
* Increasing the local optimization probability improves the _Fit_ even at lower head lengths with less number of generations. Nevertheless, this considerably increases the time per generation as illustrated in Fig. 9 (b). Figure 9 (a) shows that increasing the optimization probability from 0.005 to 0.1 drops _Fit_ below 1 for a head length of 4 trained over 10001 generations (refer to marker B in Figure 9 (a)). The fact that a solution at the complexity of h = 4 produced comparable predictions to the ones observed for h = 10 shows the potential of the local optimizer. Further increasing the optimization probability to 1, dropped the required number of generations to 1001 resulting in a _Fit_ below 1, (refer to marker C in Figure 9 (a)).
* A higher number of generations results in a lower _Fit_ even at lower head lengths and optimization probabilities (refer to marker B in Figure 9 (a)).
All these insights lead to the conclusion that a user should prioritize mutations by evolving GEP over a greater number of generations. The subsequent diverging behavior due to mutations, if any, can be controlled with increased use of the local optimization loop. If both these solutions are ineffective, the user can consider increasing the head length as the GEP expressions with a smaller head length may not be complex enough to capture the features of the data. Figure 9 (c), shows the encouraging trend that the proposed setup that reduces the _Fit_ error also reduces the low frequency \(\tilde{\omega}^{2}\) error which was found to be unpredictable earlier. Figure 7 (b) illustrates the typical curve fit obtained with GEP (with \(h=4\), \(P_{opt}=0.05\) and 10000 generations) where the _Fit_ drops below 1.
Due to its unsupervised evolutionary nature, GEP can produce ill-performing solutions and there is always a possibility
of the algorithm not converging to the desired accuracy. The aforementioned hyperparameter tuning strategies aim to minimize such issues with convergence. As will be demonstrated in Section V, unlike ANN, data of reasonable quality (with minimal noise and outliers) should also be provided to ensure convergence of GEP.
Figure 9: (a) _Fit_ (log(_Fit_) as colorbar) as a function of head length (x-axis), local optimization probability (y-axis), and number of generations (area of circle). (b) Effect of optimization probability on the computational time required to run the algorithm through 200 generations for h = 4 (c) Predictable drop observed in low-frequency error at low _Fit_ values
### Weights and Objective functions
It is established in Section III that the database used in this study derives from different experiments with their own set of flow conditions. Table 2 consolidates different experiments and the corresponding number of datasets, \(N\). It is worth noting that the datasets collected within the same experiment can be mutually similar and differ greatly from those of other experiments which have different flow conditions. If every dataset is given equal weight/importance in the overall objective function, the model will be biased towards the over-represented data (for example, the data from Christophe). The rest of the minority datasets have little effect on the direction of the solution and suffer from poor model predictions even if the datasets are just as important [49]. One way to mitigate this problem is through weighting datasets to balance the objective function. The selection of weights is a user decision and an optimization problem in itself which is beyond the scope of this work. To illustrate the effect of a weighted objective function on the ANN and GEP predictions, a rather simple weighting strategy is considered where the weight is estimated as the ratio of the number of datasets in the least represented experiments (the data from Salze) to the experiment considered i.e. \(W=N_{Salze}/N_{Experiment}\). The weights assigned this way are listed in Table 2.
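Written out explicitly, the weighting rule for the dataset counts of Table 2 is simply (names illustrative):
```python
counts = {"Salze": 10, "Deuse": 13, "Hao": 16, "Christophe": 78}
weights = {exp: counts["Salze"] / n for exp, n in counts.items()}
# -> {'Salze': 1.00, 'Deuse': 0.77, 'Hao': 0.62, 'Christophe': 0.13} (rounded)
```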
Both GEP and ANN were trained on the same input data with weighted and unweighted objective functions. Technically training can be done indefinitely and the loss calculated by the objective function may continue to drop till it reaches a minimum. However, after a steep drop of the \(lMSE\), Fig. 10 shows a marginal improvement in an ANN model performance with a further increase in epochs. For practical purposes, an early stopping criterion is therefore used to halt the training process. This can be done in a number of ways, say by setting an upper bound on epochs or by setting a lower bound on the loss calculated by the objective function. Here for this test, 20% of the randomly sampled training data is set aside as validation data. The calculated loss against validation data was then monitored and training was stopped if there is no improvement for at least 20 epochs, followed by restoring ANN weights (not to be confused with database weights) to the ones observed for the epoch with the least validation loss. Figure 10 also shows that the rate of accuracy gain (indicated by the rate of drop of error reported as weighted \(lMSE\)) is higher for the ANN models trained with \(lMSE\) as the objective function than the ones trained with \(Fit\).
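A sketch of this training and early-stopping protocol, reusing the build_wps_ann sketch above and again assuming Keras, is shown below; the data arrays (X_train, y_train_db, w_train) are hypothetical placeholders.
```python
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",           # loss on the 20% validation split
    patience=20,                  # stop after 20 epochs without improvement
    restore_best_weights=True,    # roll back to the epoch with the lowest validation loss
)
model = build_wps_ann(X_train)
model.fit(X_train, y_train_db,
          sample_weight=w_train,  # per-sample dataset weights (omit for unweighted training)
          validation_split=0.2,   # assumes the samples were shuffled beforehand
          batch_size=32,
          epochs=10_000,          # generous upper bound; early stopping triggers first
          callbacks=[early_stop])
```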
The effect of different objective functions (\(lMSE\) and \(Fit\)) on the predictions of wall pressure spectra (\(\Phi_{pp}\)) using ANN is illustrated in Fig. 11 (a-d). It is observed that the predictions of ANN models trained with weighted/unweighted \(lMSE\) as the objective function are more accurate than the models trained with weighted/unweighted \(Fit\). Hence, we use \(lMSE\) as the objective function for the rest of the study unless otherwise mentioned. Since GEP predicts a different mathematical expression every run, 10 trials with 100 individuals each, having a head length = 4, and optimization probability 0.05 were evolved over 5000 generations with weighted/unweighted \(lMSE\) as the objective function. Figures 11 (e) and (f) plot the Predicted vs True values of PSD (\(\Phi_{pp}\)) obtained from the corresponding best-performing expressions which are listed below:
Weighted:
\[\widetilde{\Phi}_{pp}=\frac{0.46\widetilde{\omega}^{0.99}(\beta+1)^{1.5}}{ \frac{R_{T}^{0.55}\widetilde{\omega}^{1.02}}{\Pi^{2.76}+\Delta}+M\Delta^{6.75} \left(\frac{R_{T}+\widetilde{\omega}^{5.07}}{R_{T}^{5.84}}\right)} \tag{12}\]
and Unweighted:
\[\widetilde{\Phi}_{pp}=\frac{\widetilde{\omega}\left(\Pi^{6.71}+(\beta+1)^{1.4 }\right)}{\widetilde{\omega}^{0.87}\Delta^{0.98}M^{0.44}(1+M^{0.97})+\frac{ \widetilde{\omega}^{5.31}}{R_{T}^{7.07}}(\Pi^{5.31}+\widetilde{\omega} \Delta^{6.09})} \tag{13}\]
\begin{table}
\begin{tabular}{l l l} Experiment & N(Datasets) & Weight \\ \hline Salze & 10 & 1.00 \\ \hline Deuse & 13 & 0.77 \\ \hline Hao & 16 & 0.62 \\ \hline Christophe & 78 & 0.13 \\ \hline Total & 117 \\ \end{tabular}
\end{table}
Table 2: Weights assigned to different experiments
Figure 10: Example of the evolution of model performance in ANN training with (a) weighted \(lMSE\) and (b) weighted \(Fit\) as objective functions respectively.
Figure 11: Predictions of ANN model trained with weighted (a) and unweighted (b) _lMSE_ as the objective function; weighted (c) and unweighted (d) _Fit_ as the objective function. Predictions of GEP models, Eq. 12 and Eq. 13, trained with weighted, (e), and unweighted _lMSE_, (f), respectively as the objective functions. (Every tenth point is reported for plotting purposes)
Figure 12: Salze dataset predictions suffering (a) and (c); Hao dataset predictions improving (b) and (d), from unweighted ANN training. Salze dataset predictions suffering (e), and Christophe dataset predictions improving (f) from unweighted GEP training with \(lMSE\) as the objective function
Interestingly, Fig. 11 shows that ANN and GEP models trained with unweighted objective functions resulted in a lower _lMSE_ error than those with the weighted ones. Although this is an unexpected trend, Table 3 delves into the performance of the models with weighted objective functions on the minority datasets. As intended, the prediction of datasets in the minority experiment (Salze) has indeed improved with weighted objective functions. In contrast, the accuracy of the datasets from a majority experiment (Christophe) has suffered due to the choice of weights. Figure 12 plots the wall pressure spectra of the majority and minority datasets to further demonstrate the effect of weighted/unweighted objective functions across different training schemes. The results show that the weighted objective functions significantly improve predictions of minority datasets (Salze). This trend is particularly observed for poorly converging training methods such as ANN with _Fit_ or the GEP. While the prediction accuracy of the dominant datasets (Christophe and Hao) decreases with the weighted training schemes, the spectra are not far off. On the other hand, unweighted schemes are observed to overfit the dominant datasets, retaining their features and deteriorating the accuracy of the minority dataset predictions (refer to the predictions of the inertial subrange in Fig. 12 (c) and (d)). These effects, however, become less pronounced in the case of an aggressively converging training scheme such as ANN with _lMSE_, where both the weighted/unweighted datasets result in competitive predictions.
The aforementioned discussion highlights the importance of optimizing the weights assigned to skewed datasets which will (a) minimize any bias in the predictions of ML models and (b) improve the performance of poorly converging models as it allows the training algorithm to focus more on minority datasets.
### ANN vs GEP
The present section compares the computational efficacy of GEP and ANN in predicting the wall pressure spectra. Both the algorithms are trained against the same input data and with the same objective function (weighted _lMSE_, Eq. 9). The trials are conducted on a machine equipped with a dual node Intel(R) Xeon(R) Gold 5120 CPU @ 2.20GHz, with 14 cores, and an Nvidia Tesla V100 32GB GPU.
Figure 13 (a) compares the convergence of ANN and GEP models with time. It is evident that the ANN achieves superior results in a shorter duration. This is in contrast to GEP, which struggles to converge to the _lMSE_ achieved by ANN even after a significantly longer training. With GEP, the best-fitting model from a pool may or may not improve over the generations, thereby resulting in a 'stepwise' drop in _lMSE_. The mechanism for evolving a best-fitting model in GEP is mutation and the other genetic operators, but they do not necessarily result in a better model and are therefore considered diverging in nature. As a result, the next best-fitting model can appear anytime through evolution. While there are ways to control the divergence induced by mutation, such as balancing it out with aggressive selection or using a local optimization function, in general, the 'stepped' convergence still follows.
Figure 13 (b) compares the virtual memory consumption with the present implementation of both algorithms. Space consumption of GEP depends on the size of the population, the length of each chromosome (which is basically a fixed-length string), and the additional storage required for performing actions such as evaluation, selection, and mutation. All these operations are inexpensive when compared to the ANN training. Although the memory occupied by the trainable parameters in ANN is minimal, the training process itself involves supervised learning that requires storage of gradients and other intermediate computations that are memory intensive. Parallel computing using GPUs does help in reducing this requirement by allowing for larger batch sizes that reduce the required number of forward and backward passes.
Figures 14 (a) and (b) compare the training time required with a reduced data resolution. As expected, both algorithms require less training time when trained against the same input data that has a lower resolution. With ANN, five trials were conducted to account for variation in the reported training time and one can observe a monotonic drop in training time with coarser input data. This is much easier to establish in the case of ANN models since they all converge aggressively resulting in competitive fits (see Fig. 14 (c)). Performing a similar analysis with GEP involves some practical complications. Different trials of the GEP algorithm require different runtimes to arrive at models with similar performance. It highlights the need to stop GEP training after a predefined number of generations although the models obtained this way can have different performances. Figure 14 (b) reports the _lMSE_ values from the 10 trials for each resolution of the sampled data. The plot shows that cases with coarser data input, in general, require less time for training and can still result in competitive predictions. To summarize, it seems beneficial to train models (specifically GEP) with coarsely sampled data, although ensuring that the local features of the data are not
lost in the re-sampling process.
\begin{table}
\begin{tabular}{l r r|r r r|r r r} \hline \hline \multirow{2}{*}{Experiment} & \multirow{2}{*}{N} & \multirow{2}{*}{Weight} & \multicolumn{3}{c|}{ANN (_lMSE_ error)} & \multicolumn{3}{c}{GEP (_lMSE_ error)} \\ & & & weighted training & unweighted training & \% change w.r.t weighted & weighted training & unweighted training & \% change w.r.t weighted \\ \hline Salze & 10 & 1.00 & 0.24 & 0.33 & +37.50 & 6.98 & 9.43 & +35.10 \\ \hline Christophe & 78 & 0.13 & 0.39 & 0.23 & -41.03 & 7.41 & 3.81 & -48.58 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of _lMSE_s in predictions across experiments
## V Guided search in gene expression programming
GEP and ANN algorithms update the model parameters through every epoch to accurately fit the data. The superior performance of ANN (in terms of speed and reliability) is attributed to training algorithms like stochastic gradient descent that guide the direction in which the ANN model parameters should be updated. On the other hand, GEP updates the models through genetic operators which rely on the evolutionary approach of trial and error. Although this approach allows for a diverse search space, it is computationally expensive. Oftentimes, with some of the known dependencies, the search space can be reduced resulting in faster convergence. This guided search approach also facilitates the discovery of complex trends in the rest of the search space. The following subsections demonstrate several guided search strategies that have been explored to accelerate GEP training and reliability.
### GEP Training Schemes
In contrast to the baseline scheme that uses the raw data, the current section introduces three additional schemes: Noise reduction with ANN filter, Omega2, and Gene4, to accelerate the GEP training.
#### Raw data scheme
This is a baseline GEP training scheme that uses the default noisy raw data and the linking function, Eq. 15, to train the GEP population.
Figure 14: Lower training times achieved with low-resolution training data - (a) ANN, (b) GEP. (c) Effect of low-resolution training data on ANN predictions for datasets with complex trends
Figure 13: (a) Model performance evolution through time, and (b) the RAM and VRAM used by the ANN and GEP models while training
#### Noise reduction with ANN filter
Noisy input datasets can misguide GEP towards sub-optimal solutions where the resulting model fails to recognize the underlying trend. The absence of feedback in mutations increases the training time required for the GEP model to converge to a global optimum. Smoothing techniques, such as kernel smoothing or moving averages, can be used to reduce the noise although these might suffer from issues like uneven noise filtering and might filter important uncertainties in parts of the data. On the other hand, Artificial neural networks are well known for their ability to handle the trade-offs between retaining information and filtering noise. Sections IV.2 and IV.3 have also demonstrated their ability to produce well-performing models of WPS with reasonable training times. Hence, we explore the strategy of ANN-filtered input data to assist the GEP algorithm.
#### Omega2 Scheme
The canonical shape of the WPS remains more or less the same across TBLs and hence a good guess of the frequency dependence of PSD in the respective regimes can be made. For example, one of the popular WPS models by Goody has three parts A, B, and C:
\[\widetilde{\Phi}_{pp}=\frac{C_{2}\widetilde{\omega}^{2}}{\left(\widetilde{ \omega}^{0.75}+C_{1}\right)^{3.7}+\left(C_{3}\widetilde{\omega}\right)^{7}}= \frac{A}{B+C} \tag{14}\]
Where,
\[C_{1}=0.5;\,C_{2}=3.0;\,C_{3}=1.1R_{T}^{-0.57}\]
The numerator \(A\) dominates at low frequencies (\(\widetilde{\omega}\to 0\)), resulting in the \(\widetilde{\omega}^{2}\) trend, which is particularly valid under zero pressure gradients (ZPG) [27]. The exponent of \(\widetilde{\omega}\), however, changes with pressure gradients as observed in Figures 2 and 3. For example, WPS of a TBL subjected to APG exhibits a \(\approx\widetilde{\omega}^{0.5}\) rise at low frequencies (see Fig. 3 from experiments of Salze _et al._[14]). While both \(B\) and \(C\) in Eq. 14 dictate the mid-frequencies, the term \(C\) governs trends at higher frequencies. Goody's model exhibits a \(\widetilde{\omega}^{-5}\) trend at higher frequencies which is consistent with those observed in Fig. 3. Unlike the other contemporary models in literature [35], Goody's model is stiff and reasonably retains its shape over a range of flow conditions.
As discussed above, the shortcomings of Goody's model which only depends on \(R_{T}\) (in addition to \(\widetilde{\omega}\)) are apparent under pressure gradients and non-canonical flows. GEP algorithm is effective in seeking additional contributing variables and accelerates the development of semi-empirical models like Goody. To assist GEP search with the inherent nonlinearity of the WPS problem, Dominique _et al._[39] prescribed the following trigenic linking function, Eq. 15 which is analogous to Goody's model:
\[Y^{GEP}=\frac{Sub-ET_{1}}{Sub-ET_{2}+Sub-ET_{3}} \tag{15}\]
Where \(Sub-ET_{i}\) represents a sub-expression interpreted from a gene in the chromosome and \(Y^{GEP}\) represents the output model.
GEP still has a tough task of working out the correct combination, especially the strong dependence on frequency (\(\widetilde{\omega}\)) across different regimes. Since the WPS at low frequencies follows a \(\widetilde{\omega}^{2}\) trend for ZPG flows, we explore the benefit of premultiplying Eq. 15 with \(\widetilde{\omega}^{2}\) to build that dependency right into the solution. The scheme further uses assistance from ANN-filtered data (modified inputs) for improved predictions. Hereafter, this strategy will be referred to as the _Omega2 scheme_ which is implemented by modifying the objective function as follows:
\[\mathit{lMSE}=\frac{1}{N}\sum_{i=1}^{N}W_{i}\left(10log_{10}\left(\widetilde{\omega}_{i}^{2}Y_{i}^{GEP}\right)-10log_{10}Y_{i}^{true}\right)^{2} \tag{16}\]
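In code, the modification amounts to a one-line change of the objective of Eq. 9; a minimal NumPy sketch (illustrative names) is:
```python
import numpy as np

def lmse_omega2(y_gep, y_true, omega, W):
    """Weighted lMSE of Eq. 16: the raw GEP output is premultiplied by omega~^2
    before being compared, in dB, with the target spectrum."""
    return np.mean(W * (10*np.log10(omega**2 * y_gep) - 10*np.log10(y_true))**2)
```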
The proposed modification facilitates the algorithm to focus on finding a better combination \(Sub-ET_{1}\) using any of the variables (\(\widetilde{\omega},\Delta,H,M,\Pi,C_{f},R_{T},\beta\) ). Table 4 lists some of the sample models obtained using the Omega2 scheme. Interestingly, for the datasets where the low-frequency trend deviates from the \(\widetilde{\omega}^{2}\) dependence, GEP is observed to predict a \(Sub-ET_{1}\) with \(\widetilde{\omega}^{i}\) (i being some index) to accurately fit the low frequencies.
#### Gene4 Scheme
Extending the omega2 approach, the _Gene4 scheme_ is aimed at further incorporating the trends at high-frequencies into the GEP models. This is achieved by modifying the original linking function in Eq. 15 with (a) an \(\widetilde{\omega}^{2}\) term in the numerator (to predict lower frequencies), (b) a \(\widetilde{\omega}^{7}\) in the denominator (to predict higher frequencies) and (c) a timescale ratio dependency \(R_{T}^{4}\) from Goody's model, a dependency also observed in other contemporary empirical models, to further assist the search. The gene4 linking function is realized using four genes as formulated below:
\[Y^{GEP}_{gene4}=\frac{\widetilde{\omega}^{2}Sub-ET_{1}+Sub-ET_{2}}{\frac{ \widetilde{\omega}^{7}}{R_{T}^{4}}Sub-ET_{3}+Sub-ET_{4}} \tag{17}\]
We have used two \(Sub-ET\)s in the numerator with only \(Sub-ET_{1}\) being pre-multiplied with \(\widetilde{\omega}^{2}\). \(Sub-ET_{2}\) allows for searching the terms that are decoupled from the \(\widetilde{\omega}^{2}\) dependence and facilitates a faster convergence than the omega2 scheme (as the algorithm does not have to look for a complex division to eliminate the effect of pre-multiplication). Table 4 lists some of the competitive models predicted using the gene4 scheme. It is interesting to note that the \(Sub-ET_{2}\) term in models 1 and 2 is independent of \(\widetilde{\omega}^{2}\) while that in model 3 is closer to \(\widetilde{\omega}^{2}\).
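The corresponding linking step, which every chromosome passes through before its fitness is evaluated, can be sketched as follows (illustrative names; the Sub-ET values are assumed to be already evaluated):
```python
def gene4_link(sub_et1, sub_et2, sub_et3, sub_et4, omega, R_T):
    """Gene4 linking function of Eq. 17 with the omega~^2 low-frequency and
    omega~^7/R_T^4 high-frequency trends built in."""
    numerator = omega**2 * sub_et1 + sub_et2
    denominator = (omega**7 / R_T**4) * sub_et3 + sub_et4
    return numerator / denominator
```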
### Comparisons
In this section, we compare the performance of the aforementioned schemes in terms of (a) convergence rate during training and (b) model accuracy at the end of GEP evolution. For each of the four schemes, 10 trials were conducted on a pool of 100 individuals, with a head length (\(h\)) of 4 trained over 20000 generations. The input data is resampled to 25% of the original resolution and, for every iteration, the population is passed through the least squares optimization loop with an optimization probability of \(P_{opt}=0.05\). We probe into the temporal evolution of the training statistics with weighted _lMSE_ as the objective function.
For different training schemes, Fig. 15 (a-e) compares the temporal evolution of the resampled training statistics (see Appendix A) from the 10 trials using _lMSE_ as the objective function.
\begin{table}
\begin{tabular}{l l l l l} Model & \(\dfrac{\widehat{\omega}^{2}Sub-ET_{I}+Sub-ET_{2}}{\dfrac{\widehat{\omega}^{7} }{R_{T}^{T}}Sub-ET_{3}+Sub-ET_{4}}\) & _IMSE_ & \(\widehat{\omega}\to 0\) & \(\widehat{\omega}\rightarrow\infty\) \\ model1 & \(\dfrac{\widehat{\omega}^{2}\left(\dfrac{\beta^{2.45}\Pi^{5.13}}{R_{T}^{1.22}} +\dfrac{0.015}{M}\right)+\left(\dfrac{C_{f}}{M^{2.08}}+\beta^{2.84}\right) \left(\dfrac{R_{T}}{M^{0.14}}\right)}{\dfrac{\widehat{\omega}^{7}\left( \dfrac{\Delta^{2.89}H^{7.62}}{R_{T}\omega^{1.12}}\right)+\left(R_{T}^{1.29}+ \Delta^{2.03}\right)\left(M^{0.34}+\beta\right)}}\) & 10.41 & 2 & -3.88 \\ model2 & \(\dfrac{\widehat{\omega}^{2}\left(\dfrac{\Pi^{0.74}+\widehat{\omega}^{0.65}}{ \Delta^{5.02}}\right)\left(0.1R_{T}^{1.27}\right)+\left(M^{-1}\Delta^{-1}+ \beta^{1.21}\right)}{\dfrac{\widehat{\omega}^{7}}{R_{T}^{1}}14.2M^{2.96}\left( M^{1.24}+1\right)+\widehat{\omega}+5.2\widehat{\omega}^{-0.35}}\) & 10.95 & 3 & -4.35 \\ model3 & \(\dfrac{\widehat{\omega}^{2}\left(\dfrac{\beta^{2.43}\left(\Delta+\Pi\right)}{ H\Delta^{1.15}}\right)+\dfrac{\omega^{2.64}}{M\left(R_{T}+\Pi^{0.06}\right)}}{ \dfrac{\widehat{\omega}^{7}\left(\dfrac{M\Delta^{6.79}}{R_{T}^{2.39}}\right)+ \widehat{\omega}^{1.55}\left(H+\widehat{\omega}^{0.94}\right)}}\) & 12.23 & 2.64 & -4.36 \\ \end{tabular}
\end{table}
Table 5: Sample WPS models generated using gene4 scheme
\begin{table}
\begin{tabular}{l l l l l} Model & \(\widetilde{\omega}^{2}\cdot Y^{GEP}\) & _IMSE_ & \(\widetilde{\omega}\to 0\) & \(\widetilde{\omega}\rightarrow\infty\) \\ model1 & \(\widetilde{\omega}^{2}\cdot\dfrac{\widetilde{\omega}^{-1.86}\beta^{1.34}M^{-0. 55}\Delta^{-0.78}\left(R_{T}+1\right)}{\widetilde{\omega}^{1.77}+R_{T}H^{1.86}+ \widetilde{\omega}^{6.60}\left(\dfrac{\Delta}{R_{T}}\right)^{6.18}}\) & 10.61 & 0.14 & -6.46 \\ model2 & \(\widetilde{\omega}^{2}\cdot\dfrac{\widetilde{\omega}^{-2}\beta^{1.42}\left( \Pi^{3.57}\widetilde{\omega}^{0.58}+R_{T}^{1.02}\right)}{\Delta\left( \widetilde{\omega}^{5.53}\dfrac{\Delta^{5.59}}{R_{T}^{6}}+R_{T}M^{0.37}\right)}\) & 11.17 & 0.58 & -4.95 \\ model3 & \(\widetilde{\omega}^{2}\cdot\dfrac{\widetilde{\omega}^{-1.7}\beta^{1.72}M^{-0.72}}{ \widetilde{\omega}^{1.99}+\Delta\left(\dfrac{\widetilde{\omega}^{4.98}H^{8.81} }{R_{T}^{4.28}+\beta^{5.63}}+1\right)}\) & 13.67 & 0.3 & -4.68 \\ \end{tabular}
\end{table}
Table 4: Sample WPS models generated using omega2 scheme
Figure 15: Comparison of the fitness distribution of the best individuals across 10 trials with \(lMSE\) as the objective function for different schemes. Evolution of the following statistics: (a) \(<y>_{E}\), (b) \(median(y)\), (c) \(y_{25-75}\), (d) \(y_{max}\), (e) \(y_{min}\), of the GEP trials are plotted against time. (f) Box plot comparing unweighted \(lMSE\) for the last generation, showing the scatter in \(lMSE\) from minimum to maximum across the trials. The blue box represents the trials with \(lMSE\) between the 25-75 percentile and the intermediate line shows the median value.
These metrics include \(<y>_{E}\), \(median(y)\), \(y_{25-75}\), \(y_{max}\), and \(y_{min}\), which are briefly described in Table 6. For every trial, the best-performing model at the end of the training is recorded. Figure 15 (f) compares the _lMSE_ distributions of these best models across different schemes. Table 7 provides a quantitative overview of the relative performance of these schemes with respect to the baseline scheme, probed every 5 hours into the training period. The following inferences can be drawn from the statistics presented in Fig. 15 and Table 7:
* The gene4 scheme predicts more accurate models than the other schemes throughout the evolution. When compared to the baseline scheme, Table 7 shows the superior performance of gene4 with a consistent drop of \(\approx\) 25% in its ensemble averaged _lMSE_ and a \(\approx\) 20% drop in its median _lMSE_. A drop of \(\approx\) 60% or more in the _lMSE_ within the interquartile range, \(y_{25-75}\), with the gene4 scheme suggests that the scheme predicts more competitive models at any instance throughout the evolution. Compared to the baseline scheme, the \(y_{max}\)_lMSE_ (worst-performing model) and \(y_{min}\)_lMSE_ (best-performing model) from the gene4 scheme trials exhibit a \(\geq\) 30% and \(\geq\) 15% lower value, respectively. Trends in Figure 15 (a-e) also illustrate that the aforementioned improvements from the gene4 scheme are consistently observed throughout the evolution, which implies that the gene4 scheme converges faster than the rest. Since the evolution of different training schemes can stop at different times, Figure 15 (f) compares the _lMSE_ statistics of the last generation. Across trials, the gene4 scheme has a best-performing model with a \(\approx\) 30% drop in the \(y_{min}\)_lMSE_, a \(\approx\) 17% drop in the median _lMSE_, and a \(\approx\) 40% reduction in _lMSE_ within the interquartile range, \(y_{25-75}\), implying that the gene4 scheme predicts more accurate models with improved reliability.
* Statistical improvements in _lMSE_ obtained using the ann-filter scheme are similar to those of the gene4 scheme, albeit to a lesser extent.
* In contrast, the omega2 scheme yields either marginal or inconsistent gains over the raw data scheme throughout the training resulting in arguably worse-performing last-generation models (see Figure 15(f)). This indicates that multiplying the entire numerator (\(Sub-ET_{1}\) of Eq. 15) with \(\widetilde{\omega}^{2}\) has resulted in poorly optimized expressions. As seen in the case of model 2 in Table. 4, the algorithm has probably worked more towards decoupling the frequency terms.
Considering all the inferences, it can be concluded that the strategies involved in the development of the gene4 scheme (which include using ann-filtered input data, built-in low and high-frequency trends, and a modified linking function) are justified. Hence, the Gene4 scheme serves as a faster, and more accurate alternative to the baseline scheme while inferring models out of WPS data using the GEP algorithm.
### Validation
In this section, we evaluate the performance of the gene4 scheme on unseen data to ensure that the scheme predicts the mathematical expressions of appropriate complexity. Four datasets are discarded from each experiment from the complete database. Gene4 scheme was trained on the rest of the data with the same hyperparameter setup used in the previous section. One of the models within the interquartile range, \(y_{25-75}\), is presented below:
\[\widetilde{\Phi}_{pp}=\frac{\widetilde{\omega}^{2}\left(\frac{\Pi^{8.05}}{\Delta}+0.22\beta^{1.54}\right)+0.69\beta^{2.66}+\Pi^{8.22}}{\frac{\widetilde{\omega}^{7}}{R_{T}^{4}}\left(M^{4.07}(100(10+\Delta))\right)+\left(M^{0.44}(H^{5.81}+\widetilde{\omega}^{2.37})\right)} \tag{18}\]
Equation 18 shows that the model retains the built-in frequency trends and incorporates additional complexities of the pressure gradient (\(\beta\)) and Mach number dependence. Figure 16 plots the corresponding predictions of the unseen datasets. It is evident from Figure 16(a-c) that the model presented surpasses the predictions of both Goody's model and Dominique's GEP model (which has been trained over the entire database). This ensures that the model does not overfit the data, which is in fact an encouraging step towards the development of a generalized WPS model that can reasonably predict unseen data. However, Fig. 16 (d) illustrates that, although the model performance is superior to that of Goody, there is scope to improve it further. For example, the appearance of coefficients such as 100 and 10 in this sample model shows that at times the GEP algorithm may struggle to find correct coefficients from the RNC array (Section III.4) resulting in a sub-optimal mathematical solution. However, the
same gene/\(Sub-ET_{3}\) does not hold a frequency term, implying that the built-in trend of gene4 is valid, and with more mutations, GEP can find a superior alternative.
\begin{table}
\begin{tabular}{l|r r r|r r r|r r r|r r r|r r r} & \multicolumn{15}{c}{Percentage _lMSE_ change w.r.t raw scheme} \\ \cline{2-16} Scheme & \multicolumn{3}{c|}{\(<y>_{E}\)} & \multicolumn{3}{c|}{\(median(y)\)} & \multicolumn{3}{c|}{\(y_{25-75}\)} & \multicolumn{3}{c|}{\(y_{max}\)} & \multicolumn{3}{c}{\(y_{min}\)} \\ & 5 hrs & 10 hrs & 15 hrs & 5 hrs & 10 hrs & 15 hrs & 5 hrs & 10 hrs & 15 hrs & 5 hrs & 10 hrs & 15 hrs & 5 hrs & 10 hrs & 15 hrs \\ \hline ann-filter & -10.4 & -14.4 & -15.8 & -18.5 & -19.0 & -13.8 & -57.6 & -49.0 & -58.6 & -2.7 & -18.8 & -24.4 & 5.4 & 4.9 & 4.6 \\ omega2 & 2.7 & -5.3 & -4.1 & -5.9 & -12.3 & -5.9 & -57.7 & -37.7 & -54.0 & 18.7 & -1.8 & -1.8 & 29.0 & -5.9 & -5.9 \\ gene4 & -27.6 & -27.0 & -26.1 & -29.5 & -25.3 & -19.2 & -76.0 & -52.2 & -59.4 & -33.9 & -36.1 & -39.4 & -14.9 & -19.7 & -24.4 \\ \end{tabular}
\end{table}
Table 7: Quantitative overview of the evolution of the relative performance of the proposed GEP training schemes with respect to the baseline raw data scheme. This relative performance is evaluated every five hours into the training phase.
### Stepped schemes
We have further explored the strategy of stepped training schemes, which is analogous to the idea of learning a language: one should learn the basic features of the language first in order to understand the intricate constructions in the literature. Likewise, learning the basic trends of the datasets that show a close resemblance to the canonical WPS is a priority. Once the GEP population learns to fit these priority datasets satisfactorily, it is allowed to further improve the formulations by exposing it to a dataset that has similar canonical features with added complexities. In the present work, we train the GEP algorithm in two steps. In step 1, we prioritize exposing the population to the Salze datasets, which have the highest similarity to the canonical form of WPS. After the first step, the entire population, which also includes the best individual, was saved.
Figure 16: Performance a sample GEP model trained with gene4 scheme on unseen datasets.
Figure 17: Box plot comparing unweighted \(lMSE\) for the last generation (\(40000^{th}\)), showing the scatter in \(lMSE\) from minimum to maximum across the trials. The blue box represents the trials with \(lMSE\) between the 25 - 75 percentile and the intermediate line shows the median value.
Figure 18: Dataset-wise predictions of a GEP model, Eq. 19, trained with the stepped training scheme and comparison with ANN predictions along with the predictions of Dominique’s GEP model and Goody’s model
In step 2, the entire population saved in step 1 is exposed to the complete dataset for further training. For each step, the GEP algorithm was trained with the same hyperparameter environment detailed in Section V.2. Figure 17 shows the stepped gene4 scheme exhibiting superior accuracy and reliability over its non-stepped counterpart, with an \(\approx 10\%\) reduction in the median (from 12.2 to 10.9) and an \(\approx 60\%\) drop in the \(lMSE\) spread of the interquartile range (from 3.0 to 1.1). Hence, it can be deduced that at least half of the models trained using the stepped gene4 scheme are competitive with each other. A sample model, with \(lMSE=10.61\), lying within the interquartile range, reads as follows:
\[\widetilde{\Phi}_{pp}=\frac{\widetilde{\omega}^{3.12}+\beta^{2.14}\widetilde{ \omega}^{2.41}+\left(\frac{C_{f}}{M^{4.42}\Delta^{2.34}}\right)\widetilde{ \omega}^{2}+\beta^{2.14}R_{T}^{0.29}}{\left(M^{3.2}\Delta^{6.35}+R_{T}\right) \frac{\widetilde{\omega}^{7.71}}{R_{T}^{5}}+\Delta^{0.9}(\widetilde{\omega}^{ 2.97}+H^{2})} \tag{19}\]
where the low and the high frequency exponent predictions are,
\[\widetilde{\omega}\to 0=\widetilde{\omega}^{3.12};\ \widetilde{\omega} \rightarrow\infty=\widetilde{\omega}^{-4.59}.\]
Figure 18 further compares the WPS predictions of the sample GEP model (Eq. 19) obtained using stepped gene4 scheme across different datasets. Its superior accuracy as compared to Goody's model and Dominique's GEP model is apparent.
Figure 19: Predictions of a GEP model, Eq. 19, trained with the stepped training scheme and comparison with ANN predictions along with the predictions of Dominique’s GEP model and Goody’s model. (Every tenth point is reported for plotting purposes)
In particular, Figures 18 (a,b,e) illustrate the high complexity captured by this model. Interestingly, its frequency trends and magnitude are in close agreement with those predicted by the ANN model. Figure 18 (c) assures that its predictions of the canonical WPS are unaffected and competitive with other models. Despite its superior performance, Fig. 18 (b), (d), and (f), show that the model fails to capture some local features and under or over-predicts the trends in certain datasets.
The scatter plot in Figure 19 compares the predicted vs true values for the entire dataset comprising different flow conditions (APG, FPG, and ZPG). The aforementioned GEP model trained with the stepped gene4 scheme significantly improved the predictions across the entire dataset, resulting in an _lMSE_ of 10.6 in contrast to an _lMSE_ of \(\approx 60\) with Dominique's GEP model and \(\approx 90\) with Goody's model. In particular, the scatter in the predictions of datasets with APG and FPG has considerably reduced when compared to Dominique's GEP model and Goody's model. Despite these improvements, the _lMSE_ of the proposed GEP model is still an order of magnitude higher than that of the ANN. This implies that there is further scope to develop a generalized WPS model using GEP through improvements in training strategies.
## VI Conclusions
The study presents a machine learning-based framework that uses data-driven modeling to predict wall pressure spectra (WPS) underneath turbulent boundary layers. Different datasets of WPS from experiments and high-fidelity numerical simulations covering a wide range of pressure gradients and Reynolds numbers are considered for training. This dataset, however, appears to be skewed, and a cosine-similarity matrix has been used to quantify the visual resemblance of WPS trends across the experiments. The efficacy of two machine learning techniques, namely artificial neural networks (ANN) and gene expression programming (GEP), is evaluated. Firstly, an optimal hyperparameter environment is identified that yields the most accurate predictions for the respective ML methods. This includes assessing the effect of objective functions (\(lMSE/Fit\)) on the convergence rate of ANN models. Of these, \(Fit\) (which is a multi-objective function comprising both _lMSE_ and mean squared error) is shown to converge at a slower rate. Interestingly, it has been observed that the prediction accuracy of such weakly converging training methods can be improved by increasing the weight of minority datasets.
For a given input database, the computational resources (training time and memory consumption) of some of the best-performing ANN and GEP models are compared. In terms of accuracy, the results show the clear superiority of ANN over GEP, with the logarithmic mean squared error (_lMSE_) of ANN being less than 1 while that of GEP is around \(O(10)\). The corresponding training time of ANN models is also 8 times lower (\(\approx 3\)_hours_) than that of the GEP (\(\approx 24\)_hours_), despite a higher memory consumption. Nevertheless, the advantage of GEP lies in predicting a realizable closed-form mathematical expression. In contrast to the unconventional structure of ANN models, these expressions from GEP can provide direct physical insight.
Novel training schemes are devised to address the shortcomings of GEP. These include (a) ANN-assisted GEP to reduce the noise in the training data, (b) exploiting the physical trends in the spectra observed at low and high frequencies to guide the GEP search, and (c) a stepped training strategy where the chromosomes are first trained on the canonical datasets followed by the datasets with complex features. The first two training strategies are shown to accelerate the convergence and yield more consistent models (\(\approx 40\%\) reduction in the spread of _lMSE_ in the interquartile range) with superior accuracy (\(\approx 17\%\) reduction in the median _lMSE_). The stepped training strategy further improves upon the reliability and accuracy with an additional reduction of \(\approx 60\%\) and \(\approx 10\%\) in the aforementioned statistics, respectively. The predictions of the resulting GEP models appear to have captured the complex trends of WPS while surpassing the accuracy of both Dominique's GEP model and Goody's model.
With the inclusion of the ever-evolving data repository of WPS, the authors believe that the methods and insights from the present study will facilitate the discovery of more consistent and generalized models. The goal is to discover GEP models that result in competitive predictions to the ones produced by the ANN models. The analytical nature of GEP models will allow for the generalization of WPS beyond the trained regimes.
## Acknowledgements
The authors wish to acknowledge NVIDIA for generously awarding Quadro P6000 and A100 GPU cards under the Academic Hardware Grant Program. The authors also wish to acknowledge the VKI team: Joachim Dominique, Jan Van den Berghe, Dr. Christophe Schram, Dr. Miguel Alfonso Mendez, Dr. Julien Christophe, and Dr. Richard D. Sandberg (from the University of Melbourne) for making their source code and data publicly available. Dr. Nagabhushana Rao Vadlamani also acknowledges the financial support from Science and Engineering Research Board (SERB), India under Mathematical Research Impact Centric Support (MATRICS) scheme (MTR/2022/000807).
## Author declarations
### Conflict of Interest
The authors have no conflicts to disclose.
## Data availability statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Appendix A
Here, we provide details of the resampling procedure which maps generations in GEP evolution with wall-clock time. At the end of every generation, we probed the Weighted objective function values (of the best individual in the generation) and the wall-clock time spent for the generation. Although each of the 10 trials is trained for a fixed set of generations, the training can take different wall-clock times. Hence, the statistics are re-sampled over a common time interval using a Piece-wise Cubic Hermite Interpolating Polynomial (PCHIP) interpolator [50] as illustrated in Table 8. Although this shortens the observation window to the trial with the shortest runtime to reach 20000 generations, it facilitates the comparison of statistics in wall-clock time across different trials and training schemes. This approach of choosing the shortest window is justified since the interest lies in developing methods that converge faster.
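A minimal sketch of this resampling with SciPy's PCHIP interpolator is shown below; the choice of grid size and the variable names are illustrative assumptions.
```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def resample_trials(times, losses, n_points=500):
    """Map the per-generation (wall-clock time, objective value) histories of several
    trials onto a common time grid so their statistics can be compared directly."""
    t0 = max(t[0] for t in times)    # latest common start time across trials
    t1 = min(t[-1] for t in times)   # earliest common end time (shortest run)
    grid = np.linspace(t0, t1, n_points)
    return grid, np.array([PchipInterpolator(t, y)(grid) for t, y in zip(times, losses)])
```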
## Nomenclature
\(<y>_{E}\): Ensemble average of the batch of outputs
\(\beta\): Clauser parameter
\(\Delta\): Zagarola-Smits's parameter
\(\delta\): Boundary layer thickness
\(\delta^{*}\): Displacement thickness
\(v\): Dynamic viscosity
\(\omega\): angular frequency \(\omega=2\pi f\)
\(\Omega_{z}\): Vorticity component in \(z\) (spanwise) direction
\(\overline{p}\): Mean pressure
\(\partial_{n}\): Partial derivative with respect to \(n^{th}\) coordinate
\(\Phi_{pp}\): Power spectral density
\(\Pi\): Wake strength parameter
\(\rho\): Density at wall
\(\tau\): Time
\(\tau_{w}\): Wall shear stress
\(\theta\): Momentum thickness
|
2310.03427 | Spontaneous interstitial (anti)merons in D$_{2d}$ symmetric
Mn-Pt(Pd)-Sn-In system | Interstitial topological objects, such as skyrmions, within a natural 1-D
helix are predicted to be free from ambiguous 'skyrmion Hall effect'. The
helical ambience precipitate an additional potential that counteract the Magnus
force arising from the gyrotropic motion of skyrmion. Here, we present the
observation of $\pm$ $\frac{1}{2}$ topological charge objects (anti)merons
within the 1-D helical stripes in D$_{2d}$ symmetric
Mn$_{1.4}$Pt$_{0.9}$Pd$_{0.1}$Sn$_{1-x}$In$_{x}$ system. With the help of
Lorentz transmission electron microscopy study we demonstrate that the
pair-wise meron and antimeron chains can be spontaneously stabilized for a
critical In concentration in the system. The exchange frustration induced
proportionate fragmentation of the magnetic moment in the in-plane and
easy-axis directions acts as a basic ingredient for the formation of
(anti)merons within the helical stripe. A constrained drift motion of
(anti)merons along the stripe makes this system an ideal platform for the
realization of skyrmion Hall free track motion. Moreover, the observation of
(anti)merons in addition to the skyrmion and antiskyrmion in D$_{2d}$ materials
makes them a suitable horizon for zoo of topology. | Bimalesh Giri, Dola Chakrabartty, S. S. P. Parkin, Ajaya K. Nayak | 2023-10-05T10:07:19Z | http://arxiv.org/abs/2310.03427v1 | # Spontaneous interstitial (anti)merons in D\({}_{2d}\) symmetric Mn-Pt(Pd)-Sn-In system
###### Abstract
Interstitial topological objects, such as skyrmions, within a natural 1-D helix are predicted to be free from ambiguous'skyrmion Hall effect'. The helical ambience precipitate an additional potential that counteract the Magnus force arising from the gyrotropic motion of skyrmion. Here, we present the observation of \(\pm\)\(\frac{1}{2}\) topological charge objects (anti)merons within the 1-D helical stripes in D\({}_{2d}\) symmetric Mn\({}_{1.4}\)Pt\({}_{0.9}\)Pd\({}_{0.1}\)Sn\({}_{1-x}\)In\({}_{x}\) system. With the help of Lorentz transmission electron microscopy study we demonstrate that the pair-wise meron and antimeron chains can be spontaneously stabilized for a critical In concentration in the system. The exchange frustration induced proportionate fragmentation of the magnetic moment in the in-plane and easy-axis directions acts as a basic ingredient for the formation of (anti)merons within the helical stripe. A constrained drift motion of (anti)merons along the stripe makes this system an ideal platform for the realization of skyrmion Hall free track motion. Moreover, the observation of (anti)merons in addition to the skyrmion and antiskyrmion in D\({}_{2d}\) materials makes them a suitable horizon for zoo of topology.
Topologically protected magnetic whirls, such as (anti)skyrmions and (anti)merons, have attracted significant research interest owing to their distinctive spin textures, which hold enormous potential for future spintronic devices. The stabilization mechanism of these complex magnetic objects relies on the underlying magnetic interactions complying with the inherent crystal symmetry of the system. For instance, the antisymmetric Dzyaloshinskii-Moriya interaction (DMI), originating from the relativistic spin-orbit coupling, is an essential ingredient for the stabilization of skyrmions (skx)/antiskyrmions (askx) in bulk crystals [1, 2, 3, 4] and layered thin-film [5, 6, 7, 8] systems with broken inversion symmetry. On the other hand, magnetic frustration [9, 10], higher-order exchange interactions [14], and dipolar interactions [12, 13] can stabilize skyrmions in certain centrosymmetric materials. Although independent microscopic mechanisms are responsible for the stabilization of different topological spin textures, it is interesting to study how these objects transform between two different topological states. In this direction, it has been demonstrated that the application of an in-plane magnetic field [16, 17] and thermal modification of the magnetic anisotropy [18] can enable overcoming the energy barrier imposed between two topological phases. Moreover, it has also been proposed that the co-existence of two distinct topological objects can have significant future prospects [36]. Hence, material engineering is an important pathway to achieve a delicate balance between the different energy landscapes for the realization of emergent topological magnetic phases [16, 21].
The topological character of skyrmions attracts significant research interest, in particular for studying their stabilization mechanism and electrical manipulation for application in spintronic devices [5, 22]. However, all topological magnetic objects with a finite topological charge (n\({}_{sk}\)) showcase an unwanted 'skyrmion Hall effect', irrespective of the nature of the driving field. Nonetheless,
several ideas have been proposed to mitigate this intriguing perplexity for finite-n\({}_{sk}\) objects [28, 29, 30, 31]. Moreover, topological objects consisting of opposite n\({}_{sk}\) can intrinsically compensate the reverse gyrotropic response; e.g., antiferromagnetic skx [23, 24, 25] and skyrmionium [26, 27] are believed to exhibit zero 'skyrmion Hall effect'. Interestingly, it has been demonstrated that an interstitial skx in a natural 1-D spin helical stripe is free from such ambiguity [32, 33, 34]. The confinement within the helical background provides an additional potential barrier that balances the Magnus force, thereby constraining the motion along the stripe. The distinct characteristics of skx motion within helical stripes and a ferromagnetic (FM) environment shift the paradigm for searching for topological objects within the helical background. It has also been proposed that merons, skyrmions, and bimerons can be stabilized along with the helical stripes as natural intermediate states at the phase transition from the skyrmion to the helical state [35]. Later, Muller et al. demonstrated the coexistence of skx with broken helical stripes experimentally [32]. However, a real interstitial topological object in a true helically ordered state has never been observed.
The present manuscript demonstrates the first observation of natural one-dimensional helical stripes that spontaneously host a chain of merons and antimerons. In this regard, we present an extensive study on the phase stability of different topological objects in the \(D_{2d}\) symmetric system. In particular, we address the effect of exchange frustration on the modification of the ground-state spin structure. To realize our goal, we consider doping of In in place of Sn in the Mn\({}_{1.4}\)Pt\({}_{0.9}\)Pd\({}_{0.1}\)Sn\({}_{1-x}\)In\({}_{x}\) system. It is to be noted here that the first observation of antiskyrmions was demonstrated in Mn\({}_{1.4}\)Pt\({}_{0.9}\)Pd\({}_{0.1}\)Sn, an inverse tetragonal Heusler system that belongs to the D\({}_{2d}\) crystal class [4]. The underlying \(D_{2d}\) crystal symmetry restricts the Dzyaloshinskii-Moriya (DM) vector within the
tetragonal basal plane with D\({}_{x}\)= - D\({}_{y}\), where D\({}_{x}\), D\({}_{y}\) are DM vectors along x and y directions, respectively. Hence, the helical modulation are found only along [100] and [010] directions. Consequently, the askx lattice gets stabilized in the \(ab\) plane above the spin re-orientation temperature (\(T_{SR}\sim\)130 K) [4, 17, 36]. Powder neutron diffraction (ND) study shows the presence of a collinear and a non-collinear spin ordering above and below the \(T_{SR}\)[37]. Furthermore, the dipolar energy associated with the large magnetic moment of about \(\sim\) 5 \(\mu\)\({}_{B}\)/f.u. in this system also plays a vital role in controlling the shape and size of the askx in this system [38]. In our previous study it has been shown that replacing Sn by In results into a non-collinear magnetic state in Mn\({}_{1.5}\)PtIn with a small magnetic moment of \(\sim\)1.6 \(\mu\)\({}_{B}\)/f.u. [39]. Moreover, a comparison between the low temperature ND data of Mn\({}_{1.4}\)PtSn and Mn\({}_{1.5}\)PtIn sample indicates that a large magnetization can be induced in the basal plane with In doping in place of Sn. Previous theoretical study proposes that the formation of (anti)meron lattice can be energetically favorable with in-plane anisotropy [40]. Therefore, an effective change of the magnetization from out of plane (OP) to in-plane (IP) direction or compatible fractionation of the same in presence of the anisotropic DMI may showcase exotic topological phases like (anti)meron in the present system.
With the above-discussed ideas, we present a scheme for the stability of different topological phases in the \(D_{2d}\) symmetric Mn\({}_{1.4}\)Pt\({}_{0.9}\)Pd\({}_{0.1}\)Sn\({}_{1-x}\)In\({}_{x}\) system, as shown in Fig. 1. The ground-state magnetic ordering obtained from the neutron diffraction (ND) study for samples with different In concentrations is presented in Fig. 1a-d. The magnetic moments of the Mn atoms sitting in the two different sublattices are shown by blue and red arrows. For \(x=\) 0.2, the Mn moments at both magnetic sublattices align along the c-axis, resulting in only an out-of-plane moment component M\({}_{z}\) above the spin-reorientation transition T\({}_{SR}\) [Fig. 1a]. In general, this type of spin arrangement favors the antiskyrmion phase, as shown in Fig. 1e. Below T\({}_{SR}\), both magnetic sublattices exhibit a small in-plane moment M\({}_{x}\); however, the major component of the moment is along the c-axis, i.e., M\({}_{z}>\)M\({}_{x}\) [Fig. 1b]. The formation of nontopological bubbles (NT) [Fig. 1f] is favorable in this region. For \(x\geq\) 0.4, a dominant non-collinear magnetic ordering prevails throughout the magnetically ordered phase [see Fig. 1c\(\&\)d]. The Mn moments at the 8d position for \(x=\) 0.4 are tilted more towards the in-plane direction compared to the \(x=\) 0.2 sample [see Fig. 1b & c]. Interestingly, for the \(x=\) 0.4 sample both magnetization components are comparable, i.e., M\({}_{z}\sim\)M\({}_{x}\). The formation of (anti)merons [Fig. 1g] is expected in this sample. It is worth mentioning that in all the cases, i.e. \(x=\) 0.2-0.4, the skyrmion phase can be stabilized depending on the lamella thickness and experimental protocols. For \(x=\) 0.6, the Mn moments at the 8d position align completely in the in-plane direction. In this case, M\({}_{z}<\)M\({}_{x}\) and no signature of any topological phase is noticed. The variation of the effective magnetic anisotropy for different \(x\) values is shown in Fig. 1h. A local minimum is visible around \(x=\) 0.4. A detailed structural and magnetic characterization along with the ND results for the present samples is presented in the Supplementary Information [41].
## 1 LTEM results
For the real-space visualization of the above-mentioned topological spin textures, we have carried out detailed Lorentz transmission electron microscopy (LTEM) imaging using thin lamellae of the Mn\({}_{1.4}\)Pt\({}_{0.9}\)Pd\({}_{0.1}\)Sn\({}_{1-x}\)In\({}_{x}\) samples. Figure 2a-g depicts the over-focused (OF) LTEM images recorded on the (001)-oriented thin plates for \(x=\) 0.2 to 0.5. The formation of a triangular askx lattice for \(x=\) 0.2 at \(T=\) 250 K (\(>\)T\({}_{SR}\)) can be seen from Fig. 2a. We also find the formation of a hexagonal lattice of elliptical Bloch skx in this sample at T\({}_{SR}<\) T \(<\) 250 K [41]. A detailed temperature and field evolution of the skx and askx lattices is provided in the supplementary information [41]. Below \(T_{SR}\), we notice that the formation of nontopological bubbles (NT) is more favorable. In fact, mixed states of skx and NT are also found in the case of thicker lamellae (\(\sim\) 200 nm) [see supplementary] [41]. However, it is not possible to stabilize the askx phase below \(T_{SR}\). This can be understood from the fact that the dominating dipole interaction causes nearly degenerate energies for the skx, askx, and NT bubble phases in D\({}_{2d}\) symmetric materials. As can be seen from Fig. 2b, the skx and askx phases coexist in the case of \(x=\)0.3. The weak energy barrier between them can be overcome through the application of an in-plane magnetic field; hence, a reversible transformation between skx, NT, and askx can be realized [17]. In the case of \(x=\)0.4, the appearance of a triangular lattice of NT bubbles at \(T>\)\(T_{SR}\) and the conjunction of NT bubbles and skx at \(T<\)\(T_{SR}\) are shown in Fig. 2c \(\&\) d, respectively. In the present system, topologically trivial and non-trivial magnetic textures are present for In concentrations up to \(x=\)0.4. For \(x=\)0.5, we do not find any magnetic contrast (Fig. 2e).
**Spontaneous (anti)merons:** Now, we concentrate on the \(x=\)0.4 sample, which exhibits a noncollinear magnetic ordering throughout the ordered phase and satisfies the criterion \(M_{z}\sim M_{x}\). At zero magnetic field this sample shows the usual helical phase, consistent with the D\({}_{2d}\) symmetric system. Interestingly, at a temperature of \(T=\)112 K, nanometric-size local dark and bright dots start appearing at the interstitial positions of the helical stripes [Fig. 2f]. On further lowering the temperature to \(T=\)100 K (the lowest possible temperature achievable in our system), the magnetic contrast of these dots becomes very prominent, as can be seen in Fig. 2g. In addition, these interstitial dots appear as alternating black and white chains without any periodicity, as the distance between two pairs of chains is not equal (Fig. 2g). These dark and bright dots may indicate the formation of nanometric-size swirling spin textures. To confirm the nature of these interstitial spin textures, we have performed transport of intensity equation (TIE) mapping of the LTEM images taken in in-focus, under-focus and over-focus conditions. Figure 2h\(\&\)i show the in-plane magnetization mapping and the spin curling of the selected dots marked by the blue and violet dashed rectangles in Fig. 2g. The nature of the local magnetic field variation clearly establishes that these interstitial objects are merons and antimerons. The (anti)merons are vortex-like topological objects with an up/down core and an in-plane peripheral spin distribution that carry a topological charge n\({}_{Sk}\)= \(\pm\)\(\frac{1}{2}\). They are different from half (anti)skyrmions, which also hold n\({}_{Sk}\)=\(\pm\)\(\frac{1}{2}\) and are found at the edge of thin lamellae of Mn\({}_{1.4}\)Pt\({}_{0.9}\)Pd\({}_{0.1}\)Sn [42]. Although it is difficult to determine the exact core spin direction of the (anti)merons from the LTEM contrast, it can be safely said that if the meron has an upward core, the antimeron must have a downward core spin, as the z-component of the moment is maximum/minimum (upward/downward) at the middle of the dark and bright helical stripes. Therefore, the bright spots are merons with downward core spins, and the black spots are antimerons with upward core spins. We reach this conjecture by analogy with the LTEM contrast of the simulated spin texture, which will be discussed later.
### Field evolution of (anti)merons and their motion
In the present system the (anti)merons appear spontaneously on cooling the sample from the paramagnetic state. Therefore, it is important to understand their stability under an external magnetic field. Figure 3a-e depicts the field evolution of the pair of meron and antimeron chains. The interstitial (anti)merons are stable up to a field of 0.07 T [see Fig. 3a-d]. For higher fields the (anti)merons disappear, leaving only the helical stripe phase [see Fig. 3e]. We observe that the (anti)meron chains are not reversible under field reversal. Interestingly, the (anti)meron chains start drifting with the application of an external magnetic field. It can be seen that the (anti)meron chain with bright contrast remains almost fixed in its position, whereas a gradual shift of the meron chain with dark spots is visible. Figure 3f describes the field-dependent separation length of the pair of (anti)meron chains. Importantly, the drift is constrained along the stripe only. All the images were taken close to the pole positions so that the effect of the in-plane field on their drift motion is negligible. This study clearly demonstrates the motion of (anti)merons along the stripes.
## 2 Micromagnetic Simulation
All our experimental results show that the present Mn\({}_{1.4}\)Pt\({}_{0.9}\)Pd\({}_{0.1}\)Sn\({}_{1-x}\)In\({}_{x}\) samples with \(x=\) 0 to 0.4 can host different topological magnetic states depending on the temperature and the applied magnetic field. The stability of askx in D\({}_{2d}\) symmetric materials is well described by the Hamiltonian written as
\[\begin{array}{l}H={}-J\sum_{i}m_{i}\cdot(m_{i+\hat{a}}+m_{i+\hat{b}}+m_{i+ \hat{c}})\\ \hskip 14.226378pt+D\sum_{i}(m_{i}\times m_{i+\hat{a}}\cdot\hat{a}+m_{i}\times m _{i+\hat{b}}\cdot\hat{b})\\ \hskip 14.226378pt+K\sum_{i}(m_{c,i})^{2}-H\sum_{i}m_{c,i}\;,\end{array} \tag{1}\]
where the first and second terms are the conventional isotropic exchange and Dzyaloshinskii-Moriya interactions, and the third and fourth terms are the uniaxial anisotropy and Zeeman energy, respectively. \(\hat{a}\), \(\hat{b}\), \(\hat{c}\) are the unit vectors in the Cartesian coordinate system. It is well understood that the competition between the anisotropic DMI and the isotropic exchange interaction governs the propagation of the helical state along the [100] and [010] directions. The application of a magnetic field along the anisotropic magnetic axis converts the helical state into askx lattices. The large magnetization in these materials causes a dominant dipolar interaction, depending on the lamella thickness and temperature [38]. Moreover, the dipolar energy can overtake the anisotropic DMI and compete with the anisotropy to stabilize elliptical skx in the lower-temperature region [17, 36]. The elongation direction of these elliptical objects remains constrained along [100] and [010] only. Theoretically, the skyrmion lattice can be stabilized in an easy-plane magnetic system under an external field and can be transformed into the (anti)meron lattice state by increasing the easy-plane anisotropy [40]. Most systems show the (anti)merons as a lattice [16], inside a domain wall [15, 18], or as isolated objects in an easy-plane magnetic system [44, 45].
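As an illustration of how Eq. (1) can be evaluated in practice, the minimal numpy sketch below sums the four terms for a discretized spin configuration on a periodic lattice. It is only an illustration of the energy functional as written above, not the solver used in this work, and the lattice size, parameter values and function names are placeholder assumptions.

```
import numpy as np

def energy_eq1(m, J, D, K, H):
    """Evaluate the Hamiltonian of Eq. (1) for unit spins m of shape
    (Na, Nb, Nc, 3) on a periodic cubic lattice (components ordered a, b, c)."""
    m_a = np.roll(m, -1, axis=0)   # neighbour along a
    m_b = np.roll(m, -1, axis=1)   # neighbour along b
    m_c = np.roll(m, -1, axis=2)   # neighbour along c

    # Isotropic exchange: -J sum_i m_i . (m_{i+a} + m_{i+b} + m_{i+c})
    e_exchange = -J * np.sum(m * (m_a + m_b + m_c))

    # Anisotropic DMI as written in Eq. (1):
    # D sum_i [(m_i x m_{i+a}) . a_hat + (m_i x m_{i+b}) . b_hat]
    e_dmi = D * (np.sum(np.cross(m, m_a)[..., 0]) + np.sum(np.cross(m, m_b)[..., 1]))

    # Uniaxial anisotropy on the c component and the Zeeman term.
    e_anis = K * np.sum(m[..., 2] ** 2)
    e_zeeman = -H * np.sum(m[..., 2])
    return e_exchange + e_dmi + e_anis + e_zeeman

# Example with a random unit-spin configuration on a small lattice.
rng = np.random.default_rng(0)
m = rng.normal(size=(16, 16, 4, 3))
m /= np.linalg.norm(m, axis=-1, keepdims=True)
print(energy_eq1(m, J=1.0, D=0.3, K=0.1, H=0.0))
```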
In the present In-doped samples, the magnetic moment partially aligns in the in-plane direction, with a large magnetization component in the basal plane [see Fig. 1b-d]. To understand the origin of the (anti)meron formation in the present system, we have carried out detailed micromagnetic simulations implementing easy-plane anisotropy. We set \(H=0\) to echo the experimental observation. Therefore, the effective Hamiltonian contains the ferromagnetic exchange, the anisotropic DMI, and the easy-plane anisotropy. We take a random magnetization as the initial state and relax the system, mirroring the fact that the (anti)meron pairs form only after cooling the system to a low temperature from the paramagnetic state [see Fig. 2f \(\&\) g]. Figure 4a-c shows the magnetization configuration for different values of the in-plane anisotropy constant (K\({}_{u}\)). For low values of K\({}_{u}\), some paired and isolated antiskyrmions are formed [see Fig. 4a]. Further increasing the in-plane anisotropy in the presence of DMI mainly results in a non-collinear in-plane magnetic state. For K\({}_{u}\) = 2.0 \(\times\)10 \({}^{5}\) J/m\({}^{3}\), local meron-antimeron chains are stabilized within the non-collinear spin configurations [Fig. 4b]. We find the formation of both core-up and core-down merons and antimerons, as marked by the red and blue circles in Fig. 4b. By further increasing K\({}_{u}\), a mostly collinear in-plane magnetic state gets stabilized with some isolated chains and single (anti)merons [Fig. 4c]. The simulated results in Fig. 4b closely corroborate our experimental finding of spontaneous (anti)merons in the system. For a better correspondence of the experimental results with the simulated ones, the LTEM contrast of the magnetization configuration in Fig. 4b is shown in Fig. 4d. The LTEM contrast is sensitive only to the twisting of the spins, e.g. at domain walls, vortices, etc. The bright spots appearing within the faint bright stripes in Fig. 4d correspond to core-down Bloch-type merons [shown within the blue circle in Fig. 4e]. Similarly, core-up merons appear as dark spots within the faint dark stripes [shown in the red circle of Fig. 4e]. These simulated spin textures fully corroborate our experimental results. To justify the particle-like nature of these local objects, we calculate the topological charge density (TCD) [53], as shown in Fig. 4f. The TCD of these localized objects follows the definition n\({}_{sk}\)= \(\frac{pw}{2}\), where \(p\) is the polarization of the core spin (+1 for up, -1 for down) and \(w\) is the winding number (\(w=\)+1 for a vortex, \(w=\) -1 for an antivortex) [43]. Therefore, a meron-antimeron pair with opposite core spin polarity can lead to a total topological charge of +1 or -1.
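To make the topological-charge bookkeeping explicit, a minimal numpy sketch of the lattice evaluation of n\({}_{sk}=\frac{1}{4\pi}\int m\cdot(\partial_{x}m\times\partial_{y}m)\,dx\,dy\) is given below; the finite-difference scheme and the grid spacing are illustrative choices, not necessarily the exact discretization used for Fig. 4f.

```
import numpy as np

def topological_charge(m, dx=1.0, dy=1.0):
    """Topological charge density and total charge of a 2-D texture of
    unit vectors m with shape (Nx, Ny, 3)."""
    dm_dx = np.gradient(m, dx, axis=0)      # central differences along x
    dm_dy = np.gradient(m, dy, axis=1)      # central differences along y
    density = np.einsum('ijk,ijk->ij', m, np.cross(dm_dx, dm_dy)) / (4.0 * np.pi)
    n_sk = density.sum() * dx * dy          # a single meron/antimeron contributes +-1/2
    return n_sk, density
```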
## 3 Discussions and Conclusion
In earlier studies, Gao et al. have shown that a delicate fractionation of the magnetization results in the spontaneous formation of domain-wall (anti)merons in ferromagnetic Fe\({}_{5-x}\)GeTe\({}_{2}\) [15]. Similarly, in the amorphous easy-plane ferrimagnetic Gd-Fe-Co system, the periodic appearance of Bloch lines within the Bloch domain wall is responsible for the formation of meron-antimeron chains [18]. In the case of \(\beta\)-Mn type Co\({}_{8}\)Zn\({}_{9}\)Mn\({}_{3}\), the origin of the square-lattice (anti)merons is assigned to the overlap between two orthogonal helices propagating in the plane perpendicular to the magnetic field [16]. Nonetheless, several theoretical studies have indicated that a variation in magnetic anisotropy can lead to the emergence of topological (anti)merons [40, 46].
In the present case the doping of In in place of Sn serves two purposes. First, since the magnetic Mn concentration does not change, the effect of magnetic disorder in the system can be avoided. Secondly, the addition or reduction of the valence electron count maximally impacts the magnetic interactions that are linked to higher powers of the moments, such as the Ruderman-Kittel-Kasuya-Yosida (RKKY), biquadratic (\(\propto(S_{i}.S_{j})^{2}\)) [47], and topological-chiral (\(\propto[S_{i}.(S_{j}\times S_{k})]^{2}\)) [48] interactions. In a metallic system, the effective exchange strength may not be limited to the nearest-neighbor sites only. The itinerant electrons can give rise to an oscillatory RKKY-type interaction that extends beyond the nearest neighbors. For instance, a spin-canted state arises in Mn\({}_{2}\)Y(=Rh, Pt, Ir)Z due to exchange frustration mediated through the RKKY interaction [49]. Swekis et al. have shown that the origin of the spin-reorientation transition in Mn\({}_{1.5}\)PtSn can be associated with exchange frustration in the system [50]. Hence, the presence of the spin-canted state in the present system can be connected to the exchange frustration arising due to the addition of extra electrons to the system. As a result, the net canted magnetic moment in the OP/IP direction systematically decreases/increases with In concentration. Therefore, in the present system the gradual shift in the major magnetic component is attributed to the re-alignment of the magnetic anisotropy. The proportionate IP and OP moments in the presence of the anisotropic DMI in the D\({}_{2d}\) symmetric system can result in interstitial (anti)meron chains. This is also supported by our micromagnetic simulations.
In summary, we present a comprehensive study of the different topological magnetic phases found in the \(D_{2d}\) symmetric tetragonal Heusler system Mn\({}_{1.4}\)Pt\({}_{0.9}\)Pd\({}_{0.1}\)Sn\({}_{1-x}\)In\({}_{x}\). The stabilization of these topological objects is well connected with the systematic change in the magnetic structure, with different degrees of non-collinearity arising due to exchange frustration in the system. The antiskyrmion phase stabilizes only when the underlying spins are collinear in nature, whereas the non-topological bubbles are more favorable in a noncollinear spin background. At a critical noncollinear magnetic state where \(M_{z}\sim M_{x}\), we find the simultaneous presence of helical stripes and (anti)meron chains. Most importantly, the drift motion of the (anti)meron chains along the helical stripes under an external magnetic field mimics a 1-D-like deflection-free movement. Therefore, our proposal of interstitial (anti)merons with \(\pm\)\(\frac{1}{2}\) topological charge can be a potential alternative for the future development of racetrack memory.
**Experimental methods** Polycrystalline samples of Mn\({}_{1.4}\)Pt\({}_{0.9}\)Pd\({}_{0.1}\)Sn\({}_{1-x}\)In\({}_{x}\) were prepared by arc melting stoichiometric amounts of high-purity constituent elements. The samples were sealed in high-vacuum quartz tubes and kept at 750\({}^{\circ}\)C for seven days. The structural phase of the samples was confirmed by room-temperature powder X-ray diffraction (XRD) using Cu-K\({}_{\alpha}\) radiation (Rigaku). The composition and homogeneity of the samples were checked by field emission scanning electron microscopy (FESEM) and energy dispersive X-ray spectroscopy (EDS). Magnetic measurements were carried out using a superconducting quantum interference device vibrating sample magnetometer (SQUID VSM) (MPMS-3). Electron backscattered diffraction (EBSD) was used to find suitable grains for lamella preparation from the polycrystalline samples. The transmission electron microscopy (TEM) lamellae were prepared using a 30 kV Ga-ion-based Focused Ion Beam (FIB). A very low dose factor was used to thin down the lamellae to the desired thickness. The (001) orientation of the lamella was confirmed by performing selected area electron diffraction (SAED) in a 200 kV JEOL TEM (JEM-F200). The magnetic contrast was observed using the Lorentz TEM (LTEM) mode. A double-tilting liquid-nitrogen-based sample holder (GATAN-636) was used for temperature-dependent LTEM imaging.
**Transport of intensity equation (TIE)** LTEM imaging is a versatile technique to visualize various nano- to micro-scale magnetic textures in thin lamella samples. Interaction of the electrons with the in-plane magnetic field of the sample results in convergence (divergence) of the electron beam, which produces bright (dark) contrast under different defocus conditions. If a magnetic feature appears bright in the over-focus (OF) condition, then it should be dark in the under-focus (UF) condition, and vice versa. Extraction of the actual spin texture is possible by solving the transport of intensity equation (TIE) from the intensity maps of a series of digital images. The TIE is the differential equation relating the intensity and the phase of the electrons, given as
\[\frac{2\pi}{\lambda}\frac{\partial I(x,y,z)}{\partial z}=-\nabla_{xy}\cdot[I(x,y,z)\nabla_{xy}\phi(x,y,z)], \tag{2}\]
where \(\lambda\) is the electron wavelength, I is the intensity of the electron beam, and \(\phi\) is the phase of the electrons. A series of systematic images can be acquired to map the in-plane magnetization distribution of the thin lamella. For example, UF and OF images with equal defocus lengths, along with an in-focus (IF) image, can be used to map out the magnetization distribution. First, we extracted the electron-phase images and then mapped out the magnetization textures using the software QPt [54].
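For completeness, a minimal sketch of the standard Fourier-space solution of Eq. (2) is shown below. It assumes an approximately uniform in-focus intensity and a symmetric defocus pair, which are common simplifications and not necessarily what the QPt software does internally; all function and variable names are placeholders.

```
import numpy as np

def tie_phase(i_under, i_over, dz, wavelength, pixel_size, eps=1e-6):
    """Recover the electron phase from an under-/over-focus pair via the TIE,
    assuming an approximately uniform in-focus intensity i0."""
    i0 = 0.5 * (i_under.mean() + i_over.mean())
    didz = (i_over - i_under) / (2.0 * dz)              # finite-difference dI/dz
    ky = 2 * np.pi * np.fft.fftfreq(didz.shape[0], d=pixel_size)
    kx = 2 * np.pi * np.fft.fftfreq(didz.shape[1], d=pixel_size)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    # With I ~ i0, Eq. (2) reduces to a Poisson equation for the phase:
    # laplacian(phi) = -(2*pi / (lambda * i0)) * dI/dz, inverted in Fourier space.
    rhs_hat = np.fft.fft2((2 * np.pi / (wavelength * i0)) * didz)
    phi_hat = rhs_hat / (k2 + eps)                       # regularised 1/k^2 inversion
    phi_hat[0, 0] = 0.0                                  # the mean phase is undefined
    phi = np.real(np.fft.ifft2(phi_hat))
    return phi   # the in-plane induction map follows from z_hat x grad(phi)
```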
**Micromagnetic simulation** Micromagnetic simulations were carried out using the Object Oriented MicroMagnetic Framework (OOMMF) [51] to understand the possible mechanism of (anti)meron formation in D\({}_{2d}\) symmetric materials. We have taken a slab of dimensions 250\(\times\)250\(\times\)10 nm\({}^{3}\) and a unit cell of dimensions 1\(\times\)1\(\times\)1 nm\({}^{3}\). The Gilbert damping parameter (\(\alpha\)) was set to 0.5. We fixed the exchange strength (J) = 1.2 pJ/m, the DMI strength (D) = 0.5 mJ/m\({}^{2}\), the saturation magnetisation (Ms) = 7\(\times\)10\({}^{5}\) A/m, and the applied magnetic field (H) = 0. The in-plane anisotropy (\(K_{u}\)) was varied in the range of 1.0-3.0\(\times\) 10\({}^{5}\) J/m\({}^{3}\) to stabilize a meron-antimeron chain within the helical-like stripes.
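As a quick consistency check of these parameters (the derived numbers below are our own estimates using standard expressions, not values reported by the simulation itself), the exchange length and the DMI helix period come out compatible with the 1 nm cell size and the 250 nm slab:

```
import numpy as np

A   = 1.2e-12          # exchange stiffness, J/m  (1.2 pJ/m)
D   = 0.5e-3           # DMI strength, J/m^2      (0.5 mJ/m^2)
Ms  = 7.0e5            # saturation magnetisation, A/m
mu0 = 4.0e-7 * np.pi

l_ex = np.sqrt(2 * A / (mu0 * Ms ** 2))   # exchange length, ~2 nm (resolved by 1 nm cells)
L_D  = 4 * np.pi * A / D                  # period of a DMI-driven spiral, ~30 nm
print(f"exchange length = {l_ex * 1e9:.1f} nm, helical period = {L_D * 1e9:.1f} nm")
```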
**Data availability**
All data used to obtain the conclusions in this paper are presented in the paper and/or the Supplementary Materials. Other data may be requested from the authors. Please direct all inquiries to A.K.N. ([email protected]). |
2303.14063 | Comment on: In vitro prediction of the lower/upper-critical biofluid
flow choking index and in vivo demonstration of flow choking in the stenosis
artery of the animal with air embolism | In a recent paper published in Physics of Fluids, Sanal Kumar et al. present
a model of transonic compressible flows based on ideal gas theory that is
irrelevant to biofluid flow and there are flaws in the general reasoning. In
addition, the experimental attempts do not show any evidence of supersonic flow
and do not provide any support for the flawed theory. In this Comment, I
discuss why this paper and other very similar ones by the same group of authors
and published in a short time frame (in Physics of Fluids and other journals)
should have been retracted. | Thomas Podgorski | 2023-03-24T15:20:09Z | http://arxiv.org/abs/2303.14063v1 | Comment on: _In vitro_ prediction of the lower/upper-critical biofluid flow choking index and _in vivo_ demonstration of flow choking in the stenosis artery of the animal with air embolism
###### Abstract
Sanal Kumar _et al._[1] present a model of transonic compressible flows based on ideal gas theory that is irrelevant to biofluid flow and there are flaws in the general reasoning. In addition, the experimental attempts do not show any evidence of supersonic flow and do not provide any support for the flawed theory.
In a recent paper, Sanal Kumar _et al._[1] propose predictions of a flow choking index that is supposed to be relevant to the understanding of various vascular strokes and diseases which in their opinion are supposed to be triggered when a critical Mach number is reached. They also provide their own interpretation of issues related to air embolisms and describe experimental and numerical results which are claimed to be supporting their theory. However, besides the fact that the logical structure of the article is fuzzy, there are several major issues with this work, from the framework and applicability of the model, to logical fallacies in its development and conclusions as well as false statements on blood properties and irrelevant or non-physical experimental results. Only the main and most salient controversial points are discussed here. Note that some of these criticisms also hold for their previous paper in this journal [2] as it is based on the exact same erroneous model.
## I Relevance of the model
The main model equations (Eqs (1-5)), which were derived in one of the authors' previous publications [3], are simply based on a mass conservation equation and the classic equation of state and expression of the speed of sound for an _ideal gas_. This assumption, on which the model is based, is already a major flaw, since liquids and biofluids, including blood, can certainly not be considered ideal gases (if it were so, following the well-known \(PV=nRT\) law, our body volume would double when going from sea level to Himalayan mountains).
In other words, even considering that (in the strict sense) any material, solid or fluid, is compressible, the ideal gas equation is not even remotely an approximation of their behavior. Blood being mainly composed of water, its compressibility is close to that of water, a well-tabulated property which is around \(4.4\times 10^{-10}\) Pa\({}^{-1}\) at body temperature and around atmospheric pressure [4], with only little variation with pressure and temperature. This is \(10^{5}\) times lower than the compressibility of ideal gases at atmospheric pressure. This fully justifies that blood is safely considered nearly incompressible for all practical flow conditions, with solid proof from a plethora of theoretical, numerical and experimental studies available in the literature. Equations (1-5) are therefore irrelevant to biofluids and only apply to ideal gases.
Now, even supposing that these equations could be modified by replacing the ideal gas law by the equation of state of a real fluid, the general approach, which consists in looking for critical Mach numbers (i.e. suspecting that supersonic flows occur in the cardiovascular system), is bound to fail. The well-documented range of blood flow velocities is between 1 mm/s (capillaries) and less than 1 m/s (arteries) everywhere in the circulatory system [5; 6], while on the other hand the speed of sound in blood, which is also well documented in textbooks [7] and widely used in medical ultrasonic techniques, is close to that of water, that is about 1500 m/s. In other words, the Mach number in blood flows is always less than \(10^{-3}\), meaning that the "Sanal flow choking" theory presented in the commented paper (and other similar papers by the same group of authors), even if it were to be corrected for real fluids, is irrelevant to cardiovascular flows.
Following this clarification, it is clear that none of the _in silico_ results presented in Figures 7-11 of the paper represent realistic blood flow in arteries. No numerical simulation of blood flow in arteries using relevant values of physical parameters (viscosity, density) and boundary conditions (flow rates, pressures, velocities, channel dimensions), even including turbulence modeling [8], would lead to the flow patterns shown in Figures 7-11, where the Mach number goes up to 2. Unfortunately, none of the parameter values, boundary conditions and length scales used in these simulations are given in the article (the axial distance axis has no units in any of these figures). Only the video linked to Fig. 8 shows a scale bar, from which we note that their simulated artery has an irrelevant diameter between 10 cm and 20 cm. In addition, the inlet Mach number in these figures also has completely unrealistic values of order 1, meaning that the authors assume that blood velocity has nonsensical values of the order of kilometers per second when entering the depicted artery section. Stunningly, some of these figures are _the exact same_ as the ones they show in other papers about aircraft or rocket aerodynamics: for instance Figure 7, which according to the caption shows numerical simulations of blood flow in an artery, is exactly the same as Figure 1 in a previous paper on aircraft dynamics [9], and has also been used in another previous paper where it is supposed to apply to different cases [2]. Therefore, these simulation results, which were performed with unrealistic boundary conditions and unrealistic parameter values, must be totally discarded with regard to their applicability to blood flows.
As an additional note, the model equations are based on circular reasoning [3]: they assume that a boundary layer develops in the channel (through unspecified mechanisms), which effectively reduces the cross-section of the flow, which because of mass conservation leads to an increase of the flow velocity, leading to equation 5, which is nothing more than a reformulation of the trivial relationship between the local hydraulic diameter and local velocity in a channel imposed by mass conservation. This triviality can obviously not predict the existence nor the thickness of this boundary layer as a function of distance along the flow axis, since it is a single equation relating two unknowns (the so-called boundary layer thickness and the local average velocity). The specific heat capacities of water (blood's main constituent) are \(C_{p}=4.18\) kJ/(kg.K) and \(C_{v}=4.09\) kJ/(kg.K) at \(T=37^{\circ}\)C, as can be found in any handbook [10] (values are similar for blood [11]); the heat capacity ratio is then \(\gamma=1.02\). It is indeed always close to 1 in all physiologically relevant cases, contrary to what the authors want us to believe in the introduction (it only goes up to 1.03 at \(T=42^{\circ}\)C, a lethal temperature). Therefore the trivial mass conservation equation (Eq. 5a) almost reduces to \(Ud^{2}\simeq\) constant and no "choking" or supersonic velocity is expected with realistic inlet velocities, pressures and channel diameters, even in the most pathological cases. Finally, equation 1 cannot represent systolic-to-diastolic pressure ratios since their modeling does not involve any time dependency or transient dynamics but only steady flow: it is an illicit use and combination of classic equations of compressible fluid theory taken out of context.
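The two order-of-magnitude checks underlying this argument can be written out explicitly; the short sketch below simply restates the values quoted above.

```
cp, cv = 4.18, 4.09        # kJ/(kg.K), water near body temperature
gamma = cp / cv            # heat capacity ratio, ~1.02 (nowhere near ideal-gas values)

u_blood = 1.0              # m/s, upper bound of arterial blood velocity
c_blood = 1500.0           # m/s, speed of sound in blood (close to water)
mach = u_blood / c_blood   # ~7e-4, i.e. always below 1e-3
print(f"gamma = {gamma:.3f}, Mach = {mach:.1e}")
```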
## II Flawed Experiments
In an attempt to support their theory (which for the reasons stated above is irrelevant to blood and is based on flawed and circular reasoning), the authors describe a confusing ensemble of experimental measurements that do not provide much valuable or relevant results and do not support their claims.
* The measurements of the heat capacity ratio (HCR) of blood provided in Table 1 are completely unphysical, ranging from 5 to 118. No material, even the most exotic one, exhibits such values, and even for ideal gases, the equipartition theorem states that the maximum value is \(5/3\simeq 1.667\), reached for a monoatomic gas. The values shown in Table 1 are therefore wrong.
* A confusing sentence in the methodology section says _"We observed that around 60-85\({}^{\circ}\)C, all the blood samples of human being boiloff in a non-linear fashion"_. Blood, which is made mainly of water, does not boil at these temperatures.
* Similarly, in section II, they say _"It is crystal clear from the case report of Razavi et al. that the COVID-19 patient suffers from gas embolism because the patient's temperature exceeds 37.5 \({}^{\circ}\) C"_. This entails two serious problems, the first one being that the cited publication (reference 8 in their publication list) does not make any mention of gas embolism in the reported clinical case; it is therefore pure speculation from the authors. Then, they seem to imply in this sentence (and at other instances in the paper) that blood starts boiling or evaporating at temperatures above 37.5 \({}^{\circ}\)C, which, had it been true, would not have allowed the development of life on Earth as we know it. Fortunately, the boiling point of blood (mainly composed of water) is close to 100\({}^{\circ}\)C at atmospheric pressure. Gas embolisms occur only in two situations, either following accidental injection of gas bubbles into the circulation (i.e. during surgery) or due to degassing of dissolved gases (N\({}_{2}\), CO\({}_{2}\)) following sudden and large decompression (i.e. underwater divers ascending too quickly to the surface). The reported COVID-19 case is obviously not concerned.
* The _in vivo_ experiment of Fig. 12 is perplexing: they injected high-pressure air into a rat's artery to induce air embolism. It is hard to understand what they were trying to prove: no quantitative pressure and velocity measurements were made and the flow rate imposed by the syringe pump is not specified either. Despite the authors' claim, the numerical simulations of Fig. 10 and 11 are not correlated to the experiment at all since no parameter value corresponds to experimental ones. This cannot constitute a proof of the claims of the paper.
* Figures 17, 18 and 19 only show compressed air being blown through a plastic tube but do not demonstrate anything either qualitatively (no specific observations are described) or quantitatively, since no quantitative measurements are made in these experiments. Besides, the connection between these situations and blood flow in arteries is virtually nonexistent. And there is not even a demonstration of supersonic flow in these experiments.
* More generally the focus on air embolisms in the second half of the paper highlights a logical fallacy in the study: initially, the authors discuss blood flow, attempt to measure its calorimetric properties in order to support their theory, but in the end they claim to make experimental demonstrations with air.
In conclusion, the experimental investigations performed by the authors and comments based on other studies from the literature do not support the claims derived from the model, which itself is based on wrong assumptions.
|
2309.01353 | Real-time pedestrian recognition on low computational resources | Pedestrian recognition has successfully been applied to security, autonomous
cars, Aerial photographs. For most applications, pedestrian recognition on
small mobile devices is important. However, the limitations of the computing
hardware make this a challenging task. In this work, we investigate real-time
pedestrian recognition on small physical-size computers with low computational
resources for faster speed. This paper presents three methods that work on the
small physical size CPUs system. First, we improved the Local Binary Pattern
(LBP) features and Adaboost classifier. Second, we optimized the Histogram of
Oriented Gradients (HOG) and Support Vector Machine. Third, We implemented fast
Convolutional Neural Networks (CNNs). The results demonstrate that the three
methods achieved real-time pedestrian recognition at an accuracy of more than
95% and a speed of more than 5 fps on a small physical size computational
platform with a 1.8 GHz Intel i5 CPU. Our methods can be easily applied to
small mobile devices with high compatibility and generality. | Guifan Weng | 2023-09-04T04:34:58Z | http://arxiv.org/abs/2309.01353v1 | # Real-time pedestrian recognition on low computational resources
###### Abstract
Pedestrian recognition has successfully been applied to security, autonomous cars, Aerial photographs. For most applications, pedestrian recognition on small mobile devices is important. However, the limitations of the computing hardware make this a challenging task. In this work, we investigate real-time pedestrian recognition on small physical-size computers with low computational resources for faster speed. This paper presents three methods that work on the small physical size CPUs system. First, we improved the Local Binary Pattern (LBP) features and Adaboost classifier. Second, we optimized the Histogram of Oriented Gradients (HOG) and Support Vector Machine. Third, We implemented fast Convolutional Neural Networks (CNNs). The results demonstrate that the three methods achieved real-time pedestrian recognition at an accuracy of more than 95% and a speed of more than 5 fps on a small physical size computational platform with a 1.8 GHz Intel i5 CPU. Our methods can be easily applied to small mobile devices with high compatibility and generality.
## 1 Introduction
Computer vision has been used widely in a variety of applications including medical, military, industry, services, and scientific research [1]. In particular, pedestrian recognition in images and videos is a challenging task that attracts the attention of the scientific community and industry alike. It is important in a wide range of applications that intersect with many aspects of our lives: surveillance systems and airport security, autonomous driving and driver assistance systems in high-end cars [2, 3], human-robot interaction and immersive, interactive entertainments, smart homes and assistance for senior citizens that live alone, and people-finding for military applications [4]. It is also a prerequisite for tasks on mobile devices. For most mobile devices, however, there are some specific constraints that make pedestrian recognition particularly challenging, i.e., limited computational resources, limited physical size, and limited energy. For example, the drones proposed in [5] have limited space inside to fit a small battery and a small computer with low computational resources. The same is true for other types of mobile devices like drones [6, 7], and modular robots [8, 9]. These constraints make real-time pedestrian recognition on such small physical-size mobile devices a problematic challenge.
In this paper we focus on pedestrian recognition, i.e., people assuming poses that are common while standing or walking. Pedestrian recognition is complex mostly because of the high variability that characterizes the pedestrians' projections on the camera image plane. The appearance of a pedestrian on the image is influenced by the person's pose, his or her clothing, occlusions, and the atmospheric conditions that contribute to the illumination of the scene [10]. Background clutter also plays a role in making the detection difficult. That is, the diversity of pedestrians in nature makes real-time pedestrian recognition on low computational hardware a difficult challenge.
Although Convolutional Neural Networks (CNNs) became the state-of-the-art approaches with high accuracy for object recognition [11, 12], including pedestrian recognition [13], CNNs are computationally expensive. A variety of studies have been proposed for real-time pedestrian recognition with faster speed [7, 6, 14]; however, all of them work on GPU systems that outperform regular CPU platforms. Therefore, we propose and compare three methods that work on CPU systems with small physical size and low computation resources for real-time pedestrian recognition.
After reviewing and considering the state-of-the-art methods in real-time pedestrian recognition, this paper presents three approaches: improved LBP features with an AdaBoost classifier, improved HOG features with an SVM classifier, and an optimized convolutional neural network [15]. First, we use an improved method based on LBP features and an AdaBoost classifier, because the LBP features perform well at extracting the contour features of pedestrians and the AdaBoost classifier is fast. Second, this paper presents improved HOG features with an SVM classifier. We optimized the speed of the HOG features; in particular, the improved approach outperforms the original method [16] with its exhaustive search. Third, we achieve a real-time CNN for pedestrian recognition by optimizing the hyper-parameters.
To assess how our methods work for real-time pedestrian recognition, we investigate the accuracy and speed under the following conditions:
1) The speed must be above 5 fps to meet the requirement that it is real-time.
2) The accuracy must be higher than 95%.
Therefore, the main contribution of this paper is three methods for real-time pedestrian recognition on small mobile devices with low computational power and small physical size. The remainder of this paper is structured as follows. In section II, we describe related work in object recognition, particularly in real-time pedestrian recognition. A detailed description of our methods is given in section III. We present the experimental results and analyze and discuss these results in section IV. Last, we conclude this paper and provide an outlook on future research.
## 2 Related work
As one of the early real-time object recognition techniques, P. Viola and M. Jones [17, 18] proposed the famous method of simple features and a cascade AdaBoost classifier for rapid object detection. Although it worked fast on a desktop computer with an Intel Pentium III, it was proposed for face detection and did not work well for pedestrian recognition. In 2005, N. Dalal and B. Triggs [16] proposed the classic solution of Histograms of Oriented Gradients (HOG) features and a Support Vector Machine (SVM) classifier for human detection. This method became popular in object recognition for human detection. The HOG features and linear SVM classifiers are fast and accurate. However, the exhaustive search component is computationally expensive, so it is not fast enough on low-performance computational hardware. Therefore, we propose an optimization for faster speed on a low-performance system.
In recent years, CNNs became the state-of-the-art methods for object recognition [13], e.g., R-CNN [19], Fast R-CNN [20], Faster R-CNN [21], YOLO [22], and SSD [23]. Although these methods have high accuracy, they usually require expensive computation and work on GPU systems. They are far from working on low-performance computational hardware [24, 25].
Both early and recent popular methods have been widely and successfully applied in many areas, such as pedestrian recognition, face recognition, etc. However, there is a common issue: all of them require expensive computation, hence they only work on powerful computational hardware like GPU systems. A real-time solution on a DSP-based embedded system was proposed in [26]. This solution works only on the dedicated DSP hardware platform, not on a general CPU system. An exciting solution was proposed based on HOG features and an SVM classifier for pedestrian detection at 135 fps on a desktop computer equipped with an Intel Core i7 870 and a GPU [27]. Similarly, [28] presented an implementation of vehicle recognition at 4 fps at a resolution of 1224 by 370 pixels based on the HOG feature and a linear SVM classifier. However, none of these methods is fast enough on low-performance systems.
An interesting convolutional neural network for object recognition was proposed in [29]. Although this work optimized Fast R-CNN on an embedded platform for real-time object recognition, it works at a speed of 1.85 fps on a combined CPU and GPU system. Similarly, a low-complexity fully convolutional neural network that works on a weak GPU platform was proposed in [30] for object recognition based on YOLO. Yet, neither [30] nor [29] is fast enough on a small-size CPU system.
An alternative approach was proposed in [31, 32] that implemented real-time object recognition using wireless communication between the mobile robot and servers. [33] implemented near-real-time object recognition for drones by offloading the computation onto an off-board computation cloud. Although using a server with powerful computational resources can be a feasible solution, several applications need methods that can operate independently without a server. For instance, the communication time between the drones and servers is affected by variations in wireless bandwidth, and this can form a severe bottleneck [34].
Although some of the above methods have fast speed and high accuracy, they still do not work well given the constraints related to small-size CPU computation systems. In addition, there are many studies that work on different low-computational hardware for real-time object recognition. An end-to-end deep vision network model was proposed to predict possible good grasps, which works in real time on a Baxter robot at a rate of 80 frames per second using a GPU system [35]. [36] used the exclusive Qualcomm Snapdragon Flight board with an embedded 2.4 GHz processor to implement a visual-inertial drone system for real-time moving object detection. Although this computational board has good performance and small size, it is a proprietary system exclusive to the drone, and the method only recognizes moving objects.
In summary, existing methods usually rely on powerful computing resources. In spite of this, some of the algorithms using convolutional neural networks and linear SVM classifiers are reasonably fast, yet not fast enough to implement real-time pedestrian recognition on a regular CPU system with low computation resources.
## 3 Methodology
This work aims to implement real-time pedestrian recognition on low computational resources. This paper presents three approaches to achieve the requirements of a speed of 5 fps and an accuracy of 95%. LBP is a fast algorithm to extract the contour features of objects; in particular, it is very suitable for extracting the features of pedestrians. The AdaBoost classifier is a fast classifier. The method of HOG features and an SVM classifier is popular in pedestrian recognition, but its speed is not fast due to the exhaustive search [37]. Thus, this paper presents an optimization of the exhaustive search in HOG and SVM.
The main requirements for our methods are that they work on the low computational hardware in real-time, and with high accuracy. To this end, we propose three methods including LBP features and Adaboost classifier, HOG features and SVM classifier, and Convolutional neural network for real-time pedestrian detection. In this section, we describe the details of the three methods and the experimental setup and dataset.
Choosing suitable hardware is a crucial and difficult task. After considering several alternatives, such as central processing units (CPUs), graphic processing units (GPUs), digital signal processors (DSPs), and field programmable gate arrays (FPGAs) [38], we have chosen the Intel NUC6i5SYK microcomputer as the hardware in this work, since it offers a good trade-off between size, computing power, power dissipation, and price. In addition, the Intel NUC6i5SYK is a general CPU and Linux (optional) platform with good compatibility that facilitates the portability of the algorithms to other hardware. It has physical dimensions of \(111\times 111\times 35mm\), and embeds a 1.80 GHz Intel Core i5-6260U Processor. There are few mobile-device applications where the NUC does not fit, in particular drones and small robots. In the experiments, we train the classifiers on a CPU computer and test the trained classifiers on the Intel NUC6i5SYK. The experiments with HOG and SVM, and LBP and AdaBoost, are implemented based on the OpenCV library. The convolutional neural network is implemented based on Darknet.
### Dataset
Data sets are a fundamental tool for comparing detection algorithms, fostering advances in the state of the art. The INRIA pedestrian dataset is a traditional pedestrian dataset. The pedestrians in its images have relatively regular postures; most of them are persons walking or standing, captured from different angles without much deviation from the horizontal perspective. The INRIA person data set [16] and VOC (VOC2007 [39], VOC2012 [39]) are very popular in the pedestrian recognition community, both for training detectors and reporting results. Compared to the INRIA dataset, VOC2007 and VOC2012 are more challenging. Although the VOC datasets originally provide samples and labels for 20 classes of objects, one of them is the person class, and we keep those labels for the class of person. The pedestrians in the VOC images have a large variety of postures and are captured from different angles, deviating severely from the horizontal perspective. To obtain a more comprehensive dataset, in this work we combine the pedestrian data sets VOC (VOC2007, VOC2012) and INRIA as the data sets to train and test our methods. The statistics of the data sets are shown in table 1.
As shown in table 1, in this work we use 614 images from the INRIA dataset and 6095 images from the VOC dataset as the training data sets, 288 images from the INRIA dataset and 2007 images from the VOC dataset as the testing data sets. For the training process, we use the bootstrap approach to train the SVM and AdaBoost classifiers until the classifiers perform well.
### LBP features and AdaBoost classifier
Originally, the computation of the LBP feature is based on comparisons of adjacent single pixels [40]. However, the strategy provided by OpenCV is based on comparisons between adjacent small batches consisting of several adjacent pixels. Since the value of a batch is the sum of adjacent pixels in a rectangular area, it can be computed by using the integral image. This new strategy improves the speed of the algorithm and does not affect the accuracy of detection much.
The original strategy is shown in Fig.1. We generate the binary pattern based on single pixels directly. Each pixel around the central pixel, whose value is 112, is compared with the central pixel. If it is bigger, the binary value is 1. Otherwise, the binary value is 0.
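A minimal sketch of this original single-pixel strategy is given below; the neighbour ordering and comparison convention are illustrative and may differ from OpenCV's internal implementation.

```
import numpy as np

def lbp_code(img, i, j):
    """8-bit LBP code of the pixel at (i, j), built from its 8 single-pixel
    neighbours compared against the central value."""
    c = img[i, j]
    # Neighbours listed clockwise starting from the top-left corner.
    neighbours = (img[i - 1, j - 1], img[i - 1, j], img[i - 1, j + 1], img[i, j + 1],
                  img[i + 1, j + 1], img[i + 1, j], img[i + 1, j - 1], img[i, j - 1])
    code = 0
    for bit, n in enumerate(neighbours):
        if n > c:               # bigger than the centre -> binary value 1
            code |= 1 << bit
    return code
```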
The new strategy adopted by OpenCV is shown in Fig.2. Matrix2 is filled with the summations of every 4 adjacent pixels in matrix1. The binary pattern is calculated based on the comparisons of 8 non-central summations with the central summation.
The size of the image is \(S=height\times width\). The time complexity of generating the local binary pattern for the image is \(f(8S)\) in the original strategy. In the new strategy, the time complexity for calculating the integral image is \(f(S)\) and for the generation of LBP based on summations is
\[f(\frac{S}{4}\times 8)=f(2\times S) \tag{1}\]
Thus, the total complexity is \(f(3S)\). It is smaller than the time complexity of the original strategy.
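The batch-based strategy can be sketched in the same way: the integral image makes each rectangle sum an O(1) lookup, so the 3x3 grid of batch sums around a position costs a constant number of operations. The batch size and indexing conventions below are illustrative assumptions.

```
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0, dtype=np.int64), axis=1)
    return ii

def block_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def batch_lbp_code(ii, y, x, bh=2, bw=2):
    """LBP code from the 3x3 grid of (bh x bw)-pixel batch sums whose central
    batch has top-left corner (y, x)."""
    s = [[block_sum(ii, y + r * bh, x + c * bw, bh, bw) for c in (-1, 0, 1)]
         for r in (-1, 0, 1)]
    centre = s[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(order):
        if s[r][c] > centre:
            code |= 1 << bit
    return code
```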
### HOG features and SVM classifier
```
Data: 5885 sample ROIs
Extract ROI image features
Initialise NEAT with parameters
Initialise \(P_{0}\) of ANN
while not \(P=P_{max}\) or \(f_{g}=f_{\tau}\) for some \(g_{i}\) do
    Select 500 random sample ROIs
    Evaluate \(g_{i}\in P_{G}\)
    Select sub-group \(g\subset P=\{g_{i},...,g_{j}\}\) for reproduction based on \(f_{g}\)
    Recombine: cross-over \(g\) and create \(g^{*}\)
    Mutate \(g^{*}\) with \(p_{\rm mut}\)
    Define \(P_{G+1}\)
end while
Select \(\max_{g_{i}\in P}f_{g}\)
```
**Algorithm 1** Evolving a real-time object recognition NN.
The initialization of the cache and the preprocessing for HOG feature computation are based on the fact that the range of pixel values is fixed to [0, 255]. Thus, the gradients \(dx\) and \(dy\) must be in the range of [-255, 255].
The computation of the 9 bins of the Histogram of Gradients is based on the gradients, which are \(dx\) in the x-direction and \(dy\) in the y-direction for each pair of adjacent pixels. The gradient at each point has a magnitude and an angle formed by \(dx\) and \(dy\). Then, for each gradient, there are 2 bins adjacent to it. The magnitude votes weights for these 2 bins based on linear interpolation.
In Fig.3, O is a single point in an image and A is its magnitude. bin0 and bin1 are the 2 bins adjacent to this magnitude. Since we know the angle of the gradient, we can obtain the angle formed by bin0 and the direction of the gradient. Thus, the voted weights for the 2 bins can be computed. For each point, we need to store its 2 voted weights and the indices of the 2 bins.
\begin{table}
\begin{tabular}{l r r r r r}
\hline
 & training & \(lab_{tr}\) & testing & \(lab_{te}\) & total \\
\hline
INRIA & 614 & 1958 & 288 & 605 & 2563 \\
VOC & 6095 & 13256 & 2007 & 4528 & 17784 \\
total & 6709 & 15214 & 2295 & 5133 & 20347 \\
\hline
\end{tabular}
\end{table}
Table 1: The statistics of the data sets. The training column shows the number of images in the training data set and the testing column the number of images in the testing data set. \(lab_{tr}\) and \(lab_{te}\) show the total number of labels on the training and testing images, respectively.
As we mentioned above, given \(dx\) and \(dy\) for a specific point, all the values above are fixed. And since the values of \(dx\) and \(dy\) are in the range of [-255, 255], all the data can be precomputed and stored in memory. Thus, these data can be pre-loaded before the computation of the HOG feature. This mechanism accelerates the feature computation by up to 10% compared to its original speed.
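The precomputation idea can be sketched as follows: for every possible (dx, dy) pair we store the gradient magnitude, the lower of the two adjacent orientation bins, and its interpolation weight, so that at detection time the histogram votes reduce to table lookups. The bin layout and interpolation convention below are illustrative, not OpenCV's exact tables.

```
import numpy as np

NBINS = 9
BIN_WIDTH = 180.0 / NBINS        # unsigned orientation, 20 degrees per bin

dx = np.arange(-255, 256)
dy = np.arange(-255, 256)
DX, DY = np.meshgrid(dx, dy, indexing='ij')

mag = np.hypot(DX, DY)                              # gradient magnitude
ang = np.degrees(np.arctan2(DY, DX)) % 180.0        # orientation in [0, 180)
pos = ang / BIN_WIDTH - 0.5                         # fractional bin position
lower_bin = np.floor(pos).astype(np.int32) % NBINS  # index of the lower adjacent bin
frac = pos - np.floor(pos)

weight_lower = (1.0 - frac) * mag                   # vote for lower_bin
weight_upper = frac * mag                           # vote for (lower_bin + 1) % NBINS

# At run time, a gradient (gx, gy) is looked up as
# lower_bin[gx + 255, gy + 255], weight_lower[gx + 255, gy + 255], etc.
```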
Besides utilizing the cached data, we also adjust the parameters for the whole detection process of HOG+SVM. We examine and test 3 parameters: the step size of scanning the whole image, the number of levels in the image pyramid, and the size of the samples for training and detection. We try different combinations of these parameters in order to balance the accuracy and efficiency of the program.
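For reference, these tuned quantities map directly onto OpenCV's multi-scale detection interface. The snippet below uses the built-in default people detector (64x128 window) purely to illustrate where the scan step and pyramid ratio enter; in our experiments a custom SVM trained on 32x64 samples takes its place, and the file name and parameter values shown are placeholders.

```
import cv2

hog = cv2.HOGDescriptor()                                   # default 64x128 window
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("test.png")                              # placeholder input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
rects, weights = hog.detectMultiScale(
    gray,
    winStride=(8, 8),   # step of the sliding-window scan
    padding=(8, 8),
    scale=1.1)          # ratio between consecutive pyramid levels
```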
### Convolutional Neural Network
For the experiments with HOG+SVM and LBP+AdaBoost, we modify the dataset to improve the efficiency of the programs. We resize each sample provided by the INRIA dataset from the size of \(96\times 160\) to \(32\times 64\), and resize the samples from VOC to \(32\times 64\) strictly, despite their large variety of sizes. Since HOG focuses on gradients and LBP focuses on the comparison of adjacent pixels, the three RGB channels have only a subtle influence on the results. Thus, we transform all the RGB images into gray-scale images. This also reduces the computational expense and improves the efficiency of the program.
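This preprocessing step amounts to two OpenCV calls per sample; the fixed 32x64 size is the one described above, and the function name is ours.

```
import cv2

def preprocess_sample(sample_bgr):
    """Convert a colour sample to grayscale and force it to the 32 x 64 window."""
    gray = cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, (32, 64))   # (width, height); proportions are not preserved
```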
The disadvantage of the LBP+AdaBoost and HOG+SVM models is that the exhaustive search does not consider the distortion of a sample whose original size does not have the standard proportion of \(32\times 64\). In the experiment with HOG+SVM, we use the size of \(32\times 64\) as the standard size of samples. However, images can be distorted since we do not only consider samples with such a size: we collect all the images with a person inside and resize them to \(32\times 64\) without considering the proportion.
## 4 Experiments
### The results
Evaluating the quality performance involves 2 values: FPPI (false positives per image) and the miss rate. The detections for an image can only fall into 3 cases: a detection matching a labelled area (true positive); a detection not matching any labelled area (false positive); a labelled area without any detection (missed positive). We focus on the last 2 cases, which are both negative results. The number of detections not matching any labelled area is the value of FPPI. The number of labelled areas without any detection is the value of the miss rate. The final goal of the pedestrian detection algorithm is to make both of them as low as possible. The lower they are, the better the results are.
Figure 1: Gradients and adjacent bins in HOG
For each image, we have labelled areas and detected areas and both of them are rectangles.
* To calculate FPPI, we need to count the detections that do not match any labelled area in an image. When the overlap between a labelled area and a detected area is less than 50% of the labelled area, the detection does not match the labelled area, which is a false positive result [41] (a sketch of this matching check is given after this list).
* To calculate the miss rate, we need to count the labelled areas that do not match any detection in an image.
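A minimal sketch of this per-image bookkeeping is given below; rectangles are assumed to be (x, y, w, h) tuples and the helper names are ours.

```
def overlap_fraction(label, det):
    """Fraction of the labelled rectangle covered by the detection."""
    lx, ly, lw, lh = label
    dx, dy, dw, dh = det
    ix = max(0, min(lx + lw, dx + dw) - max(lx, dx))
    iy = max(0, min(ly + lh, dy + dh) - max(ly, dy))
    return (ix * iy) / float(lw * lh)

def evaluate_image(labels, detections, thresh=0.5):
    """False positives (for FPPI) and missed labels (for the miss rate) in one image."""
    false_positives = sum(
        1 for d in detections
        if not any(overlap_fraction(l, d) >= thresh for l in labels))
    missed = sum(
        1 for l in labels
        if not any(overlap_fraction(l, d) >= thresh for d in detections))
    return false_positives, missed
```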
In order to describe the quality performance on its own and the relationship between speed and quality performance specifically, we use 2 kinds of graphs to describe our experimental results. In the plot, the x-axis represents the average time consumed by the model, and the y-axis denotes the number of occurrences of matching between the labelled area and the detection area (Fig. 4). When the intersection area is bigger than half of the labelled area, it is regarded as a match [41]. This plot aims to present the relationship between speed and quality performance. The higher the value is, the better the performance is.
The efficiency of LBP+AdaBoost is the highest among the three methods, since the computation of the feature is the fastest and the process of running a decision tree is also fast. Moreover, according to the experiment, its detection accuracy is close to that of HOG+SVM. We assume that this is because we do not have enough training data to explore the further differences between them. The convolutional neural network model based on Darknet has the highest accuracy.
### The Improvements
The experimental results demonstrate that our methods work well for real-time pedestrian recognition on a low-computational CPU platform. However, we are still inspired by many approaches to improve the speed of pedestrian recognition. In future work, we plan to implement the following approaches for faster speed on smaller computers with lower computational power, which work on devices with more constraints.
For the HOG+SVM model, we expect that the accuracy can be further improved by applying a Gaussian spatial window at the scale of the sample size, with no Gaussian spatial window applied to blocks. This is also a way to improve its efficiency. The computation of HOG considers many factors, including the Gaussian window application, the magnitude and angle of gradients, and the different sizes of blocks, cells and samples. Some of these features are redundant, and further experiments are needed to extract the factors most critical to the result. For example, the experiment in C4 [42] has mentioned that the comparison of neighbour pixels is more important than the comparison of magnitudes. Still, we do not know what kind of information extracted by HOG is the most critical to making it so successful for human detection. If we can extract just one or two of the most significant pieces of information for feature computation, we can make a breakthrough in efficiency and accuracy.
The strategy of computing the LBP feature over a batch of pixels, rather than a single pixel, as the basic unit can also be applied to the computation of HOG in further research.
The accuracy of the convolutional neural network based on darknet is clearly better than that of the other models. This is mainly attributed to the way convolutional neural network models generate features. Convolutional features combined with max pooling are more general
Figure 2: Definition of intersection area between labeled area and detection area
than other features, including HOG and LBP, which rely strictly on the shapes and outlines of the objects.
## 5 Conclusions
Since the computational capacity of the CPU is limited, we can only improve efficiency by optimising individual stages of the program. In this work, we investigate real-time pedestrian recognition on small physical-size computers with low computational resources for faster speed. This paper presents three methods that work on small physical-size CPU systems. First, we improved the Local Binary Pattern (LBP) features and Adaboost classifier. Second, we optimized the Histogram of Oriented Gradients (HOG) and Support Vector Machine. Third, we implemented fast Convolutional Neural Networks (CNNs). The results demonstrate that the three methods achieved real-time pedestrian recognition at an accuracy of more than 95% and a speed of more than 5 fps on a small physical size computational platform with a 1.8 GHz Intel i5 CPU. Our methods can be easily applied to small mobile devices with high compatibility and generality. In future, we will apply other AI technologies, such as knowledge graphs [43, 44], to improve this work.
|
2310.13634 | Optimising the exchange of Majorana zero modes in a quantum nanowire
network | Determination of optimal control protocols for Majorana zero modes during
their exchange is a crucial step towards the realisation of the topological
quantum computer. In this paper, we study the finite-time exchange process of
Majorana zero modes on a network formed by coupled $p$-wave superconducting
one-dimensional nanowires. We provide scalable computational tools for
optimising such an exchange process relying on deep learning techniques. To
accomplish the scalability, we derive and implement an analytic formula for the
gradient of the quantum infidelity which measures the error in the topological
quantum gate generation in the Majorana zero modes exchange. Our optimisation
strategy relies on learning the optimised transport protocol via a neural net
which is followed by direct gradient descent fine tuning. The optimised
exchange protocols in the super-adiabatic regime discover the fact that the
Majorana zero modes must necessarily stop before crossing a junction point in
the network. We explain that this is caused by fast changes in the energy gap
of the system whenever one of the Majorana zero modes approaches a junction
point. In particular, the energy gap exhibits oscillations followed by a sharp
jump. We explain this phenomenon analytically in the regime where the Majorana
zero modes are completely localised. Finally, we study how the disorder in the
quantum nanowire affects the exchange protocols. This shows that understanding
the disorder pattern would allow one to improve quantum gate fidelity by one to
two orders of magnitude. | Tomasz Maciazek, Aaron Conlon | 2023-10-20T16:32:17Z | http://arxiv.org/abs/2310.13634v1 | # Optimising the exchange of Majorana zero modes in a quantum nanowire network
###### Abstract
Determination of optimal control protocols for Majorana zero modes during their exchange is a crucial step towards the realisation of the topological quantum computer. In this paper, we study the finite-time exchange process of Majorana zero modes on a network formed by coupled \(p\)-wave superconducting one-dimensional nanowires. We provide scalable computational tools for optimising such an exchange process relying on deep learning techniques. To accomplish the scalability, we derive and implement an analytic formula for the gradient of the quantum infidelity which measures the error in the topological quantum gate generation in the Majorana zero modes exchange. Our optimisation strategy relies on learning the optimised transport protocol via a neural net which is followed by direct gradient descent fine tuning. The optimised exchange protocols in the super-adiabatic regime discover the fact that the Majorana zero modes must necessarily stop before crossing a junction point in the network. We explain that this is caused by fast changes in the energy gap of the system whenever one of the Majorana zero modes approaches a junction point. In particular, the energy gap exhibits oscillations followed by a sharp jump. We explain this phenomenon analytically in the regime where the Majorana zero modes are completely localised. Finally, we study how the disorder in the quantum nanowire affects the exchange protocols. This shows that understanding the disorder pattern would allow one to improve quantum gate fidelity by one to two orders of magnitude.
## 1 Introduction
A topological quantum computer realises quantum gates by physically exchanging quantum quasiparticles called anyons [1]. The possibility of topological quantum computation is a very attractive prospect, because such a computer would be intrinsically robust against local noise [2]. This is because topological quantum gates do not change when the trajectories of the exchanged anyons are perturbed (mathematically, the quantum gate depends only on the homotopy class of the braid). What is more, in systems supporting anyons the quantum computations are realised within a subspace which is energetically gapped, thus any quantum state used in topological quantum computing is characterised by long decoherence times.
One of the main candidates for physical systems able to realise topological quantum computation are systems that support a particular type of anyons called Majorana zero modes (MZMs). There is strong theoretical evidence for the existence of MZMs in two-dimensional \(p\)-wave superfluids/superconductors [3, 4, 5] assisted by major experimental efforts to realise MZMs in iron-based superconductors [6, 7, 8, 9, 10, 11]. However, a practical implementation of topological quantum computation based on MZM braiding has proved to be a major challenge. It is believed that braiding might be accomplished more easily in one-dimensional architectures. In particular, MZMs can also be realised in semiconductor nanowires coupled to a superconductor [12, 13, 14, 15, 16] as well as in other condensed matter and photonics systems [17, 18] which provide experimental realisations of Kitaev's one-dimensional superconductor model [19]. There, the MZMs are localised at the endpoints of the topological regions in the nanowire and can be transported along the wire adiabatically by tuning local voltage gates distributed along the wire [14]. If several such nanowires are coupled together to form a junction (or more generally, a network), then MZM braiding can be accomplished by adiabatic transport through the junction. Remarkably, such a braiding of MZMs moving on 1D nanowire networks produces the same quantum gates as MZM braiding in 2D [14, 20]. Although the quantum gates that can be obtained by MZM braiding do not permit universal quantum computation [1, 21], their implementation would constitute a critical step towards the realisation of a universal topological quantum computer.
Our presented work concerns the general issue of optimal control of MZMs in networks of 1D quantum nanowires. Similar topics have been studied before in relation to the quantum control of MZMs in a single wire [22, 23, 24, 25]. The general objective is to transport the MZMs in a given finite amount of time so that the MZM motion profiles (the MZM positions in time) maximise the quantum fidelity. Even though the MZMs are protected by the energy gap, such finite-time manipulations may cause leakage of the quantum state to higher energy levels. Although exponentially small for slow manipulations, these effects are important as they may constitute the ultimate source of errors. In order to mitigate such non-adiabatic transitions in different non-adiabatic motion regimes, the _bang-bang_[23] and _jump-move-jump_ protocols [25] have been proposed. In this paper, we work exclusively in the (super)-adiabatic regime where the evolution times are much longer than the inverse of the energy gap and the MZM velocity is smaller than the superconducting order parameter [24]. In this regime, when the MZMs are transported by small distances in a single wire, the optimal transport protocol is the simple _ramp-up/ramp-down_ protocol [22]. Although MZM transport in a single wire is well-studied, it turns out that optimising the full exchange process poses its own challenges and has its distinct features. Firstly, there is the technical difficulty of efficiently simulating long-time quantum evolution and computing the gradient of the quantum fidelity for such a long-time process. To overcome this, we build a scalable machine learning system (a neural net with three hidden layers, inspired by the approach of the work [25]) where the gradient of the quantum fidelity with respect to MZM positions in time is computed analytically. This allows us to mitigate the so-called caching problem in automatic differentiation, thus significantly reducing the required amount of RAM (which would otherwise be a significant bottleneck issue, effectively allowing for simulations of only extremely small systems). Our code is openly available online [26]. Secondly, as we explain in Sections 4 and 5, the energy gap exhibits complex behaviour when one of the MZMs crosses a junction point during the exchange. In particular, the energy gap oscillates
and jumps sharply (although the amplitude of these oscillations is much smaller than the amplitude of the energy gap jump). This means that even in case when the MZMs move adiabatically when being located far away from the junction point, the time derivative of system's Hamiltonian may become large when one of the MZMs approaches the junction, making the adiabatic evolution difficult to maintain. Due to this effect, optimising the entire exchange protocol is a nontrivial task. As we show in Section 4, the optimised exchange profiles share one common feature, namely they require the MZMs to slow down and stop before crossing the junction.
Machine learning (ML) techniques have recently seen a surge of applications to condensed matter physics, see e.g. [27, 28, 29] for recent reviews. Of particular relevance in the context of our presented work is [25], where they use ML techniques to study the optimisation of shuttling a MZM along a wire. In addition in [30] and [31], ML techniques have been used to optimise the design of nanowire-based systems supporting MZMs and to predict the profile of disorder in a nanowire. We return to these topics in Section 4. What is more, reinforcement learning is applied to optimise the compilation of an arbitrary qubit gate into a sequence of elementary braiding moves [32].
This paper is structured as follows. In Section 2 we describe the theoretical setup for studying quantum control of MZMs. In Section 3 we take a closer look at quantum fidelity and compute its gradient analytically. In Section 4 we present details of the numerical optimisation protocol, present the optimised exchange profiles and discuss the effects of disorder in the nanowire. In Section 5 we explain the aforementioned jump in the energy gap and discuss its consequences for the shape of the optimised exchange profiles.
## 2 Theoretical setup
A trijunction consists of two chains (see also Figure 1): i) the horizontal chain of the length \(2N+1\) whose Hamiltonian reads
\[\begin{split}& H_{h}(t)=-\sum_{j=1}^{2N+1}\mu_{j}^{(h)}(t)\left(c_{j} ^{\dagger}c_{j}-\frac{1}{2}\right)-w\sum_{j=1}^{2N}\left(c_{j}^{\dagger}c_{j+1 }+c_{j+1}^{\dagger}c_{j}\right)\\ &+\sum_{j=1}^{2N}\left(\Delta_{h}\,c_{j}c_{j+1}+\overline{\Delta }_{h}\,c_{j+1}^{\dagger}c_{j}^{\dagger}\right)\end{split} \tag{1}\]
and ii) the vertical chain of the length \(N\) whose Hamiltonian reads
\[\begin{split}& H_{v}(t)=-\sum_{j=1}^{N}\mu_{j}^{(v)}(t)\left(d_{j} ^{\dagger}d_{j}-\frac{1}{2}\right)-w\sum_{j=1}^{N-1}\left(d_{j}^{\dagger}d_{j +1}+d_{j+1}^{\dagger}d_{j}\right)\\ &+\sum_{j=1}^{N-1}\left(\Delta_{v}\,d_{j}d_{j+1}+\overline{\Delta }_{v}\,d_{j+1}^{\dagger}d_{j}^{\dagger}\right),\end{split} \tag{2}\]
where the on-site potentials have the forms
\[\mu_{j}^{(h)}(t)=\mu_{0}-V_{j}^{(h)}(t),\quad\mu_{j}^{(v)}(t)=\mu_{0}-V_{j}^{ (v)}(t). \tag{3}\]
The Hamiltonian of the entire system is given by
\[H(t)=H_{h}(t)+H_{v}(t)+H_{h-v}, \tag{4}\]
where \(H_{h-v}\) is the coupling between the site \(N+1\) of the horizontal chain and the site \(1\) of the vertical chain. We consider the coupling of the form
\[H_{h-v}=-w\left(c_{N+1}^{\dagger}d_{1}+d_{1}^{\dagger}c_{N+1}\right)+\left( \Delta_{v}\,c_{N+1}d_{1}+\overline{\Delta}_{v}\,d_{1}^{\dagger}c_{N+1}^{ \dagger}\right). \tag{5}\]
Recall that, depending on the relationships between the coefficients \(\{\mu_{j}^{(h/v)}\}\), \(\Delta_{h/v}\), and \(w\), different regions of the trijunction may be in different phases. In particular, in the region where \(|\mu_{j}^{(h/v)}|<2w\) and \(\Delta_{h/v}\neq 0\) the system is in the topological phase with MZMs localised on the boundary of this region [19]. On the other hand, if \(|\mu_{j}^{(h/v)}|>2w\), then no MZMs appear in the corresponding region and the system is in the topologically trivial phase. In the numerical calculations in Sections 4 and 5 we take \(\Delta_{h}\equiv\Delta>0\) and \(\Delta_{v}=i\Delta\). Note that we cannot assume the superconducting order parameters to be real numbers everywhere in the system, as it would inevitably cause the existence of a \(\pi\)-junction during the exchange process causing level crossings and creating extra pairs of MZMs, see e.g. Section 5 or [14] for more details.
By changing the on-site potentials \(V_{j}^{(h)}(t)\) and \(V_{j}^{(v)}(t)\) one can control the MZMs so that they will move around the network. This can be realised experimentally in the keyboard-architecture setup for controlling MZMs [14, 33, 34] which physically corresponds to distributing voltage gates along the wire which are tuned whenever the local voltage needs to be changed. To model this, we assume \(V_{j}^{(h/v)}\) to have the shape of the sigmoid function. For instance, to place one topological region in the horizontal chain (stages \(I\) and \(IV\) of the exchange, see Figure 2), we set
\[\begin{split} V_{j}^{(h)}&=V_{0}\left(\sigma(j-x_{R }^{(I/IV)})+\sigma(x_{L}^{(I/IV)}-j)\right),\quad j=1,\ldots,2N+1,\\ V_{j}^{(v)}&=V_{0}\,\sigma(j),\quad j=1,\ldots,N, \end{split} \tag{6}\]
where \(0\leq x_{L}^{(I/IV)}<x_{R}^{(I/IV)}\leq 2N+1\) are the (approximate) positions of the MZMs, \(\mu_{0}\) is the uniform background potential satisfying \(|\mu_{0}|<2w\) and
\[\sigma(x)=\frac{1}{1+e^{-x}},\quad V_{0}>2w+\mu_{0}.\]
In order to move the left MZM to the right by some distance \(\Delta x\) in time \(T\), we parametrise \(x_{L}(s)=x_{L}+s\Delta x\), \(s=t/T\), see Figure 3. Similarly, in the configuration
Figure 1: The trijunction setup. We assume that the horizontal and vertical chains have identical hopping amplitudes, but possibly different superconducting order parameters, \(\Delta_{h}\) and \(\Delta_{v}\) respectively. Note the site labelling convention where the site \(N+1\) of the horizontal chain couples to the site \(1\) of the vertical chain.
where one of the MZMs is on the vertical chain and the other one is on the right side of the horizontal chain (stage \(II\) of the exchange), the on-site potentials read
\[V_{j}^{(v)} = V_{0}\left(\sigma(j-x_{R}^{(II)})+\sigma(j-x_{V}^{(II)}+1)\right), \quad j=1,\ldots,N\] \[V_{j}^{(h)} = V_{0}\left(\sigma(j-x_{R}^{(II)})+\sigma(N-x_{V}^{(II)}-j+1) \right),\quad j=N+1,\ldots,2N+1 \tag{7}\] \[V_{j}^{(h)} = V_{0}\,\sigma(N+1-j),\quad j=1,\ldots,N,\]
Figure 3: The MZMs are transported by shifting the positions of the sigmoid functions that determine the on-site potentials. In the figure, the on-site potentials are represented by the orange lines and the MZM amplitudes by the blue lines. The plots are largely schematic, but they represent MZMs in the Hamiltonian used in Section 4. This particular configuration of MZMs takes place in stages \(I\) and \(IV\) of the exchange. \(H_{h-v}\) represents the coupling across the junction, see Equation (5).
Figure 2: The four stages of the MZM exchange. The blue strings denote the topological regions and the red dots denote the positions of the MZMs. In each stage, the positions of the MZMs are determined by the vectors \(\mathbf{s}_{j}^{(\star)}\) of the length \(N_{T}\), where \(j=1,2\) labels the MZMs and \(\ast=I,II,III,IV\) labels the stages. The vectors \(\mathbf{s}_{j}^{(\star)}\) determine the MZM positions according to Equations (21)- (24) presented in Section 4.
where \(N+1\leq x_{R}^{(II)}\leq 2N+1\) and \(1\leq x_{V}^{(II)}\leq N\). Finally, in the configuration where one of the MZMs is on the vertical chain and the other one is on the left side of the horizontal chain (stage \(III\) of the exchange), the on-site potentials read
\[\begin{split} V_{j}^{(v)}&=V_{0}\left(\sigma(x_{L}^{ (III)}-j-N-1)+\sigma(j-x_{V}^{(III)})\right),\quad j=1,\ldots,N\\ V_{j}^{(h)}&=V_{0}\left(\sigma(x_{L}^{(III)}-j)+ \sigma(j-x_{V}^{(III)}-N-1)\right),\quad j=1,\ldots,N+1\\ V_{j}^{(h)}&=V_{0}\,\sigma(j-N-1),\quad j=N+2, \ldots,2N+1,\end{split} \tag{8}\]
where \(1\leq x_{L}^{(III)}\leq N+1\) and \(1\leq x_{V}^{(III)}\leq N\).
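For concreteness, the stage-\(I\) profile of Equation (6) can be evaluated directly. The following NumPy sketch simply tabulates \(V_{j}^{(h)}\) and \(V_{j}^{(v)}\) for given MZM positions; the function names and the choice of example parameters are ours and this is not the authors' code [26].

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stage_I_potentials(N, x_L, x_R, V0):
    """On-site potentials of Eq. (6): a single topological region between
    x_L and x_R on the horizontal chain; the vertical chain is gapped out."""
    j_h = np.arange(1, 2 * N + 2)            # horizontal sites 1 .. 2N+1
    j_v = np.arange(1, N + 1)                # vertical sites 1 .. N
    V_h = V0 * (sigmoid(j_h - x_R) + sigmoid(x_L - j_h))
    V_v = V0 * sigmoid(j_v)
    return V_h, V_v

# Parameters used later in the paper: N = 55, V0 = 30.1, x0 = 5.
V_h, V_v = stage_I_potentials(N=55, x_L=5.0, x_R=106.0, V0=30.1)
# mu_j = mu_0 - V_j then satisfies |mu_j| < 2w only between x_L and x_R.
```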
We work exclusively in the (super)-adiabatic regime where the evolution time \(T\) is much larger than the inverse of the energy gap of the system and the MZM velocity is smaller than the critical velocity \(v_{crit}=\Delta\), i.e.
\[v<v_{crit}=\Delta,\qquad T>\frac{2\pi}{E_{gap}}. \tag{9}\]
In the numerical calculations we make use of the Bogolyubov-de-Gennes form of the Hamiltonian (4), which is the Hermitian matrix \(H_{BdG}\) such that
\[H(t)=\frac{1}{2}\left(\begin{array}{cccc}\mathbf{C}^{\dagger}&\mathbf{D}^{ \dagger}&\mathbf{C}&\mathbf{D}\end{array}\right)\,H_{BdG}(t)\,\left(\begin{array} []{c}\mathbf{C}\\ \mathbf{D}\\ \mathbf{C}^{\dagger}\\ \mathbf{D}^{\dagger}\end{array}\right), \tag{10}\]
where \(\mathbf{C}^{T}=(c_{1}\ldots,c_{2N+1})\) and \(\mathbf{D}^{T}=(d_{1},\ldots,d_{N})\). The Bogolyubov-de-Gennes Hamiltonian can be diagonalised as
\[W(t)^{\dagger}H_{BdG}(t)W(t)=E(t), \tag{11}\]
where \(E(t)\) is the diagonal matrix containing the single-particle spectrum and \(W(t)\) is the matrix of eigenmodes, i.e. the Bogolyubov transformation diagonalising \(H_{BdG}(t)\). Recall that such a fermionic Bogolyubov transformation has the form [35]
\[W(t)=\left(\begin{array}{cc}U(t)&\overline{V}(t)\\ V(t)&\overline{U}(t)\end{array}\right), \tag{12}\]
where \(U(t)\) and \(V(t)\) are blocks of the size \((3N+1)\times(3N+1)\).
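To illustrate Equations (10)-(12) for a single chain, a BdG matrix can be assembled and diagonalised as in the sketch below. We use one common particle-hole convention; the authors' implementation may order the blocks or fix signs differently, so this should be read only as an indicative sketch.

```python
import numpy as np

def kitaev_bdg(mu, w, delta):
    """BdG matrix of the open Kitaev chain of Eq. (1) for on-site chemical
    potentials mu[j] (the mu_j of Eq. (1)).  One common convention:
        H_BdG = [[ h,  D ], [ -conj(D), -h.T ]],
    with h the hopping/chemical-potential block and D the antisymmetric
    pairing block."""
    n = len(mu)
    h = np.diag(-np.asarray(mu, dtype=complex))
    for j in range(n - 1):
        h[j, j + 1] = h[j + 1, j] = -w
    D = np.zeros((n, n), dtype=complex)
    for j in range(n - 1):
        D[j + 1, j] = np.conj(delta)
        D[j, j + 1] = -np.conj(delta)
    return np.block([[h, D], [-np.conj(D), -h.T]])

# Spectrum of a uniform topological chain (|mu| < 2w, delta != 0):
H = kitaev_bdg(mu=[1.0] * 40, w=2.0, delta=0.55)
energies = np.sort(np.abs(np.linalg.eigvalsh(H)))
# The near-zero entries correspond to the Majorana edge pair; the first
# clearly nonzero entry approximates the bulk gap protecting it.
```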
## 3 Quantum fidelity and its gradient
When exchanging the MZMs, we consider the \(p\)-wave Hamiltonian that changes in time, \(H(t)\), \(0\leq t\leq T\) such that \(H(0)=H(T)\). In order to simulate the quantum evolution, we divide the time interval into \(N_{T}\) timesteps, each timestep having the length \(\Delta t=T/N_{T}\). We approximate the quantum evolution by the Suzuki-Trotter formula in the BdG picture
\[\mathcal{O}_{ev}\approx\prod_{k=1}^{N_{T}}\exp\left(-i\Delta tH_{BdG}(t_{N_{T} -k+1})\right),\quad t_{j}=j\,\Delta t, \tag{13}\]
where we employ the convention that the evolution in the first timestep corresponds to the far-right element of the product. The quantum fidelity compares the evolved eigenmodes \(W_{ev}=\mathcal{O}_{ev}W(0)\) with the target eigenmodes \(W(T)\) from Equation (11). In particular, we define the quantum fidelity as the overlap between the Bogolyubov
vacuums corresponding to the eigenmodes \(W(T)\) of \(H(T)\) and to the evolved eigenmodes \(W_{ev}\). The result is given by the Onishi formula [36, 37, 35]
\[\mathcal{F}=\left|\det\left[\,U(T)^{\dagger}U_{ev}+V(T)^{\dagger}V_{ev}\right] \right|, \tag{14}\]
where \(U_{ev}\) and \(V_{ev}\) are the respective blocks of \(W_{ev}\).
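To make the above concrete, here is a minimal numerical sketch of the Trotterised evolution of Equation (13) and the Onishi fidelity of Equation (14). The function names, the `bdg_list` interface and the block-slicing convention are our own assumptions, not the authors' released code [26].

```python
import numpy as np
from scipy.linalg import expm

def evolution_operator(bdg_list, dt):
    """Suzuki-Trotter product of Eq. (13); bdg_list[k] is H_BdG(t_{k+1})."""
    O = np.eye(bdg_list[0].shape[0], dtype=complex)
    for H in bdg_list:               # earliest timestep acts first (far right)
        O = expm(-1j * dt * H) @ O
    return O

def onishi_fidelity(W0, WT, O_ev):
    """Quantum fidelity of Eq. (14).  W0 and WT are the Bogolyubov eigenmode
    matrices of H_BdG(0) and H_BdG(T); their top-left and bottom-left blocks
    are the U and V blocks of Eq. (12)."""
    n = W0.shape[0] // 2
    W_ev = O_ev @ W0                 # evolved eigenmodes, cf. Eq. (16)
    U_T, V_T = WT[:n, :n], WT[n:, :n]
    U_ev, V_ev = W_ev[:n, :n], W_ev[n:, :n]
    return np.abs(np.linalg.det(U_T.conj().T @ U_ev + V_T.conj().T @ V_ev))
```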
Let us next briefly explain how to compute the gradient of the quantum fidelity. To this end, we use the following two identities
\[\partial_{\lambda}\left|z(\lambda)\right|=\frac{1}{\left|z\right|}\,\Re\left( \overline{z}\,\partial_{\lambda}z\right),\quad\partial_{\lambda}\det X( \lambda)=\det X\,\operatorname{tr}\left(X^{-1}\,\partial_{\lambda}X\right). \tag{15}\]
The role of the parameter \(\lambda\) in Equation (14) is played by the entries of the vectors \(\mathbf{s}_{j}^{(*)}\), \(j=1,2\) and \(*=I,II,III,IV\). Note that only the matrices \(U_{ev}\) and \(V_{ev}\) depend on such a \(\lambda\), since they come from the quantum evolution operator. What is more, they depend linearly on \(\mathcal{O}_{ev}\) as follows.
\[U_{ev}=P_{1}\mathcal{O}_{ev}W(0)P_{1}^{T},\quad V_{ev}=P_{2}\mathcal{O}_{ev}W( 0)P_{1}^{T}, \tag{16}\]
where \(P_{1}=(\mathbb{1},0)\), \(P_{2}=(0,\mathbb{1})\) with \(\mathbb{1}\) and \(0\) being matrices of the sizes \((3N+1)\times(3N+1)\). Thus, we can express the gradient of the quantum fidelity in terms of the gradient of the quantum evolution operator as
\[\partial_{\lambda}\mathcal{F}=\mathcal{F}\,\Re\left\{\operatorname{tr}\left[\left(U(T)^{\dagger}U_{ev}+V(T)^{\dagger}V_{ev}\right)^{-1}\left(U(T)^{\dagger}P_{1}+V(T)^{\dagger}P_{2}\right)\left(\partial_{\lambda}\mathcal{O}_{ev}\right)W(0)P_{1}^{T}\right]\right\}. \tag{17}\]
As we mentioned earlier in this section, we are interested in computing the gradient of \(\mathcal{F}\) with respect to the positions of MZMs in each timestep. If \(\lambda\) is the position of MZM with label \(1\) in the \(k\)-th timestep, i.e. \(\lambda=s_{1,i}^{(*)}\) for some \(i\) and \(*=I,II,III,IV\) (here, to simplify the notation, \(k\) enumerates all the timesteps collectively, while \(i\) enumerates only the timesteps within a given exchange stage), then the derivative \(\partial_{\lambda}\) affects only the \(k\)-th term in the product (13), i.e.
\[\partial_{\lambda}\mathcal{O}_{ev}=\mathcal{O}_{ev}^{(k,+)}\left(\partial_{ \lambda}e^{-iH_{BdG}(t_{k},\lambda)\Delta t}\right)\mathcal{O}_{ev}^{(k,-)}, \tag{18}\]
where \(\mathcal{O}_{ev}^{(k,-/+)}\) are the evolution operators before and after the timestep \(k\)
\[\mathcal{O}_{ev}^{(k,-)}=\prod_{l=1}^{k-1}\exp\left(-i\Delta tH_{BdG}(t_{k-l} )\right),\quad\mathcal{O}_{ev}^{(k,+)}=\prod_{l=k+1}^{N_{T}}\exp\left(-i \Delta tH_{BdG}(t_{N_{T}-l+k+1})\right).\]
Finally, the task at hand boils down to computing the derivative of the exponent in the Equation (18). This is a standard problem and there exist several techniques to address it. In this work, we choose to apply the following method [38, 39, 40].
\[\partial_{\lambda}e^{-iH_{BdG}(t_{k},\lambda)\Delta t}=W(t_{k})X_{k}W(t_{k}) ^{\dagger}, \tag{19}\]
where the \((p,q)\)-th entry of \(X_{k}\) reads
\[\begin{split}& ig_{p,q}^{(k)}\,\frac{e^{-i\Delta t\,\epsilon(t_{k})_{p}}-e^{-i\Delta t\,\epsilon(t_{k})_{q}}}{\epsilon(t_{k})_{p}-\epsilon(t_{k})_{q}},\quad p\neq q\\ & g_{p,p}^{(k)}\,\Delta t\,e^{-i\Delta t\,\epsilon(t_{k})_{p}},\quad p=q,\end{split} \tag{20}\]
and \(\epsilon(t_{k})_{p}\) is the \(p\)-th diagonal entry of \(E(t_{k})\) and \(g_{p,q}^{(k)}\) is the \((p,q)\)-th entry of
\[G^{(k)}=-i\,W(t_{k})^{\dagger}\left(\partial_{\lambda}H_{BdG}(t_{k},\lambda) \right)W(t_{k}).\]
Finding the derivative \(\partial_{\lambda}H_{BdG}(t_{k},\lambda)\) is a straightforward task, because only the diagonal entries of \(H_{BdG}(t_{k})\) depend on the positions of the MZMs and the dependency has the form of the simple sigmoid function, as shown in Equations (6), (7) and (8).
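As an illustration of Equations (19)-(20), the sketch below assembles the derivative of a single Trotter factor from the eigendecomposition of \(H_{BdG}(t_{k})\). The function name, the assumption of a non-degenerate spectrum away from the diagonal, and the way \(\partial_{\lambda}H_{BdG}\) is passed in are our own illustrative choices.

```python
import numpy as np

def trotter_factor_derivative(W_k, eps_k, dH_dlam, dt):
    """d/d(lambda) of exp(-i dt H_BdG(t_k)), following Eqs. (19)-(20).

    W_k, eps_k: eigenvectors (columns) and eigenvalues of H_BdG(t_k);
    dH_dlam: dH_BdG/d(lambda), nonzero only on the diagonal through the
    sigmoid potentials.  Assumes no degenerate pairs off the diagonal."""
    G = -1j * W_k.conj().T @ dH_dlam @ W_k
    phases = np.exp(-1j * dt * eps_k)
    de = eps_k[:, None] - eps_k[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        X = 1j * G * (phases[:, None] - phases[None, :]) / de
    np.fill_diagonal(X, np.diag(G) * dt * phases)   # diagonal case of Eq. (20)
    return W_k @ X @ W_k.conj().T                   # Eq. (19)
```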
## 4 Machine learning the optimised transport profiles
The strategy for optimising the transport profiles is twofold. Firstly, we use a neural net (NN) with eight sigmoid output neurons to generate the vectors \(\mathbf{s}_{j}^{(*)}\) of the length \(N_{T}/4\), where \(j=1,2\) labels the MZMs and \(*=I,II,III,IV\) labels the stages (see Figure 4). The NN architecture presented in Figure 4 has been determined as suitable for the problem at hand by trial and error iterations over different NN depths and hidden layer widths. The input of the NN, \(\tau\), is fixed to be the vector of \(N_{T}/4\) evenly spaced numbers over the interval \([0,1]\). The output vectors determine the positions of the MZMs in the respective stages according to Equations (21)-(24) below. The cost function for the neural net training is the quantum infidelity, \(1-\mathcal{F}\), (see Equation (14)) whose gradient with respect to NN's outputs we calculate analytically and subsequently backpropagate through the NN. We use the Adam optimiser [41] with the learning rate \(10^{-4}\). Secondly, after training the NN for \(100-200\) episodes, we fine tune the resulting profiles by running the gradient descent directly in the space of the vectors \(\mathbf{s}_{j}^{(*)}\) with the Adam optimiser and the learning rate \(10^{-6}\). We have found such a procedure to be most effective, because the NN is able to efficiently optimise the global shapes of the transport profiles with different layers of the NN learning the features of the curves on different scales. The smaller-scale fine tuning is most effectively done using the direct gradient descent.
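The following PyTorch sketch shows one plausible reading of the architecture of Figure 4, in which the network maps each normalised time value \(\tau\in[0,1]\) separately to eight sigmoid outputs. The class name, this pointwise reading of the input, and the way the externally computed infidelity gradient is backpropagated are our assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class ProfileNet(nn.Module):
    """Three ReLU hidden layers (1800, 1800, 1200) and eight sigmoid outputs,
    as described in Figure 4; each output column is one profile s_j^(*)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 1800), nn.ReLU(),
            nn.Linear(1800, 1800), nn.ReLU(),
            nn.Linear(1800, 1200), nn.ReLU(),
            nn.Linear(1200, 8), nn.Sigmoid(),
        )

    def forward(self, tau):            # tau: shape (N_T/4, 1)
        return self.net(tau)           # shape (N_T/4, 8)

model = ProfileNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
tau = torch.linspace(0.0, 1.0, 2000).unsqueeze(1)   # N_T/4 = 2000 time points
profiles = model(tau)   # columns give s_1^(I), s_2^(I), ..., s_2^(IV)
# The analytically computed gradient of the infidelity with respect to the
# profiles (Section 3) is backpropagated via
#   profiles.backward(gradient=d_infidelity_d_profiles)
# followed by optimiser.step().
```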
Let us next specify how the positions of MZMs in Equations (6), (7) and (8) are determined by the NN output vectors \(\mathbf{s}_{j}^{(*)}\) (recall also our convention for labelling the sites of the chains in Figure 1). The exchange stages are schematically shown in Figure 2. In stages \(I\) and \(IV\) of the exchange both MZMs are located on the horizontal chain, and the potential profile is given by Equation (6). The positions of the MZMs for each time step in stage \(I\) read (note that we use vector notation where \(\mathbf{x}_{L/R}^{(*)}\) are
Figure 4: The neural net architecture that we used for optimising the MZM transport profiles. The neural net has three hidden layers of 1800, 1800 and 1200 ReLU units respectively. The eight-unit sigmoid output layer determines the MZM transport profile. The cost function is the quantum infidelity, i.e. \(1-\mathcal{F}\). The input of the NN is fixed to be the vector of \(N_{T}/4\) evenly spaced numbers over the interval \([0,1]\). The output vectors determine the positions of the MZMs in the respective stages.
vectors of the length \(N_{T}/4\) that contain the positions of the MZMs in each time step)
\[\begin{split}\mathbf{x}_{L}^{(I)}&=x_{0}+\mathbf{s}_{ 1}^{(I)}(N+1-x_{0}),\\ \mathbf{x}_{R}^{(I)}&=\mathbf{s}_{2}^{(I)}(N+1)+ \left(1-\mathbf{s}_{2}^{(I)}\right)(2N+1-x_{0}),\end{split} \tag{21}\]
where \(x_{0}\) is the smallest distance to which the MZMs are allowed to approach the edge of the system. For the calculations presented in this section we took \(x_{0}=5\). In stage \(IV\) the positions of the MZMs are switched, so we have
\[\begin{split}\mathbf{x}_{L}^{(IV)}&=\mathbf{s}_{2} ^{(IV)}x_{0}+\left(1-\mathbf{s}_{2}^{(IV)}\right)(N+1),\\ \mathbf{x}_{R}^{(IV)}&=\left(1-\mathbf{s}_{1}^{(IV )}\right)(N+1)+\mathbf{s}_{1}^{(IV)}(2N+1-x_{0}).\end{split} \tag{22}\]
In stage \(II\) the MZM with label 1 is located on the vertical chain while the MZM with label 2 is located on the right half of the horizontal chain. The potential profile in this stage has the form (7), where
\[\begin{split}\mathbf{x}_{V}^{(II)}&=1-\mathbf{s}_{1 }^{(II)}+\mathbf{s}_{1}^{(II)}(N-x_{0}),\\ \mathbf{x}_{R}^{(II)}&=\mathbf{s}_{2}^{(II)}(N+1)+ \left(1-\mathbf{s}_{2}^{(II)}\right)(2N+1-x_{0}).\end{split} \tag{23}\]
In stage \(III\) the MZM with label 1 is located on the vertical chain while the MZM with label 2 is located on the left half of the horizontal chain. The potential profile in this stage has the form (8), where
\[\begin{split}\mathbf{x}_{V}^{(III)}&=\left(1- \mathbf{s}_{1}^{(III)}\right)(N-x_{0})+\mathbf{s}_{1}^{(III)},\\ \mathbf{x}_{L}^{(III)}&=\left(1-\mathbf{s}_{2}^{( III)}\right)(N+1)+\mathbf{s}_{2}^{(III)}x_{0}.\end{split} \tag{24}\]
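As a small worked example of Equation (21), the affine map from the sigmoid outputs to the MZM positions in stage \(I\) can be written as follows; the function name and example values are our own.

```python
import numpy as np

def stage_I_positions(s1, s2, N, x0=5):
    """MZM positions of Eq. (21) from sigmoid outputs s1, s2 in [0, 1]."""
    s1, s2 = np.asarray(s1, dtype=float), np.asarray(s2, dtype=float)
    x_L = x0 + s1 * (N + 1 - x0)
    x_R = s2 * (N + 1) + (1.0 - s2) * (2 * N + 1 - x0)
    return x_L, x_R

# s = 0 keeps both MZMs at their outer starting points, s = 1 brings both
# to the junction site N + 1:
print(stage_I_positions(0.0, 0.0, N=55))   # (5.0, 106.0)
print(stage_I_positions(1.0, 1.0, N=55))   # (56.0, 56.0)
```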
In Figure 5 we present the results of the numerical optimisation of the exchange motion profiles. We have first pre-trained the NN to output approximate linear motion (\(\mathbf{s}_{j}^{(*)}=\tau\)) or approximate harmonic motion (\(\mathbf{s}_{j}^{(*)}=\sin^{2}(\pi\tau/2)\)). It was necessary to pre-train the NN, because initialising it with random weights resulted in exchange motion profiles with quantum fidelity numerically equal to \(0.0\) and its gradient exhibiting a large plateau. The NN training followed by direct gradient descent in the \(\mathbf{s}_{j}^{(*)}\)-space allowed us to reduce the quantum infidelity by several orders of magnitude, from \(1-\mathcal{F}\sim 10^{-1}\) to \(1-\mathcal{F}\sim 10^{-4}\). Crucially, the NN has learned that the MZMs have to stop before crossing the junction. This is due to the gap jump effect which we explain in Section 5. The system size is \(3N+1\) with \(N=55\) and \(\Delta_{h}\equiv\Delta=0.55\), \(\Delta_{v}=i\Delta\), \(w=2.0\), \(\mu_{0}=1.0\), \(V_{0}=30.1\), and \(x_{0}=5\). The evolution time for each stage is \(T_{stage}=250\) and consists of \(2000\) timesteps. This set of parameters puts us well into the super-adiabatic regime, as \(T_{stage}>2\pi/E_{gap}\approx 15.7\) and the velocity \(v=55/250=0.22\) is lower than the critical velocity \(v_{crit}=\Delta\).
### Exchange in nanowires with disorder
In realistic setups, there are different types of noise that may potentially affect the above results. In particular, the presence of disorder in the nanowire will make the base potential \(\mu_{0}\) noisy [42, 30, 31]. To model this, we add a Gaussian noise term to the base potential that makes \(\mu_{0}\) vary slightly from site to site
\[\mu_{0,j}=\mu_{0}+\nu_{j},\quad\nu_{j}\sim\mathcal{N}(0,\sigma_{\nu}^{2}).\]
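A minimal sketch of this disorder model follows (function names are our own): a single fixed draw corresponds to the known-disorder scenario discussed in the following paragraphs, while redrawing the pattern every epoch corresponds to the online stochastic gradient descent scenario.

```python
import numpy as np

rng = np.random.default_rng()

def disordered_mu0(n_sites, mu0=1.0, sigma_nu=0.02):
    """Site-dependent base potential mu_{0,j} = mu_0 + nu_j, nu_j ~ N(0, sigma_nu^2)."""
    return mu0 + rng.normal(0.0, sigma_nu, size=n_sites)

# Known disorder: draw once and keep the pattern fixed during training.
fixed_pattern = disordered_mu0(3 * 55 + 1)      # 3N + 1 sites for N = 55
# Unknown disorder: call disordered_mu0(...) anew after every training epoch.
```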
We choose the same set of parameters as in the previous section (\(\mu_{0}=1.0\)) and set the noise variance to \(\sigma_{\nu}=0.02\). We have retrained the NN models following two different scenarios. In the first scenario, we assume that we have access to the exact noise pattern which is fixed throughout the entire training process. Experimentally, this means assuming that we are able to precisely measure the disorder pattern in the given nanowire sample. Understanding the disorder in hybrid superconductor-semiconductor nanowires has been recognised as one of the key challenges in the realisation of MZMs in solid state platforms [42, 31]. There are theoretical proposals showing that this may be accomplished by using the tunnel conductance data processed by machine learning techniques [31]. In the second scenario, we assume no knowledge about the disorder, so an appropriate way of optimisation in this case is to change the noise pattern after each NN training epoch/gradient step. In machine learning this is known as the online stochastic gradient descent method [43] applied
Figure 5: The optimised exchange profiles for the system of the size \(3N+1\) with \(N=55\), \(\Delta=0.55\), \(w=2.0\), \(\mu_{0}=1.0\), \(V_{0}=30.1\), \(x_{0}=5\). The evolution time of each stage was \(250.0\) (total time \(T=1000.0\)) and consisted of \(2000\) time steps (total number of time steps \(N_{T}=8000\)). We have considered two different starting profiles: the linear motion (no stops at the junction) and the harmonic motion (with stops at the junction). The optimisation consisted of \(120\) epochs of NN training which reduced the infidelities from \(0.3668\) to \(0.0017\) for the linear motion start and from \(0.0744\) to \(0.0008\) for the harmonic motion start. The NN training was followed by \(60\) gradient descent steps directly in the space of \(\mathbf{s}_{j}^{(*)}\) vectors, further reducing the final infidelities to \(0.0005\) and \(0.0003\) respectively. This may also be compared with the exact linear and harmonic motion infidelities which are \(0.2773\) and \(0.0062\) respectively. Crucially, the NN has learned that the MZMs have to stop before crossing the junction. The harmonic start seems more suitable for optimisation than the linear start, as it leads to slightly lower infidelity after the same amount of learning epochs and the produced motion profiles are more regular.
to a sample of systems with different disorder patterns. The results of training in the above two scenarios are summarised in Table 1. The training always consists of 120 episodes of NN training with the learning rate \(10^{-4}\) and 60 direct gradient descent steps in the \(\mathbf{s}_{j}^{(*)}\)-space with the learning rate \(10^{-6}\). The results show that linear and harmonic shuttle protocols can be significantly optimised whenever the sample disorder is accurately known. However, in the case when the disorder pattern is not known, our optimisation strategy (at least with the applied number of training epochs and the learning rates) does not outperform the simple harmonic shuttle protocol. This shows that knowing the disorder pattern in the nanowire sample allows one to improve the quantum gate generation fidelity by two orders of magnitude.
Furthermore, we have compared the average performance of the different trained models on a sample of 30 disorder patterns drawn from the Gaussian distribution with the variance \(\sigma_{\nu}=0.02\). The results presented in Table 2 confirm our previous conclusions that understanding the disorder pattern in the nanowire is necessary for accomplishing high fidelity quantum gate generation. We also conclude that the harmonic shuttle protocol with stops at the junction is a reasonable choice of MZM exchange protocol for systems with unknown disorder and a suitable starting point for optimisation when the disorder pattern is known.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline \multirow{2}{*}{Start} & \multirow{2}{*}{No training} & \multicolumn{3}{c}{Training} \\ & & No noise & Constant noise & Variable noise \\ \hline Linear & \(0.277\) & \(5\cdot 10^{-4}\) & \(6\cdot 10^{-4}\) & \(0.008\) \\ Harmonic & \(0.018\) & \(3\cdot 10^{-4}\) & \(9\cdot 10^{-4}\) & \(0.023\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The infidelities resulting from MZM transport profile optimisation for different types of noise and different starting shuttle protocols. The training consists of 120 episodes of NN training with the learning rate \(10^{-4}\) and 60 direct gradient descent steps in the \(\mathbf{s}_{j}^{(*)}\)-space with the learning rate \(10^{-6}\). The system parameters are the same as specified in Figure 5. The variance of the disorder noise is \(\sigma_{\nu}=0.02\). The learning protocol adapts efficiently to a known constant noise pattern, but fails to outperform the simple harmonic shuttle protocol (with stops at the junction) when the noise is unknown, i.e. allowed to vary during learning.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline \multirow{2}{*}{Start} & \multirow{2}{*}{No training} & \multicolumn{3}{c}{Training} \\ & & No noise & Constant noise & Variable noise \\ \hline Linear & \(0.265\pm 0.016\) & \(0.010\pm 0.003\) & \(0.015\pm 0.004\) & \(0.010\pm 0.002\) \\ Harmonic & \(0.016\pm 0.006\) & \(0.027\pm 0.007\) & \(0.033\pm 0.007\) & \(0.022\pm 0.005\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The infidelities for the different trained models averaged over a sample of 30 disorder patterns drawn from the Gaussian distribution with the variance \(\sigma_{\nu}=0.02\). Other system parameters are the same as specified in Figure 5. The training consists of 120 episodes of NN training with the learning rate \(10^{-4}\) and 60 direct gradient descent steps in the \(\mathbf{s}_{j}^{(*)}\)-space with the learning rate \(10^{-6}\). The results show that when the disorder is unknown, the simple harmonic shuttle protocol with stops at the junction or models trained from the linear motion start may be a suitable choice. In the case of the harmonic start, we can also clearly see the effects of overfitting when the model is fitted to a particular disorder pattern, but applied to a sample of systems with varying disorder.
## 5 The jump of the energy gap near the junction point
As we have pointed out in Section 4 and Figure 5, in the optimised exchange protocols the MZMs are stopping at the junction. This is due to the sharp drop of the energy gap which starts when one of the MZMs overlaps with the junction point (site \(N+1\) in our labelling convention). Qualitatively, the energy gap behaves in a complicated way when one of the MZMs approaches the junction point. In particular, the gap starts to oscillate when the transported MZM approaches the junction and then drops sharply when the MZM passes the junction (see Figure 6). Consequently, the time derivative of the system's Hamiltonian becomes large in this situation and the only way to mitigate this and maintain the approximate adiabaticity of the time evolution is for the MZMs to slow down and effectively stop before crossing the junction point. In this section, we explain the drop in the energy gap in the completely localised regime, i.e. \(|\Delta|=w\) and \(\mu=0\). The oscillations seem to be more difficult to explain analytically as they are present only in the settings where the MZMs have some nonzero localisation length.
To explain the energy gap, we consider two \(p\)-wave chains of equal lengths (chain \(L\) and chain \(R\)) that are initially decoupled. Both chains have the parameters \(w=|\Delta|\)
Figure 6: The behaviour of the energy gap for the system of the size \(3N+1\) with \(N=60\) for two different sets of parameters. The blue line labelled as _sharp localisation_ corresponds to \(\Delta=1.8\), \(w=2.0\), \(\mu_{0}=0.0\), \(V_{0}=30.1\), \(x_{0}=5\) where the MZMs are sharply localised. The orange line labelled as _training_ corresponds to \(\Delta=0.55\), \(w=2.0\), \(\mu_{0}=1.0\), \(V_{0}=30.1\), \(x_{0}=5\) which is the same set of parameters that was used for NN training in Section 4. One can see different types of energy gap oscillations, depending on how sharply the MZMs are localised.
and \(\mu=0\). Their Hamiltonians read
\[\begin{split}& H_{X}=-\Delta\sum_{j=1}^{N-1}\left(c_{X,j}^{\dagger}c _{X,j+1}+c_{X,j+1}^{\dagger}c_{X,j}\right)\\ &+\Delta\sum_{j=1}^{N-1}\left(e^{i\phi_{X}}\,c_{X,j}c_{X,j+1}+e^ {-i\phi_{X}}\,c_{X,j+1}^{\dagger}c_{X,j}^{\dagger}\right),\quad\Delta>0.\end{split} \tag{25}\]
where \(X=L,R\). To simplify the calculations, we will assume that \(\phi_{L}=0\) and \(\phi_{R}\equiv\phi\in[0,2\pi[\). Recall that the \(L/R\) Hamiltonians can be diagonalised in the Majorana representation [19]
\[\gamma_{2j-1}^{(X)}=e^{-i\phi_{X}/2}c_{X,j}^{\dagger}+e^{i\phi_{X}/2}c_{X,j},\quad\gamma_{2j}^{(X)}=i\left(e^{-i\phi_{X}/2}c_{X,j}^{\dagger}-e^{i\phi_{X}/2}c_{X,j}\right). \tag{26}\]
Then, we have
\[H_{X}=i\Delta\sum_{j=1}^{N-1}\gamma_{2j}^{(X)}\gamma_{2j+1}^{(X)}. \tag{27}\]
In particular, the four MZMs \(\gamma_{1}^{(X)}\), \(\gamma_{2N}^{(X)}\), \(X=L,R\) are the edge modes that do not enter \(H_{X}\). The energy gap of this system is equal to
\[E_{gap,uncoupled}=\Delta. \tag{28}\]
Let us next couple site \(N\) of chain \(L\) with site \(1\) of chain \(R\) using the coupling term (5), i.e.
\[H_{L-R}=-\Delta\left(c_{L,N}^{\dagger}c_{R,1}+c_{R,1}^{\dagger}c_{L,N}\right)+ \Delta\left(e^{i\phi}\,c_{L,N}c_{R,1}+e^{-i\phi}\,c_{R,1}^{\dagger}c_{L,N}^{ \dagger}\right), \tag{29}\]
so that the Hamiltonian of the entire system reads \(H=H_{L}+H_{R}+H_{L-R}\). The coupling Hamiltonian in terms of the Majorana operators reads
\[H_{L-R}=i\Delta\left(\sin\left(\frac{\phi}{2}\right)\,\gamma_{2N-1}^{(L)}\gamma _{1}^{(R)}+\cos\left(\frac{\phi}{2}\right)\,\gamma_{2N}^{(L)}\gamma_{1}^{(R)} \right). \tag{30}\]
Thus, in order to diagonalise the entire system, it is enough to diagonalise the part
\[H_{eff}=H_{L-R}+i\Delta\,\gamma_{2N-2}^{(L)}\gamma_{2N-1}^{(L)}=\frac{i}{2}\Delta\times\]
\left(\begin{array}{cccc}\gamma_{2N-2}^{(L)}&\gamma_{2N-1}^{(L)}&\gamma_{2N}^{(L)}&\gamma_{1}^{(R)}\end{array}\right)\left(\begin{array}{cccc}0&1&0&0\\ -1&0&0&\sin\frac{\phi}{2}\\ 0&0&0&\cos\frac{\phi}{2}\\ 0&-\sin\frac{\phi}{2}&-\cos\frac{\phi}{2}&0\end{array}\right)\left(\begin{array}{c}\gamma_{2N-2}^{(L)}\\ \gamma_{2N-1}^{(L)}\\ \gamma_{2N}^{(L)}\\ \gamma_{1}^{(R)}\end{array}\right). \tag{31}\]
The above matrix can be diagonalised analytically and the resulting eigenenergies read \(\pm\Delta\sqrt{1\pm\sin\frac{\phi}{2}}\). Thus, we have
\[\frac{E_{gap,coupled}}{E_{gap,uncoupled}}=\sqrt{1-\sin\frac{\phi}{2}}\leq 1. \tag{32}\]
In particular, when \(\phi=\pi\) the two chains form the so-called \(\pi\)-junction where the gap closes and the two MZMs remain localised at the junction points despite the presence of the coupling. This remains true also outside the completely localised regime [14]. However, when \(\phi\neq\pi\), then the Majoranas \(\gamma_{2N}^{(L)}\) and \(\gamma_{1}^{(R)}\) no longer have zero energy and the entire system has just two MZMs localised at the endpoints of the connected
\(L-R\) chain. As we can see in Figure 6, the qualitative features of the energy gap jump remain true even outside the completely localised regime where the Equation (32) provides a rough estimate for the amplitude of the gap jump. There, we have \(\phi=\pi/2\), so if the MZMs were perfectly localised, the gap in the middle of stage \(II\) of the exchange would be equal to \(\sqrt{1-1/\sqrt{2}}\approx 0.54\) times the gap in the middle of stage \(I\).
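The eigenvalues quoted after Equation (31) are easy to check numerically. The short sketch below (our own code) diagonalises \(i\Delta\) times the \(4\times 4\) antisymmetric matrix and reproduces \(\pm\Delta\sqrt{1\pm\sin\frac{\phi}{2}}\), in particular the \(\approx 0.54\,\Delta\) gap at \(\phi=\pi/2\).

```python
import numpy as np

def junction_spectrum(delta, phi):
    """Single-particle energies associated with the 4x4 matrix of Eq. (31)."""
    s, c = np.sin(phi / 2.0), np.cos(phi / 2.0)
    A = np.array([[0.0, 1.0, 0.0, 0.0],
                  [-1.0, 0.0, 0.0, s],
                  [0.0, 0.0, 0.0, c],
                  [0.0, -s, -c, 0.0]])
    return np.linalg.eigvalsh(1j * delta * A)   # Hermitian since A is antisymmetric

print(junction_spectrum(delta=1.0, phi=np.pi / 2))
# approx [-1.307, -0.541, 0.541, 1.307], i.e. +-sqrt(1 +- sin(phi/2))
```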
## 6 Discussion and conclusions
In this work, we have studied the problem of optimising the finite-time control protocols for Majorana zero mode exchange. To this end, we have derived an analytic formula for the gradient of the quantum fidelity which allowed us to build a scalable deep learning system for the control protocol optimisation. We have worked in the super-adiabatic regime and focused on the exchange of two MZMs on a trijunction consisting of \(3N+1\) sites. We have observed that the optimised exchange protocols were characterised by the MZMs stopping at the junction point. We have explained this stopping effect by the behaviour of the energy gap, which exhibits a sharp jump when one of the MZMs approaches the junction. Our optimised protocols improve the fidelity of quantum gate generation by two orders of magnitude when compared with the simple harmonic motion shuttle protocol. However, adding unknown disorder to the nanowire causes our protocols to lose their robustness due to overfitting. This might be remedied to some extent by applying the learning via online stochastic gradient descent for a larger number of learning epochs, possibly with a decaying learning rate. This shows that understanding the disorder pattern in a nanowire is necessary for accomplishing high fidelity quantum gate generation and passing the error correction thresholds.
A natural direction of generalising our results would be to consider the exchange of two MZMs in a system consisting of the total of four MZMs (two separate topological regions). This would allow us to directly simulate a topological qubit. However, this would require further optimisation of our code as well as having access to more powerful computational resources. This is because calculating the gradient of quantum fidelity, even with the analytic formula at hand, still requires significant computational resources. For the trijunction consisting of 166 sites (\(N=55\)) and time evolution of 8000 time steps, evaluating the gradient with our current implementation required around 100 Gigabytes of RAM and took about 45 minutes when using 28 CPU cores of an HPC node (so, 120 epochs of NN training takes about four days). Realising a similar calculation for a system of four MZMs which would consist of \(200-300\) sites would take a few times more resources since calculating the gradient requires several steps (such as matrix diagonalisation) which scale polynomially with the system size.
Nevertheless, our presented results do apply to systems with more than two MZMs whenever only two MZMs localised at the edges of the same topological region are exchanged and the remaining MZMs are sufficiently separated. Such exchanges are also crucial elements of quantum gate generation algorithms [1, 44].
Another possible extension of our work would concern the proximity coupled nanowires with induced \(s\)-wave superconductivity [45, 46]. On the technical level, this would also require more computational resources, since including spin makes the Hamiltonian twice as large. Since \(p\)-wave superconductors are a limiting case of \(s\)-wave superconductors, we anticipate similar effects concerning the stopping of MZMs at the
junction point due to the presence of an analogous energy gap jump.
The authors would like to thank Luuk Coopmans for helpful discussions, and especially for suggesting that we calculate the quantum infidelity gradient analytically. We also thank Domenico Pellegrino for useful discussions.
|
2301.05717 | The ZX-Calculus is Canonical in the Heisenberg Picture for Stabilizer
Quantum Mechanics | In 2008 Coecke and Duncan proposed the graphical ZX-calculus rewrite system
which came to formalize reasoning with quantum circuits, measurements and
quantum states. The ZX-calculus is sound for qubit quantum mechanics. Hence,
equality of diagrams under ZX-equivalent transformations lifts to an equality
of corresponding equations over matrices. Conversely, in 2014 Backens proved
completeness, establishing that any derivation done in stabilizer quantum
mechanics with matrices can be derived graphically using the ZX-calculus. A
graphical rewrite system that is both confluent and also terminates uniquely is
called canonical: Applying alternate sequences of rewrites to the same initial
diagram, a rewrite system is confluent whenever all resulting diagrams can be
manipulated to establish graphical equivalence. Here we show that a reduced
ZX-rewrite system is already confluent in the Heisenberg picture for stabilizer
quantum mechanics. Moreover, any application of a subset of ZX-rewrites
terminates uniquely and irrespective of the order of term rewrites in the
Heisenberg picture for stabilizer quantum mechanics. The ZX-system is hence
Heisenberg-canonical for stabiliser quantum mechanics. For a stabilizer circuit
on $n$-qubits with $l$ single-qubit gates and $g$ two-qubit gates, the circuit
output can be derived graphically in the Heisenberg picture using no more than
$(\frac{1}{2}\cdot g+l)\cdot n$ graphical rewrites, thereby providing a
graphical proof of the Gottesman-Knill theorem. Finally, we establish that each
stabilizer state described by a Clifford circuit gives rise to a non-negative
parent Hamiltonian with $n+1$ terms and a one-dimensional kernel spanned by the
corresponding stabilizer state. Such parent Hamiltonians can be derived with
$\mathcal{O}(t\cdot n)$ graphical rewrites for a low energy state prepared by a
$t$-gate Clifford circuit. | J Biamonte, A Nasrallah | 2023-01-13T19:00:01Z | http://arxiv.org/abs/2301.05717v1 | # The ZX-Calculus is Canonical in the Heisenberg Picture for Stabilizer Quantum Mechanics
###### Abstract
In 2008 Coecke and Duncan proposed the graphical ZX-calculus rewrite system which came to formalize reasoning with quantum circuits, measurements and quantum states. The ZX-calculus is sound for qubit quantum mechanics. Hence, equality of diagrams under ZX-equivalent transformations lifts to an equality of corresponding equations over matrices. Conversely, in 2014 Backens proved completeness, establishing that any derivation done in stabilizer quantum mechanics with matrices can be derived graphically using the ZX-calculus. A graphical rewrite system that is both confluent and also terminates uniquely is called canonical. Applying alternate sequences of rewrites to the same initial diagram, a rewrite system is confluent whenever all resulting diagrams can be manipulated to establish graphical equivalence. Here we show that a reduced ZX-rewrite system is already confluent in the Heisenberg picture for stabilizer quantum mechanics. Moreover, any application of a subset of ZX-rewrites terminates uniquely and irrespective of the order of term rewrites in the Heisenberg picture for stabilizer quantum mechanics. The ZX-system is hence Heisenberg-canonical for stabiliser quantum mechanics. For a stabilizer circuit on \(n\)-qubits with \(l\) single-qubit gates and \(g\) two-qubit gates, the circuit output can be derived graphically in the Heisenberg picture using no more than \((\frac{1}{2}\cdot g+l)\cdot n\) graphical rewrites, thereby providing a graphical proof of the Gottesman-Knill theorem. Finally, we consider the application of this tool as a graphical means to derive parent Hamiltonians which can be used as penalty functions in variational quantum computation. Hence, we establish that each stabilizer state described by a Clifford circuit gives rise to a non-negative parent Hamiltonian with \(n+1\) terms and a one-dimensional kernel spanned by the corresponding stabilizer state. Such parent Hamiltonians can be derived with \(\mathcal{O}(t\cdot n)\) graphical rewrites for a low energy state prepared by a \(t\)-gate Clifford circuit.
Keywords:ZX-calculus rewrite system confluence Gottesman-Knill theorem tensor networks quantum circuits
Graphical reasoning has remained part of quantum information science since the early days of the field [1; 2; 3; 4]. More recently, the graphical language [5] has become central to tensor network methods [6; 7; 8; 9; 10; 11] whereas the formalisation of systematic graphical reasoning systems are now widely considered in research domains related to categorical quantum mechanics [12; 13] as well as areas related to tensor networks [6; 7; 8; 9; 10; 11].
One contemporary approach to graphical reasoning is the so called, Coecke-Duncan ZX-calculus [14; 15], which formalizes graphical reasoning of typical quantum circuits, measurements and states appearing in quantum information processing [14; 15; 16; 17; 18; 19; 20; 21]. The ZX-calculus has been used as a tool to study a range of tasks related to quantum computing e.g. quantum circuit optimization [22; 23; 24], quantum circuit equivalence checking [25; 26] and in the design of quantum error correcting codes [27; 28]. Diagrammatic manipulations using the ZX-calculus have also been automated by Kissinger and others [18; 19].
A graphical system is given by an axiomatic collection of graphical identities between diagrams. Each diagram is called a term whereas the fundamental identities are called, one-step equivalency transformations. Typically in quantum circuits, rewrites are symmetric (aka reversible) and hence an equality symbol is used to denote the graphical equivalence relating terms. Non-symmetric rewrite systems are also possible. For example, one would replace term equalities with one-step arrows: a restriction we make in our main proof.
Let \(S\) be a set of terms (diagrams) and let \(\rightarrow\) be the set of one-step equivalency transforms on \(S\). For \(A\in S\) we denote by \(\{A\rightarrow\}\) the set of diagrams in \(S\) resulting from single step transformations on \(A\). We will now consider the equivalency class (aka closure) of \(\rightarrow\):
Definition 1 (Arrow closures): We denote by \(\xrightarrow{\star}\) the reflexive, transitive closure of \(\rightarrow\).
Hence, we write \(A\xrightarrow{\star}B\) to mean that there exists a sequence of single step transformations to arrive at \(B\) starting from \(A\). Now we will consider the closure of terms:
Definition 2 (\(\star\)-closure): The reflexive, transitive \(\star\)-closure of term \(A\) in a set of diagrams \(S\) is given as:
\[\{A\xrightarrow{\star}\}\equiv\{B\in S|A\xrightarrow{\star}B\}. \tag{1}\]
By reflexive we mean that there exists an identity on all terms \(A\to A\). Transitive means that \(A\to B\) and \(B\to C\) implies the existence of a map \(A\to C\).
Definition 3 (Symmetric \(\star\)-closure): The reflexive, transitive symmetric \(\star\)-closure of term \(A\) in a set of diagrams \(S\) is given as:
\[\{A\xleftrightarrow{\star}\}\equiv\{B\in S|A\xleftrightarrow{\star}B\}. \tag{2}\]
Here symmetric means that all maps have an inverse. Namely \(A\to B\) implies a map \(B\to A\).
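As a concrete illustration of Definitions 1-3, a \(\star\)-closure can be enumerated by breadth-first search whenever the set of reachable diagrams is finite (for instance, stabiliser diagrams up to a fixed size). The encoding of diagrams as hashable terms and the `one_step` interface below are our own illustrative assumptions, not part of the formalism.

```python
from collections import deque

def star_closure(A, one_step):
    """Reflexive, transitive closure {A ->*} of Definition 2.

    `one_step(term)` returns the set {term ->} of diagrams reachable by a
    single rewrite; terms must be hashable (e.g. a canonical encoding of a
    ZX-diagram).  Breadth-first search enumerates every derivable diagram."""
    closure = {A}                      # reflexivity: A ->* A
    queue = deque([A])
    while queue:
        term = queue.popleft()
        for nxt in one_step(term):     # transitivity: chain single steps
            if nxt not in closure:
                closure.add(nxt)
                queue.append(nxt)
    return closure
```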
Given a ZX-diagram \(A\), we denote the corresponding matrix representative as \([A]\), called the standard interpretation or flattening of \(A\). We say that a diagram
\(B\) is derivable from \(A\) only when ZX-equivalent transformations can be used to manipulate \(A\) into \(B\). One would then write \(A\xrightarrow{\star}B\). In other words, the \(\star\)-closure of \(A\) contains a transformation(s) which derives \(B\). As the ZX-system is sound, this implies equality of the underlying matrices \([A]=[B]\). More formally:
Definition 4 (Soundness): Let \(A\) be a ZX-diagram, then
\[\forall\ Q\in\{A\xleftrightarrow{\star}\}\implies![R], \tag{3}\]
where \([R]\) is the flattening of any diagram in \(\{A\xleftrightarrow{\star}\}\).
Hence, all diagrams in \(\{A\xleftrightarrow{\star}\}\) correspond to a unique matrix \([R]\) given by the flattening of any diagram in \(\{A\xleftrightarrow{\star}\}\) and called the symmetric \(\star\)-closure invariant of \(A\). This is soundness. Note that two \(\star\)-closures need not have different \(\star\)-invariants. For uniqueness in the case of stabiliser quantum mechanics, we need to recall the theory of Backens [17]. Indeed, completeness is defined as the converse to soundness: if two well-formed equations can be proven to be equal using equations over matrices, then their diagrams are related by ZX-equivalent transformations. More formally:
Definition 5 (Completeness--Backens [17]): Assuming stabilizer quantum mechanics, given a well formed equation deriving matrix \([A]\):
\[[A]\implies!\{A\xleftrightarrow{\star}\}. \tag{4}\]
As mentioned in the abstract, the ZX-calculus is sound for qubit quantum mechanics [14, 15], and complete for stabilizer quantum mechanics [17]. Note that the union of soundness and completeness induces a grading on the ZX-system. The \(\star\)-closures are mutually disjoint and hence the disjoint union of all \(\star\)-closures is 1-1 with the set of all ZX-diagrams. This is what Backens proved [17]. (See also the pseudo normal form of stabilizer ZX-diagrams given by Duncan and Perdrix in [29]).
Backens' theory was adapted by Jeandel, Perdrix and Vilmart, who showed completeness of a ZX-type calculus for graphically representing real-valued matrices [30]. Duncan and Perdrix also proved completeness in their formulation of real-valued stabiliser quantum mechanics [31]. Approaches related to the extension of these results include the universal extension of the stabiliser ZX-calculus [14, 15, 32, 20, 21]. Whereas Backens showed completeness of the ZX-calculus for single-qubit Clifford gates plus \(\pi/4\) T-gates [20] and a two-qubit Clifford+T extension was considered by Coecke and Wang [33], incompleteness of the ZX-calculus for Clifford+T quantum mechanics was proven in [34]. Earlier incompleteness results for the ZX-calculus can be found in [35].
In this work, we will restrict ourselves to the Heisenberg picture of stabilizer quantum mechanics. Whereas in the standard setting of ZX-diagrams the \(\star\)-closures partition the set of terms into mutually disjoint equivalency classes, in the Heisenberg picture for stabilizer quantum mechanics there is indeed a unique normal form. Several definitions related to rewrite termination appear in the literature [36]. We will firstly consider a weak form of termination.
Definition 6 (Weak termination): \(\forall A\in S\) there exists a \(D\in S\) such that \(D\in\{A\xrightarrow{\star}\}\) and \(D=\{D\xrightarrow{\star}\}\).
When we write \(D=\{D\xrightarrow{\star}\}\) we mean that the \(\star\)-closure of \(D\) is the one-element set containing only \(D\) itself. We then call \(D\) terminal. Of course, we are dealing with a restriction of the ZX-rules. To show that the system weakly terminates, it is enough to show that a rewrite sequence exists which terminates uniquely.
Definition 7 (Efficient termination): Let \(A\) be a \(t\)-term Heisenberg stabilizer ZX-diagram on \(n\)-qubits and let \(D\) denote \(A\)'s terminal element; termination is efficient whenever there exists a sequence of rewrites \(A\xrightarrow{\star}D\) in \(\mathcal{O}(\text{poly}(t\cdot n))\) steps.
To prove confluence of the ZX-system in the Heisenberg picture for stabiliser quantum mechanics, we considered the property of weak termination of a subset of the ZX-rewrite system. Informally this shows that there exists a unique terminal diagram that results from the application of a subset of the ZX-rewrite rules, independent of the order.
Definition 8 (Confluence): For all \(A,B,C\in S\) such that \(A\xrightarrow{\star}B,C\), there exists a \(D\in S\) such that \(B,C\xrightarrow{\star}D=\{D\xrightarrow{\star}\}\).
Hence, we show that a subset of equivalence transforms partitions all ZX-stabiliser diagrams disjointly into directed graphs. We establish that the stabilizer ZX-calculus is already confluent and (weakly) terminal in the Heisenberg picture by proving that any application of a restricted form of rewrites, irrespective of the order, results in a unique term. From these results, we establish that the rewrite system is already canonical:
Definition 9 (Canonical rewrite system [37]): A rewrite system is, equivalently,
1. canonical (aka convergent), or
2. weakly terminal and confluent.
Consequently, confluence and terminality in the Heisenberg picture lead to a graphical variant of the Gottesman-Knill theorem [38].
We continue by recalling the standard mathematical framework of qubit quantum mechanics in § 1. § 2 presents a summary of Clifford gates and their properties; we recall the building blocks and the notation for the ZX-calculus in § 3. Note that whenever possible we tailor this notation to match quantum circuits. In § 4 we recall the axiomatic rules for the ZX-rewrite system and derive the rules we will use in the Heisenberg picture. We present our main results in § 5: namely that the stabilizer ZX-calculus is confluent, terminal and hence canonical in the Heisenberg picture. Finally, we show that these results allow one to recover the Gottesman-Knill theorem graphically. We then apply these tools to graphically derive parent Hamiltonians for stabiliser states, which can serve as penalty functions for variational quantum computation.
## 1 Standard mathematical framework of qubit quantum mechanics
The graphical ZX-system is intended to replace and augment the standard matrix-based approach to describing quantum information processing. As such, we first recall the standard definitions of qubit quantum mechanics.
We consider quantum computation with collections of two-level quantum systems acted on by unitary quantum gates.
Definition 10 (Qubit state space): The \(n\)-qubit state space is defined as the complex vector space \(V_{n}=[\mathbb{C}^{2}]^{\otimes n}\cong[\mathbb{C}]^{2^{n}}\) where a qubit state is a normalized vector in this space.
We will make use of the following conventions:
1. We let \(\ket{\psi}\in V_{n}\) represent a quantum state.
2. We let \(\bra{\psi}\in V_{n}^{*}\), where \((\ket{\psi})^{\dagger}=\bra{\psi}\) represent a costate or effect.
3. We denote by \(\mathcal{L}(V_{n})\) the space of linear maps from \([\mathbb{C}^{2}]^{\otimes n}\) to itself.
4. Quantum gates are given by unitary operators: \[U\in\mathcal{U}_{\mathbb{C}}(2^{n})\equiv\left\{U\in\mathcal{L}(V_{n})|UU^{ \dagger}=\mathds{1}\right\}.\] (5)
5. Hamiltonians are given as: \[\begin{split} A\in\text{herm}_{\mathbb{C}}(2^{n})& \equiv\left\{A\in\mathcal{L}(V_{n})|A=A^{\dagger}\right\}\\ &=\text{span}_{\mathbb{R}}\left\{\bigotimes_{i=1}^{n}\sigma_{i}^{ \alpha_{i}}\ |\ \alpha_{i}=0,1,2,3\right\}.\end{split}\] (6)
6. The n-qubit basis is defined as \(\mathcal{B}_{n}=\{\ket{0},\ket{1}\}^{\otimes n}\), \(\text{span}\{\mathcal{B}_{n}\}\cong V_{n}\), where \(\ket{0}=(1,0)^{\top}\) and \(\ket{1}=(0,1)^{\top}\).
7. Single qubit basis vectors are given as \(\ket{0},\ket{1}\in\mathcal{B}_{1}\), with \(\text{span}\{\ket{0},\ket{1}\}\cong V_{1}\).
8. The \(2^{n}\) basis vectors are orthonormal and satisfy \(\langle l|m\rangle=\delta_{lm}\).
Definition 11 (Pauli matrices): Let us denote the Pauli matrices as follows:
\[\begin{split}\sigma^{0}\equiv\mathds{1}=|0\rangle\!\langle 0|+|1 \rangle\!\langle 1|,\quad\sigma^{1}\equiv X=|0\rangle\!\langle 1|+|1\rangle\! \langle 0|,\\ \sigma^{2}\equiv Y=\imath(|1\rangle\!\langle 0|-|0\rangle\! \langle 1|),\quad\sigma^{3}\equiv Z=|0\rangle\!\langle 0|-|1\rangle\! \langle 1|.\end{split} \tag{7}\]
These form a basis for matrices in \(Mat_{\mathbb{C}}(2)\).
The Pauli \(X\) matrix is sometimes called the NOT-gate; it induces the mapping \(\ket{0}\stackrel{{ X}}{{\longleftrightarrow}}\ket{1}\). The Pauli matrices satisfy
\[X^{2}=Y^{2}=Z^{2}=\mathds{1}, \tag{8}\]
and
\[XYZ=\imath\mathds{1}. \tag{9}\]
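For readers who prefer to cross-check the algebra numerically, the following short numpy sketch (ours, not part of the original presentation) verifies identities (8) and (9) and the basis claim; the array definitions simply transcribe Definition 11.

```python
import numpy as np

# Pauli matrices as in Definition 11.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

# Identity (8): each Pauli squares to the identity.
for P in (X, Y, Z):
    assert np.allclose(P @ P, I)

# Identity (9): XYZ = i * identity.
assert np.allclose(X @ Y @ Z, 1j * I)

# {1, X, Y, Z} spans Mat_C(2): the flattened matrices are linearly independent.
basis = np.stack([M.flatten() for M in (I, X, Y, Z)])
assert np.linalg.matrix_rank(basis) == 4
print("Pauli identities (8), (9) and the basis property verified.")
```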
Furthermore we have that:
Definition 12 (Pauli group): The Pauli matrices form a group under multiplication, with elements: \[\mathcal{P}_{n}=\left\{e^{\imath\theta\frac{\pi}{2}}\bigotimes_{i=1}^{n}\sigma _{i}^{\alpha_{i}}\ |\ \theta,\alpha_{i}=0,1,2,3\right\}.\] (10)
Informally, the normalizer of \(\mathcal{P}_{n}\) is the set of unitary operations that leave \(\mathcal{P}_{n}\) invariant under element-wise conjugation. These are known as Clifford gates.
## 2 Clifford gates
The controlled-NOT (aka Feynman) gate is a central building block for quantum information processing. The gate and its building blocks have been used extensively: it appears in classical [39] and quantum diagrammatic reasoning [14; 15], in categorical quantum mechanics [14; 15] and in tensor networks [40; 41; 42; 43] and tensor contractions for #SAT counting problems [44].
Let us recall the standard generating set of all Clifford group circuits.
Definition 13 (Clifford gates): The following gates generate any Clifford circuit [38; 45]:
1. the controlled-NOT gate3 Footnote 3: The notation \(C_{m}^{l}(X)\) is a controlled-X gate, where the \(l^{th}\) and \(m^{th}\) qubits are the control and target qubits respectively. \[C(X)=|0\rangle\!\langle 0|\otimes\mathds{1}+|1\rangle\!\langle 1|\otimes X;\] (11)
2. the Hadamard gate \[H=\frac{1}{\sqrt{2}}(X+Z);\] (12)
3. the phase gate \[P=|0\rangle\!\langle 0|+\imath|1\rangle\!\langle 1|;\] (13)
4. the Pauli gates and their products from Definition 11.
Remark 1: The gates (i)-(iv) are presented graphically in Figure 1. Sequences of the gates (i)-(iv) generate any Clifford circuit.
Remark 2: The standard properties of single qubit Clifford gates follow:
1. \(HXH=Z\) and \(HZH=X\),
2. \(PXP^{\dagger}=Y\) and \(PZP^{\dagger}=Z=P^{2}\).
Definition 14 (Clifford group): The Clifford group is defined as the normalizer of the Pauli group:
\[\mathcal{C}_{n}=\left\{g\in\mathcal{U}(2^{n})\ |\ {gpg}^{\dagger}\in\mathcal{P}_{n}, \forall p\in\mathcal{P}_{n}\right\}. \tag{14}\]
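As a hedged numerical aside, the sketch below builds the matrices of Definition 13 and checks Remark 2 together with the normalizer property of Definition 14 for the two-qubit gate; the helper `is_pauli_string` and the brute-force phase search are our own illustrative choices, not constructs from the paper.

```python
import numpy as np
from itertools import product

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

# Clifford generators from Definition 13.
H = (X + Z) / np.sqrt(2)                                          # Hadamard, eq. (12)
P = np.diag([1, 1j]).astype(complex)                              # phase gate, eq. (13)
CX = np.kron(np.diag([1, 0]), I) + np.kron(np.diag([0, 1]), X)    # eq. (11)

# Remark 2: single-qubit conjugation relations.
assert np.allclose(H @ X @ H, Z) and np.allclose(H @ Z @ H, X)
assert np.allclose(P @ X @ P.conj().T, Y)
assert np.allclose(P @ Z @ P.conj().T, Z) and np.allclose(P @ P, Z)

# Definition 14: C(X) maps 2-qubit Pauli strings to Pauli strings (up to phase).
paulis = {"I": I, "X": X, "Y": Y, "Z": Z}
def is_pauli_string(M):
    for a, b in product(paulis, repeat=2):
        Q = np.kron(paulis[a], paulis[b])
        for ph in (1, -1, 1j, -1j):
            if np.allclose(M, ph * Q):
                return True
    return False

for a, b in product(paulis, repeat=2):
    assert is_pauli_string(CX @ np.kron(paulis[a], paulis[b]) @ CX.conj().T)
print("Remark 2 and the normalizer property of C(X) verified.")
```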
We will now consider the controlled-NOT gate in further detail.
Figure 1: Clifford gates: (i) controlled-X, (ii) Hadamard, (iii) phase and (iv) arbitrary Pauli gates.
## 3 Tensor network building blocks and the ZX-calculus
The building blocks of the controlled-NOT gate will now be considered separately. First, consider the black dot, which copies binary inputs (0 and 1) as:
\[0\to 0,0 \tag{15a}\] \[1\to 1,1. \tag{15b}\]
In the diagrammatic tensor network language, the COPY-gate is
and graphically, equation (15a) and (15b) become
The next building block performs the exclusive OR operation (XOR). Given two binary inputs (say \(a\) and \(b\)), the output (\(a\oplus b=a+b-2ab\)) is 1 iff exactly a single input is 1 (that is, addition modulo 2). The gate is drawn as:
The XOR gate allows one to realize any linear Boolean function. Let \(f:\{0,1\}^{n}\rightarrow\{0,1\}\). We consider indeterminates \(x_{1}x_{2}\ldots x_{n}\). Then \(f(x_{1},x_{2},\ldots,x_{n})\) is linear over \(\oplus\) if it can be written as:
\[f=c_{1}x_{1}\oplus c_{2}x_{2}\oplus\cdots\oplus c_{n-1}x_{n-1}\oplus c_{n}x_{ n}, \tag{16}\]
where
\[\mathbf{c}\stackrel{{\mbox{\tiny def}}}{{=}}(c_{1},c_{2},\ldots,c_{n-1},c_{n}) \tag{17}\]
is any \(n\)-long Boolean string. Hence, there are \(2^{n}\) linear Boolean functions; note that negation is not allowed. When negation is allowed, a constant \(c_{0}\) is added (mod 2) to (16), giving \(c_{0}\oplus f\); for \(c_{0}=1\) the function is called affine. In other words, negation is equivalent to allowing the constant 1 as:
\[\begin{array}{c}\includegraphics[width=142.26378pt]{figs/2011.eps}&=& \includegraphics[width=142.26378pt]{figs/2011.eps}\end{array} \tag{18}\]
which sends Boolean variable \(x\) to \(1-x\). Using the polarity representation of \(f\),
\[\hat{f}(\mathbf{x})=(-1)^{f(\mathbf{x})}=1-2f(\mathbf{x}). \tag{19}\]
We note that linear Boolean functions index the columns of the \(n\)-fold tensor product of \(2\times 2\) Hadamard matrices (that is, \(H^{\otimes n}\) where the \(i\)-\(j\)th entry of each \(2\times 2\) is \(\sqrt{2}H_{ij}\stackrel{{\text{\tiny def}}}{{=}}(-1)^{i\cdot j}\)). In particular:
\[\tikzfig{eq:H_i}\] (rule C1)
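The following numpy check, included here only as an illustration (the loop structure and variable names are ours), confirms that the columns of \(H^{\otimes n}\) are, up to the \(2^{-n/2}\) normalisation, exactly the polarity vectors (19) of the \(2^n\) linear Boolean functions (16).

```python
import numpy as np
from itertools import product

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

def f_linear(c, x):
    """Linear Boolean function f = c1 x1 xor ... xor cn xn, eq. (16)."""
    return sum(ci * xi for ci, xi in zip(c, x)) % 2

# The column indexed by the Boolean string c equals the polarity vector
# f_hat(x) = (-1)^{f(x)} of eq. (19), up to the 2^{-n/2} normalisation.
for c in product((0, 1), repeat=n):
    col = int("".join(map(str, c)), 2)
    polarity = np.array([(-1) ** f_linear(c, x)
                         for x in product((0, 1), repeat=n)])
    assert np.allclose(np.sqrt(2 ** n) * Hn[:, col], polarity)
print("Columns of the n-fold Hadamard are the 2**n linear Boolean functions (rule C1).")
```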
By rule C1 one can think of XOR as being a copy operation in another basis. We send binary \(0\) to \(\ket{0}\) and \(1\) to \(\ket{1}\). Then XOR acts as a copy operation:
\[\frac{1}{\sqrt{2}}\ket{+}\rightarrow\ket{+,+}, \tag{20a}\] \[\frac{1}{\sqrt{2}}\ket{-}\rightarrow\ket{-,-}, \tag{20b}\]
using \(H^{2}=\mathds{1}\), \(\ket{+}\stackrel{{\text{\tiny def}}}{{=}}H\ket{0}\) and \(\ket{-}\stackrel{{\text{\tiny def}}}{{=}}H\ket{1}\).
A simplistic methodology to connect quantum circuits with indexed tensor networks starts with defining two tensors in terms of their components.
\[\tikzfig{eq:H_i}\] (a)
Using the Sengupta-Biamonte form [46], for (a) we have,
\[\begin{split}\delta^{i}_{\ jk}&=\frac{1}{2}(i+j+k)^ {2}-\frac{3}{2}(i+j+k)+1\\ &=\frac{1}{2}(i+j+k-2)(i+j+k-1),\end{split} \tag{21}\]
where the indices \(i\), \(j\), \(k\in\{0,1\}\). In other words, the following contractions evaluate to unity.
\[\tikzfig{eq:H_i} \tag{22}\]
The COPY tensor can be written as:
\[\text{COPY}=\sum_{i,j,k}\delta^{i}_{\ jk}|jk\rangle\!\langle i|=|00\rangle\! \langle 0|+|11\rangle\!\langle 1|. \tag{23}\]
Likewise using the Sengupta-Biamonte form [46], for (b) we have,
\[\begin{split}\left[\oplus\right]^{q}_{\ rs}&=- \frac{2}{3}(q+r+s)^{3}+3(q+r+s)^{2}-\frac{10}{3}(q+r+s)+1\\ &=-\frac{2}{3}(q+r+s-3)(q+r+s-1)(q+r+s-\frac{1}{2}).\end{split} \tag{24}\]
The following contractions evaluate to unity (the XOR tensor is fully symmetric; hence the three rightmost contractions are identical by wire permutation).
The XOR tensor can be expressed as:
\[\text{XOR}=\sum_{q,r,s}[\oplus]^{q}_{\ rs}|q\rangle\!\langle rs|=|0\rangle\left(\langle 00|+\langle 11|\right)+|1\rangle\left(\langle 01|+\langle 10|\right). \tag{24}\]
Then the Feynman gate (\(C(X)\)) is given as the following tensor contraction,
\[\sum_{m}\delta^{ij}_{\ m}[\oplus]^{m}_{\ qr}=\text{C}(\text{X})^{ij}_{qr}, \tag{25}\]
where we raised an index on \(\delta\). All quantum circuits can be broken into their building blocks and thought of as indexed tensor contractions in this way. Typically a tensor network is written in abstract index notation. We are concerned with the case where such equations can be replaced with wire diagrams (ZX-wire diagrams) entirely.
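A small numerical sketch of these polynomial forms follows; it is not taken from the paper. It checks the polynomial forms (21) and (24) against the indicator descriptions of COPY and XOR, and then contracts the internal wire to recover the controlled-X matrix of (11). The assignment of tensor legs to control/target inputs and outputs in the einsum is our own bookkeeping convention, chosen to reproduce the standard CNOT matrix.

```python
import numpy as np
from itertools import product

# COPY tensor delta^i_{jk} via the Sengupta-Biamonte polynomial, eq. (21).
def delta(i, j, k):
    s = i + j + k
    return 0.5 * (s - 2) * (s - 1)

# XOR tensor [xor]^q_{rs} via the polynomial form, eq. (24).
def xor(q, r, s):
    t = q + r + s
    return -(2.0 / 3.0) * (t - 3) * (t - 1) * (t - 0.5)

COPY = np.array([[[delta(i, j, k) for k in (0, 1)] for j in (0, 1)] for i in (0, 1)])
XOR = np.array([[[xor(q, r, s) for s in (0, 1)] for r in (0, 1)] for q in (0, 1)])

# COPY is the "all legs equal" indicator: |00><0| + |11><1|.
assert np.allclose(COPY, np.array([[[1, 0], [0, 0]], [[0, 0], [0, 1]]]))
# XOR is the even-parity indicator: entry is 1 iff q = r xor s.
for q, r, s in product((0, 1), repeat=3):
    assert np.isclose(XOR[q, r, s], 1.0 if q == (r ^ s) else 0.0)

# Contract the internal wire: COPY on the control, XOR on the target gives C(X).
# Axes: (control_out, target_out, control_in, target_in) -> 4x4 matrix.
CX = np.einsum('cam,tbm->abct', COPY, XOR).reshape(4, 4)
expected = np.kron(np.diag([1, 0]), np.eye(2)) + np.kron(np.diag([0, 1]),
                                                         np.array([[0, 1], [1, 0]]))
assert np.allclose(CX, expected)
print("COPY/XOR polynomial forms and the C(X) contraction verified.")
```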
### ZX-calculus
As mentioned in the introduction, the ZX-calculus has been used in a variety of areas, e.g. measurement-based quantum computation and quantum circuit optimization [22, 23, 24], quantum circuit equality validation [26], and quantum error-correcting codes [27, 28]. ZX-diagrams are equipped with a set of graphical rules that allow graphical reasoning about linear maps.
The building blocks of the ZX-rewrite system [14, 15] are given as follows:
\begin{tabular}{|p{56.9pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Name** & **Diagrammatic** & **Braket/Operator Notation** & **Abstract Index Notation** \\ \hline
Identity & & \(\mathds{1}=|0\rangle\!\langle 0|+|1\rangle\!\langle 1|\) & \(\delta^{r}_{s}\) \\ \hline
Cup & & \(\sqrt{2}\,|\phi^{+}\rangle=\left(|00\rangle+|11\rangle\right)\) & \(\delta^{rs}\) \\ \hline
Cap & & \(\sqrt{2}\,\langle\phi^{+}|=\left(\langle 00|+\langle 11|\right)\) & \(\delta_{rs}\) \\ \hline
Swap & & \(\frac{1}{2}(\mathds{1}\otimes\mathds{1}+X\otimes X+Y\otimes Y+Z\otimes Z)\) & \(\delta^{r}_{p}\delta^{s}_{q}\) \\ \hline
Hadamard & & \(H=|+\rangle\!\langle 0|+|-\rangle\!\langle 1|\) & \(H^{r}_{s}\) \\ \hline
X vertices & (\(n\) inputs, \(m\) outputs) & \(X^{n}_{m}(\alpha)=|+\rangle^{\otimes m}\,\langle+|^{\otimes n}+e^{i\alpha}\,|-\rangle^{\otimes m}\,\langle-|^{\otimes n}\) & \(X^{r_{1}r_{2}\cdots r_{n}}_{s_{1}s_{2}\cdots s_{m}}(\alpha)\) \\ \hline
Z vertices & (\(n\) inputs, \(m\) outputs) & \(Z^{n}_{m}(\alpha)=|0\rangle^{\otimes m}\,\langle 0|^{\otimes n}+e^{i\alpha}\,|1\rangle^{\otimes m}\,\langle 1|^{\otimes n}\) & \(Z^{r_{1}r_{2}\cdots r_{n}}_{s_{1}s_{2}\cdots s_{m}}(\alpha)\) \\ \hline
\end{tabular}
where \(\alpha\in[0,2\pi)\), \(|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)\), and \(|-\rangle=\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)\).
_Remark 3_.: One can derive the Pauli and Clifford gates from the table above:
1. The Pauli \(Z\), which is recovered by setting \((n=1,m=1,\alpha=\pi)\) in \(Z_{m}^{n}(\alpha)\), \[Z\equiv Z_{1}^{1}(\pi)=|0\rangle\!\langle 0|-|1\rangle\!\langle 1|.\] (26)
2. The Pauli \(X\), which is recovered by setting \((n=1,m=1,\alpha=\pi)\) in \(X_{m}^{n}(\alpha)\), \[X\equiv X_{1}^{1}(\pi)=|+\rangle\!\langle+|-|-\rangle\!\langle-|=|0\rangle\! \langle 1|+|1\rangle\!\langle 0|.\] (27)
3. The Pauli \(Y\), \[\imath Y=X_{1}^{1}(\pi)\cdot Z_{1}^{1}(\pi).\] (28)
4. the Phase gate \(P\), which is recovered by setting \((n=1,m=1,\alpha=\frac{\pi}{2})\) in \(Z_{m}^{n}(\alpha)\), \[P\equiv Z_{1}^{1}\left(\frac{\pi}{2}\right)=|0\rangle\!\langle 0|+\imath|1 \rangle\!\langle 1|.\] (29)
5. The COPY tensor, which is recovered by setting \((n=1,m=2,\alpha=0)\) in \(Z_{m}^{n}(\alpha)\), \[\text{COPY}\equiv Z_{2}^{1}(0)=|00\rangle\!\langle 0|+|11\rangle\!\langle 1|.\] (30)
6. The XOR tensor, which is recovered by setting \((n=2,m=1,\alpha=0)\) in \(X_{m}^{n}(\alpha)\), \[\text{XOR}\equiv X_{1}^{2}(0)=|+\rangle\!\langle++|+|-\rangle\!\langle--|.\] (31)
7. The controlled-X gate, \(C(X)\), which is the concatenation of COPY and XOR, \[C(X)=|0\rangle\!\langle 0|\otimes\mathds{1}+|1\rangle\!\langle 1|\otimes X.\] (32)

For single-qubit gates we adopt the following graphical notations:

**Definition 15** (Z gate). _The dot on a wire denotes the \(Z\) gate, which corresponds to (26)._

**Definition 16** (X gate). _The plus on a wire denotes the \(X\) gate, which corresponds to (27)._

**Definition 17** (Y gate). _The Pauli \(Y\) gate is represented graphically as a box with \(Y\) in it. We have that \(ZX=\imath Y\), which is represented graphically as:_ _and we also have \(XZ=-\imath Y\), which follows graphically as:_ _where the \(\imath,-\imath\) in the diamonds represent the complex scalars \(\imath,-\imath\)._
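To make the spider definitions concrete, the following sketch (ours, with illustrative helper names) builds \(Z_m^n(\alpha)\) and \(X_m^n(\alpha)\) as dense matrices and checks items 1, 2, 4, 5 and 6 of Remark 3; note that the XOR spider reproduces the XOR tensor only up to a global scalar, as is conventional for ZX-diagrams.

```python
import numpy as np

ket0, ket1 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
ketp, ketm = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)

def tens(vs):
    out = np.array([1], dtype=complex)
    for v in vs:
        out = np.kron(out, v)
    return out

def Z_spider(n, m, alpha):
    """Z_m^n(alpha): n inputs, m outputs, as in the table above."""
    return (np.outer(tens([ket0] * m), tens([ket0] * n).conj())
            + np.exp(1j * alpha) * np.outer(tens([ket1] * m), tens([ket1] * n).conj()))

def X_spider(n, m, alpha):
    """X_m^n(alpha): the same form written in the |+>/|-> basis."""
    return (np.outer(tens([ketp] * m), tens([ketp] * n).conj())
            + np.exp(1j * alpha) * np.outer(tens([ketm] * m), tens([ketm] * n).conj()))

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
P = np.diag([1, 1j]).astype(complex)

assert np.allclose(Z_spider(1, 1, np.pi), Z)              # eq. (26)
assert np.allclose(X_spider(1, 1, np.pi), X)              # eq. (27)
assert np.allclose(Z_spider(1, 1, np.pi / 2), P)          # eq. (29)
assert np.allclose(Z_spider(1, 2, 0),                     # eq. (30): COPY
                   np.outer(tens([ket0, ket0]), ket0) + np.outer(tens([ket1, ket1]), ket1))
# eq. (31): X_1^2(0) reproduces XOR up to the global scalar 1/sqrt(2).
XOR = np.array([[1, 0, 0, 1], [0, 1, 1, 0]], dtype=complex)
assert np.allclose(np.sqrt(2) * X_spider(2, 1, 0), XOR)
print("Spider definitions reproduce Z, X, P, COPY and (up to scalar) XOR.")
```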
The ZX-rewrite system considers the interaction of the building blocks above. This system subsumes the rewrites used in the present work. However, we consider only stabiliser states and, as such, only formulate the rules required to complete our proofs.
## 4 Stabilizer states
We will now consider the basic definitions of stabiliser states. Whereas the rewrite rules used here readily follow from (or are subsumed by) the ZX-rewrites, several of our rules also follow from the definition of stabilizers.
Definition 18 (Stabilizer state): An \(n\)-qubit state \(\ket{\psi}\) is a stabilizer state of a subgroup \(\mathcal{S}\subseteq\mathcal{P}_{n}\) if for all \(P\in\mathcal{S}\), \(P\ket{\psi}=(+1)\ket{\psi}\).
Example 1 (Single qubit stabilizers):
Example 2: The Bell state is a stabilizer state.
\[\sqrt{2}\ket{\phi^{+}}=\sum_{a,b}(a\oplus\neg b)\ket{a,b}=\ket{00}+\ket{11}, \tag{33}\]
where \(\neg b=1-b\). The abelian group that corresponds to this state is
\[\mathcal{S}=\left\{\mathds{1}\otimes\mathds{1},X\otimes X,-Y\otimes Y,Z\otimes Z \right\}. \tag{34}\]
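A one-screen numerical confirmation of Example 2 (ours, not the paper's) is the following sketch:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # |phi+>, eq. (33)

# Every element of the abelian group (34) fixes |phi+> with eigenvalue +1.
group = [np.kron(I, I), np.kron(X, X), -np.kron(Y, Y), np.kron(Z, Z)]
for S in group:
    assert np.allclose(S @ bell, bell)
print("All four elements of (34) stabilize the Bell state.")
```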
### Derivation of the rewrite rules
The axiomatic rules of the ZX-rewrite system consist of a set of equations that describe how a diagram may be transformed into another. We will state some helpful graphical rules derived from the COPY and XOR tensors, which will form some of the axiomatic rules for the rewrite system.
We will derive basic circuit identities and state them graphically, as is common in quantum circuits [47]. Here we will adopt a tensor network approach and state identities that are also tensor symmetries [6]. The rules presented in this section are used to derive templates in the Heisenberg picture; see § 4.3.
Remark 4: The COPY-tensor has stabilizer generators \(X_{i}X_{j}X_{k}\), \(Z_{i}Z_{j}\) for \(i\neq j\neq k\in\{1,2,3\}\) a qubit index. For example,
\[Z_{i}Z_{j}(\ket{000}+\ket{111})=\ket{000}+\ket{111}. \tag{35}\]
The following will present these identities graphically (Proposition 1).
Proposition 1 (Stabilizers of COPY): _The following equations are the Pauli stabilizers of COPY._
Note that rule ST1 follows from spiders [14; 15], and rule ST2 follows from rule K1a below. Rule ST3 gives template identities that can be derived by substituting rule ST1 into rule ST2 and by using (rule Y1) and (rule Y2).
Note that various other identities can be derived for stabilizer tensors, such as the following. Let \(\ket{\psi}\) be the 3-qubit GHZ state; we have
1. \(Z_{i}\ket{\psi}=Z_{j}\ket{\psi}\), which follows graphically as: (rule P1)
2. \(X_{i}\ket{\psi}=X_{j}X_{k}\ket{\psi}\), which follows graphically as: (rule K1a)
We use the XOR tensor to derive 2 more rules. Let \(\ket{\phi}=\ket{000}+\ket{011}+\ket{101}+\ket{110}\) be the state that represents the XOR tensor with all wires bent to one direction, we have
1. \(X_{i}\ket{\phi}=X_{j}\ket{\phi}\), graphically it's represented as: (rule P2)
2. \(Z_{i}\left|\phi\right\rangle=Z_{j}Z_{k}\left|\phi\right\rangle\), graphically it's represented as: (rule K1b)
**Proposition 2** (\(Z^{2}=X^{2}=H^{2}=\mathds{1}\)).: _These identities are given graphically as follows:_
(rule I1)
(rule C3)
**Definition 19** (Hopf law).: _The Hopf law is defined graphically as follows [14, 15]:_
(36)
**Proposition 3** (\(C(X)^{2}=\mathds{1}\)).: _Using the Hopf law (rule 36) we derive graphically the identity \(C(X)^{2}=\mathds{1}\) as follows:_
(rule I2)
### Heisenberg picture
We consider the following unitary operator as:
\[U_{t}=e^{-\imath t\mathcal{H}},\hskip 28.452756ptU_{t}U_{t}^{\dagger}=\mathds{1}, \tag{37}\]
with \(\mathcal{H}\in herm_{\mathbb{C}}(2^{n})\). The time evolution of a quantum state \(\left|\varphi_{0}\right\rangle\) is
\[\left|\varphi_{t}\right\rangle=e^{-\imath t\mathcal{H}}\left|\varphi_{0}\right \rangle,\hskip 28.452756pt\forall t\left\langle\varphi_{t}|\varphi_{t}\right\rangle =1. \tag{38}\]
This is called the Schrödinger picture. It results in gate sequences applied to input states such as \(\prod_{l=1}^{p}U_{l}\left|0\right\rangle^{\otimes n}\), for \(U_{l}\in\mathcal{U}_{\mathbb{C}}(2^{n})\).
A second time-evolution formalism is called the Heisenberg picture. Suppose we have a quantum system in state \(\left|\psi\right\rangle\) and we apply a unitary \(U\) (\(UU^{\dagger}=\mathds{1}\)):
\[UN\left|\psi\right\rangle=UNU^{\dagger}U\left|\psi\right\rangle. \tag{39}\]
Then the evolution of operator \(N\) is given by
\[N\to UNU^{\dagger}. \tag{40}\]
1. We want to follow the evolution of a number of \(N\)'s to reconstruct the evolution of \(\left|\psi\right\rangle\).
2. Evolution in (40) is linear, so we will follow a complete basis of \(2^{n}\times 2^{n}\) matrices.
We call \(\mathcal{P}_{n}\) the Pauli group. As mentioned before, it contains \(4\cdot 4^{n}\) elements. These elements are tensor products of \(X,Y,Z,\mathds{1}\) with prefactors \(\pm 1,\pm\imath\). There is a multiplicative group homomorphism
\[MN\to UMNU^{\dagger}=(UMU^{\dagger})(UNU^{\dagger}),\]
so we can follow just a generating set of the group. A good one for the Pauli group is \(\{X_{1},...,X_{n},Z_{1},...,Z_{n}\}\).
The set of operators that leave \(\mathcal{P}_{n}\) fixed under conjugation forms the normalizer, called the Clifford group \(\mathcal{C}_{n}\). This group \(\mathcal{C}_{n}\) is much smaller than the unitary group on \(n\)-qubits, \(\mathcal{U}_{\mathbb{C}}(2^{n})\), yet contains many operations of interest.
Remark 5: The transformation of a Pauli string \(P\in\mathcal{P}_{n}\) in the Heisenberg picture as \(P\to UPU^{\dagger}\) is restricted to unitaries from the Clifford group \(U\in\mathcal{C}_{n}\).
### Heisenberg rules of the stabilizer ZX-calculus
We define a set of rules that we derive from the transformation of the Pauli group generators under Clifford gate conjugation. We shall see how the Pauli group generators,
\[\{Z_{1},\ldots Z_{n},X_{1},\ldots X_{n}\}, \tag{41}\]
evolve under conjugation by \(H\), \(P\), \(C(X)\). To simulate the evolution of (41) there are \(2n\) tensor contractions. Each can be done graphically. We can establish the following:
\[H\colon Z\to X,\ \ \ \ H\colon X\to Z. \tag{42}\]
\[P\colon X\to Y,\ \ \ \ P\colon Z\to Z. \tag{43}\]
\[\begin{array}{rl}C(X)\colon X\otimes\mathds{1}\to X\otimes X,&(i)\\ \mathds{1}\otimes X\to\mathds{1}\otimes X,&(ii)\\ X\otimes X\to X\otimes\mathds{1},&(iii)\\ Z\otimes\mathds{1}\to Z\otimes\mathds{1},&(iv)\\ \mathds{1}\otimes Z\to Z\otimes Z,&(v)\\ Z\otimes Z\to\mathds{1}\otimes Z,&(vi)\\ Z\otimes X\to Z\otimes X,&(vii)\\ X\otimes Z\to-Y\otimes Y.&(viii)\end{array} \tag{44}\]
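Before turning to the graphical derivations, the matrix forms of (42)-(44) can be checked directly; the sketch below is an illustrative verification of ours, not part of the paper.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = (X + Z) / np.sqrt(2)
P = np.diag([1, 1j]).astype(complex)
CX = np.kron(np.diag([1, 0]), I) + np.kron(np.diag([0, 1]), X)

def conj(U, A):
    return U @ A @ U.conj().T

# (42) and (43): single-qubit generator images.
assert np.allclose(conj(H, Z), X) and np.allclose(conj(H, X), Z)
assert np.allclose(conj(P, X), Y) and np.allclose(conj(P, Z), Z)

# (44): the eight C(X) conjugation rules, listed as (input, output) pairs.
rules = [
    (np.kron(X, I), np.kron(X, X)),    # (i)
    (np.kron(I, X), np.kron(I, X)),    # (ii)
    (np.kron(X, X), np.kron(X, I)),    # (iii)
    (np.kron(Z, I), np.kron(Z, I)),    # (iv)
    (np.kron(I, Z), np.kron(Z, Z)),    # (v)
    (np.kron(Z, Z), np.kron(I, Z)),    # (vi)
    (np.kron(Z, X), np.kron(Z, X)),    # (vii)
    (np.kron(X, Z), -np.kron(Y, Y)),   # (viii)
]
for src, dst in rules:
    assert np.allclose(conj(CX, src), dst)
print("Relations (42)-(44) hold for the matrix representations.")
```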
**Proposition 4** (Computational and \(\pm\)-Basis Change).: _The l.h.s of (42) is given graphically as follows:_
(rule H1)
_Remark 6_.: In what follows, we denote by \(\overset{(-)}{=}\) the one-step equivalence transform between two diagrams, where \((-)\) denotes the name of the rule applied.
**Proposition 5**.: _From Proposition 4 and rule C3 we recover the r.h.s of (42)._
(rule H2)
**Proposition 6**.: _The l.h.s of (43) is given graphically as:_
(45)
_and the r.h.s is given as:_
(46)
**Proposition 7**.: _The relationship (44) is derived graphically as follows:_
_(i)._
(rule R1)
(rule R2)
_(iv)._\(C(X)\colon Z\otimes\mathds{1}\to Z\otimes\mathds{1}\)__
_(v)._\(C(X)\colon\mathds{1}\otimes Z\to Z\otimes Z\)__
_(vi)._\(C(X)\colon Z\otimes Z\to\mathds{1}\otimes Z\)__
_(vii)._\(C(X)\colon Z\otimes X\to Z\otimes X\)__
_(viii)._\(C(X)\colon X\otimes Z\to-Y\otimes Y\)__
Proof: \(\forall N\in S\) (set of stabilizers of the state \(\left|\psi\right\rangle\)) we have
\[N\left|\psi\right\rangle=\left|\psi\right\rangle,\]
then
\[U\left|\psi\right\rangle=UN\left|\psi\right\rangle=UNU^{\dagger}\ U\left|\psi \right\rangle,\]
which means \(UNU^{\dagger}\) is a stabilizer of \(U\left|\psi\right\rangle\). Hence, stabilizers transform covariantly \(N\to UNU^{\dagger}\).
Example 3: We will now consider an extended example related to Bell state stabilizers. Let us now verify by tensor contraction that
\[\{X\otimes X,-Y\otimes Y,Z\otimes Z\},\]
are indeed stabilizers. The following circuit produces the Bell state \(\left|\phi^{+}\right\rangle\).
\[\begin{array}{c}\includegraphics[scale=0.5]{figure/1-1}\end{array}\]
Such that,
\[\left|\phi^{+}\right\rangle=\frac{\left|00\right\rangle+\left|11\right\rangle }{\sqrt{2}}=u\left|00\right\rangle. \tag{47}\]
We begin by applying Proposition 8. First, consider the stabilizers of the initial state, as follows.
\[Z\otimes Z\left|00\right\rangle=\left|00\right\rangle, \tag{48}\]
\[Z\otimes\mathds{1}\left|00\right\rangle=\left|00\right\rangle, \tag{49}\]
\[\mathds{1}\otimes Z\left|00\right\rangle=\left|00\right\rangle. \tag{50}\]
Hence, the stabilizers of \(\left|00\right\rangle\) form the Abelian group (51).
\[\{\mathds{1},Z\otimes\mathds{1},\mathds{1}\otimes Z,Z\otimes Z\}. \tag{51}\]
Acting on the initial state \(\left|00\right\rangle\) with \(u\) yields:
\[\begin{array}{c}u\left|00\right\rangle=\left|\phi^{+}\right\rangle,\\ N\left|00\right\rangle=\left|00\right\rangle.\end{array} \tag{52}\]
From Proposition 8 we know that,
\[\begin{array}{c}N\left|00\right\rangle=\left|00\right\rangle,\\ u\left|00\right\rangle=uNu^{\dagger}\left|\phi^{+}\right\rangle.\end{array} \tag{53}\]
Hence, Proposition 8 asserts that if (51) is a stabilizer of the initial state, then (54) is a stabilizer of the final state \(u\,|00\rangle\).
\[\left\{\mathds{1},u(Z\otimes\mathds{1})u^{\dagger},u(\mathds{1}\otimes Z)u^{ \dagger},u(Z\otimes Z)u^{\dagger}\right\}. \tag{54}\]
We then must determine e.g. \((Z\otimes Z)u^{\dagger}=u^{\dagger}\sigma^{\prime}\). This is done graphically as follows:
This simply shows that \(Z\otimes Z\) is a stabilizer of \(|\phi^{+}\rangle\), as expected. We now consider recovering the stabilizer group (54). Consider first the evolution of \(Z\otimes\mathds{1}\). We arrive at the following rewrites:

This arrives graphically at \(X\otimes X\) being a stabilizer of \(|\phi^{+}\rangle\). We now consider the evolution of \(\mathds{1}\otimes Z\),

This arrives graphically at \(Z\otimes Z\) being a stabilizer of \(|\phi^{+}\rangle\). Finally, we consider the evolution of \(Z\otimes Z\),

This arrives graphically at \(-Y\otimes Y\) being a stabilizer of \(|\phi^{+}\rangle\). Hence we recover the set of stabilizers of \(|\phi^{+}\rangle\), \(S_{|\phi^{+}\rangle}=\{\mathds{1},X\otimes X,-Y\otimes Y,Z\otimes Z\}\).
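The same calculation can be reproduced numerically. In the sketch below (ours; the ordering of the matrix product for \(u\) is chosen so that the Hadamard acts first), the stabilizers of \(|00\rangle\) are pushed through the circuit and checked against \(S_{|\phi^{+}\rangle}\).

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = (X + Z) / np.sqrt(2)
CX = np.kron(np.diag([1, 0]), I) + np.kron(np.diag([0, 1]), X)

u = CX @ np.kron(H, I)                          # Bell-state circuit of eq. (47)
bell = u @ np.array([1, 0, 0, 0], dtype=complex)
assert np.allclose(bell, np.array([1, 0, 0, 1]) / np.sqrt(2))

# Push the stabilizers (51) of |00> through the circuit: N -> u N u^dagger, eq. (54).
initial = {"Z1": np.kron(Z, I), "1Z": np.kron(I, Z), "ZZ": np.kron(Z, Z)}
evolved = {k: u @ N @ u.conj().T for k, N in initial.items()}

assert np.allclose(evolved["Z1"], np.kron(X, X))     # Z x 1  ->  X x X
assert np.allclose(evolved["1Z"], np.kron(Z, Z))     # 1 x Z  ->  Z x Z
assert np.allclose(evolved["ZZ"], -np.kron(Y, Y))    # Z x Z  -> -Y x Y

# Each evolved operator indeed stabilizes u|00>.
for N in evolved.values():
    assert np.allclose(N @ bell, bell)
print("Recovered S_{|phi+>} = {1, XX, -YY, ZZ} by Heisenberg evolution.")
```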
## 5 Confluence and Termination in the Heisenberg picture
In this section we show that the ZX-rewrite system is confluent, terminal and hence canonical in the Heisenberg picture for stabilizer quantum mechanics. As a corollary we present a graphical proof of the Gottesman-Knill theorem. We first show that the Heisenberg stabilizer ZX-rewrite system is weakly terminal by showing the existence, for any term, of a terminal element that is equal to its \(\star\)-closure. Moreover, we show that reaching the terminal element using the Heisenberg rules takes at most \(\mathcal{O}(\operatorname{poly}(t))\) steps for a \(t\)-gate stabilizer evolution in the Heisenberg picture. Furthermore, changing the order of rule applications does not modify the terminal element. This convergence to a single terminating element implies that the Heisenberg stabilizer rewrite system is confluent. Hence the rewrite system is already canonical.
Let \(P,\tilde{P}\in\mathcal{P}_{n}\) be two Pauli strings, and let \(U\in\mathcal{C}_{n}\) be a Clifford circuit. We denote by \(A\) the ZX-diagram representing \([UPU^{\dagger}]\), and by \(D\) the ZX-diagram representing \([\tilde{P}]\), such that \([UPU^{\dagger}]=[\tilde{P}]\).
Proposition 9 (Weak termination of the Heisenberg stabilizer ZX-rewrite system): _For every Pauli string mapped under Clifford conjugation (\(A\in\{A\xrightarrow{\star}\}\)) there exists a unique terminal element (\(D=\{D\xrightarrow{\star}\}\)) that is derivable using the rules of the rewrite system._
Proof: The proof follows from noting that \(D\) is derivable from \(A\) under the Heisenberg rules, thus \(D\) is in the \(\star\)-closure of \(A\), \(D\in\{A\xrightarrow{\star}\}\). Moreover, a Pauli string is represented graphically as a countable collection of the diagrams representing the \(\mathds{1}\), \(X\), \(Y\), and \(Z\) gates, thus it is represented uniquely in the rewrite system. Hence, \(D\) is equal to its \(\star\)-closure (\(D=\{D\xrightarrow{\star}\}\)), as there are no Heisenberg rules that reduce \(D\) any further.
Proposition 10 (Graphical rewrite upper bound): _Given a Pauli string under Clifford conjugation of an \(n\)-qubit circuit with \(t\) gates, the rewrite system terminates, producing a Pauli string in \(\mathcal{O}\left(t\cdot n\right)\) steps._
To establish Proposition 10, we consider two cases. The first is for single-qubit Clifford gates acting by conjugation on Pauli operators. The second is the case of the 2-qubit Clifford gate (\(C(X)\)), acting by conjugation on 2-qubit Pauli strings.
Proof: First case: \(U\) contains only a single 1-qubit Clifford gate. The only Pauli terms that will be affected are the terms where \(U\) is positioned. Using the rules defined in § 4.3, the number of graphical rewrites in this case is 1; moreover, if there are \(l\) such gates in the circuit, the number of rewrites is \(l\). Hence, if the Clifford circuit acts on \(n\) qubits with \(l\) 1-qubit gates, the number of graphical rewrites is upper bounded by \(l\cdot n\).
Second case: \(U\) contains only 2-qubit Clifford gates. Using the rules derived in § 4.3 and Appendix 0.A, the number of graphical rewrites is 1. For \(g\) such gates in the circuit, the number of graphical rewrites will be \(g\). Hence, for a Clifford circuit acting on \(n\) qubits with \(g\) 2-qubit gates, the number of graphical rewrites is upper bounded by \(\frac{1}{2}\cdot g\cdot n\). This establishes the desired upper bound of \((\frac{1}{2}\cdot g+l)\cdot n\).
Proposition 9 established that for a given Pauli string under Clifford conjugation there exists a unique terminal element that is equal to its own \(\star\)-closure.
Remark 8: Ambiguity in the graphical calculation means that there exist different sequences of Heisenberg rule applications; these converge to the same terminal element, which implies that the system is confluent. It follows that the Heisenberg stabilizer ZX-rewrite system is already canonical.
Irrespective of the order of Heisenberg rule application, the system converges to the unique terminal element, hence:
Proposition 11 (Confluence of the Heisenberg stabilizer ZX-rewrite system): _The stabilizer ZX-rewrite system is confluent in the Heisenberg picture._
The Gottesman-Knill theorem follows directly: in § 4.3 we showed how the generators of the Pauli group are mapped graphically under Clifford conjugation, and that, irrespective of the path of graphical calculation, the number of rewrites is upper bounded by \((\frac{1}{2}\cdot g+l)\cdot n\).
Corollary 1 (Graphical Proof of the Gottesman-Knill Theorem): _The Heisenberg evolution of an initial state \(\left|0^{n}\right\rangle\) acted on by a \(t\)-gate Clifford circuit is determined graphically by not more than \((\frac{1}{2}\cdot g+l)\cdot n\) rewrites, thereby recovering the Gottesman-Knill theorem [38]._
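One illustrative way to mirror this graphical Pauli tracking classically is sketched below. The circuit, helper names (`decompose`, `apply_gate`) and the use of small \(2\times 2\)/\(4\times 4\) matrix decompositions in place of diagram rewrites are our own choices; the point is only that each generator is updated gate by gate, in line with the rewrite counting above.

```python
import numpy as np
from itertools import product

paulis = {
    "I": np.eye(2, dtype=complex),
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.diag([1, -1]).astype(complex),
}
H = (paulis["X"] + paulis["Z"]) / np.sqrt(2)
CX = np.kron(np.diag([1, 0]), np.eye(2)) + np.kron(np.diag([0, 1]), paulis["X"])

def decompose(M, width):
    """Write a signed Pauli matrix as (phase, letters)."""
    for letters in product("IXYZ", repeat=width):
        Q = np.array([1], dtype=complex)
        for letter in letters:
            Q = np.kron(Q, paulis[letter])
        for ph in (1, -1, 1j, -1j):
            if np.allclose(M, ph * Q):
                return ph, letters
    raise ValueError("not a Pauli string")

def apply_gate(gen, gate, qubits):
    """Conjugate one generator (phase, letters) by a 1- or 2-qubit Clifford gate."""
    phase, letters = gen
    local = np.array([1], dtype=complex)
    for q in qubits:
        local = np.kron(local, paulis[letters[q]])
    ph, new = decompose(gate @ local @ gate.conj().T, len(qubits))
    letters = list(letters)
    for q, letter in zip(qubits, new):
        letters[q] = letter
    return phase * ph, tuple(letters)

# Heisenberg simulation of a 3-qubit GHZ circuit on the 2n generators (41),
# applying the gates in time order: H on qubit 0, then CX(0,1), then CX(1,2).
n = 3
circuit = [(H, (0,)), (CX, (0, 1)), (CX, (1, 2))]
gens = [(1, tuple("Z" if i == j else "I" for i in range(n))) for j in range(n)] \
     + [(1, tuple("X" if i == j else "I" for i in range(n))) for j in range(n)]

for gate, qubits in circuit:
    gens = [apply_gate(g, gate, qubits) for g in gens]

# The images of the Z generators are the GHZ stabilizer generators XXX, ZZI, IZZ.
assert {"".join(l) for ph, l in gens[:n] if ph == 1} == {"XXX", "ZZI", "IZZ"}
for phase, letters in gens:
    print(phase, "".join(letters))
```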
## 6 Applications to Penalty Functions
In this section we aim to use the stabilizer ZX-calculus as a tool to derive Hamiltonian penalty functions. We will make use of the telescoping construction [48] to retrieve the Hamiltonian that has as its lowest-energy state a stabiliser state. A telescoping construction is a Hamiltonian that encodes as its ground state the output state of a given quantum circuit. As an application, we retrieve the parent Hamiltonian of the \(n\)-qubit Greenberger-Horne-Zeilinger (GHZ) state. The telescoping construction reads as follows:
Definition 20 (Telescopic construction [48]): For a given quantum circuit \(U\), the non-negative telescoping Hamiltonian is written as \(H_{\text{teles}}=UP_{0}U^{\dagger}\geq 0\) where \(P_{0}\) is a sum of projectors defined as
\[P_{0}=\sum_{j=1}^{n}\left|1\right\rangle\!\left\langle 1\right|_{j}=\frac{n}{2 }\left(\mathds{1}-\frac{1}{n}\sum_{j=1}^{n}Z_{j}\right). \tag{55}\]
To incorporate the gate sequence we consider (55) as the initial Hamiltonian, with low energy state \(\left|0^{n}\right\rangle\). We will act on (55) with a sequence of gates \(\prod_{l=1}^{p}U_{l}\) corresponding to the circuit being simulated as
\[h(k)=\left(\prod_{l=1}^{k\leq p}U_{l}\right)P_{0}\left(\prod_{l=1}^{k\leq p}U_ {l}\right)^{\dagger}\geq 0 \tag{56}\]
which is isospectral with (55).4 It follows that \(h(k)\) is non-negative \(\forall 1\leq k\leq p\) and that:
Footnote 4: I.e. \(P_{0}\left|x\right\rangle=\left|x\right|_{1}\left|x\right\rangle\) for \(x\in\{0,1\}^{\times n}\) and \(\left|\cdot\right|_{1}\) the Hamming weight.
\[\ker\{h(p)\}=\operatorname{span}\{\Pi_{l=1}^{p}U_{l}\left|0^{n}\right\rangle\}. \tag{57}\]
Proposition 12 (Clifford Penalty Functions [48]): _Let \(\Pi_{l=1}^{p}U_{l}\left|0^{n}\right\rangle\) be a \(p\)-gate quantum circuit preparing state \(\left|\psi\right\rangle\) on \(n\)-qubits and containing \(\mathcal{O}(\text{poly}(\ln n))\) non-Clifford gates. Then there exists a non-negative Hamiltonian \(H\) on n-qubits with \(\text{poly}(p,n)\) cardinality, gap \(\Delta\) and \(\ker\{H\}=\text{span}\{\Pi_{l=1}^{p}U_{l}\left|0^{n}\right\rangle\}\). In particular, if \(\left|\phi\right\rangle\) is such that_
\[0\leq\left\langle\phi\right|H\left|\phi\right\rangle<\Delta \tag{58}\]
_it follows that_
\[1-\frac{\left\langle\phi\right|H\left|\phi\right\rangle}{\Delta}\leq\left| \left\langle\phi\right|\psi\right\rangle\left|^{2}\leq 1-\frac{\left\langle\phi \right|H\left|\phi\right\rangle}{\max\{H\}}. \tag{59}\]
The Hamiltonian described in Proposition 12 is given exactly in Definition 20. To understand the utility of this, let us recall the following:
Definition 21 (Operator cardinality [48]): Let \(H=\sum_{k}h_{k}\bigotimes_{j=1}^{n}\sigma_{j}^{\alpha_{j}(k)}\) for coefficients \(h_{k}\) and Pauli strings \(\bigotimes_{j=1}^{n}\sigma_{j}^{\alpha_{j}(k)}\). Then \(|H|_{\text{card}}=\sum_{k}(h_{k})^{0}\).
We then note that the telescopic construction gives rise to an \(n\)-term penalty function.
Lemma 1 (Clifford Gate Cardinality Invariance [48]): _For \(C\) a Clifford gate and \(h\in\text{span}_{\mathbb{R}}\left\{\bigotimes_{l=1}^{n}\sigma_{l}^{\alpha_{l}} \mid\alpha_{l}=0,1,2,3\right\}\), \(\left|h\right|_{\text{card}}=\left|ChC^{\dagger}\right|_{\text{card}}\)._
Example 4 (Parent Hamiltonian of the GHZ state): The 3-qubit GHZ state is given as:
\[\left|\text{GHZ}\right\rangle=G\left|000\right\rangle=\frac{1}{\sqrt{2}} \left(\left|000\right\rangle+\left|111\right\rangle\right), \tag{60}\]
where \(G=\left(H\otimes\mathds{1}\otimes\mathds{1}\right)\cdot\left(C(X)\otimes \mathds{1}\right)\cdot\left(\mathds{1}\otimes C(X)\right)\). Graphically (60) reads as:
Using the telescoping construction, Definition 20, we write the Hamiltonian that has the GHZ state as its ground state as: \(H_{\text{GHZ}}=GP_{0}G^{\dagger}\). Graphically it reads as:
We retrieve \(H_{\text{GHZ}}\) by doing the following graphical calculations:
\[H_{\text{GHZ}}= \tag{61}\]
Using the Heisenberg rules the first term in (61) is rewritten as:
\[\begin{array}{c}\includegraphics[scale=0.4]{10.eps}\end{array} \tag{62}\]
which is equal to \(\frac{1}{2}\mathds{1}-\frac{1}{2}(X\otimes X\otimes X)\). Finally, following the same calculation as in (62), we get
\[H_{\rm GHZ}=\frac{3}{2}\mathds{1}-\frac{1}{2}(X\otimes X\otimes X+Z\otimes Z \otimes\mathds{1}+\mathds{1}\otimes Z\otimes Z) \tag{63}\]
This expression can be generalized for arbitrary \(n\) qubits:
\[H_{\rm GHZ}^{(n)}=\frac{1}{2}\left(1-\bigotimes_{j=1}^{n}X_{j}\right)+\frac{1}{2}\sum_{j=1}^{n-1}\left(1-Z_{j}Z_{j+1}\right), \tag{64}\]
where \(j\) is the qubit index.
As a penalty function, (64) is non-negative and has a one-dimensional kernel spanned by the \(n\)-qubit GHZ state. Returning back to Proposition 12, the circuit which prepares the GHZ state defines the parent Hamiltonian (64).
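As a numerical cross-check (ours, not part of the original derivation), the telescoping Hamiltonian for the 3-qubit case can be assembled directly; we order the matrix product for the preparation circuit so that the Hadamard acts first, which reproduces (60), (63) and the kernel property (57).

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = (X + Z) / np.sqrt(2)
CX = np.kron(np.diag([1, 0]), I) + np.kron(np.diag([0, 1]), X)

def kron(*ops):
    out = np.array([1], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# Circuit preparing the 3-qubit GHZ state (time order: H on qubit 1, CX(1,2), CX(2,3)).
G = kron(I, CX) @ kron(CX, I) @ kron(H, I, I)
ghz = G @ np.eye(8, dtype=complex)[:, 0]
assert np.allclose(ghz, (np.eye(8)[:, 0] + np.eye(8)[:, 7]) / np.sqrt(2))   # eq. (60)

# Telescoping Hamiltonian (Definition 20): H_GHZ = G P0 G^dagger, with P0 from eq. (55).
P0 = sum(kron(*[np.diag([0, 1]) if j == q else I for j in range(3)]) for q in range(3))
H_ghz = G @ P0 @ G.conj().T

# Compare with the closed form (63).
H_expected = 1.5 * np.eye(8) - 0.5 * (kron(X, X, X) + kron(Z, Z, I) + kron(I, Z, Z))
assert np.allclose(H_ghz, H_expected)

# The kernel (ground space) is one-dimensional and spanned by the GHZ state, eq. (57).
evals, evecs = np.linalg.eigh(H_ghz)
assert np.isclose(evals[0], 0) and evals[1] > 0.5
assert np.isclose(abs(np.vdot(evecs[:, 0], ghz)), 1)
print("Telescoping construction reproduces (63); kernel = span{|GHZ>}.")
```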
## 7 Conclusion
Soundness underpins any graphical language, be it rudimentary template rewrite systems [49; 50; 51] or the ZX-calculus [14; 15]. In general, whereas every template is proven by matrix equality, not every template can be derived graphically. Indeed, the ZX-calculus is generally known to be incomplete [35]. Furthermore, the ZX-calculus remains incomplete for the Clifford+T subset of quantum mechanics [34]; it is, however, complete for certain restrictions and modifications. Completeness has been shown for stabiliser quantum mechanics [17], real-valued stabiliser quantum mechanics [31] and when representing some general matrix equations [30]. Conceptually, one can understand completeness and soundness as a disjoint partitioning of ZX-diagrams. Each disjoint set is related under ZX-equivalent rewrites.
Although these results are established, little attention has been given to the confluence of restricted forms of the ZX-system. Interestingly, the stabilizer ZX-rewrite system is confluent and also terminal in the Heisenberg picture. This is a strong condition: the ZX-system is Heisenberg-canonical. The next steps include universal completion by considering the addition of T-gates [30] as well as further graphical derivation of Hamiltonian penalty functions (such as the telescopes [48] for use in variational quantum computation). One can conceptualize these results as partitioning Heisenberg ZX-diagrams into a disjoint set of directed graphs, each terminating at a Pauli term under Heisenberg ZX-rewrites.
## 8 Acknowledgements
AN acknowledges support from the research project No. 014/20, _Leading Research Center: Quantum Computing_. The authors thank Soumik Adhikary and Richik Sengupta for feedback.
|
2308.11269 | Quantum-Inspired Machine Learning: a Survey | Quantum-inspired Machine Learning (QiML) is a burgeoning field, receiving
global attention from researchers for its potential to leverage principles of
quantum mechanics within classical computational frameworks. However, current
review literature often presents a superficial exploration of QiML, focusing
instead on the broader Quantum Machine Learning (QML) field. In response to
this gap, this survey provides an integrated and comprehensive examination of
QiML, exploring QiML's diverse research domains including tensor network
simulations, dequantized algorithms, and others, showcasing recent
advancements, practical applications, and illuminating potential future
research avenues. Further, a concrete definition of QiML is established by
analyzing various prior interpretations of the term and their inherent
ambiguities. As QiML continues to evolve, we anticipate a wealth of future
developments drawing from quantum mechanics, quantum computing, and classical
machine learning, enriching the field further. This survey serves as a guide
for researchers and practitioners alike, providing a holistic understanding of
QiML's current landscape and future directions. | Larry Huynh, Jin Hong, Ajmal Mian, Hajime Suzuki, Yanqiu Wu, Seyit Camtepe | 2023-08-22T08:29:09Z | http://arxiv.org/abs/2308.11269v2 | # Quantum-Inspired Machine Learning: a Survey
###### Abstract
Quantum-inspired Machine Learning (QiML) is a burgeoning field, receiving global attention from researchers for its potential to leverage principles of quantum mechanics within classical computational frameworks. However, current review literature often presents a superficial exploration of QiML, focusing instead on the broader Quantum Machine Learning (QML) field. In response to this gap, this survey provides an integrated and comprehensive examination of QiML, exploring QiML's diverse research domains including tensor network simulations, dequantized algorithms, and others, showcasing recent advancements, practical applications, and illuminating potential future research avenues. Further, a concrete definition of QiML is established by analyzing various prior interpretations of the term and their inherent ambiguities. As QiML continues to evolve, we anticipate a wealth of future developments drawing from quantum mechanics, quantum computing, and classical machine learning, enriching the field further. This survey serves as a guide for researchers and practitioners alike, providing a holistic understanding of QiML's current landscape and future directions.
Quantum-inspired Machine Learning, Quantum Machine Learning, Quantum Computing, Quantum Algorithms, Machine Learning, Tensor Networks, Dequantized Algorithms, Quantum Circuit Simulation
## 1 Introduction
The field of Quantum-Inspired Machine Learning (QiML) has seen substantial growth, garnering interest from researchers globally. A specialized subset of Quantum Machine Learning (QML), QiML focuses on developing classical machine learning algorithms inspired by principles of quantum mechanics within a classical computational framework, commonly referenced as the "classical-classical" quadrant of QML categorization as shown in Figure 1. QiML represents a multifaceted research domain, with investigations pushing to exceed conventional, classical state-of-the-art results, or exploring the expressivity provided by quantum formulations.
To situate QiML within the context of QML, we briefly expound upon the latter. QML, more broadly, sits at the fascinating intersection of quantum computing and machine learning. The dominant research field concerns the "classical-quantum" domain, and explores the use of quantum hardware to accelerate and enhance machine learning strategies. Here, two challenges present in classical machine learning are addressed. First, the increasing size and complexity of datasets in many fields have created computational challenges that classical machine learning struggles to manage efficiently. Secondly, quantum computing offers the potential to solve complex problems that are currently infeasible with classical computation methods [1]. Practical evaluation of QML algorithms on actual quantum hardware, however, is currently limited by factors such as the limited number of qubits, high error rates in quantum gates, difficulty in maintaining quantum states (decoherence), and challenges associated with quantum error correction [2]. As a result, the QML landscape has been primarily shaped by theoretical considerations, with recent advancements in noisy intermediate-scale quantum (NISQ) devices providing an early, empirical glimpse into the potential of full-scale quantum computing [3]. As such, the true extent and impact of QML on the machine learning landscape remain ongoing research topics.
QiML has evolved in tandem with QML research. Instances of often cited research domains include tensor network quantum simulations and dequantized algorithms [4, 5]. However, in contrast with QML, discoveries in QiML are frequently backed by numerical evidence, facilitated by the independence from quantum hardware constraints, thereby enabling easier quantitative evaluations compared to other QML subsets. While QiML research is flourishing, current survey literature often neglects this field, with a larger focus given to QML as a whole. Often, QiML is only briefly mentioned or treated superficially [6, 7, 8, 5, 6],
Fig. 1: Four approaches to QML [4], based on whether quantum or classical data/processing is used. QiML methods describe the “CC” mode (blue shading).
[9], [10]. Practical use cases of QiML, their applications, and comparative analyses with standard classical benchmarks often remain unexplored. This points to a crucial need for a standalone, in-depth review of QiML as a distinct field.
Responding to this literature gap, our survey aims to provide a comprehensive, integrated discussion on the various facets of QiML. We aim to provide an accessible and comprehensive overview of how QiML is used in practice, detailing its recent advancements and giving readers an understanding of the field's progression. The reader should note that while exploring QiML methods from the lens of quantum mechanics and categorizing methods based on sources of inspiration would be of interest, this survey approaches the field from an applications perspective. The contributions of this survey are to provide an overview of the progression of QiML and its research directions in recent years, and to identify the future directions of QiML research. Specifically, they are:
* To highlight and classify existing QiML methods;
* To establish a concrete definition of QiML, accounting for its multi-directional research trends;
* To discuss the practical applications of these methods, specifically identifying the tasks to which QiML techniques have currently been applied;
* To discuss the limiting factors of QiML in practice, and;
* To explore and discuss the potential future directions of QiML research.
## 2 Quantum-Inspired Machine Learning
In this section, we dissect the QiML term into its constituent parts -- "quantum-inspired" and "machine learning" -- for a comprehensive understanding. Following this, we unpack the QiML term itself, integrating our analysis of its individual components. Our goal is to address inconsistencies in past literature, identify common threads, and propose a precise definition for QiML. This serves as a reliable compass, guiding future research in this dynamic field.
### _"Quantum-Inspired"_
The term "quantum-inspired" was introduced by Moore and Narayanan [11] in the context of computing for the first time in 1995. The term was used to differentiate between two types of computational methods: "pure" quantum computation and quantum-inspired computation. The former is firmly rooted in quantum mechanical concepts, such as standing waves, interference, and coherence, and can only be executed on a quantum computer. On the other hand, "quantum-inspired" computing refers to practical methods that have been derived from these concepts. These methods do not require a quantum computer, but rather utilize classical computers or algorithms to simulate quantum effects and achieve computational advantages. The categorization of these two types of methods was significant at the time, since quantum computing methods were not yet practically realizable due to the technological inability of implementing stable quantum systems with robust error correction [12]. The potential of quantum computing was known and acknowledged, as well as the challenges of practical implementation; many pure quantum algorithms still currently operate at a theoretical level [2]. This led to research efforts in exploring the utilization of quantum mechanics in classical computing.
Han and Kim were among the pioneers giving name to quantum-inspired algorithms, and proposed the "quantum-inspired evolutionary algorithm" (QIEA). Extending upon prior works [13], [14], QIEAs describe evolutionary algorithms that are inspired by quantum mechanical concepts, specifically employing "Q-bits" and "Q-gates" to model populations and evolutionary processes [15]. These are the classical analogues of quantum qubits and quantum gates; Q-bits serve as probabilistic representations that maintain population diversity among individuals through the linear superposition of states, while Q-gates act as variational operators, driving individuals towards an optimal solution by modifying the probability distributions associated with Q-bits. Since these operate on classical computers, quantum phenomena are not observed, e.g. state collapse does not occur when the Q-bit state is "measured". Nevertheless, significant performance improvements were observed when compared to its classical counterpart, highlighting the benefits of using quantum-inspired methods. QIEAs form a subset of quantum-inspired metaheuristics, which are optimization techniques developed for finding approximate or near-optimal solutions inspired by quantum mechanics principles but are implemented on classical computers. This research area has demonstrated notable advancements in both performance and computational efficiency compared to classical methods over recent decades [16], [17], [18], [19]. Quantum-inspired Genetic Algorithms (QGA) [13], [20] also belong to this category, employing Q-bits for probabilistic solution encoding and Q-gates for genetic operations like crossover and mutation. These features enable QGAs to maintain greater population diversity [15]. Quantum-inspired Particle Swarm Optimization (QPSO) [21] represents particles as Q-bits, allowing for a probabilistic representation of the solution space instead of fixed positions and velocities. By maintaining particles in a superposition of states, the search space diversity and exploration capabilities are enhanced. Similarly, quantum-inspired ant colony optimization (QACO) [22] utilizes Q-bits and Q-gates to improve the classical ACO's pheromone update mechanism. The traversal of the solution space via these mechanisms has been shown to augment exploration and exploitation capabilities [23], [24]. Quantum Simulated Annealing (QSA) [25] employs Monte Carlo methods to efficiently simulate classical annealing processes. Additional quantum concepts are also utilized, such as quantum random walks for efficient exploration of the energy landscape, quantum phase estimation for optimizing the annealing schedule, and quantum tunneling to escape local minima.
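To make the Q-bit/Q-gate idea concrete, the following toy sketch applies a QIEA-style update to the OneMax problem. The rotation step size, population size and clipping bounds are illustrative choices of ours and are not taken from Han and Kim's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, generations = 20, 10, 100
delta = 0.05 * np.pi                     # Q-gate rotation step (an illustrative choice)

def onemax(x):
    return int(x.sum())

# Each individual is a vector of Q-bit angles theta; the probability of
# observing bit 1 is sin(theta)^2, a classical encoding of superposition.
theta = np.full((pop_size, n_bits), np.pi / 4)
best_x, best_f = None, -1

for _ in range(generations):
    # "Measure" the Q-bits: collapse each individual to a classical bit string.
    probs = np.sin(theta) ** 2
    samples = (rng.random((pop_size, n_bits)) < probs).astype(int)
    fitness = np.array([onemax(s) for s in samples])
    gen_best = fitness.argmax()
    if fitness[gen_best] > best_f:
        best_f, best_x = fitness[gen_best], samples[gen_best].copy()
    # Q-gate update: rotate each Q-bit toward the corresponding bit of the
    # best solution found so far (the variational operator of a QIEA).
    direction = np.where(best_x == 1, 1.0, -1.0)
    theta = np.clip(theta + delta * direction, 0.01, np.pi / 2 - 0.01)

print("best OneMax fitness:", best_f, "of", n_bits)
```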
It is important to note that the research goals in developing quantum-inspired methods can vary. Typically, the objectives include achieving faster and more stable convergence, enhancing the effectiveness of the solution search, or a combination of both [26].
Some works also introduce unique additions to their specification of "quantum-inspired". Moore and Narayanan [11] further characterized quantum-inspired computing algorithms by stipulating that their output need only be
verified by classical computing methods, and not requiring a quantum computer for the task. Manju and Nigam [17] constrain quantum-inspired computing to methods for solving engineering problems with cyclic or recurrent behavior. Further, some studies emphasize the use of quantum bits or Q-bits as the defining aspect of being quantum-inspired [26, 27], while others characterize different quantum concepts to be the core underpinning of the "quantum inspired" definition. These minor distinctions in defining "quantum inspired" do not significantly impact the overall understanding of the term, as the core concept remains consistent with the generalization outlined earlier.
### _"Machine Learning"_
Machine learning, as a field, has evolved significantly since its inception, with its definition and scope adapting to reflect advancements in techniques and computational capabilities. Initially conceptualized in the 1950s as the concept of "programming computers to learn from experience" by Arthur Samuel [28], machine learning has witnessed several transformative milestones. Throughout these developments, the core idea of learning from data to make predictions or decisions has persisted, although the specific models, techniques, and learning paradigms have diversified over time. Several foundational definitions of machine learning have been offered, each emphasizing various aspects of what constitutes the field. For example, [29] highlights the importance of learning from experience and improving performance over time, thus underscoring the iterative and adaptive nature of machine learning. Goodfellow et al. [30] position machine learning as a sub-field of artificial intelligence, concerned with building algorithms that rely on a collection of examples of some phenomenon to be useful. A few authors define machine learning rather as a broad collection of algorithms that learn patterns over feature spaces [31, 32]. Clear commonalities have been agreed upon, such as the importance of learning from patterns inherent within data [29, 30] via automatic processes [31, 33] without explicit programming [34], and the ability to improve performance based on the experience or data it is exposed to. While the specific techniques and approaches within machine learning have evolved and diversified, its fundamental definition has remained consistently cohesive and integral to the field.
### _"Quantum-Inspired Machine Learning"_
Both quantum computing and machine learning have gained tremendous popularity in the past couple of decades with significant advances made compared to when they were first introduced. More recently, researchers have turned their attention to an inner subset at the intersection of these fields; "quantum-inspired machine learning" (QiML). The interpretation of this term by researchers, however, has varied significantly in the literature. Hence, this subsection aims to explore different perspectives and approaches taken by researchers in their attempts to define "quantum-inspired machine learning," shedding light on the nuances and challenges involved in characterizing this rapidly developing field. We will thus also argue for a concrete description of the quantum-inspired machine learning term, which will better promote clarity, and assist in characterizing methodologies within this, and related domains.
The terms "quantum-inspired", or "quantum-like" machine learning have, in early reviews, focused on describing optimization techniques inspired by quantum phenomena, and run on classical computers [32], likely in the absence of parameterized, iterative pattern recognition models more akin to classical machine learning methods. Other authors have corroborated this idea, with more explicit mention of machine learning rather than optimization [6, 7, 9, 10]. These definitions are consistent with the expected combination of the individual terms "quantum-inspired" and "machine learning". In recent years, the umbrella of QiML has been extended to include tensor network machine learning models that parallel classical methodologies [35, 36], as well as "dequantized" algorithms, which aim to develop classical analogs of quantum algorithms with comparable computational advantages [4]. Although tensor network techniques agree with the concept of quantum inspiration (given that tensor decompositions aspire to model complex quantum wavefunctions), the classification of dequantized algorithms as "quantum-inspired" may not wholly align with prevailing definitions of QiML. In the case of dequantized algorithms, the "inspiration" derives not from quantum mechanics itself but from the scrutiny of claims of quantum supremacy, or rather by the quantum algorithms themselves. In this sense, the relationship to quantum mechanics is indirect; the focus is on understanding the potential of quantum algorithms in classical settings.
While the prevailing definitions of QiML offer some flexibility and accommodate a variety of quantum applications to machine learning, they may also inadvertently lead to imprecise categorizations. Notably, certain techniques might be labeled as QiML when they may not accurately belong to this domain. Consider, for example, a quantum kernel used in a quantum support vector machine or a quantum variational circuit, methods typically relying on quantum circuit implementation, that have been simulated on classical hardware, as in [37, 38] using tools like Qiskit [39] or PennyLane [40]. These algorithms are now implemented on classical hardware and do not necessarily require quantum hardware, yet are often considered part of the broader quantum machine learning (QML) context, not QiML. The main reason for this distinction is not immediately clear, but may be related to the nature of the algorithms themselves, and that quantum circuit implementations have been colloquially considered as QML, regardless of implementation device. Another explanation could be tied to the efficiency of implementation: if the translation of a quantum computing process to a classical setting is not intrinsically efficient, regardless of the circuit or input size, and does not scale beyond the limits of classical computation, the technique could be more appropriately classified as QML.
Addressing these inconsistencies is crucial for a more accurate understanding of the evolving landscape of QiML, as it will allow researchers to effectively build upon previous work, avoid confusion, and foster more precise communication within the academic community. We now attempt to pin down an appropriate definition of QiML. In doing so, the terms QML and QiML warrant clarification to better understand their roles in the interdisciplinary area between
quantum computing and machine learning. Broadly, QML represents the integration of quantum computing and machine learning principles, forming an umbrella term that includes related concepts within this intersection, one of which is QiML. To gain a deeper understanding of the broader QML landscape, we examine a typology introduced by Schuld and Petruccione [4], depicted in Figure 1. This typology categorizes QML based on whether the data source is a classical (C) or quantum (Q) system and whether the data processing device is classical (C) or quantum (Q). Four distinct categories emerge, each highlighting a different aspect of the interplay between quantum computing and machine learning:
* **CC:** Classical data and classical processing. This category is where QiML primarily resides. Methods in this category are inspired by quantum mechanics but still use classical data and processing.
* **QC:** Quantum data and classical processing. In this category, machine learning techniques are used to analyze quantum data or measurement outcomes from quantum systems and experiments.
* **CQ:** Classical data and quantum processing. Quantum computing is utilized to process conventional data, often with the objective of developing quantum algorithms for data mining.
* **QQ:** Quantum data and quantum processing. This category explores processing quantum data with quantum devices, either by inputting experimental measurements into a quantum computer or using a quantum computer to simulate and subsequently analyze the behavior of quantum systems.
These four categories have been corroborated and utilized in various studies within the field of quantum machine learning, as demonstrated in the literature [2, 10, 41]. Some researchers have also proposed a contemporary categorization scheme that reflects the evolving landscape of QML [7, 10]. Here, QML is divided into three distinct categories:
* **Quantum Machine Learning**: all quantum adaptations of classical ML algorithms that necessitate quantum computation for their execution.
* **Quantum-inspired Machine Learning**: the integration of quantum computing concepts to enhance traditional machine learning algorithms, without requiring actual quantum computation.
* **Hybrid Classical-Quantum Machine Learning**: the fusion of classical and quantum algorithms, aiming to optimize performance and minimize learning costs by exploiting the strengths of both approaches.
These perspectives are consistent with prior QiML definitions, while also specifying classical computation as a crucial component. To consolidate this aspect with conventional understanding, and also accommodate for the existence of dequantized algorithms, we now propose a concrete definition of "quantum-inspired machine learning".
**Definition 1:** Quantum-Inspired Machine Learning (QiML) refers to machine learning algorithms that draw inspiration from principles of quantum mechanics or quantum computing constructs, but do not necessitate quantum processing and can be executed on classical hardware.
This definition encapsulates the following pivotal aspects:
* The foundational principles of machine learning;
* Inspiration from quantum phenomena or quantum computing, including quantum algorithms, thus acknowledging the significance of dequantized algorithms within QiML;
* The capacity for problem representation and computation on classical hardware, thereby incorporating the ability to simulate quantum hardware.
Our aim is to provide clarity and guidance for future research in this rapidly evolving field. This definition acknowledges the growing intersection between quantum computing and machine learning and fosters a more focused and constructive discourse within the QiML landscape.
## 3 Selection Criteria
This review aims to collate and analyze recent advancements in the rapidly evolving field of quantum-inspired machine learning. We focus on contemporary studies; only studies published in recent years (2017-2023) are considered. A systematic search strategy was employed to retrieve literature from multiple databases, including Google Scholar and other academic search engines. The search was performed using the following key phrases, chosen to refine the search space, as these research areas have been considered QiML in the literature:
* "quantum-inspired",
* "dequantized algorithms",
* "tensor networks",
* "variational quantum algorithms"
We combine each of these terms with "machine learning" to ensure the retrieval of publications specifically relevant to QiML. We also exclude terms that may capture works rooted in combinatorial optimization, using the key phrase "-"combinatorial optimization"" and other related exclusion terms such as "-"heuristic algorithms"", "-"optimization algorithms"" and "-"combinatorial algorithms"", with additional manual vetting of papers that bypass this filter; in this review, we consider such methods disjoint from machine learning, and more aligned with the field of metaheuristics. The initial search returned 2,300 results.
Categorizing the selected studies was based on the following criteria. First, papers were categorized by the types of techniques involved in accomplishing machine learning tasks, focusing on the various quantum-inspired algorithms, methodologies, and models used in these studies and the unique aspects that contribute to the advancement of machine learning tasks. Secondly, the applicability and practical implementation of these quantum-inspired methods is of importance. As a fast-growing field, it is essential to discern not only the theoretical advancements but also where these methods have been applied in empirical experimentation. Identifying such applications provides insights into the current state of quantum-inspired machine learning and its potential for solving complex problems across various domains; works that demonstrate these real-world applications and emphasize the practical benefits and challenges of quantum-inspired techniques were given priority. Works that introduced novel methods, algorithms, or theoretical insights into QiML were also particularly valued.
## 4 Current QiML Techniques
We classify works in QiML based on their underlying methodologies and purposes. Three overarching categories have been identified: "Dequantized Algorithms" (Section 4.1), "Tensor Networks" (Section 4.2), and "Quantum Variational Algorithm Simulation" (Section 4.3). Methods that do not fall into these categories are grouped and labeled as "Other QiML Methods" (Section 4.4). Within each category, methods can be further grouped by their application domains, with the exception of Dequantized Algorithms, which are grouped by method due to the lack of practical application domains. Figure 3 presents these categorizations in an organizational chart.
### _Dequantized Algorithms_
In recent years, algorithms termed "dequantized" have been developed. These algorithms aim to determine whether the speedups claimed by quantum machine learning algorithms are genuinely attributed to the inherent power of quantum computation or are merely a byproduct of strong assumptions regarding input and output encoding/decoding (i.e. via state preparation). By scrutinizing these assumptions and their implications, researchers can more accurately assess the practicality of quantum algorithms and identify potential drawbacks that may impede their real-world applications. This line of inquiry has led to the identification of classical counterparts to quantum-based machine learning methods, which can be implemented on classical hardware and achieve performance levels comparable to their quantum analogs. By designing algorithms that can be efficiently executed on classical resources, the costly and arguably impractical requirements for quantum state preparation and hardware can both be circumvented [42].
Ewin Tang presented the seminal work in this area in 2019 [43]. In an attempt to prove that no classical algorithm could match the runtime of the quantum recommendation system developed by Kerenidis and Prakash [44], the author was instead able to devise a classical algorithm that matched the fast runtime. The classical algorithm runs in time \(O(poly(k)log(mn))\), only polynomially slower than the quantum algorithm's \(O(poly(k)polylog(mn))\), so no exponential quantum advantage is observed. This was a significant result, as this quantum algorithm was once thought to be one of the most promising candidates for demonstrably exponential improvements in quantum machine learning [3]. The key observation is that the exponential speedup achieved by the quantum algorithm relies on specific input assumptions about the user-product preference matrix. These were, in general, the prevailing assumptions for many quantum machine learning algorithms at the time: that either computing the corresponding quantum state \(\ket{v}\) from some input vector \(v\) is arbitrarily fast, or that the necessary quantum states come into the system already prepared. However, the author emphasizes that the cost of state preparation is nontrivial; if state preparation cannot be performed in poly-logarithmic time, then the claimed exponential speedup of the associated quantum algorithm cannot be realized in practice, as it does not account for the time constraints imposed by state preparation.
The quantum recommendation system by Kerenidis and Prakash [44] describes an explicit data structure used to quickly prepare quantum states, as seen in Figure 4. This is implemented via a set of binary search trees, one for each row (user) in the preference matrix. Quantum states are encoded using the square amplitudes of the coefficients of the given row vector \(\ket{v}\) in a hierarchical manner: \(\ket{v}=\sum_{i=1}^{n}v_{i}\ket{i}\) for \(v\in\mathbb{C}^{n}\). The amplitudes and signs are stored as leaf nodes in the tree; the root node contains \(\|v\|^{2}\). This structure can be seen as a classical analogue to
\begin{table}
\begin{tabular}{|c|l|} \hline
**Variable** & **Description** \\ \hline
\(\|A\|_{F}\) & Frobenius norm of matrix \(A\) \\ \hline
\(\|A\|_{2}\) & Spectral norm of \(A\) \\ \hline
\(\|A\|\) & Operator norm of \(A\) \\ \hline
\(\|v\|\) & L2 norm of vector \(v\) \\ \hline
\(k\) & Rank of matrix \(A\) \\ \hline
\(\kappa\) & Condition number of \(A\) \\ \hline
\(d\) & Degree of the polynomial associated with \(A\) in the QSVT \\ \hline
\(\sigma\) & Singular value threshold \\ \hline
\(\varepsilon\) & Approximation error \\ \hline
\end{tabular}
\end{table}
TABLE I: Variable Descriptions for Dequantized Algorithms
Fig. 2: Yearly plot displaying the frequency of works found via the search term "quantum-inspired" AND "machine learning" -"combinatorial optimization"
QRAM 1. Sampling from this requires randomly traversing down through each subsequent child node with probability proportional to its weight. This data structure allows for \(O(log(mn))\) query access, as well as time \(O(log^{2}(mn))\) for online updates to preference matrix elements \(A_{ij}\). However, the model assumes quantum access to this data structure with prepared quantum states. Tang demonstrated that this data structure employed to meet the state preparation assumptions can also fulfill classical L2-norm sampling assumptions, allowing for "sample and query" access (SQ access) to the data. As a result, a classical algorithm aiming to "equal" the performance of the quantum algorithm can take advantage of these assumptions. In this way, more reasonable comparisons can be made between QML algorithms with state preparation assumptions and classical counterparts with sampling assumptions.
Footnote 1: We disregard the exact notion of QRAM for quantum settings, as the data structure is only interfaced with classically; no quantum operations are involved.
Specifically, SQ access to \(x\) -- \(SQ(x)\) is possible if, in time \(O(T)\), the given data structure supports the following operations:
* Sample: sample the \(i\)-th entry from \(x\) with probability \(|x_{i}|^{2}/\|x\|^{2}\);
* Query: output entries \(x_{i}\) (\(i\in[n]\)) of \(x\), and;
* Norm: determine the L2-norm \(\|x\|\).
A data structure only supporting query operations over \(x\) will be denoted as \(Q(x)\). These assumptions serve as the classical analogue to quantum state preparation, and are often easier to satisfy.
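To make the SQ access model concrete, the following sketch (our own illustration, not code from [43] or [44]) implements a binary-tree structure over a real vector supporting the Sample, Query and Norm operations above; internal nodes store sums of squared entries, so drawing an index takes \(O(\log n)\) steps, mirroring the per-row trees of the QRAM-like structure in Figure 4.

```python
import numpy as np

class SQVector:
    """Binary-tree structure giving sample/query/norm (SQ) access to a vector.
    Internal nodes hold the sum of squared entries of the leaves below them, so
    sampling an index i with probability x_i^2 / ||x||^2 is a walk down the tree."""

    def __init__(self, x):
        self.x = np.asarray(x, dtype=float)
        self.size = 1
        while self.size < len(self.x):            # pad leaf level to a power of two
            self.size *= 2
        self.tree = np.zeros(2 * self.size)
        self.tree[self.size:self.size + len(self.x)] = self.x ** 2
        for i in range(self.size - 1, 0, -1):     # fill internal nodes bottom-up
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def norm(self):
        return np.sqrt(self.tree[1])              # root holds ||x||^2

    def query(self, i):
        return self.x[i]                          # entry x_i

    def sample(self, rng=np.random):
        node = 1
        while node < self.size:                   # descend proportionally to subtree weight
            left = self.tree[2 * node]
            node = 2 * node if rng.random() * self.tree[node] < left else 2 * node + 1
        return node - self.size

# usage: empirical sampling frequencies track x_i^2 / ||x||^2
x = np.array([3.0, -4.0, 0.0, 1.0])
sq = SQVector(x)
freqs = np.bincount([sq.sample() for _ in range(10000)], minlength=4) / 10000
print(sq.norm(), freqs)   # norm ~ 5.10, freqs ~ [0.346, 0.615, 0.0, 0.038]
```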
This work also introduces three dequantized linear algebra protocols, which have been widely used by subsequent studies to dequantize various machine learning techniques.
**Protocol 1: Inner Product Estimation** (from Proposition 4.2 in [43]): For \(x,y\in\mathbb{C}^{n}\) given \(SQ(x)\) and \(Q(y)\), the inner product \(\langle x|y\rangle\) can be approximated to additive error \(\varepsilon\|x\|\|y\|\), with probability at least \(1-\delta\), using \(O((1/\varepsilon^{2})log(1/\delta))\) queries and samples.
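The estimator behind Protocol 1 is simple: for an index \(i\) drawn with probability \(|x_{i}|^{2}/\|x\|^{2}\), the quantity \(\|x\|^{2}y_{i}/x_{i}\) is an unbiased estimate of \(\langle x,y\rangle\), and a median of means over repeated draws yields the stated guarantee. A minimal sketch for real vectors follows (our own illustration; the shot counts are illustrative constants, not the exact ones from [43]).

```python
import numpy as np

def estimate_inner_product(x, y, eps=0.05, delta=0.01, rng=np.random.default_rng()):
    """Estimate <x, y> using x only through SQ-style operations (sample, query, norm)."""
    probs = x ** 2 / np.sum(x ** 2)              # stands in for the tree-based sampler
    norm_x_sq = np.sum(x ** 2)
    shots = int(np.ceil(6 / eps ** 2))           # O(1/eps^2) samples per mean
    reps = int(np.ceil(9 * np.log(1 / delta)))   # O(log(1/delta)) means for the median
    means = []
    for _ in range(reps):
        idx = rng.choice(len(x), size=shots, p=probs)
        means.append(np.mean(norm_x_sq * y[idx] / x[idx]))
    return np.median(means)

x = np.array([3.0, -4.0, 0.0, 1.0])
y = np.array([1.0, 2.0, 5.0, -1.0])
print(estimate_inner_product(x, y), x @ y)       # both close to -6.0
```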
**Protocol 2: "Thin-Matrix" Multiplication** (from Proposition 4.3 in [43]): For \(w\in\mathbb{C}^{k}\) and \(V\in\mathbb{C}^{n\times k}\), given \(SQ(V^{\dagger})^{2}\)
Fig. 3: Categories of QML Methods
(sample and query access to columns of \(V\)) and \(Q(w)\), sampling from the linear combination of \(V\)'s columns \(Vw\) can be done in \(O(k^{2}C(V,w))\) query and time complexity, where
\[C(V,w):=\frac{\sum_{i=1}^{k}\|w_{i}V^{(i)}\|^{2}}{\|Vw\|^{2}} \tag{1}\]
The cancellation measure \(C(V,w)\) quantifies the extent of cancellation in the matrix-vector product \(Vw\), with a value of 1 indicating no cancellation in orthogonal columns and undefined values in cases of maximum cancellation, such as linearly dependent columns.
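The rejection-sampling idea behind Protocol 2 can be sketched directly (our own illustration, written against dense numpy arrays but touching \(V\) only through column norms, column samples and entry queries). The acceptance rate works out to \(1/(kC(V,w))\), which is where the cancellation measure enters the complexity.

```python
import numpy as np

def sample_from_Vw(V, w, rng=np.random.default_rng()):
    """Draw an index i with probability (Vw)_i^2 / ||Vw||^2 by rejection sampling."""
    k = V.shape[1]
    col_sq_norms = np.sum(V ** 2, axis=0)                  # ||V^(l)||^2 for each column
    col_probs = w ** 2 * col_sq_norms
    col_probs = col_probs / col_probs.sum()
    while True:
        l = rng.choice(k, p=col_probs)                     # pick a column, weighted by |w_l|^2 ||V^(l)||^2
        col_sq = V[:, l] ** 2
        i = rng.choice(len(col_sq), p=col_sq / col_sq.sum())   # L2-sample an index from that column
        row = V[i, :]                                      # k entry queries
        accept = (row @ w) ** 2 / (k * np.sum((w * row) ** 2))
        if rng.random() < accept:                          # Cauchy-Schwarz guarantees accept <= 1
            return i

# usage: empirical frequencies should track (Vw)_i^2 / ||Vw||^2
rng = np.random.default_rng(0)
V, w = rng.normal(size=(50, 3)), np.array([0.5, -1.0, 2.0])
target = (V @ w) ** 2 / np.sum((V @ w) ** 2)
freqs = np.bincount([sample_from_Vw(V, w, rng) for _ in range(5000)], minlength=50) / 5000
print(np.round(target[:5], 3), np.round(freqs[:5], 3))
```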
**Protocol 3: Low-Rank Approximation** (from Theorem 4.4 in [43]): For matrix \(A\in\mathbb{C}^{m\times n}\), a threshold \(k\), and an error parameter \(\varepsilon\), a low-rank approximation of \(A\) can be described with probability at least \(1-\delta\), in \(O(poly(k,1/\varepsilon))\) query and time complexity. Specifically in [43], the low-rank description is \(SQ(S,\hat{U},\hat{\Sigma})\) for matrices \(S\in\mathbb{C}^{\ell\times n}\), \(\hat{U}\in\mathbb{C}^{\ell\times k}\), and \(\hat{\Sigma}\in\mathbb{C}^{k\times k}\) (with \(\ell=poly(k,1/\varepsilon)\)). \(S\) is a normalized sub-matrix of \(A\), while \(\hat{U}\) and \(\hat{\Sigma}\) result from the singular value decomposition of \(S\). This implicitly describes the low-rank approximation of \(A\), denoted by \(D\):
\[D:=A\hat{V}\hat{V}^{\intercal}=A(S^{\dagger}\hat{U}\hat{\Sigma}^{-1})(S^{ \dagger}\hat{U}\hat{\Sigma}^{-1})^{\dagger}. \tag{2}\]
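A minimal FKV-style sketch of Protocol 3 (our own simplification: it computes the SVD of the sampled sub-matrix directly rather than sub-sampling its columns as the full algorithm does) is shown below; rescaling the sampled rows keeps \(\mathbb{E}[S^{\dagger}S]=A^{\dagger}A\), so the top right singular vectors of \(S\) approximate those of \(A\).

```python
import numpy as np

def low_rank_description(A, k, ell, rng=np.random.default_rng()):
    """Row-sample A into S, take S's top-k SVD, and return V_hat = S^T U_hat Sigma_hat^{-1},
    an implicit description of the low-rank approximation D = A V_hat V_hat^T."""
    m, _ = A.shape
    row_sq = np.sum(A ** 2, axis=1)
    probs = row_sq / row_sq.sum()
    rows = rng.choice(m, size=ell, p=probs)
    S = A[rows, :] / np.sqrt(ell * probs[rows, None])       # rescaled sampled rows
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    return (S.T @ U[:, :k]) / s[:k]                         # approximate top-k right singular vectors

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 300))   # rank-8 matrix
V_hat = low_rank_description(A, k=8, ell=60, rng=rng)
D = A @ V_hat @ V_hat.T
print(np.linalg.norm(A - D) / np.linalg.norm(A))            # relative error; shrinks as ell grows
```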
#### 4.1.1 Recommendation Systems
We now briefly describe the dequantized recommendation systems approach in [43]. Like its quantum counterpart, the algorithm finds a low-rank approximation of the hidden preference matrix \(A\), from which the preferences for a user \(i\) are sampled. Specifically, given sample and query access \(SQ(A)\) to \(A\) in the data structure, the algorithm leverages L2-norm sampling to obtain a succinct description of matrix \(A\): \(SQ(S,\hat{U},\hat{\Sigma})\) (Protocol 3). SQ access is shown to be available to these components; \(\hat{V}:=S^{\intercal}\hat{U}\hat{\Sigma}^{-1}\) is obtained implicitly (the approximate largest \(k\) right singular vectors of \(A\)). The low-rank approximation of \(A\) is thus \(D\), the projection of \(A\) onto the subspace of the right singular vectors \(\hat{V}\), satisfying an acceptable error bound. Given \(SQ(S,\hat{U},\hat{\Sigma})\), \(SQ(D)\) and thus \(SQ(D_{i})\) are available. Protocol 1 is used to approximate \(A_{i}\hat{V}\) using \(k\) inner product estimations. Rejection sampling then allows sampling from \(D_{i}=A_{i}\hat{V}\hat{V}^{\intercal}\) in time independent of the dimensions (Protocol 2). The algorithm's quality bounds on recommendations match those of the quantum version. Further, this regime produces an exponential improvement over the next best classical recommendation systems algorithm, with a running time of \(\tilde{O}(\|A\|_{F}^{24}/\sigma^{24}\varepsilon^{12})\)3.
Footnote 3: The notation \(\tilde{O}(\cdot)\) is used to suppress poly-logarithmic factors in variables.
Finally, the author provides a resulting guideline for performing any comparisons between QML and classical algorithms:
_When QML algorithms are compared to classical ML algorithms in the context of finding speedups, any state preparation assumptions in the QML model should be matched with L2-norm sampling assumptions in the classical ML model_[43].
Improvements over this work in the recommendation systems task come after several advancements in the dequantized algorithms field, particularly with the advent of the dequantized Quantum Singular Value Transform (QSVT) (Section 4.1.6). Chia et al. [45] used their dequantized QSVT framework to obtain a complexity bound of \(\tilde{O}(\|A\|_{F}^{6}\|A\|^{10}/\sigma^{16}\varepsilon^{6})\), which introduces a dependence on \(\|A\|\), but otherwise substantially improves upon [43]. Chepurko et al. [46] present an algorithm that achieves a lower-bound run-time complexity of \(\Omega(k^{3}/\varepsilon^{6})\) for generating a rank-\(k\) approximation \(M\) of an input matrix \(A\), guaranteeing that \(||A-M||_{F}^{2}\leq(1+\varepsilon)||A-A_{k}||_{F}^{2}\). Their methodology
Fig. 4: QRAM-like Data Structure for a matrix \(A\in\mathbb{C}^{2\times 4}\)[45]. Sampling from this structure involves traversing down the tree based on L2-norm probability. The terminating nodes contain the sign of each element \(A(i,j)\). An example is shown for \(A(2,3)\) (blue highlighting), sampled with probability \(\|a_{2}\|^{2}/\|A\|_{F}^{2}\times|A(2,3)|^{2}/\|a_{2}\|^{2}\).
employs '\(\lambda\)-importance' sampling sketches that over sample ridge leverage score sketching, which relies on certain assumptions regarding the size of the Frobenius norm of \(A\), \(||A||_{F}\), and the residual \(||A-A_{k}||_{F}\). The authors note that this is directly comparable to [43], which requires a run-time that is polynomially large in \(k\), \(\kappa\), and \(\varepsilon^{-1}\). While providing relative error bounds for low-rank approximations, their method has an additive error, with a bound more like \(||A-A_{k}||_{F}+\varepsilon||A||_{F}\).
Bakshi and Tang [47] similarly employed their dequantized QSVT framework, and achieved a complexity of \(\tilde{O}(\|A\|_{F}^{4}/\sigma^{9}\varepsilon^{2})\). The authors note that direct comparison with [46] is difficult due to the additional assumptions on the size of \(\|A_{k}\|_{F}\) and the residual \(\|A-A_{k}\|_{F}\) required by ridge leverage score sketching. Introducing the bound \(k\leq\|A\|_{F}^{2}/\sigma^{2}\) converts these error bounds to be more "QSVT-like", which elucidates the run-time in [46] to be \(\tilde{O}(\|A\|_{F}^{6}/\sigma^{6}\varepsilon^{6})\), which is improved upon in terms of \(\|A\|_{F}\) and \(\varepsilon\), yet loses a factor of \(\sigma^{3}\).
#### 4.1.2 Clustering and Dimensionality Reduction
Following their work in [43], Tang further went on to dequantize both quantum supervised clustering and principal component analysis (PCA) [48]. In that work, the dequantization model (SQ access input model with L2-norm sampling assumptions) is formalized, which directly comes from the prior work on the recommendation systems problem:
_A quantum protocol's \(\mathbb{S}:O(T)\)-time state preparation of \(\ket{\phi_{1}},\ldots,\ket{\phi_{c}}\rightarrow\ket{\psi}\) is "dequantized" if a classical algorithm of the form \(\mathbb{C}_{\mathbb{S}}:O(T)\)-time \(SQ(\phi_{1},\ldots,\phi_{c})\to SQ^{v}(\psi)\) can be described with similar guarantees to \(\mathbb{S}\) up to polynomial slowdown [48]._
Under this framework, only a quadratic speedup (due to amplitude amplification) is observed for the quantum nearest-centroid supervised clustering method proposed in [49], which aims to find the distance from a point \(u\in\mathbb{C}^{d}\) to the centroid of a cluster of \(n\) points given by the rows of \(V\in\mathbb{C}^{n\times d}\) (and let \(\bar{V}\) be \(V\) with unit-normalized rows). Assuming SQ access to both \(u\) and the rows of \(V\), the problem reduces to approximating \(\|M^{\dagger}w\|\), where:
\[M:=\begin{bmatrix}u/\|u\|\\ \frac{1}{\sqrt{n}}\bar{V}\end{bmatrix}\in\mathbb{C}^{(n+1)\times d}\quad\text{and}\quad w:=\begin{bmatrix}\|u\|\\ -\|V_{1}\|/\sqrt{n}\\ \vdots\\ -\|V_{n}\|/\sqrt{n}\end{bmatrix}\in\mathbb{C}^{n+1}, \tag{3}\]
so that \(M^{\dagger}w=u-\frac{1}{n}\sum_{i}V_{i}\).
At this point, the quantum version performs a swap test. This can be effectively dequantized by reformulating \(\|M^{\dagger}w\|^{2}\) as \(w^{\dagger}MM^{\dagger}w\), which can be expressed as the inner product \(\langle a|b\rangle\) of two tensors \(a,b\in\mathbb{C}^{d\times(n+1)\times(n+1)}\) constructed from \(M\) and \(w\):
\[a:=\sum_{i=1}^{d}\sum_{j=1}^{n+1}\sum_{k=1}^{n+1}M_{ji}\|M_{k,\star}\|\ket{i} \ket{j}\ket{k}; \tag{4}\]
\[b:=\sum_{i=1}^{d}\sum_{j=1}^{n+1}\sum_{k=1}^{n+1}\frac{w_{j}^{\dagger}w_{k}M_{ ki}}{\|M_{k,\star}\|}\ket{i}\ket{j}\ket{k}. \tag{5}\]
Then, given \(SQ(a)\) and using Protocol 1, the desired approximation is achieved in \(O(T(\|w\|^{2}/\varepsilon^{2})log(1/\delta))\) time.
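The reduction can be checked numerically: with \(M\) and \(w\) built as in Eq. (3), \(M^{\dagger}w\) is exactly \(u\) minus the centroid, so its norm is the desired distance (a small sanity-check sketch of our own, for real data).

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 20, 5                                   # n cluster points in R^d
V = rng.normal(size=(n, d))
u = rng.normal(size=d)

V_bar = V / np.linalg.norm(V, axis=1, keepdims=True)                  # unit-normalized rows
M = np.vstack([u[None, :] / np.linalg.norm(u), V_bar / np.sqrt(n)])
w = np.concatenate([[np.linalg.norm(u)], -np.linalg.norm(V, axis=1) / np.sqrt(n)])

print(np.linalg.norm(M.T @ w), np.linalg.norm(u - V.mean(axis=0)))    # identical values
```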
The Quantum Principal Component Analysis (QPCA) method [50] was dequantized via a similar methodology to that in [43], since the low-rank approximation effectively produces a dimensionality reduction of the given matrix. In QPCA, the algorithm outputs estimates for both the top-\(k\) squared singular values \(\hat{\sigma}_{1}^{2},\ldots,\hat{\sigma}_{k}^{2}\) and the corresponding singular vectors \(\ket{\hat{v}_{1}},\ldots,\ket{\hat{v}_{k}}\), using density matrix exponentiation to find these principal components. The dequantized version uses Protocol 3 to find \(SQ(S,\hat{U},\hat{\Sigma})\) and produces the approximate large singular vectors \(\hat{V}:=S^{\dagger}\hat{U}\hat{\Sigma}^{-1}\). The \(\hat{\sigma}_{i}\)'s are taken from \(\hat{\Sigma}\), and the \(\hat{v}_{i}\)'s are given implicitly: \(\hat{v}_{i}=S^{\dagger}\hat{U}_{\star,i}/\hat{\sigma}_{i}\).
In the QSVT dequantization method of [45], the bounds in [48] for supervised clustering were reproduced, and improved for PCA from \(\tilde{O}(\|X\|_{F}^{36}/\|X\|^{12}\lambda_{k}^{12}\eta^{6}\varepsilon^{12})\) to \(\tilde{O}(\|X\|_{F}^{6}/\|X\|^{2}\lambda_{k}^{2}\eta^{6}\varepsilon^{6})\), a substantial improvement in almost all parameters.
#### 4.1.3 Matrix Inversion and Solving Linear Systems
The task of matrix inversion is an essential subroutine in many machine learning optimization operations. Solving the linear system \(Ax=b\) for some \(A\in\mathbb{R}^{m\times n}\) typically requires solving the pseudoinverse of \(A\): \(\bar{x}=A^{+}b\) (finding an \(\bar{x}\) minimizing \(\|Ax-b\|\) when \(A\) is not invertible). In classical settings, computing the pseudoinverse can be computationally difficult, especially for large or ill-conditioned matrices. Known methods of doing so, such as the Moore-Penrose pseudoinverse, may take, at worst, close to \(O(n^{3})\) time (where \(m\approx n,m<n\)) [51]; a severe bottleneck in large data applications [52].
In the quantum setting, matrix inversion is performed using the famed HHL algorithm [53]. Dequantizing this in the sparse matrix inversion setting is generally difficult due to its BQP-completeness [53]. However, many machine learning contexts operate in low-rank settings, wherein dequantization becomes possible, as demonstrated by the low-rank matrix inversion algorithms independently proposed by Gilyen et al. [54] and Chia et al. [55]. These algorithms exploit the low-rank structure of the input matrix to perform the inversion efficiently using classical techniques. In [54], classical low-rank matrix inversion was shown to match the claimed exponential speed-up. Given \(SQ(A)\) and \(SQ(b)\), both in constant time, \(SQ(A^{+}b)\) is approximated in time \(O(polylog(mn))\), equivalent to the quantum version presented in [53], thus showing no quantum advantage. Specifically, the approximation to \(\bar{x}\) is given by:
\[\begin{split}\tilde{x}&\approx\sum_{\ell=1}^{k}\frac{1}{ \tilde{\sigma}_{\ell}^{4}}\ket{R^{\dagger}w^{(\ell)}}\bra{R^{\dagger}w^{(\ell)}} \ket{A^{\dagger}b}\\ &\approx\sum_{\ell=1}^{k}\frac{1}{\tilde{\sigma}_{\ell}^{2}}\ket{ \hat{v}_{A}^{(\ell)}}\bra{\hat{v}_{A}^{(\ell)}}\ket{A^{\dagger}b}\approx(A^{ \dagger}A)^{+}A^{\dagger}b=A^{+}b\end{split} \tag{6}\]
\(SQ(A)\) allows \(r\) row indices to be L2-norm sampled from \(A\in\mathbb{C}^{m\times n}\); a row index \(s\in[r]\) is then sampled uniformly, from which \(c\) column indices are L2-norm sampled to form \(C\) (Protocol 3); \(R\in\mathbb{C}^{r\times n}\) is implicitly defined from the \(r\) sampled rows. The Singular Value Decomposition
(SVD) of \(C\) is then computed to obtain its left singular vectors \(u^{(1)},\ldots,u^{(k)}\) and singular values \(\overline{\sigma}_{1},\ldots,\overline{\sigma}_{k}\), which are shown to be good approximations of the singular values of \(R\), and implicitly of \(A\). Specifically, this approximation is good with probability at least \(1-\eta\) given \(r\geq\frac{4\ln(2n/\eta)}{\varepsilon^{2}}\). The approximate right singular vectors of \(A\) are then \(\tilde{v}_{A}^{(\ell)}:=R^{\dagger}u^{(\ell)}/\overline{\sigma}_{\ell}\). An estimate of \(\langle\tilde{v}_{A}^{(\ell)}|A^{\dagger}|b\rangle\) is obtained via Protocol 1 to additive error, since \(\langle\tilde{v}_{A}^{(\ell)}|A^{\dagger}|b\rangle=\text{Tr}\left[\left|A^{\dagger}b\right\rangle\!\left\langle\tilde{v}_{A}^{(\ell)}\right|\right]\) is an inner product. Protocol 2 then produces samples from the linear combination of these to achieve the approximate solution \(\bar{x}\).
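Leaving out the sampling data structures, the shape of the computation can be seen in the following dense sketch (our own simplification, not the algorithm of [54] or [55] as stated): approximate top singular pairs are extracted from a row-sampled sub-matrix and used to assemble \(\tilde{x}\approx\sum_{\ell}\hat{\sigma}_{\ell}^{-2}\hat{v}^{(\ell)}\langle\hat{v}^{(\ell)},A^{\dagger}b\rangle\approx A^{+}b\).

```python
import numpy as np

def approx_pseudoinverse_solve(A, b, k, r, rng=np.random.default_rng()):
    """Row-sample A into R, use R's top-k SVD for approximate right singular pairs of A,
    and return x ~ sum_l sigma_l^{-2} v_l <v_l, A^T b>  ~  (A^T A)^+ A^T b."""
    m, _ = A.shape
    row_sq = np.sum(A ** 2, axis=1)
    p = row_sq / row_sq.sum()
    idx = rng.choice(m, size=r, p=p)
    R = A[idx, :] / np.sqrt(r * p[idx, None])      # rescaled so that E[R^T R] = A^T A
    U, s, _ = np.linalg.svd(R, full_matrices=False)
    sigma, V = s[:k], (R.T @ U[:, :k]) / s[:k]     # approximate top singular pairs of A
    Atb = A.T @ b
    return V @ ((V.T @ Atb) / sigma ** 2)

rng = np.random.default_rng(2)
A = rng.normal(size=(2000, 5)) @ rng.normal(size=(5, 100))   # low-rank system matrix
x_true = rng.normal(size=100)
b = A @ x_true
x = approx_pseudoinverse_solve(A, b, k=5, r=80, rng=rng)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))         # residual; shrinks as r grows
```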
Chia et al. [55] present a similar method. The objective is to approximate \(A^{\dagger}A\) from the singular values and singular vectors of a small sub-matrix of \(A\), from which the solution \(A^{-1}b\) can then be sampled in sub-linear time, given \(SQ(A)\). The approximate singular values and left singular vectors are produced from \(A\) using a sub-sampling technique similar to that in [54], arriving at \(A\in\mathbb{C}^{m\times n}\to R\in\mathbb{C}^{p\times n}\to C\in\mathbb{C}^{p\times p}\to W\in\mathbb{C}^{k\times k}\), where \(W\)'s left singular vectors \(u^{1},\ldots,u^{k}\) and singular values \(\hat{\sigma}_{1},\ldots,\hat{\sigma}_{k}\) are good approximations of \(A\)'s. \(V\in\mathbb{C}^{n\times k}\) is then formed from column vectors \(\frac{S^{\dagger}}{\hat{\sigma}_{i}}\hat{u}_{i}\), with \(D\in\mathbb{C}^{k\times k}\) being a diagonal matrix containing the \(\hat{\sigma}_{i}\)'s. This allows for the description of \(\hat{A}^{\sim 2}=VD^{-2}V^{\dagger}\), the approximation of the square matrix \(A^{\dagger}A\) in the normal equation \((A^{\dagger}A)^{+}A^{\dagger}b\). The problem is now to find \(\bar{x}=VD^{-2}V^{\dagger}A^{\dagger}b\). An extension of Protocols 1 and 2 is proposed that shows that approximating \(x^{\dagger}Ay\) up to additive error is possible given \(SQ(A)\), \(SQ(x)\) and \(SQ(y)\). This is used to find \(w\in\mathbb{C}^{k}=V^{\dagger}A^{\dagger}b\). The authors then show that both querying an entry in \(V_{i},D^{-2}w\) (via Protocol 1) and sampling the solution \(VD^{-2}w\) (via Protocol 2) can be done in sub-linear time.
The works by [54] and [55] achieve running times of \(\tilde{O}(k^{6}\|A\|_{F}^{6}\|A\|^{16}/\sigma^{22}\varepsilon^{6})\) (where \(k\leq\frac{\|A\|_{F}^{2}}{\sigma^{2}}\) is rank(\(A\))) and \(\tilde{O}(\|A\|_{F}^{6}\|A\|^{22}/\sigma^{28}\varepsilon^{6})\) respectively. The dependence on \(k\) in the former makes direct comparison between these complexities difficult, though the algorithm in [54] operates in the more restricted setting where \(A\) is strictly rank \(k\). Chia et al. go on to employ their dequantized QSVT [45] (see Section 4.1.6) to re-derive an algorithm for the task that runs in the same time as [55], in a more general setting where \(\sigma\) can be chosen as an arbitrary threshold, and not necessarily the minimum singular value of \(A\).
Several subsequent works have produced quantum-inspired classical low-rank linear regression algorithms that improve on these complexity bounds. Gilyen et al. [56], in the same setting, provide two algorithms: one that outputs a sample of \(x\) measured in the computational basis (an index \(i\) with probability \(|x_{i}|^{2}/\|x\|^{2}\)) and another that outputs an entry of \(x\), in \(\tilde{O}\left(\|A\|_{F}^{6}\|A\|^{6}/\sigma^{12}\varepsilon^{4}\right)\) and \(\tilde{O}\left(\|A\|_{F}^{6}\|A\|^{2}/\sigma^{8}\varepsilon^{4}\right)\) time, respectively, a large improvement on all prior methods. The problem considered is extended to the ridge regression setting: finding a solution \(x\) minimizing \(f(x):=\frac{1}{2}\left(\|Ax-b\|^{2}+\lambda\|x\|^{2}\right)\). The key contribution is the construction of a sparse description of the solution vector, after which a stochastic gradient descent (SGD) based optimization process exploits sample-query accesses \(SQ(A)\) and \(Q(b)\) to find \(x\). This avoids explicitly computing the matrix pseudo-inverse, resulting in large improvements in running time. The authors note that this method is viable only when \(b\) is sparse, but show that via matrix sketching techniques, this requirement can be bounded up to a certain degree. This work highlights the advantages of using iterative approaches in conjunction with quantum-inspired linear algebra techniques.
Chepurko et al. [46] proposed an algorithm for the ridge regression problem using alternative techniques from randomized numerical linear algebra. A key distinction is the use of Projection-Cost Preserving (PCP) sketches, analyzed via ridge leverage score sampling techniques, as opposed to the L2-norm sampling common in prior quantum-inspired algorithms. The authors argue that sampling according to the squared row norms of a matrix \(A\) amounts to sampling from a distribution close to the leverage score distribution; consequently, by oversampling by a factor of the square of the condition number (\(\kappa^{2}(A)\)), the same guarantees can be achieved as with leverage score sampling. The PCP sketch produces a low-rank approximation of \(A\), which is then processed through an SVD and a QR factorization to produce a small sub-matrix decomposition of \(A\); a linear system over this decomposition is then solved with a conjugate gradient method, as opposed to inverting the matrix directly. A running time of \(\tilde{O}(\|A\|_{F}^{4}\log(c)/\sigma^{8}\varepsilon^{4})\) is achieved, where \(\sigma\) is the minimum singular value of \(A\), \(c\) is the number of columns of \(A\), and assuming a sizable regularization parameter \(\lambda=\Theta(\|A\|_{2}^{2})\)4; roughly a \(\|A\|_{F}^{2}/\|A\|^{2}\) improvement over [56].
Footnote 4: The symbol \(\Theta\) denotes that the behavior of \(\lambda\) is tightly bound to the squared 2-norm of matrix \(A\), neither growing much faster nor much slower than that term.
Shao and Montanaro [57] propose two algorithms for solving linear systems: one based on the randomized Kaczmarz method, and one based on the randomized coordinate descent method. Both are iterative methods. The authors explore their algorithms under various settings of \(A\), such as when it is dense, sparse, or semi-positive definite (SPD), and note that their latter algorithm only operates in the SPD setting. The former algorithm focuses on the dual form of the Kaczmarz method, specifically iterating on \(y\), obtained by introducing \(y\) such that \(x=A^{\intercal}y\) for consistent linear systems. The goal is to find a sparse approximate solution for \(y\), which, when plugged back into \(x=A^{\intercal}y\), provides an approximate solution for the original linear system, similar to [56]. This sparsity, guaranteed by the iterative method, simplifies querying an entry of \(x\), and an entry of \(x\) can be sampled using rejection sampling. The resulting algorithm has a complexity bound of \(\tilde{O}(\|A\|_{F}^{6}\|A\|^{2}/\sigma^{8}\varepsilon^{2})\), which improves in \(\varepsilon\) over [46], but suffers in \(\|A\|_{F}^{6}\). In [56], the authors recognize the roughly \(\|A\|_{F}^{2}/(\sigma^{4}\varepsilon^{2})\) improvement on their work, when \(\lambda=0\) and \(Ax^{*}=b\) exactly (without any regression parameter). The authors note the similarity between this method and the ones introduced in [57]; the randomized Kaczmarz update is a variant of SGD, and by replacing their stochastic gradient with the Kaczmarz update (and with accommodating minor adjustments to their sketch), a similar running time to [57] is achieved for the no-regression setting.
Finally, Bakshi and Tang [47] achieve a running time of \(\tilde{O}(\|A\|_{F}^{4}/\sigma^{11}\varepsilon^{2})\) with their method and note that this is comparable with, or improves upon several, prior works
over the matrix inversion and linear systems tasks. When contrasted with [46], their work exhibits better \(\varepsilon\) dependence and equivalent \(\|A\|_{F}\) dependence. Notably, a superior \(\sigma\) dependence in low-error settings is achieved, although in situations where a higher error is permissible, the approach in [46] may be more efficient. Relative to [57], the authors match the dependence on \(\varepsilon\), excel in \(\sigma\)-dependence, but fall short in Frobenius norm dependence. Additionally, the authors note their approach is more general than that in [57], which requires \(b\) to be in the column space of \(A\).
#### 4.1.4 Support Vector Machine
The seminal quantum support vector machine (SVM) algorithm presented in [58] solves a set of linear equations arising from the Least Squares SVM (LS-SVM), a variant of the SVM optimization problem. LS-SVM attempts to label points in \(\mathbb{R}^{n}\) as +1 or -1. Given input data points \(x_{j}\in\mathbb{R}^{n}\) for \(j=1,\ldots,m\) and their corresponding labels \(y\in\{\pm 1\}^{m}\), the goal is to find a hyperplane, specified by \(w\in\mathbb{R}^{n}\) and \(b\in\mathbb{R}\), that separates these points. Since it may not be possible to perfectly separate all points, a slack vector \(e\in\mathbb{R}^{m}\) is introduced, where \(e_{j}\geq 0\) for all \(j\in[m]\). The objective combines the squared norm of \(w\) with that of \(e\):
\[\min_{w,b}\quad\frac{\|w\|^{2}}{2}+\frac{\gamma}{2}\|e\|^{2} \tag{7}\]
\[\text{s.t.}\quad y_{i}(w^{\intercal}x_{i}+b)=1-e_{i},\quad\forall i\in[m]. \tag{8}\]
In the dual problem, LS-SVM tries to find a classification for the data points based on their projection on the optimal hyperplane. This is achieved by solving for the \(\alpha\) values in the equation:
\[(X^{\intercal}X+\gamma^{-1}I)\alpha=y, \tag{9}\]
where the entries of \(\alpha\) quantify the weight of each data point in defining the classification hyperplane. The equation for the hyperplane thus becomes \(x^{\intercal}X\alpha=0\). For a new data point \(x\), the LS-SVM performs the classification by evaluating \(\text{sgn}(x^{\intercal}X\alpha)\).
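For reference, the classical LS-SVM computation that both the quantum and dequantized algorithms target reduces to a single linear solve. A minimal sketch follows (our own illustration, with the data points stored as columns of \(X\) to match the notation above, and with the bias term omitted for brevity).

```python
import numpy as np

def ls_svm_fit(X, y, gamma=1.0):
    """Solve (X^T X + gamma^{-1} I) alpha = y for the LS-SVM dual weights."""
    m = X.shape[1]
    K = X.T @ X                                   # linear kernel matrix
    return np.linalg.solve(K + np.eye(m) / gamma, y)

def ls_svm_predict(X, alpha, x_new):
    """Classify x_new by the sign of its projection x_new^T X alpha."""
    return np.sign(x_new @ X @ alpha)

# usage on two separable Gaussian blobs
rng = np.random.default_rng(3)
X = np.hstack([rng.normal(-2, 1, size=(2, 50)), rng.normal(2, 1, size=(2, 50))])
y = np.concatenate([-np.ones(50), np.ones(50)])
alpha = ls_svm_fit(X, y, gamma=10.0)
print(ls_svm_predict(X, alpha, np.array([-2.0, -2.0])),   # -1.0
      ls_svm_predict(X, alpha, np.array([2.0, 2.0])))     #  1.0
```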
In the quantum approach, labeled data vectors \(x_{j}\) for \(j=1,...,m\) are transformed into quantum states \(|x_{j}\rangle=\frac{1}{\|x_{j}\|}\sum_{k}(x_{j})_{k}\ket{k}\) via QRAM. The kernel matrix is then assembled by leveraging quantum inner product estimations. The solution is obtained by solving a system of linear equations related to the quadratic programming problem of the SVM, facilitated by the HHL algorithm. A dequantized LS-SVM algorithm was presented in [59]. Here, the data vectors are stored in the classical QRAM-like data structure of [44], allowing for \(SQ(x)\). Although the L2-norm sub-sampling methods from [54, 55] could be used to invert \((X^{\intercal}X+\gamma^{-1}I)\), they would not be able to produce the inverse in the reported logarithmic time, since directly computing \(X^{\intercal}X\) takes polynomial time. Instead, the authors devise an indirect sampling technique to perform the inversion, given only \(SQ(X)\). This process involves two primary stages: column and row sub-sampling from the matrix \(X\), forming matrices \(X^{\prime}\) and \(X^{\prime\prime}\) and their squares \(A^{\prime}=X^{\prime\intercal}X^{\prime}\) and \(A^{\prime\prime}=X^{\prime\prime\intercal}X^{\prime\prime}\) respectively. Here, it is the spectral decomposition of \(A^{\prime\prime}=V^{\prime\prime}\Sigma^{2}V^{\prime\prime\intercal}\) that yields the approximate eigenvalues and eigenvectors of \(X\). The elements of the vector \(\tilde{\lambda}_{l}=\tilde{V}_{l}^{\intercal}y\) are estimated, using the approximated eigenvectors and the data labels. Then, a vector \(u\) is derived as a weighted sum of the columns from the spectral decomposition. The algorithm constructs \(\tilde{\alpha}\), an approximation of the solution to the LS-SVM problem, obtained by finding query access to \(\tilde{\alpha}=\tilde{R}^{\intercal}u\), where \(\tilde{R}\) represents a sampled part of the matrix \(R=X^{\intercal}X\). The final classification of any new data point \(x\) is determined by calculating its projection onto the classification hyperplane, given by \(x^{\intercal}X\tilde{\alpha}\), with the sign of this projection determining the class of \(x\).
#### 4.1.5 Semidefinite Programming
Semidefinite programming (SDP) is a subfield of convex optimization concerned with the optimization of a linear objective function over the intersection of the cone of positive semidefinite matrices with an affine space. While traditional, classical SDP solvers assume entry-wise access to matrices, quantum SDP solvers make use of oracle access to specially constructed data structures, allowing for sub-linear access of matrix elements in superposition. Such quantum SDP algorithms have been explored in the sparse setting to achieve exponential speedups. With respect to this paradigm, the authors of [60] propose a dequantized SDP solver building on Tang's methodology [43], since such access can be analogously described classically via the sample and query framework.
In [60] the normalized SDP feasibility problem variant of SDP is explored, which involves finding, for a given \(\varepsilon>0\), a positive semidefinite matrix \(X\) with trace 1 that satisfies \(\text{Tr}[A_{i}X]\leq a_{i}+\varepsilon\) for a set of given Hermitian matrices \(A_{i}\) and real numbers \(a_{i}\), or determining that no such matrix exists. By leveraging binary search, the \(\varepsilon\)-approximation of the SDP can be simplified into a sequence of feasibility tests, as per the above definition. The Matrix Multiplicative Weight (MMW) method is used as the solver, which is a game-theoretic, iterative process that seeks an approximate feasible solution. In each iteration, the first player proposes a candidate solution \(X\in S_{\varepsilon}\), and the second player searches for a constraint \(j\in[m]\) violated by the proposed \(X\), i.e., \(\text{Tr}[A_{j}X]>a_{j}+\varepsilon\). Round \(t\) then applies the update \(X_{t+1}\leftarrow\exp[-(A_{j_{1}}+\ldots+A_{j_{t}})]\) (normalized to unit trace) for the next round. The solution is arrived at in the equilibrium of this process.
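A dense, non-sublinear sketch of this feasibility loop may help fix ideas (our own illustration; the trace normalization and a fixed step size \(\eta\), which the summary above suppresses, are made explicit).

```python
import numpy as np
from scipy.linalg import expm

def mmw_feasibility(As, a, eps=0.05, eta=0.5, max_rounds=200):
    """Matrix multiplicative weights loop for normalized SDP feasibility:
    find X >= 0 with Tr X = 1 and Tr[A_i X] <= a_i + eps for all i."""
    n = As[0].shape[0]
    running = np.zeros((n, n))
    for _ in range(max_rounds):
        E = expm(-eta * running)
        X = E / np.trace(E)                      # current density-matrix proposal
        violated = [j for j in range(len(As)) if np.trace(As[j] @ X) > a[j] + eps]
        if not violated:
            return X                             # eps-feasible solution
        running += As[violated[0]]               # penalize the violated direction
    return None

# usage: demand small overlap with |0><0| while leaving |+><+| loose
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
Pplus = np.array([[0.5, 0.5], [0.5, 0.5]])
X = mmw_feasibility([P0, Pplus], a=[0.3, 0.9])
print(X)    # after a few rounds: a diagonal state with weight shifted towards |1><1|
```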
The approach here is to sample from an approximate description of the matrix exponentials \(\exp[-(A_{j_{1}}+\ldots+A_{j_{t}})]\), using an approximate spectral decomposition of \(A:=\sum_{r=1}^{t}A_{j_{r}}\), namely \(A\approx\hat{V}\hat{D}\hat{V}^{\dagger}\), to obtain \(e^{-A}\approx\hat{V}e^{-\hat{D}}\hat{V}^{\dagger}\), where \(\hat{V}\in\mathbb{C}^{n\times r}\) and \(\hat{D}\in\mathbb{R}^{r\times r}\) is diagonal. This work devises two methods to address a few encountered challenges. First, the dynamic nature of the matrix \(A\) throughout the MMW method makes the assumption \(SQ(A)\) infeasible. This problem is circumvented by devising a weighted sampling procedure that provides a succinct description of a low-rank approximation of \(A\) by sampling each individual \(A_{j_{r}}\) in the sum \(A:=\sum_{r=1}^{t}A_{j_{r}}\). Secondly, standard sampling procedures yield an approximation \(VD^{2}V^{\dagger}\approx A^{\dagger}A\) instead of a spectral decomposition \(\hat{V}\hat{D}\hat{V}^{\dagger}\approx A\), even if \(A\) is Hermitian. This discrepancy is problematic for matrix exponentiation, as the singular values disregard the signs of the eigenvalues, leading to significant errors when approximating \(e^{-A}\) by naively
exponentiating the SVD. The resolution is a novel approximation procedure called symmetric approximation, which can calculate and diagonalize the small matrix \((\hat{V}^{\dagger}A\hat{V})\), yielding approximate eigenvalues of \(A\) and a desired description of its spectral decomposition in logarithmic time in dimension. Evaluating the solution via the constraints can be done with the dequantized protocols, e.g. \(\mathrm{Tr}[A_{j}X]\) via Protocol 1.
#### 4.1.6 Quantum Singular Value Transform
Recent research by Gilyen et al. [61] has proposed that a broad array of quantum algorithms may be manifestations of a singular underlying principle, encapsulated by the Quantum Singular Value Transformation (QSVT). The QSVT can be understood as a process that, given a matrix \(A\in\mathbb{C}^{m\times n}\) and a function \(f:[0,\infty)\rightarrow\mathbb{R}\), implements a unitary \(U\) that represents the polynomial transformation of \(A\) to the matrix \(f(A)\), defined via its singular value decomposition. The QSVT is uniquely characterized by its parametric nature, defined by phase angles \(\phi(k)\) which can be efficiently and stably computed in a classical manner given a desired polynomial transformation [62]. The flexibility of this parameterization imbues the QSVT with remarkable power and adaptability; many quantum algorithms can be phrased under the QSVT framework, so long as there can be found a specific parameterization for the task. As such, the QSVT has been touted as a "unifying framework" for quantum algorithms [62]; its power lies in its capacity to encapsulate a broad range of quantum algorithms within a coherent theoretical structure.
The QSVT builds upon Quantum Signal Processing and Qubitization [63], extending them to act on an entire vector space. It can apply polynomial transformations to all the eigenvalues of a given Hamiltonian \(H\) that has been block-encoded into a larger unitary matrix \(U\) - the Quantum Eigenvalue Transform (QET). This generalizes to non-square matrices via their singular values - the QSVT. Given the singular value decomposition of a matrix \(A\):
\[A=U_{\Sigma}\Sigma V_{\Sigma}^{\dagger}=\sum_{k=1}^{r}\sigma_{k}\ket{u_{k}} \bra{v_{k}}, \tag{10}\]
where \(U_{\Sigma},V_{\Sigma}\) are unitaries and \(\Sigma\) is a diagonal matrix with non-negative, real singular values \(\sigma_{1},\ldots,\sigma_{r}\) along the diagonal, up to the \(r\)-th (\(r=\mathrm{rank}(A)\)) singular value. \(U_{\Sigma}\) and \(V_{\Sigma}\) span orthonormal bases, denoted by \(\{\ket{u_{k}}\}\) (the left singular vector space) and \(\{\ket{v_{k}}\}\) (the right singular vector space) respectively. These spaces can be defined by the projectors \(\tilde{\Pi}:=\sum_{k}\ket{u_{k}}\bra{u_{k}}\) and \(\Pi:=\sum_{k}\ket{v_{k}}\bra{v_{k}}\). The block-encoding of \(A\) within a unitary \(U\) can thus be given by:
\[U=\begin{bmatrix}A&\cdot\\ \cdot&\cdot\end{bmatrix}. \tag{11}\]
\(\tilde{\Pi}\) and \(\Pi\) locate \(A\) within \(U\), such that they form the projected unitary encoding \(A:=\tilde{\Pi}U\Pi\). The QSVT is then realized with the projector-controlled phase-shift operations of \(\tilde{\Pi}\) and \(\Pi\) to approximate the degree-\(d\) polynomial transformations:
\[\tilde{\Pi}\,e^{i\phi_{1}(2\tilde{\Pi}-I)}\,U\prod_{j=1}^{(d-1)/2}\left(e^{i\phi_{2j}(2\Pi-I)}\,U^{\dagger}\,e^{i\phi_{2j+1}(2\tilde{\Pi}-I)}\,U\right)\Pi=f^{\text{(SV)}}(A), \tag{12}\]
where \(f^{\text{(SV)}}(A)\) is defined for an odd polynomial as:
\[f^{\text{(SV)}}(A):=\sum_{k=1}^{r}f(\sigma_{k})\ket{u_{k}}\bra{v_{k}}, \tag{13}\]
which applies the polynomial transformation to the singular values of \(A\). A similar result is obtained for even polynomial transformations, where only the right singular vector projections are involved. Of importance are the compositional properties of block-encoded matrices. Many quantum algorithms and QML applications can be reformulated via this framework, provided a satisfactory approximation can be achieved by carefully chosen polynomial transformations.
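Classically, and ignoring efficiency, the transformation in Eq. (13) is simply a function applied to the singular values; the dequantization question addressed below is whether the same map can be applied in sublinear time given only sample-and-query access. A direct numpy illustration (our own, for exposition only):

```python
import numpy as np

def singular_value_transform(A, f):
    """Return sum_k f(sigma_k) |u_k><v_k|, i.e. f applied to A's singular values."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(f(s)) @ Vh

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 3))
lhs = singular_value_transform(A, lambda s: s ** 3)   # odd polynomial f(x) = x^3
print(np.allclose(lhs, A @ A.T @ A))                  # True: both equal U diag(sigma^3) V^T
```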
**Dequantizing the QSVT:** A natural question now is whether the QSVT can be successfully dequantized. Several independent efforts have been made on this front. Chia et al. [45] note that algorithms framed under QSVT fall roughly into two categories based on their assumptions on input data \(A\): those with sparsity assumptions and those with low stable rank and under QRAM assumptions (which inform how the block-encodings in [61] can be obtained). These settings allow for efficient block-encodings, outside of which they are generally not efficiently computable. It may not be possible to dequantize algorithms under sparsity assumptions, since this would imply dequantizing the HHL algorithm, which is recoverable from the QSVT framework; HHL is known to be BQP-complete for sparse matrices, even for constant precision [53]. For block-encoding arithmetic operations over algorithms using QRAM-based data structures, this category shares efficient operations with classical algorithms that have _oversampling and query_ access available. From this insight, the authors go on to implement a dequantized QSVT framework applicable to the prior tasks examined under the dequantized lens, showing improvements in query and time complexity over several established methods. These results suggest that dequantization techniques can be generalized to most, if not all QRAM-based QSVT models, and give strong evidence that these models admit no exponential speedup in the bounded Frobenius norm regime. In particular, the framework extends upon the notion of SQ access for vectors \(SQ(v)\) and matrices \(SQ(A)\), and formalizes \(\phi\)-oversampling and query access to vectors and matrices \(SQ_{\phi}(v)\) and \(SQ_{\phi}(A)\) respectively. \(\phi\)-oversampling and query access to a vector \(v\) is available if we have \(Q(v)\) and sample and query access \(SQ(\tilde{v})\) to a vector \(\tilde{v}\), which serves as an "element-wise upper bound" of \(v\) and complies with the following properties: \(\|\tilde{v}\|^{2}=\phi\|v\|^{2}\) and \(|\tilde{v}_{i}|\geq|v_{i}|\) for every index \(i\). The definition can be extended similarly for a matrix. The factor \(\phi\) can be interpreted as a kind of computational surplus incurred in the execution of algorithms: Via rejection sampling, \(SQ(\tilde{v})\)
is capable of performing approximations of all the queries of \(SQ(v)\), albeit with an additional cost denoted by the factor \(\phi\). This cost roughly corresponds to overheads in post-selection in the quantum setting. The oversampling and query input model has closure properties very similar to that of block-encodings, that allow for the chained composition of complex arithmetic operations, and can be achieved whenever quantum states and generic block-encodings can be prepared efficiently.
Specifically, these closure properties allow for accessing the approximation of \(f(A^{\dagger}A)\) for a smooth, bounded Lipschitz-continuous function \(f\) in time independent of dimension, given a generic matrix \(A\). Via various matrix sketching techniques and the aforementioned closure properties, \(f(A^{\dagger}A)\) can be shown to be approximated by an RUR decomposition \(R^{\dagger}\bar{f}(CC^{\dagger})R\), where \(\bar{f}(x):=f(x)/x\), and \(R\) and \(C\) contain normalized rows of \(A\) and normalized columns of \(R\), respectively. This expresses the desired matrix as a linear combination of \(r\) outer products of rows of the input matrix, for which oversampling and query access is available, since having \(SQ(A)\) implies \(SQ_{\phi}(R^{\dagger}UR)\), where \(U:=\bar{f}(CC^{\dagger})\). We can then achieve an approximation for \(f^{\text{(SV)}}(A)\) up to some error, from which we can sample from some solution vector \(v:=|f^{\text{(SV)}}(A)b\rangle\) such that \(\|f(A)b-v\|\leq\varepsilon\) in \(\tilde{O}(d^{22}\|A\|_{F}^{6}/\varepsilon^{6})\) time; the dequantized generic singular value transform for degree-\(d\) polynomials. Significantly, the authors argue for the generality of their framework, which allows for approximately low-rank matrices as input, rather than the strictly low-rank matrices stipulated in prior works. This framework is then applied to re-derive several existing dequantized algorithms, showing improvements in complexity bounds in most cases, and some relaxation of input constraints in many, as seen in Section 4.1.9. Further, the framework was applied to two tasks that had not yet been explored under the dequantization framework: Hamiltonian Simulation and Discriminant Analysis.
In a parallel work, Jethwani et al. [64] also proposed their version of the dequantized QSVT. The authors similarly define the function \(f\) as smooth, Lipschitz-continuous, such that \(f^{\text{(SV)}}(A^{\dagger})=h(A^{\dagger}A)A^{\dagger}\) for \(h(x)=f(\sqrt{x})/\sqrt{x}\). The authors provide a sketch of \(A\) down to a smaller sub-matrix \(W\) from successive samples of rows and columns of \(A\). Then a similar result to [45] is obtained by computing the SVD of \(W\) to find \(U:=W^{+}f(W^{\dagger})W^{+}(W^{\dagger})^{+}\). The solution is then sampled from \(R^{\dagger}URA^{\dagger}b\approx f(A^{\dagger})b\). The complexity achieved is \(\tilde{O}(\|A\|_{F}^{6}\kappa^{20}(d^{2}+\kappa)/\varepsilon^{6})\), where \(\kappa\) is the condition number of \(A\). The authors require that \(A\) be a well-conditioned matrix, such that all \(\sigma_{k}\) of \(A\) lie within \([1/\kappa,1]\). The conditional number dependence makes direct comparison with the results from [45] difficult, however, [45] surmises that for the singular value transform in [64], the typical cases of \(f\) require \(A\) to be strictly low-rank.
Gharibian and Le Gall [65] explored dequantizing QSVT for sparse matrices, noted to be a difficult task. This is the primary setting relevant to quantum chemistry and quantum PCP applications. They derive a dequantized QSVT algorithm for \(s\)-sparse Hamiltonians, where \(s\) denotes the number of non-zero entries per row or column. This algorithm is then considered for the task of Guided Local Hamiltonian estimation (GLH): estimating the ground state energy of a local Hamiltonian when given a state sufficiently close to the ground state. Two key results follow: (1) showing that QSVT in sparse settings provided with SQ access can be efficiently dequantized and computed classically given constant-degree polynomials and with constant precision, and; (2) this problem becomes BQP-complete for input representations in a "semi-classical state", which are uniform vectors represented as a uniform superposition of basis states within a polynomial size subset, with polynomial precision. This is distinguished from other efforts in dequantizing the QSVT, which only consider QSVT circuits in the QRAM setting. This work highlights the idea that dequantizing quantum algorithms is not a uniquely ML-driven focus; other fields may benefit from efforts in dequantizing algorithms for their respective applications.
Finally, the most recent entry into this line of work is presented by Bakshi and Tang [47], who focused on improving the complexity bounds of the dequantized QSVT algorithms presented earlier. The authors suggested that a significant cost of performing these algorithms is due to the computation of the singular value decomposition of matrix \(A\), which directly incurs the \(\|A\|_{F}^{6}/\varepsilon^{6}\) cost in [45]. A key insight is to additionally use iterative algorithms on top of the sketches of \(A\) and \(b\) to approximate matrix polynomials. Bakshi and Tang's work [47] combines sketches of matrices \(A\) and \(b\) with Clenshaw recurrence, an iterative algorithm, to approximate matrix polynomials more efficiently than previous approaches. They introduce the Bilinear Entry Sampling Transform (BEST) for matrix sparsification, which optimizes \(\varepsilon\)-dependence without requiring full spectral norm bounds. This technique streamlines the error analysis and achieves a notable reduction in complexity bounds for dequantized QSVT. Consequently, their method presents substantial improvements in terms of time and resource consumption; computing \(v\) only requires \(\tilde{O}(d^{11}\|A\|_{F}^{4}/\varepsilon^{2})\) time, and without the condition number dependence in [64]. This complexity reduction brings the time required within the scope suggested as indicative of quantum advantage [66], thus providing robust evidence countering the notion of quantum supremacy. The framework was then used to find improvements in complexity bounds for the tasks of recommendations systems, linear regression, and Hamiltonian simulation.
#### 4.1.7 Discriminant Analysis
This subsection leads on from the work of [45]. The dequantized QSVT framework is used to dequantize the quantum Fisher's Linear Discriminant Analysis (LDA) algorithm, presented by Cong and Duan [67]. The LDA problem aims to project classified data onto a subspace that maximizes between-class variance while minimizing within-class variance. Given \(M\) input data points \(\{x_{i}\in\mathbb{R}^{N}:1\leq i\leq M\}\) belonging to \(k\) classes, we define between-class scatter matrix \(S_{B}\) and within-class scatter matrix \(S_{W}\). The original goal is to solve the generalized eigenvalue problem \(S_{B}v_{i}=\lambda_{i}S_{W}v_{i}\), but this may not be feasible in cases where \(S_{W}\) is not full-rank. Consequently, Cong and Duan consider a relaxation where small eigenvalues of \(S_{W}\) and \(S_{B}\) are ignored, leading to an approximation using inexact eigenvalues, which can be applied to the quantum context.
Dequantizing this involves finding an approximate isometry \(U\) and a diagonal matrix \(D\) such that
\(S_{W}^{-\frac{1}{2}}S_{B}S_{W}^{-\frac{1}{2}}U\approx UD\), which yields the approximate eigenvalues and eigenvectors of \(S_{W}^{+}S_{B}\). Given \(SQ(B)\) and \(SQ(W)\) for \(B,W\in\mathbb{C}^{m\times n}\), with \(S_{W}\approx W^{\dagger}W\) and \(S_{B}\approx B^{\dagger}B\), the Even Singular Value Transform in [45] approximates \((W^{\dagger}W)^{-1/2}\approx R_{W}^{\dagger}U_{W}R_{W}\) and \(B^{\dagger}B\approx R_{B}^{\dagger}U_{B}R_{B}\) through \(RUR\) decompositions. Matrix sketching techniques then yield an approximate \(RUR\) decomposition of the target matrix, \(R_{W}^{\dagger}UR_{W}\), where \(U=U_{W}R_{W}R_{B}^{\dagger}U_{B}R_{B}R_{W}^{\dagger}U_{W}\), from which the approximate eigenvalues and eigenvectors can be extracted.
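For orientation, the underlying classical computation (with the sampling machinery stripped away) is a generalized symmetric eigenproblem. A small sketch using scipy, building the scatter matrices directly from data, is given below; this is our own illustration, not the algorithm of [45] or [67].

```python
import numpy as np
from scipy.linalg import eigh

def fisher_lda_directions(X, labels, n_components=1, reg=1e-6):
    """Solve S_B v = lambda S_W v and return the top discriminant directions.
    X holds one sample per row; reg keeps S_W positive definite."""
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    S_W, S_B = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        S_B += len(Xc) * diff @ diff.T
    vals, vecs = eigh(S_B, S_W + reg * np.eye(d))    # generalized symmetric eigenproblem
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]

# usage: two classes separated along the first coordinate
rng = np.random.default_rng(5)
X = np.vstack([rng.normal([0, 0], 1.0, size=(100, 2)), rng.normal([4, 0], 1.0, size=(100, 2))])
labels = np.array([0] * 100 + [1] * 100)
w = fisher_lda_directions(X, labels)
print((w / np.linalg.norm(w)).ravel())               # aligned with [1, 0] up to sign
```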
#### 4.1.8 Hamiltonian Simulation
The Hamiltonian Simulation problem, rooted in the original motivation for quantum computers proposed by Feynman [68], aims to simulate the dynamics of quantum systems. Given a Hamiltonian \(H\), a quantum state \(|\psi\rangle\), a desired error \(\varepsilon>0\), and time \(t>0\), the objective is to prepare a quantum state \(|\psi_{t}\rangle\) such that \(\||\psi_{t}\rangle-e^{iHt}|\psi\rangle\|\leq\varepsilon\). With wide applications in quantum physics and chemistry, the rich literature on quantum algorithms for Hamiltonian simulation includes optimal algorithms for simulating sparse Hamiltonians. In [45] the dequantized QSVT framework is applied to Hamiltonian simulation algorithms that operate in different regimes, both for low-rank \(H\) and for arbitrary \(H\). The authors consider a Hermitian matrix \(H\in\mathbb{C}^{n\times n}\), a unit vector \(b\in\mathbb{C}^{n}\), and error parameters \(\varepsilon,\delta>0\). Given \(SQ(H)\) and \(SQ(b)\), the task is to output \(SQ^{\phi}(\hat{b})\) with probability \(\geq 1-\delta\) for some \(\hat{b}\in\mathbb{C}^{n}\) satisfying \(||\hat{b}-e^{iH}b||\leq\varepsilon\). Decomposing the exponential into sine and cosine parts yields the expression \(e^{iH}b=f_{\text{cos}}(H^{\dagger}H)b+if_{\text{sinc}}(H^{\dagger}H)Hb\), where \(f_{\text{cos}}(x)=\cos(\sqrt{x})\), \(f_{\text{sinc}}(x)=\text{sinc}(\sqrt{x})\) and \(\text{sinc}(x)=\sin(x)/x\), to which RUR decompositions can be applied. The authors obtain a running time of \(\tilde{O}\left(\|H\|_{F}^{6}\|H\|^{6}/\varepsilon^{6}\right)\) in the low-rank regime. The authors note that their algorithm for the Hamiltonian Simulation task is slower than similar classical techniques in randomized linear algebra that consider sparsity in \(H\) and \(b\)[69], and posit that their framework exposes this trade-off between sparsity and speed.
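The cosine/sinc decomposition used above is an exact matrix identity for Hermitian \(H\) and can be checked directly with dense linear algebra (our own verification sketch; it has none of the sublinear sampling machinery of [45]).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
G = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (G + G.conj().T) / 2                      # Hermitian Hamiltonian
b = rng.normal(size=6) + 1j * rng.normal(size=6)
b /= np.linalg.norm(b)

# eigendecomposition of H^dagger H = H^2; eigenvalues are real and non-negative
lam, V = np.linalg.eigh(H.conj().T @ H)
root = np.sqrt(np.clip(lam, 0.0, None))
f_cos = np.cos(root)                          # f_cos(x)  = cos(sqrt(x))
f_sinc = np.sinc(root / np.pi)                # np.sinc(t) = sin(pi t)/(pi t), so this is sinc(sqrt(x))

approx = V @ (f_cos * (V.conj().T @ b)) + 1j * V @ (f_sinc * (V.conj().T @ (H @ b)))
exact = expm(1j * H) @ b
print(np.linalg.norm(approx - exact))         # ~1e-14: e^{iH} b = f_cos(H^2) b + i f_sinc(H^2) H b
```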
In [47], the authors improve upon the complexity in [45] using their classical algorithm employing Clenshaw recurrence in the same setting. The authors here obtain a description that is \(O(\varepsilon)\)-close to \(e^{iHt}\). Sampling from this can be done in \(\tilde{O}(\|H\|_{F}^{4}\|H\|^{9}/\varepsilon^{2})\) time, an improvement over [45] in all parameters.
#### 4.1.9 Complexity Comparisons
A critical motivator in the dequantized algorithms space is in analyzing the nature of "quantum advantage" in QML settings, when presented with their classical counterparts. We adopt Chia et al.'s [45] Figure 1 (Table II in this review), which presents the time complexities of the dequantized algorithms discussed, and the quantum algorithms they are based on. We extend this table with the complexities of subsequent works in the field. All complexities suppress factors poly-logarithmic in the input size. The dequantized algorithms are presented loosely in order of decreasing time complexity; the current gap between the QML and classical algorithms can be observed. Significant progress has been made with successive advancements, bringing us closer towards the QML benchmark, particularly seen in the matrix inversion and QSVT tasks. Targeted research efforts in these areas are justified due to the central role of matrix inversion as a fundamental subroutine in various linear algebra settings, and the generalization of numerous tasks under the QSVT framework. Further, the relaxation of parameters and data requirements can be observed in some successive works, showing an increase in generality of algorithms over time.
#### 4.1.10 Critical Views on the SQ Access Model
Much of the work in dequantizing QML algorithms relies on the QRAM-like classical data structure introduced by Kerenidis and Prakash [44]. While this model has been successfully applied in the literature, there may exist alternative methods of representing the input data that can inform classical algorithms. In the quantum domain, input models that store data as entries in density matrices and simulate them as Hamiltonians are common, especially in QML applications [50, 58]. Sparse representations on quantum circuits are another form that has already been briefly discussed.
Zhao et al. [70] address inefficiencies in the standard input access model for quantum machine learning due to the need for pre-computation and data storage. They introduce a flexible model enabling entry-wise data access, particularly useful when applying varying functions or requiring matrix row entries. The authors show that quantum amplitude encoding and classical L2-sampling can be conducted cost-effectively, even with moderately noisy input data, suggesting that quantum state preparation can be efficient despite initial state preparation challenges.
Cotler et al. [71] discuss the appropriateness of SQ access as a classical analog of quantum state inputs. SQ access, in its current form, allows classical algorithms to manipulate data that is exponentially difficult to extract from quantum states, thus artificially ascribing excessive power to dequantized algorithms when compared to their quantum counterparts. The authors suggest that the definition of SQ access needs to be revised. One plausible approach is to limit classical algorithms to accessing data obtained through measurements of quantum state inputs. This modification would preserve significant computational power for existing dequantized algorithms, while describing an oracle that is at most as powerful as the inputs given to the quantum algorithms. Subsequent analysis notes that Quantum PCA under measurement data access retains its exponential quantum advantage [72].
Alternative input models may thus need to be considered, and could provide a more accurate comparison between classical and quantum algorithms and better reflect the true potential of dequantized computation.
### _Tensor Networks_
A large focus in QiML research in the current day has been on the use of tensor networks (TNs) as machine learning models. QiML-based tensor network research benefits from
a rich body of prior knowledge5 in the classical context, both in and out of the machine learning field; theory and practical applications show a useful degree of transferability into the quantum domain [80, 81]. A key motivator in the use of TNs is their ability to classically simulate the many-body quantum wavefunction efficiently, which has been shown to be consistent with higher-order tensor representations [76]. Hamiltonians of many physically realistic systems tend to exhibit strong locality; the interactions between constituent particles are limited to next or nearest neighbors [82]. In the case of gapped, local Hamiltonians, the exponentially large and intractable Hilbert space is constrained to low energy states bounded by the entanglement area-law, i.e., these states cannot be highly entangled [82]. This constricts the exploration space to only a relevant fraction of the Hilbert space. When modeled by tensor networks, approximating this subset can be done in polynomial time [74].
Footnote 5: It is worth noting that TNs have been widely discussed and analyzed in the literature outside of the machine learning context, with a multitude of informative, introductory materials available on the topic [75, 76, 77, 78, 79].
Further, tensor networks form a bridge between classical neural network methods and quantum computing. Several works have identified a natural correspondence between tensor networks and quantum circuits, where many tensor networks have a direct quantum circuit translation [76, 80]. Complex tensors under tensor network decomposition are represented as unitary gates; the bond dimension connecting two nodes of the tensor network is determined by the number of qubits connecting two sequential unitaries in the circuit. It is shown that qubit-efficient tree tensor network (TTN) and matrix product state (MPS) models can be devised, with logarithmic scaling in the number of physical (\(O(1)\), independent of the input data size) and ancilla (\(O(\log_{2}D)\) in bond dimension \(D\)) qubits required [83]. This allows tensor networks to express higher order feature spaces in classical settings, and act as classical simulators for quantum circuits. Several works have since successfully implemented tensor networks as parameterized quantum circuits on small, near-term quantum devices for machine learning tasks, with many of the advancements in the classical setting being readily transferred to the quantum domain [84, 85, 86, 83].
The many-body quantum system of \(N\) particles can be described as follows:
TABLE II: Comparison of time complexities for dequantized algorithms across specific tasks, adapted from Figure 1 of [45] and extended with subsequent works. Rows cover recommendation systems [44, 43, 45, 46, 47], supervised clustering [49, 48, 45, 44, 45], PCA [50, 48, 45], matrix inversion [61, 54, 45, 56, 57], SVM [58, 59, 45], SDP [73, 60, 45], QSVT [61, 45, 44, 47], Hamiltonian simulation (HS) [61, 45, 44, 47], and discriminant analysis (DA) [67, 45]; each row lists the quantum algorithm's complexity followed by those of its successive dequantized counterparts. In each row, the references given in the leftmost column correspond to each successive item from left to right.
\[\left|\Psi\right\rangle=\sum_{s_{1}s_{2}\cdots s_{N}}\Psi^{s_{1}s_{2}\cdots s_{N}} \left|s_{1}s_{2}\cdots s_{N}\right\rangle \tag{14}\]
where, in quantum computing, the quantum state of \(N\) qubits \(\left|\Psi\right\rangle\) with amplitudes \(\Psi^{s_{1}s_{2}s_{3}\cdots s_{N}}\) is a composition of the \(N\) single-qubit basis states. This can be effectively considered as one large tensor. TNs aim to find alternative representations of this computationally inefficient description by reducing the complexity of the system. This is achieved by decomposing the large tensor into a network of many smaller tensors of smaller rank. The total number of parameters in the final representation scales sub-exponentially with the number of composite tensors and the bond dimension (the dimension of the largest contracted index within the tensor network) between them, which allows for the classical computation of expectation values [74]. For example, a tensor with \(N\) indices, each of dimension \(d\), must generally be specified by \(d^{N}\) parameters. In contrast, the MPS representation of such a tensor with bond dimension \(m\) only requires \(Ndm^{2}\) parameters, which now scales linearly with \(N\)[76]. Various such methods of tensor network decomposition exist, which depend on the properties of the original tensor and the desired resulting network. In further sections, we discuss relevant tensor network methods and their application as QiML techniques.
#### 4.2.1 Matrix Product States
Matrix Product States (MPS), or Tensor Train Networks, are widely used for the efficient representation of 1D gapped quantum systems with low energy states [88]. In fact, any such quantum state can be exactly represented by the MPS structure efficiently [89]. The MPS is achieved through the decomposition of the quantum state into a product of smaller matrices which represent the constituent tensors. A general process involves successively partitioning away individual tensors from the rest of the system via the singular value decomposition (SVD) [77]; several other related methods have been also been proposed for performing this decomposition in a practical manner [90]. By considering only relevant states with bounded entanglement, a compressed form is obtainable by truncating the least relevant singular values. This low rank approximation is what drives the efficient representation of the quantum state by the MPS, which would otherwise be described exponentially with an \(n\)-qubit state.
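A minimal numpy sketch of this SVD-based construction is given below: a state vector over \(N\) sites is split into left-canonical MPS cores by repeated reshaping and truncated SVDs. The function name `mps_decompose` and the fixed truncation rule are illustrative choices, not a specific algorithm from the cited works.

```python
import numpy as np

def mps_decompose(psi, d, N, max_bond):
    """Split an N-site state vector (length d**N) into MPS cores of shape (Dl, d, Dr)
    via successive SVDs, truncating each bond to at most `max_bond` singular values."""
    cores = []
    remainder = psi.reshape(1, -1)            # (left_bond, rest)
    for _ in range(N - 1):
        left_bond = remainder.shape[0]
        remainder = remainder.reshape(left_bond * d, -1)
        U, S, Vh = np.linalg.svd(remainder, full_matrices=False)
        chi = min(max_bond, len(S))           # truncated bond dimension
        cores.append(U[:, :chi].reshape(left_bond, d, chi))
        remainder = np.diag(S[:chi]) @ Vh[:chi]
    cores.append(remainder.reshape(-1, d, 1))
    return cores

# Example: a random 8-qubit state compressed to bond dimension 4.
d, N = 2, 8
psi = np.random.randn(d ** N)
psi /= np.linalg.norm(psi)
cores = mps_decompose(psi, d, N, max_bond=4)
print(sum(c.size for c in cores), "MPS parameters vs", d ** N, "amplitudes")
```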
**Supervised Tensor Network Modeling:** The first notable instances of tensor networks being used as machine learning models utilized the MPS decomposition. Stoudenmire and Schwab [91] demonstrated their capabilities in parameterized supervised learning. They note that handling high-dimensional vectors is also a critical challenge in non-linear kernel learning. Traditional approaches often rely on the "kernel trick" [92] to solve the dual representation; the authors instead advocated for applying tensor network decompositions. In their proposed model, the kernel learning problem is expressed as:
\[f^{l}(x)=W^{l}\cdot\Phi(x). \tag{15}\]
Here, vectors \(x\) are mapped to a high-dimensional feature space through a non-linear mapping \(\Phi(x)\), with \(W^{l}\) as the weight matrix; \(l\) is a tensor index for the function \(f^{l}(x)\) that maps every \(x\) to the space of classification labels. Both \(W\) and \(\Phi(x)\) could be considerably large, which presents a large computational bottleneck scaling exponentially with their size; i.e., the resulting Hilbert space of \(\Phi(x)\) has dimension \(2^{N}\), necessitating a compatibly sized \(W\). An MPS tensor decomposition is thus leveraged to reduce this computational complexity. By representing \(W\) via an MPS and optimizing it directly, the model scales linearly with the training set size. An MPS decomposition can be derived from \(W^{l}\):
\[W^{l}_{s_{1}s_{2}\cdots s_{N}}=\sum_{\{a\}}A_{s_{1}}^{\alpha_{1}}A_{s_{2}}^{ \alpha_{1}\alpha_{2}}\cdots A_{s_{j}}^{l:\alpha_{j}\alpha_{j+1}}\cdots A_{s_{ N}}^{\alpha_{N-1}}, \tag{16}\]
where index \(\alpha_{j}\) is the bond dimension between sites. \(\Phi(x)\) is obtained via an embedding scheme that converts classical input data into a linear combination of quantum states in an orthogonal basis. In [91], the quantum mapping function \(\phi(x_{j})\) is used to embed every \(j\)-th gray-scale pixel value
\[\phi(x_{j})=\left[\cos\left(\frac{\pi}{2}x_{j}\right),\sin\left(\frac{\pi}{2} x_{j}\right)\right] \tag{17}\]
into the L2-normalized trigonometric basis. A full image is thus the tensor product of \(\phi(x_{j})\) individual local embeddings over all \(N\) pixels:
\[\Phi(\mathbf{x})=\phi\left(x_{1}\right)\otimes\phi\left(x_{2}\right)\otimes \cdots\otimes\phi\left(x_{N}\right). \tag{18}\]
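As a minimal illustration of Equations 17 and 18, the following numpy sketch builds the local map and the full product-state embedding for a tiny input. In practice, tensor-network implementations never materialize the full \(2^{N}\)-dimensional vector, instead contracting each local map directly with the adjacent MPS core; the function names here are illustrative.

```python
import numpy as np

def local_map(x):
    """Trigonometric local feature map of Eq. (17) for a pixel value x in [0, 1]."""
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

def product_embedding(pixels):
    """Full embedding Phi(x) of Eq. (18): the tensor product of all local maps.
    The result has 2**N entries, so keep N small in this dense form."""
    phi = np.array([1.0])
    for x in pixels:
        phi = np.kron(phi, local_map(x))
    return phi

pixels = np.array([0.0, 0.25, 0.5, 1.0])      # a tiny 4-pixel "image"
phi = product_embedding(pixels)
print(phi.shape, np.linalg.norm(phi))         # (16,), norm 1 since each local map is a unit vector
```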
Words and sentences within natural language documents can be associated with quantum systems by building their vector and tensor space representations [93, 94, 95]. Each word \(w_{i}\) is mapped to an \(m\)-dimensional vector space, with each dimension representing a different semantic meaning. A word can thus be represented as a linear combination of these orthogonal semantic bases:
\[w_{i}=\sum_{i=1}^{m}\alpha_{i}\left|e_{i}\right\rangle, \tag{19}\]
Fig. 5: Common Tensor Network Decompositions. (a): Matrix Product State (MPS), (b): Projected Entangled-Pair States (PEPS), (c): Tree Tensor Network (TTN), (d): Multi-scale Entanglement Renormalization Ansatz (MERA) [87]
where \(\alpha_{i}\) is the coefficient of the \(i\)-th base vector \(e_{i}\). A sentence \(s=(w_{1},\ldots,w_{n})\) of length \(n\) can then be modeled as a tensor product of the word vectors:
\[\begin{split} s&=w_{1}\otimes\cdots\otimes w_{n}\\ &=\sum_{i,\ldots,n=1}^{m}A_{i,\ldots,n}(e_{i}\otimes\cdots\otimes e _{n}),\end{split} \tag{20}\]
with \(A\) being the \(m^{n}\)-dimensional coefficient tensor of basis states. Trainable word embeddings, similar to those used in recurrent neural networks (RNN) that treat the word embeddings as variational parameters, have also been proposed; this approach has shown strong predictive performance [96].
A few other local embedding methods have been discussed in the literature, including the polynomial basis [97]:
\[\phi(x_{j})=\left[1,x_{j}\right], \tag{21}\]
which enables a transformed feature space capturing interactions within categorical data, and simplifying the interpretation of the resulting model as a high-degree polynomial. Other kernels that are the product of some \(N\) local kernels could also potentially be used [91].
A quadratic cost function:
\[L=\frac{1}{2}\sum_{n=1}^{N_{T}}\sum_{l}(y_{n}^{\ell}-f^{\ell}(\mathbf{x}_{n})) ^{2}, \tag{22}\]
is optimized via a "sweeping" algorithm, inspired by the density matrix renormalization group (DMRG) algorithm successfully used in physics applications [98]. This process essentially involves "sweeping" across the MPS (Matrix Product State), where only two adjacent MPS tensors are varied at a time. The tensors at sites \(j\) and \(j+1\) are combined into a single bond tensor \(B^{\ell}\), followed by the calculation of the derivative of the cost function with respect to the bond tensor for a gradient descent step. The gradient update to the tensor \(B^{\ell}\) can be computed as:
\[\Delta B^{\ell}=-\frac{\partial L}{\partial B^{\ell}}=\sum_{n=1}^{N_{T}}\left( y_{n}^{\ell}-f^{\ell}\left(\mathbf{x}_{n}\right)\right)\tilde{\Phi}_{n}, \tag{23}\]
where \(\tilde{\Phi}_{n}\) is the projection of the input via the contraction of the "outer ends" of the MPS that do not include \(B^{\ell}\). \(B^{\ell}\) is then replaced by the updated bond tensor \({B^{\ell}}^{\prime}=B^{\ell}+\alpha\Delta B^{\ell}\), where \(\alpha\) is a scalar value that controls convergence. This can then be decomposed back into separate MPS tensors using a singular value decomposition (SVD), which assists in adapting the MPS bond dimension. The singular value matrix \(S\) can then be absorbed into the right singular vector matrix, resulting in the updated sites \(A_{s_{j}}^{\prime}=U_{s_{j}}\) and \(A_{s_{j+1}}^{\prime\ell}=SV_{s_{j+1}}^{\ell}\). The MPS form is restored, with the \(\ell\) index moving to the \(j+1\) site. The process then continues iteratively to the next pair of tensors at sites \(j+1\) and \(j+2\), returning in the opposite direction when an end node is reached, for a predetermined number of "sweeps". A major advantage of this is that the resulting bond dimension can be chosen adaptively based on the number of large singular values. This flexibility allows the MPS form of \(W\) to undergo maximum possible compression; the degree of compression can vary for each bond, while still ensuring an optimal decision function. Inference is performed by successively contracting the network until the value \(f^{l}(x)\) is obtained, where the classification \(l\) is determined by the largest \(|f^{l}(x)|\).
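The bond-tensor steps of Figure 6 can be sketched schematically in numpy as below. The label index \(\ell\) and the actual evaluation of \(\Delta B^{\ell}\) from Equation 23 are omitted; the assumed core shapes, the fixed truncation cutoff, and the function name `two_site_update` are illustrative choices rather than details taken from [91].

```python
import numpy as np

def two_site_update(A_j, A_j1, grad_B, alpha, max_bond):
    """Schematic two-site sweep step: merge adjacent cores into a bond tensor,
    apply one gradient update, then restore MPS form with a truncated SVD.
    A_j has shape (Dl, d, D); A_j1 has shape (D, d, Dr); grad_B matches B's shape."""
    Dl, d, _ = A_j.shape
    Dr = A_j1.shape[2]
    # (a) form the bond tensor B by contracting the shared bond index
    B = np.einsum('ldm,mer->lder', A_j, A_j1)          # shape (Dl, d, d, Dr)
    # (c) gradient update on the bond tensor
    B = B + alpha * grad_B
    # (d) split back into two cores with a truncated SVD (adaptive bond dimension)
    U, S, Vh = np.linalg.svd(B.reshape(Dl * d, d * Dr), full_matrices=False)
    chi = min(max_bond, int(np.sum(S > 1e-12)))
    A_j_new = U[:, :chi].reshape(Dl, d, chi)
    A_j1_new = (np.diag(S[:chi]) @ Vh[:chi]).reshape(chi, d, Dr)
    return A_j_new, A_j1_new
```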
While the DMRG sweeping method has seen success in several works, gradient descent based methods can be used to directly optimize tensor networks [97, 99, 100, 101]. This was first noted by Novikov et al. [97], who developed MPS-parameterized supervised learning models in parallel with the work in [91]. Gradient-descent-based approaches for minimizing the cost function incur an increased bond dimension after each iteration, which [91] handles via successive SVD operations to reduce the rank. To avoid the bond dimension growth, a stochastic Riemannian optimization procedure is used instead, with the addition of a bond dimension regularization term in the cost function. The polynomial basis (Equation 21) was used as the local feature mapping.
Efthymiou et al. [99] similarly provide an implementation using automatic gradients for optimization, rather than DMRG. The results show strong performance over benchmark image classification tasks.
Araz and Spannowsky [102] perform experiments comparing the effectiveness of DMRG and SGD approaches in high-energy physics applications, as well as investigating training methods that combine both SGD and DMRG techniques, previously noted to be speculatively compatible [91]. At each epoch, the first batch has the DMRG applied for a certain number of sweeps, after which SGD takes over for the remaining batches. This method showed comparable performance with MPS training solely on DMRG or SGD,
Fig. 6: DMRG Sweeping Algorithm [91]: (a) Formation of the bond tensor \(B^{\ell}\) at bond \(j\); (b) Projection of a training input into the “MPS basis” \(\tilde{\Phi}_{n}\) at bond \(j\); (c) Computation of the decision function and the gradient update \(\Delta B^{\ell}\); (d) Restoration of the MPS form by the SVD.
however, the authors note the conflicting aims of the two algorithms: DMRG attempts to reduce the degrees of freedom of a node, while SGD tries to increase them.
**Unsupervised Tensor Network Modeling:** The second prominent machine learning formalism for tensor networks is in probabilistic generative modeling. Generative modeling approaches learn the underlying probability distribution that describes a set of data, from which new data instances can be either generated or inferred from the distribution. Quantum states inherently possess a probabilistic interpretation, in which the squared norm of a quantum state's amplitudes gives rise to the probabilities of different outcomes. This connection can be traced back to Born's rule in quantum mechanics [103]. Thus, Han et al. [104] first proposed the use of tensor networks for generative modeling. Their model, dubbed by contemporaries as the "Tensor Network Born Machine" (TNBM) [105], produces random samples by first encoding probability distributions into quantum states, which are represented by a tensor network. A given dataset \(x\) is modelled using a quantum state described by a real-valued wavefunction \(\Psi(x)\), which could be some quantum state embedding kernel such as in Equation 15. This in turn forms the model probability distribution:
\[P(x)=\frac{|\Psi(x)|^{2}}{Z}, \tag{24}\]
normalized by a partition function \(Z=\sum|\Psi(x)|^{2}\); \(|\Psi(x)|^{2}\) is the energy function of \(x\). This form allows for the representation of \(\Psi(x)\) via a tensor network. Similar to the supervised setting, both DMRG-like and gradient-descent algorithms can be used for optimization [104]. The loss function used is the negative log-likelihood (NLL):
\[L=-\frac{1}{N_{T}}\sum_{n=1}^{N_{T}}\ln P(x_{n}). \tag{25}\]
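As a concrete sketch of this loss, the snippet below evaluates Equation 25 for a small MPS Born machine over bit strings, computing the partition function by brute-force enumeration; the boundary cores are assumed to have bond dimension 1, and the helper names (`mps_amplitude`, `partition_function`, `nll`) are illustrative rather than taken from [104].

```python
import numpy as np
from itertools import product

def mps_amplitude(cores, bits):
    """Unnormalized amplitude Psi(x) of a bit string under an MPS with cores of shape (Dl, 2, Dr)."""
    v = np.ones(1)
    for core, b in zip(cores, bits):
        v = v @ core[:, b, :]
    return v.item()

def partition_function(cores):
    """Exact Z = sum_x |Psi(x)|^2 by brute-force enumeration (only feasible for small N)."""
    N = len(cores)
    return sum(abs(mps_amplitude(cores, x)) ** 2 for x in product([0, 1], repeat=N))

def nll(cores, samples):
    """Negative log-likelihood of Eq. (25) with P(x) = |Psi(x)|^2 / Z."""
    Z = partition_function(cores)
    logp = [np.log(abs(mps_amplitude(cores, x)) ** 2 / Z) for x in samples]
    return -np.mean(logp)
```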
While the TNBM formalism has been predominantly used in unsupervised contexts, it has also been adapted for supervised learning [100], where classification involves finding the maximum fidelity between a given test example and the learned quantum states \(\text{argmax}_{c}|v^{\dagger}\Psi_{c}|\), for \(c\in\{1,\ldots,K\}\), where \(K\) is the total number of classes. The work in [100] demonstrates that the mapping of images into Hilbert space produces natural clustering patterns within and between classes, which was only previously assumed. This explicitly shows advantages in solving machine learning tasks in the many-body Hilbert space as opposed to data-driven feature spaces.
Bit-string classification over parity datasets has suggested that the highly expressive probability distributions encoded by generative MPS networks lead to strong predictive outcomes [106]. Further, MPS decompositions have been shown to handle tasks that deep learning methods cannot. Bradley et al. [107] note that standard models like Restricted Boltzmann Machines (RBMs) often struggle while learning high-length parity datasets, despite their categorization as universal approximators, theoretically endowed with the necessary expressive capabilities. Results show that MPS models can excel at such tasks, lending further weight to the unique inductive bias provided by tensor networks, echoing findings from other studies such as [100].
#### 4.2.2 Tree-Tensor Networks/MERA
Tree Tensor Networks (TTN) (also called the Hierarchical Tucker decomposition [78]) are a tensor network structure where the tensors are arranged hierarchically, often forming a binary tree structure [108]. TTNs exhibit advantages in computational efficiency compared with other tensor network forms, which stems from the tree structure that avoids loops in the network, enabling efficient and exact contraction. The Multi-scale Entanglement Renormalization Ansatz (MERA) is a particular type of TTN, which is specifically designed to efficiently represent quantum states with long-range correlations [109] due to its incorporation of disentanglers and isometries in its network structure, accounting for and managing entanglement at different length scales. This makes it particularly suitable for representing ground states of critical systems or systems with power-law decay of correlations. While not as general as PEPS or MPS, TTNs and MERA offer advantages in specific contexts where their structure aligns with the physical system.
The mathematical form of the TTN varies based on the compositional structure, dependent on the number of layers, the number of tensors, and the contraction of those tensors within those layers. The contractions are often defined recursively. An example is given in Equation 26 for the construction of a binary TTN, with each tensor node connected to two children:
\[W=\sum_{\{\alpha\}}A_{\alpha_{2},\alpha_{3}}^{[1]}\prod_{n=2}^{N}A_{\alpha_{n},\alpha_{2n},\alpha_{2n+1}}^{[n]}, \tag{26}\]
where tensor \(A^{[1]}\) is the root tensor, with subsequent \(A^{[n]}\) children over \(N\) total layers [110]. Similar to the MPS, the TTN can be trained via DMRG and gradient-based optimization. Predicting an output is then given by the contraction:
\[\ket{\vec{p_{n}}}=W^{\dagger}\cdot\Phi(x_{n}). \tag{27}\]
In addition to gradient-descent based optimization used in the MPS, a MERA-like training process has also been proposed [111]. The cost function to be minimized is chosen as:
\[f=-\sum_{n=1}^{N}\bra{p_{n}|\vec{p_{n}}}, \tag{28}\]
with \(N\) as the total number of samples. This cost can be reduced by imposing unitary constraints on all tensors \(A\) of the TTN such that \(A^{\dagger}A=I\), inducing the whole transformation as a unitary: \(W^{\dagger}W=I\). Thus the simplified cost function becomes:
\[f =\sum_{n=1}^{N}\left(\bra{\Phi(x_{n})}WW^{\dagger}\ket{\Phi(x_{n })}-2\bra{\Phi(x_{n})}W\ket{p_{n}}+1\right) \tag{29}\] \[=\sum_{n=1}^{N}\bra{\Phi(x_{n})}W\ket{p_{n}},\]
and is shown to reduce the complexity of optimization. Over the task of Modified National Institute of Standards and Technology (MNIST) [112] image classification in [111], this learning method exhibited relatively small entanglement between classification states, meaning that the TTN efficiently represents the MNIST dataset; a conjecture that may extend over classical images in general.
2D hierarchical structures have also been proposed for generative modeling as a direct extension of the MPS version in [104], where the modeling of 2D images can be directly achieved [110]. This was shown to overcome the issue of exponential decay of correlations in MPS, making it more effective in capturing long-range correlations and performing better for large-size images.
TTNs have been used as a means of coarse-grained unsupervised feature extraction in [87]. The model prepares quantum states from classical input and feeds them into a 1D TTN. Optimal weights \(W\) are computed from its left singular vectors \(W_{s}=\sum_{n}\beta_{n}U_{s}^{\dagger n}\). The basis \(U^{\dagger}\) diagonalizes the feature space covariance matrix \(\rho_{s}^{s^{\prime}}=\sum_{n}U_{n}^{s^{\prime}}P_{n}U_{s}^{\dagger n}\). Since direct diagonalization of \(\rho\) is not feasible, the DMRG-like algorithm is employed to iteratively produce and diagonalize reduced density matrices. This procedure is repeated \(\log(N)\) times to produce a suitable approximation, leading to the diagonalizing \(U\) of isometry tensors, approximated as a layered TTN. To produce a classification output, the reduced feature set is subsequently used in a supervised context, where the layers of \(U\) are fixed and the top tensor is replaced with an MPS decomposition, optimized via DMRG. The authors note that the method is akin to a direct computation of kernel PCA in feature space.
#### 4.2.3 Projected Entangled-Pair States
Projected Entangled-Pair States (PEPS) are a natural extension of MPS to higher-dimensional systems [113]. Just as MPS provide efficient descriptions of 1D quantum systems, PEPS have been shown to efficiently represent ground states of gapped 2D local Hamiltonians [74]. The tensors in PEPS are arranged in a grid-like structure, allowing the PEPS to effectively capture long-range quantum correlations [114]. The PEPS decomposition has a polynomial correlation decay with respect to the separation distance between parts of the network, whereas the MPS decomposition shows an exponential decay [78]. Like MPS, the bond dimension of PEPS determines the maximum entanglement entropy across any cut of the tensor network. However, contracting the PEPS networks is computationally more challenging than MPS due to the increased tensor connectivity. As such, exact calculations for PEPS are practically infeasible for larger systems, and approximation methods are usually employed [74].
The weight matrix \(W\) can be modelled as a PEPS decomposition of tensors on an \(L\times L\) grid, mapped to some \(x\in\mathbb{R}^{L\times L}\) input feature tensor:
\[W_{s_{1}s_{2}\ldots s_{N}}^{l}=\sum_{\{\alpha\}}A_{s_{1}}^{\alpha_{1},\alpha_{2}}A_{s_{2}}^{\alpha_{3},\alpha_{4},\alpha_{5}}\cdots A_{s_{j}}^{l;\alpha_{k},\alpha_{k+1},\alpha_{k+2},\alpha_{k+3}}\cdots A_{s_{N}}^{\alpha_{K-1},\alpha_{K}}, \tag{31}\]
where \(K\) is the number of bonds in the lattice, and each tensor has a "physical" index \(s_{j}\) connected to the input vector, along with "virtual" indices \(\alpha_{k}\) for contraction with adjacent tensors. A special tensor in the center also has a "label" index \(l\) to generate the output vector [115].
In unsupervised generative modeling, the PEPS model can capture the probability distribution \(P(x)\) as a decomposed sum of individual distributions:
\[P(\mathbf{x})=\frac{N_{1}}{N}P_{1}(\mathbf{x})+\ldots+\frac{N_{m}}{N}P_{m}( \mathbf{x}), \tag{32}\]
each representing different labels or categories over the data, in which the total wavefunction is seen as a superposition of these distributions with smaller entanglement [116]. Each \(P_{i}\) is weighted by the fraction of its category within the total training set \(N_{i}/N\), with the categorization determined by labels, if present, allowing for supervised generative modeling, or by some agnostic clustering algorithm.
Efficient contractions are possible when both image size \(L\) and bond dimension \(D\) are small; for larger representations, an approximate contraction method is used, which treats the bottom row of tensors as an MPS and the rest of the row tensors as operators applied on the MPS, over which the DMRG-like truncating algorithm can operate [115].
The PEPS decomposition has seen success in image modeling. Cheng et al. [115] note that the method better captures structural and spatial information in images when compared to MPS and TTNs. Each image pixel is mapped to each PEPS tensor without the need for flattening, such as with MPS decompositions. The grid structure of the PEPS allowed for the exploration of additional feature maps; two are explored, the trigonometric feature map (Equation 17), and an adaptive feature map that leverages convolutional kernels, allowing the PEPS to accept a feature tensor as input. The use of convolutional feature extractors proved more effective in experiments over the MNIST and Fashion-MNIST [117] datasets. Vieira et al. [116] explore PEPS for unsupervised generative modeling, which showed greater performance than that of existing MPS and TTN models. The results suggest the enhanced capability of multi-dimensional tensor network structures in unsupervised generative modeling for image classification, with [116] stating that tensor networks perform better when the network mimics the local structure of the data.
#### 4.2.4 Matrix Product Operators
The Matrix Product Operator (MPO) is an extension of the MPS concept to describe quantum operators, especially in the context of 1D quantum systems [118]. An MPO is expressed as a collection of matrices arranged in a chain, with each matrix indexed by physical indices that account for the ingoing and outgoing states of the operator. This structure allows for a compact and efficient representation of complex quantum operators that would otherwise require an exponentially large amount of information. Further, MPOs can be compressed to approximate a given operator with smaller matrix dimensions. Similar to the MPS, techniques such as SVD can be applied, reducing the bond dimensions while maintaining a controlled approximation error.
An MPO can be expressed in the form:
\[M_{s^{\prime}_{1},\ldots,s^{\prime}_{N}}^{s_{1},\ldots,s_{N}}=\sum_{\{\alpha\}}A_{s^{\prime}_{1}}^{s_{1}\alpha_{1}}A_{\alpha_{1}s^{\prime}_{2}}^{s_{2}\alpha_{2}}\cdots A_{\alpha_{j}s^{\prime}_{j+1}}^{s_{j+1}\alpha_{j+1}}\cdots A_{\alpha_{N-1}s^{\prime}_{N}}^{s_{N}}, \tag{33}\]
in traditional mathematical notation, or diagrammatically as shown in Figure 7[119].
The ability for compressed representation has allowed for the use of MPOs in classical neural network structures. Research has shown that replacing one or several dense hidden layers in a neural network with a parameterized MPO can significantly decrease the number of parameters involved, effectively compressing the network [120, 121]. This transformation maintains the network's ability to express complex relationships and functions, thus preserving its overall performance while also improving the training efficiency and memory consumption. In [121], a slight improvement is seen in test accuracies in state-of-the-art models that incorporate the MPO compression, compared to their standard parameterization. The authors suggest that by reducing the number of parameters, local correlations in input signals are emphasized, and the risk of overfitting is reduced by constraining the linear transformation matrix, which helps avoid trapping the training data in local minima. Patel et al. [122] extend this approach to solving differential equations arising in portfolio optimization, with similar results.
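A rough sketch of this kind of layer compression is shown below: a dense weight matrix is reshaped so that each site carries one (output, input) index pair and is then factorized into a tensor train of operator cores with bounded bond dimension. This direct truncated factorization is lossy and only illustrates the parameter counting; in works such as [121] the cores are trained directly rather than obtained by factorizing a pre-trained dense layer. The function `mpo_from_dense` and its arguments are hypothetical.

```python
import numpy as np

def mpo_from_dense(W, in_dims, out_dims, max_bond):
    """Compress a dense weight matrix into MPO cores of shape (Dl, out_i, in_i, Dr).
    W has shape (prod(out_dims), prod(in_dims))."""
    N = len(in_dims)
    T = W.reshape(*out_dims, *in_dims)
    # interleave output/input indices so each site carries one (out_i, in_i) pair
    perm = [k for i in range(N) for k in (i, N + i)]
    T = T.transpose(perm)
    cores, remainder = [], T.reshape(1, -1)
    for i in range(N - 1):
        Dl = remainder.shape[0]
        remainder = remainder.reshape(Dl * out_dims[i] * in_dims[i], -1)
        U, S, Vh = np.linalg.svd(remainder, full_matrices=False)
        chi = min(max_bond, len(S))
        cores.append(U[:, :chi].reshape(Dl, out_dims[i], in_dims[i], chi))
        remainder = np.diag(S[:chi]) @ Vh[:chi]
    cores.append(remainder.reshape(-1, out_dims[-1], in_dims[-1], 1))
    return cores

W = np.random.randn(256, 256)                 # a dense 256 -> 256 layer
cores = mpo_from_dense(W, in_dims=[4, 4, 4, 4], out_dims=[4, 4, 4, 4], max_bond=8)
print(sum(c.size for c in cores), "MPO parameters vs", W.size, "dense parameters")
```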
Wang et al. [123] explore the use of MPOs as sparse representations of operators in parameterizing a linear transformation over an exponentially large space with respect to the number of features. The authors highlight the suitability of the model for anomaly detection, as linear models utilizing MPOs can provide control and manage behavior over the entire input space, even when it is vastly imbalanced in terms of inliers and outliers. In the anomaly detection model, data is first embedded into a high-dimensional vector space \(V\) using a fixed feature map \(\Phi\). "Normal" instances undergoing the transformation would be projected close to the surface of the hypersphere \(W\), whereas anomalous instances are mapped close to its center.
#### 4.2.5 Other Tensor Network Methods
Several other tensor network decompositions have been suggested for use in probabilistic modeling, particularly over natural language modeling tasks. The Canonical Polyadic (CP) and the Tucker decompositions, similar to decompositions previously mentioned, have been used for their capacity for producing low-rank approximations of high dimensional tensors. Several authors have demonstrated their suitability in modelling the sequential and polysemic nature of words. Hierarchical MPS models have also been proposed, providing a structured way to compute exact normalized probabilities and perform unbiased direct sampling, allowing for efficient training through gradient-based procedures and demonstrating competitive performance on tasks such as image classification with reduced computational resources. Finally, we touch on the use of tensor networks embedded within neural network structures, in which tensor networks have demonstrated their capabilities in model compression.
**Canonical Polyadic Decomposition:** In Zhang et al. [93], a weighted form of the CP decomposition is used, which factorizes a higher-order tensor into a sum of rank-one tensors, expressed as:
\[W=\sum_{r=1}^{R}\lambda_{r}\mathbf{a}_{r,1}\otimes\mathbf{a}_{r,2}\otimes \ldots\otimes\mathbf{a}_{r,N} \tag{34}\]
where \(\lambda_{r}\) are scalar weights and \(\mathbf{a}_{r,i}\) are unit column vectors of \(M\)-dimension: \((a_{r,i,1},\ldots,a_{r,i,M})^{\intercal}\), where \(M\) is the dimension of the corresponding mode of \(W\). \(R\) denotes the minimum possible number of rank-one tensors. The authors propose an approach based on the CP decomposition to account for interaction among polysemic words, creating new compound meanings by combining the different basis vectors of the words. The global representation of all possible compound meanings is captured in the quantum many-body wavefunction of Equation 19. Solving the resulting high-dimensional tensor is made feasible through the use of tensor decomposition, which then enables the projection of a global semantic space onto a local one for a particular sequence of words.
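For concreteness, a small numpy sketch reconstructing the full tensor of Equation 34 from its rank-one terms is given below; the storage format for the factors (one \((R, M_{i})\) array per mode) and the function name are assumptions made for illustration.

```python
import numpy as np

def cp_reconstruct(weights, factors):
    """Rebuild the full tensor of Eq. (34) from R weighted rank-one terms.
    `factors` is a list of N arrays, each of shape (R, M_i) holding the vectors a_{r,i}."""
    R = len(weights)
    N = len(factors)
    W = np.zeros(tuple(f.shape[1] for f in factors))
    for r in range(R):
        term = factors[0][r]
        for i in range(1, N):
            term = np.multiply.outer(term, factors[i][r])   # outer product across modes
        W += weights[r] * term
    return W

# Example: a rank-3 CP model of a 4 x 4 x 4 tensor.
rng = np.random.default_rng(0)
weights = rng.normal(size=3)
factors = [rng.normal(size=(3, 4)) for _ in range(3)]
print(cp_reconstruct(weights, factors).shape)              # (4, 4, 4)
```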
**Recurrent Methods:** Zhang et al. [94] propose a Recursive Tensor Decomposition, inspired by the MPS and Tucker decompositions. The decomposition of a tensor \(T\) is expressed as:
\[T =\sum_{i=1}^{r}\lambda_{i}S_{(n),i}\otimes u_{i}, \tag{35}\] \[S_{(n),k} =\sum_{i=1}^{r}W_{k,i}S_{(n-1),i}\otimes u_{i} \tag{36}\]
where \(S_{(1)}=1\in\mathbb{R}^{r}\). Here, an \(n\)-order tensor \(T\in\mathbb{R}^{m\times\ldots\times m}\) is decomposed into an \(n\)-order tensor \(S_{(n)}\in\mathbb{R}^{m\times\ldots\times r}\), a diagonal matrix \(\Lambda\in\mathbb{R}^{r\times r}\), and a matrix \(U\in\mathbb{R}^{r\times m}\); \(\lambda_{i}\) and \(u_{i}\) are the \(i\)-th singular values and left singular vectors resulting from each successive factorization. The parameter \(r\) (with \(r\leq m\)) denotes the rank of the tensor decomposition, and the decomposition can be viewed as a matrix SVD after tensor matricization or flattening by one mode. The method reduces the parameters from \(O(m^{n})\) to \(O(m\times r)\), effectively capturing the main features of the tensor \(T\) while significantly reducing the complexity. The decomposition can be used to calculate the conditional probability \(p(w_{t}|w_{t-1}^{t})\) of sequential words via the softmax function over the inner product of two tensors \(T\) and \(A\), where \(A_{t-1}\) is the input of \((t-1)\) words \(\alpha_{1},\ldots,\alpha_{(t-1)}\). Intermediate variables \(h_{t}\) are recursively calculated using:
\[h_{t}=Wh_{t-1}\circ U\alpha_{t}\]
where \(\circ\) denotes element-wise multiplication, and \(W\) and \(U\) are matrices decomposed from the tensor \(T\).
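The recurrence can be transcribed directly; the sketch below assumes \(W\in\mathbb{R}^{r\times r}\), \(U\in\mathbb{R}^{r\times m}\), embedded word vectors \(\alpha_{t}\in\mathbb{R}^{m}\), and an initial state \(h_{0}\), with names chosen for illustration only.

```python
import numpy as np

def hidden_states(W, U, alphas, h0):
    """Run the recurrence h_t = (W h_{t-1}) * (U alpha_t), with * the elementwise product."""
    h = h0
    states = []
    for a in alphas:
        h = (W @ h) * (U @ a)
        states.append(h)
    return states

r, m, T = 8, 16, 5
rng = np.random.default_rng(0)
W, U = rng.normal(size=(r, r)), rng.normal(size=(r, m))
alphas = [rng.normal(size=m) for _ in range(T)]
print(len(hidden_states(W, U, alphas, np.ones(r))))   # 5 hidden states, one per word
```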
Fig. 7: Diagrammatic Notation of the Matrix Product Operator (MPO) with six Sites [119]
Miller et al. [95] propose the uniform Matrix Product State (u-MPS) model as a type of recurrent tensor network used for processing sequential data. In the u-MPS, all cores of the MPS are identical tensors \(A\) with shape \((D,d,D)\). The model's recurrent nature allows it to generate \(n\)-th order tensors \(T_{n}\in\mathbb{R}^{d^{n}}\) for any natural number \(n\), enabling its application to sequential data. For a sequence of arbitrary length \(n\) over an alphabet of size \(d\), a u-MPS can map the sequence to the index of an \(n\)-th order tensor \(T_{n}\), defining a scalar-valued function \(f_{A}(s)=\alpha^{\intercal}A(s)\omega\), where \(A(s)=A(s_{1})A(s_{2})\ldots A(s_{n})\) represents the compositional matrix product of the sequence, and \(\alpha\) and \(\omega\) are \(D\)-dimensional vectors that serve as boundary conditions terminating the initial and final bond dimensions of the network. The application of the u-MPS model has shown unique generative properties and successful extrapolation of non-local correlations, indicating potential scalability to real-world sequence modeling tasks.
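Evaluating \(f_{A}(s)\) reduces to a chain of matrix-vector products, as in the sketch below; the random parameters are placeholders and the function name `umps_score` is illustrative.

```python
import numpy as np

def umps_score(A, alpha, omega, sequence):
    """Scalar function f_A(s) = alpha^T A(s_1) A(s_2) ... A(s_n) omega of a u-MPS.
    A has shape (D, d, D); `sequence` is a list of symbol indices in range(d)."""
    v = alpha
    for s in sequence:
        v = v @ A[:, s, :]
    return float(v @ omega)

D, d = 3, 4                                    # bond dimension and alphabet size
rng = np.random.default_rng(1)
A = rng.normal(size=(D, d, D))
alpha, omega = rng.normal(size=D), rng.normal(size=D)
print(umps_score(A, alpha, omega, [0, 2, 1, 3, 3]))
```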
**Generalized Tensor Networks:** Glasser et al. [101] explored the connection between tensor networks and probabilistic graphical models, from which "generalized tensor networks" are proposed. These allow input tensors to be copied and reused in other parts of the network, providing greater computational efficiency with fewer parameters, and greater expressivity over new types of variational wavefunctions when compared with regular tensor networks. Several generalized tensor network structures are proposed, namely the discriminative string-bond state (SBS) and entangled plaquette state (EPS) models, which utilize tensor copy operations and are tested in supervised contexts on both image and environmental sound classification tasks. The method showed superior performance to other tensor network learning models; the authors note that these results are achieved over very small bond dimensions compared to previous works.
**Hierarchical MPS:** Hierarchical MPS models have emerged as a powerful approach for addressing complex machine learning tasks. They are characterized by a tiered structure that emphasizes the flexibility of representation and computational efficiency.
Liu, Zhang and Zhang [124] noted the generally poor performance of prior works in generative tensor network models relative to standard methods. To address this, an autoregressive MPS (AMPS) is proposed, where the joint probability distribution \(P(x)\) is not represented by a tensor network directly, but the factorization of it as a product of conditional probabilities; an idea which stems from autoregressive modeling in ML. The model constructs a 2D hierarchical tensor network representation using separate MPS decompositions for individual conditional probabilities. These individual MPS tensor elements can be trained and parameterized through a gradient-based NLL minimization process, demonstrating a significant, theoretical expressive power that exceeds previous tensor network models, backed by empirical investigation.
Selvan and Dam [125] introduce the Locally Orderless Tensor Network (LoTeNet) model using the theory of locally orderless images that allows it to handle larger images without sacrificing global structure, a deviation from prior models that flatten entire 2D images [91, 104, 99]. LoTeNet begins with a squeeze operation, rearranging local \(k\times k\) image patches and stacking them along the feature dimension, with the stride of the kernel \(k\) determining the reduction in spatial dimensions. The squeezed images with an inflated feature dimension of \(C\cdot k^{2}\) are then flattened from 2D to 1D and processed through MPS blocks, embedded into the joint feature space, and contracted to output a vector with dimension \(\nu\). The output vectors from all MPS blocks are reshaped back into 2D space and passed through subsequent layers of the model. After \(L\) layers, the final MPS block performs a decision contraction, resulting in predictions for the \(M\) classes. In evaluation, this model required fewer hyperparameters and significantly less GPU memory than standard deep learning models. This is primarily due to its design of successively contracting input into smaller tensors, thereby avoiding escalating memory consumption with larger images and batch sizes.
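The squeeze step can be sketched as a simple reshape/transpose, as below; the exact ordering in which patch entries are stacked along the feature dimension is an assumption and may differ from the LoTeNet implementation.

```python
import numpy as np

def squeeze(image, k):
    """LoTeNet-style squeeze: rearrange non-overlapping k x k patches so spatial dims
    shrink by k and the channel dim grows by k*k. `image` has shape (C, H, W)."""
    C, H, W = image.shape
    out = image.reshape(C, H // k, k, W // k, k)
    out = out.transpose(0, 2, 4, 1, 3)               # (C, k, k, H//k, W//k)
    return out.reshape(C * k * k, H // k, W // k)

x = np.random.rand(1, 28, 28)                        # a single-channel 28 x 28 image
print(squeeze(x, 2).shape)                           # (4, 14, 14)
```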
**Tensor Networks in Neural Network Architectures.** In Section 4.2.4, the benefits of using MPO tensor networks as neural network layers for network compression were discussed. Other decompositions have also been suggested for this purpose, such as in [126], who integrate the Tucker decomposition into a neural network architecture for performing voxel-wise processing for fully automated semantic segmentation of brain and liver volume scans. Using a tensor representation for the high-dimensional weight vector is shown to enhance network operations and extract critical semantic information, while also facilitating faster convergence and improving precision.
Tensor networks have also been used as feature extractors for neural networks, which is a natural extension given their kernel-based feature map representations. The resulting low-rank tensors in [93] are used as trainable kernel weights fed into a convolutional layer in a classical CNN model.
#### 4.2.6 Differences in Optimization Methods
Tensor networks are most commonly optimized using either DMRG sweeping or gradient descent, particularly SGD. A comparison of these methods reveals various strengths and weaknesses:
**Performance:** As evidenced in Tables IV and V, gradient descent-based optimization generally outperforms DMRG on both supervised Fashion-MNIST and unsupervised binary MNIST. An exception can be found in the method used in [91] on the supervised MNIST task, which is the second-best performing model. However, the study by [102] observed that SGD and DMRG produced similar classification results over various settings. These findings do not entirely reflect the performances in Tables IV and V; however, this may be due to other factors pertaining to the models in question. Understanding the nature of these observed inconsistencies may call for additional examination or targeted studies.
**Efficiency and Usability:** Gradient descent optimization is often more efficient than DMRG, especially when stochastic gradients are considered [87, 102]. DMRG is known to exhibit large computational complexity, especially for decompositions outside of boundary MPS [115]. This may limit the applicability of the algorithm; we note that DMRG is mostly used in MPS and TTN settings from Tables IV and V, or on small-sized PEPS [115]. Additionally, the widespread use of GD in machine learning, as well as its suitability
for parallelization [99] makes it readily adoptable by ML practitioners.
**Simplicity and Interpretability:** DMRG provides a simpler learning structure and can adapt the network architecture based on the complexity of the problem, reducing the need for hyperparameter optimization [99]. Furthermore, it provides more interpretable results than SGD, emphasizing inherent data structures for a better understanding of the learned representations [102].
**Information Capture:** In image recognition, SGD and DMRG produce similar classification results, but SGD is shown to capture more information due to a larger entropy, potentially outperforming DMRG in scenarios with large data fluctuations. DMRG focuses more on the image's central part and maintains a similar entropy distribution per site [102].
#### 4.2.7 Enhancing Tensor Network Methods
Image recognition tasks have served as an experimental ground for enhancing tensor network methods, especially in optimizing computational efficiency, understanding the exploitation of Hilbert feature spaces, and optimizing model selection based on entanglement scaling analysis.
Liu et al. [127] proposed using quantum entanglement information to guide the learning of MPS architectures for image recognition tasks. By converting images into frequency space using a direct cosine transformation, natural local correlations in 1D space are enhanced, and entanglement structures are shown to prioritize low-frequency data. These factors are captured in lower entanglement entropy across the system. An MPS is used to model these correlations. A MERA-based learning method is used, which minimizes the bipartite entanglement entropy (BEE) by rearranging the MPS contraction path to align the single-site entanglement entropy values (SEE) in descending order. Low SEE sites are discarded. This approach has demonstrated solid accuracy with relatively small bond dimensions on the MNIST dataset.
A few works also tackle the question of selecting the optimal tensor network decomposition for a given task. Convy et al. [128] applied entanglement scaling analysis from quantum physics to classical ML data. For a quantum system represented as a tensor network, the entanglement entropy of the system is bounded by its maximum bond dimension \(m\) and \(n\) connecting indices. Assuming a fixed \(m\) (typically a hyperparameter), the entanglement scaling differs between tensor networks due to \(n\), which depends on the network geometry. Consequently, the foremost objective when deploying tensor network ansatze is to align the entanglement scaling dictated by the data, or the quantum state, with the inherent entanglement scaling of the network. The authors proposed the Mutual Information (MI) score to analyze entanglement scaling on classical data. They investigated MI scaling patterns in the MNIST and grayscale Tiny Images [129] datasets. Results suggested that Tiny Images' MI follows a boundary law, while the findings were less conclusive for MNIST. These insights could guide the selection of tensor networks, with 2D geometries like PEPS being suitable for datasets obeying a boundary law.
Hashemizadeh et al. [130] proposed a greedy algorithm for tensor network structure learning, aimed at efficiently traversing the space of tensor network structures for common tasks like decomposition, completion, and model compression. They introduced a novel tensor optimization problem that seeks to minimize a loss across diverse tensor network structures with a parameter count constraint. This bi-level optimization problem uniquely involves discrete optimization over tensor network structures at the upper level and continuous optimization of a specific loss function at the lower level. Their approach entails a greedy algorithm to address the upper-level problem, which is then combined with continuous optimization techniques to solve the lower-level problem. Starting with a rank one initialization, the algorithm successively identifies the most promising edge in a tensor network for a rank increment. This allows for the adaptive identification of the optimal tensor network structure for a given task, directly from the data, evidenced by experimental results over image reconstruction.
#### 4.2.8 Understanding Neural Networks via Tensor Networks
Aside from the use of tensor networks for learning tasks, the application of these methods to machine learning can help us understand the machinery of neural networks in new ways.
Cohen et al. [131] provide a comprehensive examination of the representational capabilities of deep learning structures by finding equivalences between neural networks and tensor-network architectures in the context of probabilistic modeling, including structures like non-negative matrix product states and Born machines. This research underscored the fundamental advantage of deep networks over their shallow counterparts, demonstrating that functions that could be efficiently represented by a deep CNN of polynomial size would necessitate an exponential size for approximation by a shallow CNN, which was previously a conjectural theorem. Moreover, the study highlighted the extraordinary power of neural network depth in exponentially reducing the need for breadth with each additional layer, although the exact class of expressible functions remains a subject of ongoing exploration. Subsequent work further extends this analysis to recurrent neural networks, showcasing their natural correlation with MPS and validating Cohen's theorem in this context [132].
Glasser et al. [133] analyze the expressive power of tensor network formalisms, highlighting unbounded disparities in resource requirements for modeling certain distributions. This is reinforced by studies indicating that 1D tensor networks are inefficient for text, as mutual information scales with an exponent close to a volume law, while 2D tensor networks like PEPS may be suitable for images due to an area-law scaling [134]. Tangpanitanon et al. [96], however, demonstrate that MPS variational ansatze may in fact be suitable for sequence modeling tasks such as NLP, particularly for sentiment analysis of movie reviews. The suitability arises not from entanglement entropy but from the high-dimensional word vector embedding, which allows for strong predictive accuracy in sentiment analysis. The authors also reported a phenomenon of entanglement entropy saturation with implications for machine learning model selection; as the model size grows, word embedding becomes key to increasing model expressiveness, putting forward the MPS structure as a viable approach in NLP.
This work stands in contrast to arguments against MPS's usefulness in NLP [133, 106, 134], highlighting instead the significance of high-dimensional word embedding.
Gao and Duan [135] show that efficient Deep Boltzmann Machine (DBM) representations can be efficiently constructed from any tensor network state, including PEPS and MERA, provided a deep enough neural network. In essence, assuming quantum state presentation is a P/poly class problem, deep neural networks possess the capability of representing the majority of physical states efficiently, including the ground states originating from many-body Hamiltonians and states that emerge from quantum dynamics. Chen et al. [136] show that under certain conditions, the reverse is also true -- TN states can be converted into RBMs if they describe a non-entangled quantum system. Levine et al. [137] furthers the work in [135] and extends the proof to CNNs and RNNs -- when considered under TN representations, these neural networks can efficiently represent entangled quantum systems.
### _Quantum Variational Algorithm Simulation (QVAS)_
In view of the categorization of QiML within the "classical" (CC) mode of QML, as indicated in Figure 1, one could interpret QiML as the application of classical data to quantum circuits, all simulated on classical hardware, to process machine learning tasks. This interpretation then overlaps QiML with the "classical-quantum" (CQ) aspect of QML, which encompasses concepts such as variational quantum circuits where optimization of quantum parameters is offloaded to classical computation methods. This research area delves into the potential capabilities of quantum computing in anticipation of the realization of quantum hardware. It is worth noting that despite the common disjunction between references to QiML and classical-quantum based ML in the literature, the common denominator remains that classically simulated quantum circuits are indeed run on classical hardware. Considering this perspective, the task becomes discerning those studies that not only investigate practical machine learning applications but also develop techniques particularly intended for classical hardware simulation. These constitute a minor subset within the vast body of QML literature, where the CQ paradigm takes center stage. Moreover, using naive keyword search terms such as "quantum simulation" may not yield desired results, as this term already denotes an established research domain [138, 139].
Our survey into this area is thus informed by both our own keyword searches, and the recent reviews of other authors exposing the interested subset, such as [140]. We target works that use quantum simulation and attempt to further machine learning tasks of interest by some metric (performance, speed, resource consumption, etc.), over purely classical implementations.
In the following section, we highlight recent advancements in simulating quantum computing, present a concise overview of QML learning frameworks, and discuss recent practical applications in this field. For a more thorough understanding, we direct interested readers towards comprehensive resources such as [141, 142, 143, 36, 4, 144].
#### 4.3.1 Frontiers of Classical Simulation
The successful emulation of quantum computations on classical hardware hinges on the capability to simulate qubits and their potential for exponential information storage effectively. Determining the boundary between classical and quantum computation elicits a discussion on quantum supremacy [145]; finding the exact crossover point, beyond which a quantum system becomes infeasible for classical computer simulation, remains a complex issue [146]. Classical resources needed for such simulations scale exponentially with the number of qubits and the depth of the quantum circuit [146], marking an exponential cost in their classical parameterization.
Despite these challenges, researchers have pushed the boundaries of classical hardware capabilities. Strategies such as data compression [147], optimized circuit partitioning [148], and large-scale batching methods [149] have enabled the simulation of many tens of qubits. However, these frontiers are largely restricted to supercomputing, or high-performance platforms. For users without access to such architectures, the possibilities are considerably more limited; a PC equipped with 16GB of GPU memory can simulate approximately 30 qubits [150]. Given this limitation, many practical applications of QML operate within this smaller qubit range. As such, we will focus on studies that have achieved promising results on these smaller-scale, more accessible devices.
For a comprehensive exploration of the challenges involved in the practical simulation of quantum computers, we refer the interested reader to the work by Xu et al. [150].
#### 4.3.2 Encoding Classical Data
Classical data must be processed through an encoding mechanism for quantum settings, wherein an \(m\)-dimensional classical dataset is mapped onto a quantum state vector within Hilbert space. This procedure permits us to leverage the vast feature space in quantum systems, thereby offering superior representational power in comparison to classical feature spaces. We outline common encoding schemes below:
1. **Basis Encoding:** Also known as computational basis encoding, it is the simplest way of encoding classical data. Given a classical vector \(x=(x_{1},x_{2},...,x_{n})\), where \(x_{i}\in\{0,1\}\), each classical bit \(x_{i}\) is encoded onto the state of the \(i\)-th qubit. An \(n\)-bit classical string is directly encoded into a quantum state of \(n\) qubits: \[\ket{x}=\ket{x_{1},x_{2},...,x_{n}}.\] (37)
2. **Amplitude Encoding:** This method allows efficient encoding of classical data, taking advantage of the exponentially large Hilbert space. In amplitude encoding, an \(n\)-dimensional normalized real vector \(x=(x_{1},x_{2},...,x_{n})\) such that \(\sum_{i=1}^{n}|x_{i}|^{2}=1\) is encoded into the amplitudes of a quantum state. This requires at least \(\log_{2}(n)\) qubits, where \(n\) is the dimension of the classical vector. The encoded state is: \[\ket{x}=\sum_{i=1}^{n}x_{i}\ket{i}.\] (38)
3. **Angle Encoding:** In angle encoding, data is encoded into the angles of rotation gates. Given a classical vector \(x=(x_{1},x_{2},...,x_{n})\), each value \(x_{i}\) is used as a parameter in a rotation gate applied to the \(i\)-th qubit. For example, with \(R_{y}\) rotations, the encoded state is: \[\ket{x}=R_{y}(x_{1})\otimes R_{y}(x_{2})\otimes...\otimes R_{y}(x_{n})\ket{0}^{\otimes n}.\] (39) Note the similarities between this and Equations 15 and 17; a possible \(R_{y}\) gate rotation could produce the embedding: \[\ket{x}=\bigotimes_{i=1}^{n}R_{y}(x_{i})\ket{0}^{\otimes n}=\bigotimes_{i=1}^{n}\left[\cos\left(\frac{\pi}{2}x_{i}\right)\ket{0}+\sin\left(\frac{\pi}{2}x_{i}\right)\ket{1}\right].\] (40)
In each of these methods, classical data is encoded into the quantum state space, and these encoded states are then used as inputs to quantum circuits. Different encoding methods can lead to different computational advantages, and the choice of encoding is often problem-specific.
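The three encodings above can be sketched as explicit state vectors with numpy, as below; the amplitude encoder pads to the next power of two, the angle encoder follows the \(\pi/2\)-scaled convention of Equation 40 (rotation-angle conventions vary across libraries), and the function names are illustrative.

```python
import numpy as np

def basis_encode(bits):
    """Eq. (37): map a bit string to the corresponding computational basis state."""
    state = np.zeros(2 ** len(bits))
    state[int("".join(map(str, bits)), 2)] = 1.0
    return state

def amplitude_encode(x):
    """Eq. (38): store a normalized vector directly in the amplitudes (padded to a power of two)."""
    dim = 1 << (len(x) - 1).bit_length()
    state = np.zeros(dim)
    state[:len(x)] = np.asarray(x) / np.linalg.norm(x)
    return state

def angle_encode(x):
    """Eqs. (39)-(40): one R_y-style rotation per feature, then a tensor product of the qubits."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(np.pi * xi / 2), np.sin(np.pi * xi / 2)])
        state = np.kron(state, qubit)
    return state

print(basis_encode([1, 0, 1]))                 # |101> as an 8-dimensional one-hot vector
print(angle_encode([0.2, 0.7]).shape)          # (4,)
```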
Once classical data is embedded into the quantum space, it can be manipulated via various QML algorithms. These algorithms are diverse and span a broad range of types, stemming from various mathematical bases [140, 144]. Some algorithms, such as Quantum Boltzmann Machines, are known to be BQP complete, implying they cannot be effectively simulated on classical computers [151]. Conversely, there are QML algorithms that remain within the realm of classical simulation, up to classical computing limits. These typically include quantum kernel and quantum variational methods [140].
#### 4.3.3 Quantum Kernel Methods
Quantum kernel methods utilize quantum devices to compute kernel functions, thus capturing the similarity between data points in a feature space. Notably, these methods have the potential to provide exponential speedups for specific problems while still being classically tractable. A frequently employed quantum kernel method is the Quantum Kernel Estimator (QKE) [152]. This method involves defining a quantum feature map, \(\phi(x)\), that transforms classical data, \(x\), into quantum states via unitary operations on a quantum circuit \(U_{\phi}(x)\). This is performed by applying the circuit to an initial state, commonly chosen as \(\ket{0}^{\otimes n}\):
\[\ket{\Phi(x)}=U_{\phi}(x)\ket{0}^{\otimes n}, \tag{41}\]
where \(U_{\phi}(x)\) can be considered as a unitary that produces quantum states \(\ket{\Phi(x)}\) based on a chosen feature mapping \(\phi\), and \(n\) is the number of qubits. For any two data points, \(x_{i}\) and \(x_{j}\) in the dataset \(D\), their corresponding encoded states are \(\Phi(x_{i})\) and \(\Phi(x_{j})\). The kernel entry between \(x_{i}\) and \(x_{j}\) is given by:
\[\kappa(x_{i},x_{j}) =\left|\langle\Phi(x_{j})|\Phi(x_{i})\rangle\right|^{2} \tag{42}\] \[=\left|\bra{0}^{\otimes n}U_{\phi}^{\dagger}(x_{j})U_{\phi}(x_{i})\ket{0}^{\otimes n}\right|^{2}, \tag{43}\]
representing the inner product of the two feature vectors in the quantum state space. The kernel entry is estimated by computing the overlap of the quantum states \(\ket{\Phi(x_{i})}\) and \(\ket{\Phi(x_{j})}\). The quantum kernel can then be used to construct a Quantum Support Vector Machine (QSVM) that integrates the quantum kernel (constructed via QKE) with a classical SVM, replacing the kernel \(K(\mathbf{x}_{i},\mathbf{x}_{j})\) in Equation 93 [152]. This method essentially processes data classically and uses the quantum state space as the feature space, enabling the use of high-dimensional, non-linear feature mappings that are difficult to compute classically.
The optimal choice of \(U_{\phi}(x)\) is largely an unsolved research problem, especially in the context of classical simulation. A common choice is \(U_{\Phi(\mathbf{x})}=Z_{\Phi(\mathbf{x})}H^{\otimes n}Z_{\Phi(\mathbf{x})}H^{\otimes n}\), where \(H\) is the Hadamard gate, and \(Z_{\Phi(\mathbf{x})}\) is a diagonal unitary in the Pauli-Z basis [152]. However, this kernel is known to be hard to implement classically. There may be opportunities to explore quantum kernels that are more amenable to classical settings.
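As an illustration of Equations 42-43, the sketch below evaluates a kernel entry exactly by state-vector simulation in NumPy. The product-state \(R_y\) feature map used here is an assumed, deliberately simple stand-in for the entangling feature map of [152]; on hardware the overlap would instead be estimated from measurement statistics.

```python
import numpy as np

def feature_map(x):
    """Toy feature map U_phi(x)|0...0>: a product of single-qubit R_y(x_j) rotations.
    This is an assumption for illustration; richer, entangling maps are used in practice."""
    state = np.array([1.0 + 0j])
    for xj in x:
        state = np.kron(state, np.array([np.cos(xj / 2), np.sin(xj / 2)], dtype=complex))
    return state

def quantum_kernel(xi, xj):
    """Kernel entry kappa(x_i, x_j) = |<Phi(x_j)|Phi(x_i)>|^2 (Equations 42-43)."""
    return np.abs(np.vdot(feature_map(xj), feature_map(xi))) ** 2

# Build a Gram matrix that could be handed to a classical SVM with a precomputed kernel.
X = np.array([[0.1, 0.4], [0.9, 0.3], [0.5, 0.5]])
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])
print(K)
```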
The QSVM has been explored in multiple simulated application scenarios. Simoes et al. [153] demonstrated its superior accuracy over classical SVMs on small datasets like Iris and Rain. Other applications in cybersecurity corroborate these results, although high computational costs and prolonged execution times are often noted [154, 155].
#### 4.3.4 Variational Quantum Circuits
Variational quantum circuits (VQC) use a hybrid quantum-classical approach to solve complex problems. A classical optimizer adjusts the parameters of a parameterized quantum circuit (PQC), so that the output of the quantum circuit approaches an optimal solution. VQCs serve as quantum analogues of neural networks, with the capability to encode classical data into quantum states and harness the power of quantum computing to minimize cost functions. These cost functions often represent the expectation value of some operator, such as a Hamiltonian in quantum physics or a measurement operator in machine learning tasks.
VQCs have been devised in response to current limitations in implementing quantum algorithms on true quantum computers. By employing a hybrid quantum-classical framework, VQCs utilize classical optimization techniques to fine-tune the parameters of quantum circuits [156]. The classical optimizer guides the training process, iteratively updating the quantum circuit's parameters based on the outcomes of quantum measurements. This approach leverages the power of quantum computing while accommodating the constraints of near-term quantum devices, such as error rates and limited coherence times, thereby facilitating the development of quantum applications that would be currently infeasible with solely quantum-based methods.
A VQC workflow typically comprises three steps [140]:
* **Quantum Feature Map**: Classical data \(x\) is encoded into the quantum state space using a non-linear feature map \(\phi\). The encoding circuit defined in Equation 41 can be used, where \(\phi\) may be some chosen encoding method. This process can be repeated or interleaved with the variational circuit, depending on the problem at hand.
* **Variational Circuit**: A short-depth parameterized quantum circuit, \(U(\theta)\), is applied to the quantum state obtained from the feature map. This circuit consists of layers of quantum gates parameterized by \(\theta\). Learning the parameters \(\theta\) can be seen as an objective minimization task over some loss function \(L(\theta)\) with respect to the circuit expectation values \(M_{k}\), similar to classical machine learning routines. Thus the use of classical routines such as gradient descent has seen success in application [156, 157, 158]. Offloading the computation to classical machines reduces the number of quantum resources required, allowing for feasible implementation on NISQ hardware. Analogous to neural networks, variational circuits have been shown to approximate any target function, up to arbitrary error [156], making them viable learning models. The choice of ansatz, or circuit design, in this stage is critical and can significantly influence the performance of the VQC.
* **Measurement**: A measurement is performed on the final quantum state, resulting in a bit string \(z\in\{0,1\}^{n}\), which is then mapped to a label. By running this circuit multiple times, the probability of observing \(z\) can be estimated: \[f(x,\theta)=\left\langle\Phi(x)\right|U^{\dagger}(\theta)M_{k}U(\theta)\left|\Phi(x)\right\rangle\] (44) This quantity is computed for each of the different classes \(k\in\{1,\ldots,K\}\) using the measurement operator \(M_{k}\) and can be interpreted as the prediction of \(x\) by the circuit (a minimal numerical sketch of this forward pass is given after the list).
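The following NumPy sketch ties the three steps together for two qubits: an assumed angle-encoding feature map, a single variational layer of \(R_y\) rotations plus a CNOT, and a measurement operator \(M_k\) chosen, for illustration only, as the projector onto \(\ket{0}\) on the first qubit, evaluated as in Equation 44.

```python
import numpy as np

def ry(theta):
    """Single-qubit R_y rotation matrix."""
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def encode(x):
    """Quantum feature map |Phi(x)>: an assumed angle encoding of two features."""
    state = np.array([1.0 + 0j])
    for xj in x:
        state = np.kron(state, ry(xj) @ np.array([1.0, 0.0], dtype=complex))
    return state

def variational_circuit(theta):
    """One layer of parameterised R_y rotations followed by a CNOT: U(theta) on 2 qubits."""
    return CNOT @ np.kron(ry(theta[0]), ry(theta[1]))

def predict(x, theta):
    """f(x, theta) = <Phi(x)| U(theta)^dagger M_k U(theta) |Phi(x)> (Equation 44),
    with M_k taken here as the projector onto |0> on the first qubit."""
    psi = variational_circuit(theta) @ encode(x)
    M = np.kron(np.diag([1.0, 0.0]), np.eye(2))
    return np.real(np.vdot(psi, M @ psi))

print(predict(np.array([0.3, 1.2]), np.array([0.1, -0.4])))
```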
Early works have demonstrated the predictive capabilities of the VQC in classical simulation. Schuld et al. [158] propose a VQC that is both low-depth and highly expressive. The proposed circuit geometry uses systematically entangled gates, allowing for circuits that scale poly-logarithmically with the dataset size and representative principal components. However, the authors note that this approach limits the set of applicable datasets to those that allow for \((k,\delta)\) reductions, where \(k\) scales polynomially with the dataset dimensions and \(\delta\) measures classification uncertainty. Simulated tests on handwritten digit and tabular classification tasks revealed its edge over classical models like MLPs and SVMs, with fewer parameters. Comparative research for VQCs extends across domains like cybersecurity [159], finance [160], and physics [161, 162]. These studies underscore the advantages of VQCs over traditional classical methods for tasks with specific requirements, such as low qubit count and small datasets. Yet, quantum simulation resource needs grow exponentially as the number of variables increases [160, 161].
Blance and Spannowsky [163] proposed a combination of quantum and classical gradient descent methods for parameter optimization. The optimization process employs a forward pass to calculate the mean squared error (MSE) loss function, followed by a backpropagation procedure to update the trainable parameters. Quantum gradient descent, based on the Fubini-Study metric [164], is employed to optimize the quantum weight parameters \(w\), providing an advantage over the Euclidean-based gradient descent by taking into account the geometry of the parameter space of quantum states. Vanilla gradient descent is used to optimize the classical bias term \(b\). The combined optimization algorithm is given by:
\[\begin{split}\mathbf{\theta}_{t+1}^{w}&=\mathbf{\theta}_{t }^{w}-\eta g^{+}\nabla^{w}L(\mathbf{\theta}_{t}),\\ \mathbf{\theta}_{t+1}^{b}&=\mathbf{\theta}_{t}^{b}-\eta \nabla^{b}L(\mathbf{\theta}_{t}),\end{split} \tag{45}\]
where \(g^{+}\) is the pseudoinverse of the Fubini-Study metric. The benefits of the quantum method are seen in simulation results, where faster learning and convergence rates are observed in comparison with classical neural networks using standard steepest descent.
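A minimal sketch of one optimization step of Equation 45 is given below; the loss gradients and the Fubini-Study metric values are placeholders (in practice they would come from parameter-shift rules and metric-tensor estimation on the circuit), so only the update logic itself is illustrated.

```python
import numpy as np

def update_parameters(theta_w, theta_b, grad_w, grad_b, metric_g, eta=0.1):
    """One step of the combined update in Equation 45: quantum (natural) gradient descent on the
    circuit weights, vanilla gradient descent on the classical bias. metric_g is the
    Fubini-Study metric tensor evaluated at theta_w."""
    g_pinv = np.linalg.pinv(metric_g)                 # g^+ in Equation 45
    theta_w_new = theta_w - eta * g_pinv @ grad_w
    theta_b_new = theta_b - eta * grad_b
    return theta_w_new, theta_b_new

# Toy usage with a hypothetical 2-parameter circuit and a scalar bias.
theta_w = np.array([0.3, -0.8])
theta_b = np.array([0.05])
grad_w = np.array([0.12, -0.02])                      # placeholder dL/dtheta_w
grad_b = np.array([0.4])                              # placeholder dL/db
metric_g = np.array([[0.25, 0.0], [0.0, 0.21]])       # placeholder metric values
print(update_parameters(theta_w, theta_b, grad_w, grad_b, metric_g))
```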
Various extensions to the VQC formulation have also been proposed, which aim to mirror classical neural network frameworks. These VQCs are typically characterized by their circuit architecture, choice of quantum gates, and overall compositional structure. The pipeline of feature mapping, circuit construction, and measurement persists within all these algorithms. We briefly discuss such methods in the sections below.
Fig. 8: Overview of the Variational Quantum Classifier (VQC) [156]. The state preparation circuit \(U_{\Phi(x)}\) encodes classical data into a quantum state using the function \(\phi(x)\). The variational circuit \(U_{\theta}\), parameterized by \(\theta\), acts on this prepared state and possibly on additional ancilla qubits. The framework outputs observable quantities \(\{M_{k}\}_{k=1}^{K}\), which are measured and mapped to the output space through a classical post-processing function \(f\) to predict a result. Classical methods optimize the parameters \(\theta\).
#### 4.3.5 Quantum Convolutional Neural Networks
The quantum convolutional neural network (QCNN) employs VQCs to perform convolutional operations that mimic the functionality of their classical counterparts but in the quantum domain. The QCNN architecture consists of quantum convolution layers represented by quasilocal unitary operations, pooling layers achieved by measuring qubits and applying subsequent unitary operations, and fully connected layers, implemented by specific unitary transformations.
Various frameworks for the QCNN have been offered, exhibiting differing approaches and techniques. In Cong et al. [165], the quantum convolutional layer is characterized by a single quasilocal unitary operation (denoted by \(U_{i}\)). Pooling is carried out by measuring some qubits and using the results to define unitary operations (denoted by \(V_{j}\)) on nearby qubits. When the remaining qubit count is manageable, a unitary operation \(F\) acts as a fully connected layer before the final measurement. During the training phase, the unitaries are optimized with \(O(\log(N))\) parameters, where \(N\) is the input qubit count. Li et al. [166] introduced the Quantum Deep Convolutional Neural Network (QDCNN) that uses a layered sequence of VQCs as the convolutional filters, with a final VQC producing the classification result. The implementation, however, relies on efficient QRAM for input state preparation. Henderson et al. [167] proposed the Quanvolutional Neural Network, a model that integrates random quantum circuits as filter layers within a traditional CNN structure for feature extraction in image classification tasks. These quanvolutional filters can be stacked by inserting a classical pooling layer in between. In particular, Riaz et al. [38] empirically showed that increasing the number of quanvolutional layers enhances performance. Additionally, the use of strongly entangled quantum circuits instead of random quantum circuits as transformation layers further improved performance. Numerical simulations of these methods on benchmark image datasets have shown superior performance to classical CNNs with similar structure, often with faster convergence [165, 166, 167, 168].
#### 4.3.6 Quantum Autoencoders
The quantum autoencoder (QAE) aims to compress quantum data into a smaller dimension, preserving essential information while reducing the number of qubits required to describe the data. It consists of two main parts: the encoder and the decoder. The encoder maps the original data into a compressed space by applying a VQC, reducing the dimensionality of the data. The decoder reverses this process, attempting to reconstruct the original data from the compressed representation. The aim is to minimize the difference between the input and the reconstructed states. This is typically achieved by optimizing the parameters of the encoding and decoding circuits using a cost function, often the fidelity between the original and reconstructed states. Several different methods have been explored. A single unitary can be used that acts as both encoder and decoder. The unitary evolves an input state \(\ket{\phi}\) to a latent state \(\ket{\chi}\) using an encoder circuit \(U(\theta)\), and learns to reconstruct this state using its Hermitian conjugate \(\ket{\phi}=U^{\dagger}(\theta)\ket{\chi}\)[169]. Alternatively, two unitaries can be learned with individual parameterizations, each acting as the encoder and decoder respectively [170]. The size of the latent dimension can be fixed by discarding intermediate qubits that would feed into the decoder circuit.
#### 4.3.7 Quantum Generative Adversarial Networks
Quantum Generative Adversarial Networks (QGANs) [171, 172] are composed of two main quantum circuits, the generator and the discriminator. The generator is a VQC controlled by a set of parameters \(\Theta_{G}\), and is responsible for transforming random quantum noise into quantum states that resemble the real data distribution. The discriminator is another VQC controlled by parameters \(\Theta_{D}\) tasked with distinguishing between real quantum states from the target distribution and the fake ones generated by the generator. In the hybrid quantum-classical setting, the discriminator is a classical neural network [173]. During the training process, the objective is to simultaneously train the generator to produce indistinguishable states from the real data and the discriminator to efficiently differentiate between the real and generated states. The loss function is typically designed as a two-player min-max game, and the parameters are iteratively updated through gradient-based optimization methods, aiming to find the equilibrium of this adversarial game.
#### 4.3.8 Quantum Circuit Born Machines
The Quantum Circuit Born Machine (QCBM) is a type of generative model that approximates a target discrete probability distribution \(q\) in the wavefunction of a quantum system [174]. The QCBM consists of three main components: a VQC, an objective function, and a classical optimizer. An initial \(n\)-qubit state \(\ket{0}^{\otimes n}\) is passed through the VQC \(U(\theta)\) to generate a quantum state. The Born rule is then applied to compute the probability \(p_{\theta}(x)\) of sampling a computational basis state \(x\). This is mathematically represented as \(p_{\theta}(x)=\text{tr}(\Pi_{x}U(\theta)(\ket{0}\bra{0})^{\otimes n}U(\theta)^{\dagger})\), where \(\Pi_{x}=\ket{x}\bra{x}\) is the projection operator. The KL-divergence \(D_{KL}[q\,\|\,p_{\theta}]\) measures the discrepancy between \(q\) and \(p_{\theta}\), and the Particle Swarm Optimizer (PSO) algorithm adjusts the parameters \(\theta\) to minimize this divergence [175]. Other objective functions, such as the maximum mean discrepancy (MMD) and the Sinkhorn divergence (SHD), have also been used. Studies have suggested that MMD can be more efficiently applied in large-scale systems, as KL divergence is often inaccessible. Coyle et al. [176] reported that SHD is superior to MMD in terms of accuracy and convergence rate, supported by numerical simulations. Besides PSO, various other optimization algorithms have been implemented. Gradient-free methods, such as Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and Bayesian Optimization, are prevalent for optimizing non-linear and non-convex objective functions. In contrast, for large-scale parameter optimization, gradient-based algorithms like the Adam optimizer have shown successful application [175].
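The sketch below illustrates the Born-rule readout and the KL objective on a toy two-qubit state; the state vector stands in for the output of some parameterized circuit \(U(\theta)\ket{0}^{\otimes n}\), and the optimizer loop (PSO, CMA-ES, Adam, etc.) is omitted.

```python
import numpy as np

def born_probabilities(state):
    """Born rule: p_theta(x) = |<x|psi(theta)>|^2 for every computational basis state x."""
    return np.abs(state) ** 2

def kl_divergence(q, p, eps=1e-12):
    """D_KL[q || p_theta], the training objective discussed above."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(q * np.log(q / p)))

# Toy 2-qubit example: |psi(theta)> stands in for U(theta)|00> from a parameterised circuit.
psi_theta = np.array([0.6, 0.2, 0.1, 0.0], dtype=complex)
psi_theta /= np.linalg.norm(psi_theta)
p_theta = born_probabilities(psi_theta)

q_target = np.array([0.5, 0.25, 0.25, 0.0])   # target discrete distribution over bit strings
print(p_theta, kl_divergence(q_target, p_theta))
```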
Since direct access to the quantum wavefunction is not available, the QCBM is an implicit generative model, where sampling from \(q\) is easy, but characterizing it is difficult. Due to this, large QCBM circuits may be computationally intractable. Coyle et al. [176] prove a hardness result for their
Quantum Circuit Ising Born Machine (QCIBM) that utilizes a Hamiltonian-informed ansatz; many circuits trained in the QCIBM model are proven to resist efficient classical simulation up to multiplicative error.
#### 4.3.9 Tensor Network-based VQCs
As initially discussed in Section 4.2, tensor networks serve as effective ansatze for the variational circuits in VQCs. Such implementations are particularly advantageous for near-term quantum hardware, as they often require a reduced number of physical qubits, scaling logarithmically or even remaining constant with data size [140]. Huggins et al. [80] outline the construction of TTN and MPS decompositions using qubit lines connected in a tree-like structure. In these models, tensor nodes correspond to multi-qubit gates, with incoming and outgoing qubits representing the bonds. Isometric tensor network nodes are converted into unitary quantum gates, and bond dimensions are defined by the number of transferred qubits between such gates [79]. VQCs based on MERA architectures have also been proposed [165, 81, 177]. In classical simulation, left-over outgoing qubits are discarded or traced out after the application of each unitary. A qubit-efficient approach has been proposed that instead reinitializes the discarded qubits to the \(|0\rangle\) state, which are then used as inputs into subsequent unitaries [80]. This allows for a reduced total qubit count, which is now constant for MPS, determined by the input and bond dimension, and logarithmic in the input size for TTN. Circuit cutting techniques also allow for reduced qubit usage by partitioning large quantum circuits into smaller segments executable on limited-qubit hardware [178]. These smaller segments are then classically post-processed to combine their results. This approach facilitates not only the execution of large tensor-network quantum circuits on resource-constrained quantum devices, but also simplifies the classical simulation of a broader spectrum of tensor-network-based VQCs. Numerical simulations have confirmed the effectiveness of circuit cutting techniques in achieving scalable and efficient simulations for MPS-based VQCs [178]. Empirical results have also suggested that tensor network-based VQCs outperform common VQC models and classical generative models such as GANs in terms of expressivity for certain generative tasks [179, 180]. Additionally, they may require less training data and computational resources to achieve similar performance to classical TN models [81].
Tensor networks have also been used as feature extractors to prepare inputs for VQCs. MPS and TTNs have been used to produce low dimensional feature vectors from input data, which can subsequently be fed into a VQC for classification [81, 181]. The parameters of the feature extracting TN can also be learned alongside the VQC in an end-to-end fashion. In Chen et al. [181], the MPS showed stronger representational power than other dimensionality reduction methods such as PCA. In Araz and Spannowsky [81], the hybrid architecture showed better predictive performance than standalone TN classifiers.
#### 4.3.10 Quantum Natural Language Modeling
Quantum natural language modeling has scarcely left the theoretical realm, with descriptions of prospective frameworks being offered in the literature. Research has focused on the distributional-compositional-categorical model of meaning (DisCoCat) [182], which combines linguistic meaning and structure into a single model via tensor product composition. Subsequent works show that semantic representations modeled in this way can be interpreted in terms of quantum processes, and that quantum computation can readily handle the resulting high-dimensional tensor product spaces [183, 184, 185].
Various ansatze and encoding schemes have been suggested for NLP computation over quantum hardware; a few works have implemented these ideas on classical simulators. Kartsaklis et al. [186] developed the lambeq Python library that allows for the conversion of sentences into quantum circuits, providing the tools for implementing experimental quantum NLP pipelines, following the methodology in [185], which first presents a quantum pipeline for the DisCoCat methodology. In brief, the pipeline initiates with the generation of a syntax tree from a sentence using a statistical Combinatory Categorial Grammar (CCG) parser, delineating the sentence's grammatical structure. This tree is then translated into a string diagram, refined using rewriting rules to streamline the computation process, potentially omitting redundant word interactions. Finally, the adjusted diagram is transformed into a concrete quantum circuit or tensor network ansatz, trained via standard ML optimization backends such as PyTorch and JAX. Experiments using the Qiskit Aer cloud quantum simulator were performed using a simple binary meaning classification dataset of 130 sentences created using a simple context-free grammar. When compared against a classical pipeline, where the sentences are encoded as tensor networks, similar testing accuracies were achieved, albeit with fluctuation and instability at the early stages of training. These results were corroborated by Lorenz et al. [187], where practical simulations of the DisCoCat compositional model were compared against quantum-friendly versions of the word-sequence and bag-of-words models, the latter methods being represented by simple tensor compositions of semantic bases. The compositional model showed superior results on both classical and quantum hardware.
Li et al. [188] propose a Quantum Self-Attention Neural Network (QSANN) for text classification, noting that the DisCoCat compositional model requires heavy syntactic preprocessing and a syntax-dependent network architecture, limiting its scalability to larger datasets. The self-attention mechanism that has seen large success in classical NLP is introduced into the quantum setting; the key component vectors of classical self-attention (queries, keys and values) are modeled and trained using quantum ansatze, with an additional projection onto 1D space and a Gaussian function applied to handle long-distance correlations induced by inner-product self-attention. Numerical comparisons were made against both classical self-attention neural networks and the DisCoCat model [185], showing superior predictive performance to both while requiring fewer parameters.
### _Other QiML Methods_
Works in this section do not necessarily fall into the aforementioned categories, but are more intrinsic to early definitions of QiML -- methods that incorporate and adapt quantum phenomena in classical settings.
#### 4.4.1 Quantum-Inspired Nearest Mean Classifiers
A line of work in QiML research, commenced by Sergioli et al. [189], has explored a so-called "new approach" to QiML [190], addressing supervised binary classification using quantum concepts. In [189], the authors developed a quantum-inspired version of the nearest mean classifier (QNMC). The NMC finds an average 'centroid' \(u_{i}\) for each class \(C_{i}\) in the training dataset:
\[u_{i}=\frac{1}{n_{i}}\sum_{x_{i}\in C_{i}}x_{i}, \tag{46}\]
where \(n_{i}\) is the number of data points belonging to the class \(C_{i}\). New instances are assigned labels based on proximity to these centroids. The normalized trace is used as the distance metric due to its ability to preserve the order of distances between arbitrary density patterns. The introduction of the quantum-inspired NMC (QNMC) stems from the observation that any real, two-feature pattern \(x\) corresponds exactly with some quantum pure density operator \(\rho\), obtained by the stereographic projection of \(x\) onto the Bloch Sphere representation. From these encodings, the quantum centroid is defined as
\[\rho_{QC}=\frac{1}{n}\sum_{i=1}^{n}\rho_{i}. \tag{47}\]
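A small NumPy sketch of the QNMC pipeline is given below, under two assumptions: the two-feature pattern is mapped to a pure density operator via the inverse stereographic projection onto the Bloch sphere (one common convention for this encoding), and the plain trace distance is used in place of the normalized trace distance of [189].

```python
import numpy as np

PAULI = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_x
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_z

def density_pattern(x):
    """Encode a real two-feature pattern as a pure density operator via an (assumed)
    inverse stereographic projection onto the Bloch sphere."""
    r2 = float(np.dot(x, x))
    bloch = np.array([2 * x[0], 2 * x[1], r2 - 1]) / (r2 + 1)
    return 0.5 * (np.eye(2, dtype=complex) + sum(b * s for b, s in zip(bloch, PAULI)))

def quantum_centroid(patterns):
    """Equation 47: the (generally mixed) average of the encoded pure states of one class."""
    return sum(density_pattern(x) for x in patterns) / len(patterns)

def trace_distance(a, b):
    """Trace distance between two density operators (used here in place of the normalized trace)."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

# Toy usage: classify a new pattern by its distance to each quantum centroid.
class_0 = [np.array([0.1, 0.2]), np.array([0.0, 0.3])]
class_1 = [np.array([1.1, 0.9]), np.array([1.3, 1.0])]
centroids = [quantum_centroid(class_0), quantum_centroid(class_1)]
new_x = np.array([1.0, 1.0])
label = int(np.argmin([trace_distance(density_pattern(new_x), c) for c in centroids]))
print(label)  # expected: 1
```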
The authors show that the QNMC potentially outperforms the classical NMC on synthetic, non-linear data, particularly on datasets with high data dispersion or mixed class distributions. Improvements to this model have come from subsequent works. In [191], the correspondence between real and quantum objects is extended to arbitrary \(n\)-feature patterns, allowing for experimentation on more complex, real-world datasets. The QNMC again shows considerably enhanced performance over the classical NMC. In [192], Helstrom's distance is used instead, producing a model coined as the Helstrom Quantum Classifier (HQC). This allowed for the use of multiple copies of quantum states, bolstering their informational content, leading to empirically enhanced performance over benchmark tasks. This model was extended to the multi-class context in [193] by leveraging the _pretty-good measurement_ (PGM) technique from quantum state discrimination, a minimum-error discrimination scheme that discerns between multiple unknown quantum states with high success probabilities. These models have also seen success in practical implementation over biomedical contexts [194, 195]. Leporini and Pastorello [196] consider a geometric construction of the classifier, and discuss a method to encode real feature vectors into the amplitudes of pure quantum states using Bloch vectors. Bloch vectors represent the density operators, and the centroids of data classes are directly calculated based on these vectors. The obtained Bloch vector is rescaled into a real sphere to identify the centroid as a proper density operator, since the mean of a set of Bloch vectors is not typically a Bloch vector. This representation allows for data compression by eliminating null and repeated components, and allows for the implementation of feature maps while saving space and time resources without compromising performance. This method showed similar, and sometimes improved, accuracies over benchmark datasets when compared with similar proposed classifiers. Bertini et al. [197] later propose a KNN version of the Bloch vector-based classifier by executing the method over a local neighborhood of training data points.
#### 4.4.2 Density Matrix-Based Feature Representation
A central source of inspiration for QiML is derived from the probabilistic interpretation of quantum mechanics, known as the quantum probabilistic framework. Unlike classical probability, quantum probability encompasses complex-valued probability amplitudes and allows for phenomena such as superposition and entanglement.
The quantum probabilistic framework introduces mathematical structures that can be applied to classical machine learning, specifically through the use of density matrices and Hilbert spaces. A density matrix, represented as \(\rho\), describes the statistical state of a quantum system, allowing for a mixture of pure states, and can be expressed as:
\[\rho=\sum_{i}p_{i}\ket{\psi_{i}}\bra{\psi_{i}}, \tag{48}\]
where \(\ket{\psi_{i}}\) are the pure states of the system, and \(p_{i}\) are the classical probabilities for each state.
In the context of machine learning, this framework can be used to represent complex relationships and dependencies within data. The density matrix can reflect the covariance among different embedding dimensions, representing how scattered words are in the embedded space [198]. Quantum entanglement can also be leveraged to describe intricate correlations between features, providing a more expressive model [199].
The use of density matrices is prevalent in quantum-inspired NLP methods, used to capture probabilities and semantic sub-spaces of the individual words in the sentence [200, 201]. Once classical data is encoded into quantum states, the density matrix representation of these states can be computed via several methods. One method is to compute \(\rho\) directly, where a sentence or document corresponds to a mixed state represented by a density matrix of individual semantic spaces [199, 202]:
\[\rho=\sum_{i}p_{i}\ket{w_{i}}\bra{w_{i}}, \tag{49}\]
where \(p_{i}\) is the relative importance of word \(w_{i}\) within the sentence, satisfying \(\sum_{i}p_{i}=1\). The assigned \(p_{i}\) can be captured in various ways, such as uniformly [201], by number of occurrences [202], or by a softmax function [203]. This representation extends to multi-modal settings by considering non-textual media (images, videos) as "textual" features. For instance, the SIFT algorithm [204] can be used to detect and describe local features in images as vectors, which can be clustered, with cluster centers considered as "visual word" vectors [200] and used directly in the calculation of the density matrix to produce a compositional Hilbert Space [199, 205]:
\[\rho =\sum_{i}p_{i}(|w_{i}^{m}\rangle\otimes\cdots\otimes|w_{i}^{M}\rangle) (\langle w_{i}^{m}|\otimes\cdots\otimes\langle w_{i}^{M}|) \tag{50}\] \[=\sum_{i}p_{i}(\rho_{i}^{m}\otimes\cdots\otimes\rho_{i}^{M}), \tag{51}\]
This method assumes the importances \(p_{i}\) are accurately known or can be reliably computed. An alternative method is to construct projectors in semantic space from word features, from which \(\rho\) can be learned [200, 201]. Each projector can be described as:
\[\Pi_{i}=\left|w_{i}\right\rangle\left\langle w_{i}\right|, \tag{52}\]
where \(\Pi_{i}\) describes the semantic space of a normalized word vector \(\left|w_{i}\right\rangle\). Such vectors may be obtained via various real-valued embedding schemes, such as by GloVe [206] or BERT [207] word embeddings. A document is then considered as a sequence of projectors, \(P=\{\Pi_{1},\Pi_{2},\ldots,\Pi_{n}\}\), where \(n\) is the number of terms in the document. A randomly initialized density matrix is trained and iteratively updated based on this sequence of projectors via a globally convergent Maximum Likelihood Estimation algorithm until a convergence threshold is reached, using the objective function:
\[F(\rho) \equiv\max_{\rho}\sum_{i}\log\left(\operatorname{tr}\left(\Pi_{i }\rho\right)\right),\] (53) s.t. \[\operatorname{tr}(\rho) =1, \tag{54}\] \[\rho \geq 0. \tag{55}\]
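As a small illustration, the sketch below builds a document density matrix directly from normalized word vectors and importance weights as in Equation 49, and evaluates the log-likelihood objective of Equation 53 for the corresponding projectors; the iterative maximum-likelihood update itself is omitted, and the toy embeddings are placeholders.

```python
import numpy as np

def density_from_words(word_vectors, weights):
    """Equation 49: rho = sum_i p_i |w_i><w_i| for L2-normalised word vectors and importances p_i."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    dim = word_vectors.shape[1]
    rho = np.zeros((dim, dim))
    for w, p in zip(word_vectors, weights):
        w = w / np.linalg.norm(w)
        rho += p * np.outer(w, w)
    return rho

def log_likelihood(rho, word_vectors):
    """The objective of Equation 53: sum_i log tr(Pi_i rho) for projectors Pi_i = |w_i><w_i|."""
    total = 0.0
    for w in word_vectors:
        w = w / np.linalg.norm(w)
        total += np.log(w @ rho @ w)   # tr(|w><w| rho) = <w| rho |w>
    return total

# Toy usage with 4-dimensional placeholder "embeddings" for a 3-word document, uniform importances.
W = np.array([[1.0, 0.2, 0.0, 0.1],
              [0.3, 1.0, 0.1, 0.0],
              [0.0, 0.1, 1.0, 0.4]])
rho = density_from_words(W, weights=[1, 1, 1])
print(np.trace(rho), log_likelihood(rho, W))
```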
The final density matrix encapsulates the semantic dependencies and distributional information of the terms in the document. In multi-modal settings, image and video feature vectors are extracted, from which projectors can be constructed via Equation 52 and included in the set \(P\). A multi-modal fusion method inspired by quantum interference has been proposed [200, 202]. Each mode has its own classifier that takes in only the input density matrices for that mode. Then, the individual inferences on a document with \(M\) modes produce \(M\) predictions, thus the overall sentiment of the document could be uncertain. This sentiment can be modeled as a quantum wavefunction, analogized as a combination of the modal sentiment components. For example, the combination of the sentiment of a document containing text and the image components is described by:
\[\psi(x_{i})=\alpha\psi(x_{i}^{\text{text}})+\beta\psi(x_{i}^{\text{image}}). \tag{56}\]
Thus the overall sentiment score can be represented by a probability distribution, measured as:
\[P(x_{i}) =\alpha^{2}P_{t}+\beta^{2}P_{i}+I \tag{57}\] \[I =2\alpha\beta\sqrt{P_{t}P_{i}}\cos\theta \tag{58}\]
such that \(P(x_{i})=|\psi(x_{i})|^{2}\) describes the probability of the document's sentiment score. \(P_{t}=|\psi(x_{i}^{\text{text}})|^{2}\) and \(P_{i}=|\psi(x_{i}^{\text{image}})|^{2}\) are the probabilities governing the sentiment scores of the text and image respectively. \(\alpha\), \(\beta\) and \(\theta\) are learnable parameters. The interference term \(I\) reflects the degree of conflict in local decisions; if both modalities agree in sentiment, there is constructive interference, resulting in a strongly positive or negative sentiment.
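A one-line sketch of the fusion rule in Equations 57-58 is shown below; the modality probabilities and the parameters \(\alpha\), \(\beta\), \(\theta\) are illustrative values rather than learned ones, chosen to contrast constructive and destructive interference.

```python
import numpy as np

def fused_sentiment(p_text, p_image, alpha, beta, theta):
    """Equations 57-58: combine per-modality sentiment probabilities with an interference term."""
    interference = 2 * alpha * beta * np.sqrt(p_text * p_image) * np.cos(theta)
    return alpha ** 2 * p_text + beta ** 2 * p_image + interference

# Agreement between modalities with cos(theta) > 0 boosts the fused score (constructive),
# while theta near pi suppresses it (destructive).
print(fused_sentiment(p_text=0.9, p_image=0.8, alpha=0.5, beta=0.5, theta=0.2))  # ~0.84
print(fused_sentiment(p_text=0.9, p_image=0.8, alpha=0.5, beta=0.5, theta=2.8))  # ~0.03
```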
In question and answering tasks, the joint representation for a question-answer pair can be viewed as their multiplicative interaction [198]:
\[\rho_{QA}=\rho_{Q}\rho_{A}. \tag{59}\]
The spectral decomposition of \(\rho_{QA}\) exposes the joint eigenspaces, over which the trace inner product reveals the similarity between the question and answer. From this, [198] proposed two methods of constructing the feature set for each document. The first uses the trace and diagonal elements of \(\rho_{QA}\): \([\operatorname{tr}(\rho_{Q}\rho_{A});\operatorname{diag}(\rho_{Q}\rho_{A})]\), where the former captures the semantic overlaps between the question-answer pair, and the latter accounts for the varying degrees of importance for similarity measurement. The second uses 2D convolutions to scan the density matrices, resulting in feature maps which are processed using row-wise and column-wise max-pooling to generate feature vectors, aiming to capture a more nuanced and complex understanding of similarity between question and answer pairs.
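The first feature construction can be sketched in a few lines of NumPy, as below; the density matrices here are toy rank-one examples built from single normalized vectors, whereas in [198] they come from the full question and answer representations.

```python
import numpy as np

def qa_features(rho_q, rho_a):
    """Joint question-answer representation rho_QA = rho_Q rho_A (Equation 59) and the first
    feature set of [198]: [tr(rho_Q rho_A); diag(rho_Q rho_A)]."""
    rho_qa = rho_q @ rho_a
    return np.concatenate(([np.trace(rho_qa).real], np.diag(rho_qa).real))

# Toy usage with 3-dimensional density matrices built from single normalised vectors.
q = np.array([0.8, 0.1, 0.1]); q /= np.linalg.norm(q)
a = np.array([0.7, 0.3, 0.0]); a /= np.linalg.norm(a)
features = qa_features(np.outer(q, q), np.outer(a, a))
print(features)  # length 4: overlap score followed by the 3 diagonal entries
```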
**Learning and Classification:** Several classical ML techniques have been used to learn a performant parameterization over the density matrix features, including SVMs and Random Forests [201], as well as deep learning structures such as RNNs [202, 208], and CNNs [198]. Back propagation is often used as the optimization method.
In addition, quantum measurement-based procedures can also be used to extract predictions. Once the set of states \(\{\rho_{c}^{t}\}\) is obtained that represent the data, a global observable \(O\) is introduced, uniquely represented by a set of eigenvalues \(\lambda\) and corresponding eigenstates \(\left|e\right\rangle\), expressed as \(O=\sum_{i}\lambda_{i}\left|e_{i}\right\rangle\left\langle e_{i}\right|\). These eigenvalues and eigenstates correspond to some outcome representation of interest, such as sentiment-related aspects [199] or emotional states [205]. The measurement process leads to a collapse of the state onto one of the eigenstates, and a probability distribution over the eigenstates is calculated as \(p_{i}=\left\langle e_{i}\right|\rho_{c}^{t}|e_{i}\rangle\), where \(\rho_{c}^{t}\) is the state at time \(t\). This probability distribution can then be used to perform predictions over the data.
**Complex-Valued Density Matrix Features:** Recently, researchers have explored the effect of using complex-valued word embeddings in tandem with considering local word correlations, noting that when using only real-valued vectors, the full probabilistic properties of density matrices and their complex formulations are ignored [203].
The formulation of words via Equation 19 is preserved, except with the inclusion of complex components. This complex-valued approach is inspired by the representation of words as a superposition of semantic units, where each word \(\left|w\right\rangle\) is defined as a unit-length vector on \(H\):
\[\left|w\right\rangle=\sum_{j=1}^{n}r_{j}e^{i\phi_{j}}|e_{j}\rangle, \tag{60}\]
where \(i\) is the imaginary unit, and the \(r_{j}\) and \(\phi_{j}\) are non-negative real-valued amplitudes and corresponding complex phases, satisfying \(\sum_{j=1}^{n}r_{j}^{2}=1\), and \(\phi_{j}\in[-\pi,\pi]\)
respectively. The inclusion of complex components in the word embeddings allows for a richer representation of semantic information by leveraging both the magnitude and phase of complex numbers. The magnitudes \(r_{j}\) describe the importance of each semantic base in the composition of the word, while the phases \(\phi_{j}\) capture the subtle interrelations between different semantic units. This leads to a more expressive and nuanced modeling of word semantics [203].
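A minimal sketch of Equation 60 is given below; the amplitudes and phases are placeholder values, and the final line shows how such an embedding feeds back into the density-matrix construction discussed earlier.

```python
import numpy as np

def complex_word_embedding(r, phi):
    """Equation 60: |w> = sum_j r_j e^{i phi_j} |e_j>, with r normalised so that sum_j r_j^2 = 1
    and phases phi_j in [-pi, pi]."""
    r = np.asarray(r, dtype=float)
    r = r / np.linalg.norm(r)                  # enforce sum_j r_j^2 = 1
    return r * np.exp(1j * np.asarray(phi))

# Placeholder amplitudes (importance of each semantic base) and phases (finer interrelations).
w = complex_word_embedding(r=[0.9, 0.3, 0.2], phi=[0.1, -1.2, 2.5])
print(np.vdot(w, w).real)                      # 1.0 (unit length)
print(np.outer(w, np.conj(w)))                 # pure-state density matrix |w><w|, usable in Eq. 49
```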
Complex-valued representations have also been used over multi-modal tasks. In emotion recognition, Li et al. [205] consider each utterance as a mixture of unimodal states whose features are recast as pure states, creating a multimodal mixed state representation. Then, a procedure inspired by quantum evolution is employed to track the dynamics of emotional states in a conversation. A quantum-like recurrent neural network tracks the evolving emotional states during a conversation, considering the uncertainties in the conversational context and efficiently memorizing the context information due to unitary transformation, which ensures zero information loss. The "measurement and collapse" phase introduces a global observable to measure the emotional state of each utterance, calculating a probability distribution that corresponds to the likelihood of the state collapsing onto specific eigenstates. The result is then mapped to emotion labels using a neural network with a single hidden layer.
Shi et al. [208] similarly propose complex-valued word embeddings which liken words to quantum particles existing in multiple states, representing polysemy. The method maps a word with multiple meanings to a quantum particle that can exist in several states. Additionally, sentences are likened to quantum systems where these particles (or words) interact or interfere with each other, just as quantum particles can interact in a quantum system. Complex-valued word embeddings can be formed from amplitude word vectors and phase vectors, capturing rich semantic and positional information with greater alignment with quantum concepts. These embeddings are used in text classification models utilizing gated recurrent units (GRUs) and self-attentive layers to extract more semantic features. An extended model is also presented that applies a convolutional layer on the projected word embeddings matrix to capture local textual features. This is inspired by the quantum theory concept of 'entanglement', where the state of one particle is connected to the state of another, no matter the distance between them. In a similar vein, the convolutional layer captures dependencies between different parts of the text, or 'local features', that might otherwise be missed.
This quantum probabilistic formulation of density matrices not only serves as an effective representation for sentences or documents in NLP tasks but also offers a versatile framework that may extend to other contexts where data can be captured as probabilistic events.
#### 4.4.3 Quantum Formalisms Applied to Neural Networks
Exploring neural network representations through the lens of quantum mechanics has been a long explored topic. Such methods aim to improve the robustness of classical neural networks by formulating quantum-based activation operators or utilizing quantum feature spaces [209, 210, 211]. In this subsection we present a few recent, practical works in this area.
Patel et al. [212] introduce the Quantum-inspired Fuzzy based Neural Network (Q-FNN), a three-layer neural network that employs Fuzzy c-Means (FCM) clustering to fine-tune connection weights and decide the number of neurons in the hidden layer. The fuzziness parameter \(m\), which manages the overlap among samples from different classes, takes on a qubit representation, which enlarges the search space for the selection of an appropriate fuzziness parameter. The final cluster centroids, found after numerous iterations of fuzzy clustering, serve as the final connection weights of the hidden layer. This model has been proven effective in dealing with two-class classification problems.
Sagheer et al. [213] replace the classical perceptron within the neural network model with a quantum-inspired version: the autonomous perceptron model (APM). In the APM, a feature vector \(x\in\mathbb{R}^{n}\) is replaced by a quantum state vector \(|\psi\rangle\in\mathbb{C}^{n}\) which can be represented as a complex linear combination of the basis vectors in the \(n\)-dimensional complex vector space. Instead of the real-valued weights used in a classical perceptron, quantum weights are introduced as normalized complex numbers \(\omega_{i}=\exp(i\theta_{i})\) where \(\theta_{i}\) is the phase of the \(i\)-th weight. The activation function is replaced by a measurement operation, which projects the state of the system onto one of the basis vectors. The output of the APM is the expectation value of this measurement, which can be calculated as \(\langle\psi|M|\psi\rangle\), where \(M\) is the Pauli-Z measurement operator in the computational basis. The measurement outcome is then compared with a threshold value to make binary decisions, akin to classical perceptrons. Results over the UC Irvine (UCI) Machine Learning Repository benchmark classification datasets [214] showed the APM-based model outperformed 15 standard classifier models when subjected to the same experimental conditions.
Konar et al. [215] developed the quantum-inspired self-supervised network (QIS-Net) architecture for the automatic segmentation of brain magnetic resonance (MR) images. The model is composed of layers of classically implemented quantum neurons. Each neuron is depicted as a qubit using matrix notation. The intra-connection weights among neurons within the same layer are set to \(\pi/2\) to emulate a quantum state. The input layer deals with qubit information from connected neighborhood subsets of each seed neuron, which is then accumulated at the central neuron of the intermediate layer through interconnections. Image data is processed by feeding image pixels to the input layer as quantum bits, which then propagate to the intermediate and output layers; these are updated through rotation gates determined by the relative quantum fuzzy measures of pixel intensity at the constituent quantum neurons between layers. A novel quantum-inspired multi-level sigmoidal (QMSig) activation function is integrated into the QIS-Net model to handle the complexity of multi-intensity grayscale values in brain MR images:
\[\text{QMSig}=\frac{1}{\lambda\omega+e^{-\nu(x-\eta)}}. \tag{61}\]
The function adjusts its activation based on qubits, improving the model's accuracy in segmenting complex images.
When tested on Dynamic Susceptibility Contrast (DSC) brain MR images for tumor detection, QIS-Net demonstrated superior performance compared to classical self-supervised network models commonly used in MR image segmentation, whilst requiring less computational overhead with respect to time and resources.
Zhang et al. [216] introduce a method that leverages quantum entanglement to calculate joint probabilities between features and labels. The maximally entangled Bell state system of two qubits \(\ket{\Phi^{+}}\) is defined, where one qubit is described as the feature and the other as the label. The observables and measurement operators of the entangled system are defined with specific spectral decompositions. The positive and negative measurement operators for the entangled system consist of the \(n\)-th attribute and the label, and are given by \(\mathcal{M}_{n}^{\pm}(\theta_{n},\phi_{n})=P_{n}^{+}(\theta_{n},\phi_{n})\otimes P_{l}^{\pm}\), where the polar and azimuth angles \(\theta_{n}\) and \(\phi_{n}\) are arbitrary real parameters. By applying these operators to the entangled system, probability values for both positive and negative examples are obtained: \(p_{n}^{\pm}(\theta_{n},\phi_{n})=\bra{\Phi^{+}}\mathcal{M}_{n}^{\pm}(\theta_{n},\phi_{n})\ket{\Phi^{+}}\). This formalism enables the calculation of quantum joint probabilities, and is integrated into a classical MLP by replacing hidden layer neurons in the MLP with the measurement process. Model optimization uses the cross-entropy loss function with the Adam optimizer to ensure smooth parameter changes.
#### 4.4.4 Miscellaneous Quantum Mechanics-Based Classifiers
Various other classification methods have also been proposed that incorporate quantum phenomena into their classical learning model. In the following works, the main concepts of encoding classical data into some quantum state representation, and discrimination of those states via a measurement process are highlighted.
Tiwari and Melucci [217] explored the prospect of classification inspired by quantum signal detection theory, which aims to decide between two different hypotheses -- the presence or absence of a signal. A codification process converts the signal into a particle state, which is then measured, much like the classical signal detection framework. When considered in a classification context, the two hypotheses subjected to decision become two class labels, represented by distinct density operators derived from data features, and characterize the system state associated with each class. Outcomes are decided via projections corresponding to the density operators. Using this knowledge, the authors devise a binary classifier over vectorized documents based on frequency of distinct elements. Density operators for each class are estimated using training samples:
\[\rho_{c}=\frac{\ket{v_{c}}\bra{v_{c}}}{\text{tr}(\ket{v_{c}}\bra{v_{c}})}, \tag{62}\]
for each class \(c\in\{0,1\}\), from which an optimal projection operator can be calculated:
\[\Lambda=\sum_{l:e_{l}\geq 0}\ket{e_{l}}\bra{e_{l}} \tag{63}\]
from the eigenstates \(\ket{e_{l}}\) of the operator \(\rho_{1}-\lambda\rho_{0}\) associated with its non-negative eigenvalues. A test sample is then classified based on the result of a projection operation involving the sample's feature vector and the projection operator; if the result is greater than or equal to 0.5, the sample is assigned to the class, otherwise it is not. The model has been tested over image and textual datasets [218], showing superior performance in recall, and comparable precision and F-measures across varying feature ranges compared to baseline models.
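The procedure can be sketched as follows, with the eigenvalue-selection rule in Equation 63 interpreted as keeping eigenstates with non-negative eigenvalues and \(\lambda=1\) taken as an assumed weighting; the term-frequency style vectors are toy placeholders.

```python
import numpy as np

def class_density(v):
    """Equation 62: rho_c = |v_c><v_c| / tr(|v_c><v_c|) from the feature vector of class c."""
    outer = np.outer(v, v)
    return outer / np.trace(outer)

def projection_operator(rho0, rho1, lam=1.0):
    """Equation 63: sum of |e_l><e_l| over eigenstates of rho_1 - lam * rho_0 whose eigenvalues
    are non-negative (lam = 1 is an assumption here)."""
    eigvals, eigvecs = np.linalg.eigh(rho1 - lam * rho0)
    Lam = np.zeros_like(rho0)
    for value, vec in zip(eigvals, eigvecs.T):
        if value >= 0:
            Lam += np.outer(vec, vec)
    return Lam

def classify(x, Lam):
    """Assign class 1 if <x|Lambda|x> >= 0.5 for the normalised feature vector x."""
    x = x / np.linalg.norm(x)
    return int(x @ Lam @ x >= 0.5)

# Toy usage with 3-dimensional term-frequency style vectors for the two classes.
rho0 = class_density(np.array([5.0, 1.0, 0.0]))
rho1 = class_density(np.array([0.0, 2.0, 6.0]))
Lam = projection_operator(rho0, rho1)
print(classify(np.array([1.0, 1.0, 5.0]), Lam))  # expected: 1
```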
Zhang et al. [219] proposed a novel method for data classification using principles of quantum open system theory, termed the Interaction-based Quantum Classifier (IQC) that models the classification process as a quantum system's evolution. The interaction between the target system (qubit) and the environment (input data) is characterized by the Hamiltonian \(H_{\text{int}}=-\tilde{g}\sigma_{Q}\otimes\sigma_{E}\), where \(\tilde{g}\) is the coupling constant whose magnitude reflects the strength of the interaction, leading to the unitary evolution \(U(\tau)=e^{i\sigma_{Q}\otimes\sigma_{E}(\tau)}\), where \(\sigma_{Q}=\sigma_{x}+\sigma_{y}+\sigma_{z}\). Both systems are initialized as equal probability superpositions, and the composite system evolves according to the unitary operator. Measurement of the evolved state determines probabilities used for classification. The two-category classification task is defined by a unitary operator involving input and weight vectors, \(U(\mathbf{x}_{i})=e^{i\sigma_{Q}\otimes\sigma_{E}(\mathbf{x}_{i})}\), and a gradient descent update rule for the weights, \(w_{i}=w_{i}-\eta(z_{i}-y_{i})(1-p_{1}^{2})x_{i}\), where \(p_{1}^{2}\) is the positive ground state probability, is applied to optimize the classification.
## 5 QiML in Practice
In this section, we delve into the practical applications of QiML, showcasing where these techniques have been employed and evaluated empirically. We present an exploration of several sectors, including medical, financial, physics, and more, detailing how QiML has been utilized, and discuss these in terms of the QiML methods presented (dequantized algorithms, TNs, QVAS, and others). To aid this discussion, we present a compilation of relevant works in Table III, offering a quick reference to the practical applications of QiML. Each perspective is treated as a subsection, allowing readers the flexibility to navigate the section that resonates with their domain-specific interest.
### _Image Modeling_
Both image classification and generative image modeling have seen a wealth of implementation using tensor network methods. Particularly, the MNIST and Fashion-MNIST datasets have provided a relatively simple, low dimensional test bed for the development of TN methodologies. Tables IV and V outline these numerical results and present key improvements in TN performance over the benchmark datasets. The current, classical, state-of-the-art benchmark for the given task is also included, indicated by the asterisk (*). The study conducted by Han et al. [104] is not featured in Table V due to the lack of experimental results from training on the full MNIST dataset. In general, TN learning models have shown competitive, but not superior, performance when compared to classical benchmarks.
Similarly, variational quantum algorithms have used image datasets to test various introduced methods [158],
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline
 & **Dequantized** & **TNs** & **QVAS** & **Other QiML Methods** \\ \hline
\multicolumn{5}{|l|}{**Image Modeling**} \\ \hline
Classification & - & [91]*, [111]*, [87], [99]*, [100]*, [101], [115] & [158], [166], [167], [181], [38] & [217], [220] \\
Generative Modeling & - & [104], [110], [116], [124]* & [221], [222]*, [223] & - \\ \hline
\multicolumn{5}{|l|}{**Natural Language Processing**} \\ \hline
Language Modeling & - & [94], [95]* & - & - \\
Sentiment Analysis & - & - & - & [200], [201], [199], [202], \\
Question-Answering & - & [93] & - & [198], [203] \\
Text Classification & - & - & [186]*, [188], [187]* & [218], [208] \\
Emotion Recognition & - & - & - & [205]* \\ \hline
\multicolumn{5}{|l|}{**Medical**} \\ \hline
Disease Classification & - & [125]*, [224]* & [225], [226], [227] & [194], [195]* \\
Image Segmentation & - & [126]* & - & [215] \\ \hline
\multicolumn{5}{|l|}{**Finance**} \\ \hline
Options Pricing & - & [122] & [228]*, [229] & - \\
Portfolio Optimization & [230]* & [231] & [105] & - \\
Time-Series Forecasting & - & - & [160] & - \\
Synthetic Data Generation & - & - & [232], [233] & - \\ \hline
\multicolumn{5}{|l|}{**Physics**} \\ \hline
Event Classification & - & [102]* & [161], [163], [234], [235], [162], [236], [81]* & - \\ \hline
\multicolumn{5}{|l|}{**Chemistry**} \\ \hline
Molecule Discovery & - & [237] & - & - \\ \hline
\multicolumn{5}{|l|}{**Cybersecurity**} \\ \hline
Attack Detection & - & - & [159], [238], [154]*, [155] & - \\
Intrusion Detection & - & - & [239] & - \\
Fraud Detection & - & - & [240] & - \\
Automatic Speech Recognition & - & - & [241]* & - \\ \hline
\multicolumn{5}{|l|}{**Other Tasks and Applications**} \\ \hline
Generic Classification & [230]*, [59]*, [46] & [97]* & [158], [153] & [189], [192], [193], [196]*, [197], [219], [216], [212], [213] \\
Recommendation Systems & [230]*, [46] & [97]* & - & - \\
Bit-string Classification & - & [106]*, [107]* & - & - \\
Generic PDE Solvers & - & [242]* & - & - \\
Generic Anomaly Detection & - & [123] & - & - \\ \hline \end{tabular}
\end{table} TABLE III: Practical Application Domains of QiML: A summary of works that report experimental results and their availability of source code (* indicates source code availability).
[166, 167], and explore their capabilities [38]. In particular, QDCNN [166] and Quanvolutional Neural Networks [167] have been introduced that incorporate quantum filters and operations that mirror classical convolution techniques, and have shown enhanced performance over classical methods with comparable architectures. Rudolph et al. [221] propose a QCBM to learn and sample the prior distribution of a classical GAN, extending its capabilities with quantum samples from multiple measurement bases. The method was shown to enhance the expressivity of the prior distribution, outperforming classical generative methods with as few as 8 simulated qubits. Zhou et al. [223] utilize a QGAN with a quantum circuit generator and classical discriminator, and introduce a remapping method that aims to simplify the task of learning a multimodal distribution for image generation. Grayscale values of all pixels in the original image are sorted in ascending order and are then mapped back into their original pixel positions to create a new image with a unimodal distribution. This resulted in a reduction in the total number of required parameters, without sacrificing the quality of generations. Tsang et al. [222] employed a "patching" strategy in their Wasserstein-QGAN implementation, which splits the output image generated into different patches, each generated by a separate quantum circuit. This allowed for a reduction in the quantum resources required.
Other methods in QiML have shown promise in image classification tasks [217, 220]. The extra flexibility provided by superposition is cited to contribute to better decision-making in these tasks [217]. Applications to real-world data is also seen in medical imaging [195].
Further applications of such methods in this domain may be expected in future.
### _Natural Language Processing_
Numerous natural language processing (NLP) tasks have been explored in QiML literature, including sentiment analysis, question-answering, and text classification. These methods typically extract word embeddings and project them into higher dimensional space, before producing either a tensorial representation by the summation of basis states describing individual semantic elements in the full vocabulary [93, 94] or a full density matrix feature representation [200, 218, 201]. Success is seen in performance over high-dimensional word embeddings, with 100-dimensional GloVe embeddings providing sufficient semantic coverage, although computational complexity concerns have been cited [200, 201, 208]. The dimensions of complex-valued word embeddings have varied where used [205, 203, 204], scaling with the vocabulary size. Where noted, the computational time of QiML methods is typically much longer than that of traditional methods [200, 201], where the discrepancy is due to the matrix representation of samples. It is not clear which input preparation methods are best in representing natural language. Several works have utilized density matrices, or have offered physical interpretations for complex-valued embedding schemes, where different vector components are argued to encode low and high level semantic aspects [203]. Further research may inquire into the transferability or suitability of methods to different language tasks.
Concerning quantum variational methods, quantum NLP remains a largely theoretical field; advancements centered around the distributional-compositional-categorical model of meaning (DisCoCat) which integrates linguistic semantics and structure through tensor product composition
\begin{table}
\begin{tabular}{l|c|c|c}
**Task** & **Method** & **Test Acc.** & **Optimization** \\ \hline
MNIST & MPS [91] & 99.03\% & DMRG \\
 & MPS + TTN [87] & 98.11\% & DMRG \\
 & TTN [111] & 95\% & MERA \\
 & MPS [99] & 98\% & SGD+Adam \\
 & GMPSC [100] & 98.2\% & SGD+Adam \\
 & PEPS [115] & 99.31\% & SGD+Adam \\
 & CNN-Snake-SBS [101] & 99\% & SGD \\
 & Ensemble CNN* [243] & 99.91\% & - \\ \hline
Fashion-MNIST & MPS [99] & 88\% & GD \\
 & MPS + TTN [87] & 88.97\% & DMRG \\
 & CNN-PEPS [115] & 91.2\% & SGD+Adam \\
 & CNN-Snake-SBS [101] & 92.3\% & SGD \\
 & DARTS* [244] & 96.91\% & - \\ \hline \end{tabular}
\end{table} TABLE IV: Supervised Tensor Network Performance, compared with Classical Benchmarks (the classical benchmark is indicated by *)
\begin{table}
\begin{tabular}{l|c|c|c}
**Task** & **Method** & **Test NLL** & **Optimization** \\ \hline
Binarized-MNIST & MPS [110] & 101.5 & DMRG \\
 & TTN 1D [110] & 96.9 & DMRG \\
 & TTN 2D [110] & 94.3 & DMRG \\
 & PEPS (D = 4) [116] & 91.2 & SGD+Adam \\
 & AMPS [124] & 84.1 & GD \\
 & Deep-AMPS [124] & 81.8 & GD \\
 & CR-NVAE* [245] & 76.93 & - \\ \hline \end{tabular}
\end{table} TABLE V: Unsupervised Tensor Network Performance, compared with Classical Benchmarks (the current classical benchmark is indicated by *)
[184]. Research indicates that when semantic interpretations are framed in this manner, quantum processes can manage the resulting high-dimensional tensor product spaces. Experimental results on this approach, using quantum simulators, align with classical tensor network outcomes, emphasizing the potential of quantum methods in NLP. Outside of the DisCoCat model, efforts to enhance classical NLP architectures using quantum components have seen the implementation of the Quantum Self-Attention Neural Network (QSANN) model [188]. In general, while quantum variational methods in NLP are still in nascent stages, there is a growing interest and understanding that they can offer computational benefits and efficiency, particularly in handling high-dimensional spaces and potentially reducing model parameters.
### _Medical_
Quantum Nearest Mean Classifier models have shown some promising results [194, 195] in the medical field. Models like the HQC [195] have been successful, primarily due to their invariance to re-scaling, with the inclusion of the free re-scaling parameter appearing to be a key factor in their performance. In [194], the authors emphasized the importance of incorporating qualitative features, often prevalent in biomedical contexts but neglected in their study, suggesting that more advanced modeling methods need to be investigated. The authors also underlined the criticality of identifying optimal encoding methods that can accurately represent the given dataset, a challenge that persists in both QiML and traditional ML fields.
Tensor networks have also been used in medical contexts, in binary classification over metastasis detection from histopathologic scans, detection of nodules in thoracic computed tomography (CT) scans [125, 224], and 3D magnetic resonance imaging (MRI) scans [224]. The models showed strong area-under-the-curve (AUC) performance compared with classical baselines, while using only a fraction of the GPU memory. However, when modeling 3D images, the approach seemed to require a high number of parameters when compared with CNN baselines. The same was not true for the model in [126], where tensor network compression was shown to give large savings in parameters.
VQCs have been used in medical contexts over both image [226] and audio-based [227] datasets. Azevedo et al. [226] propose a transfer learning approach, where classical networks pretrained on ImageNet [246] are used to extract features for a quantum circuit, the DressedQuantumNet, which performs the final classification. This circuit, attached to the final linear layer of a pretrained model (Resnet18), takes 512 real values, and performs an angle encoding to construct quantum states, before being passed through to several variational layers. Notably, the quantum classifier is the only trainable part of the network, with weights updated during training via techniques like cross-entropy loss and the Adam optimizer. The model achieved an accuracy of 84%, outperforming the classical standalone ResNet model, which achieved a maximum accuracy of 67%. Esposito et al. [227] applied the Quanvolutional Neural Network to detect COVID-19 from cough audio using DiCOVA [247] and COUGHVID [248] datasets. They integrated quantum circuits as quanvolutional layers into Recurrent and Convolutional Neural Networks (RNN, CNN) using the PennyLane library, with feature extraction via two- and four-qubit quantum circuits. Test accuracies for classical RNN and CNN were 79.4% and 73.0% respectively, while QNNs achieved 74.6%-78.8% with no noise. The results are thus comparable to classical methods, however the quantum simulations were observed to necessitate extended training duration. In [225], a breast cancer dataset was created from histological data for binary classification of metastatic diffusion to lymph nodes. The data was quantum-encoded using the trigonometric kernel mapping (Equation 17) and processed in a quantum circuit of nearest-neighbor two-qubit unitaries and Pauli rotations. CNOT gates managed qubit interactions. The quantum circuit was then converted to a tree tensor network for classical evaluation. Performance was comparable to the CancerMath prognosis tool across a variety of feature selection settings. The authors note the limitation of using the kernel, which severely limited the number of included prognostic factors.
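To make the "dressed" circuit structure described above concrete, the following is a minimal PyTorch/PennyLane sketch of a hybrid classifier head of this kind. It is not the authors' implementation: the class name, qubit count, layer count, the linear layer that compresses the 512 ResNet features down to the circuit width, and the tanh rescaling before angle encoding are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import pennylane as qml

n_qubits = 4          # assumed circuit width
n_layers = 2          # assumed number of variational layers

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    # Angle-encode the (compressed) classical features as single-qubit rotations.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    # Trainable entangling variational layers.
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

class DressedQuantumHead(nn.Module):
    """Hypothetical classifier head attached to a frozen, pretrained ResNet18."""
    def __init__(self):
        super().__init__()
        self.pre = nn.Linear(512, n_qubits)   # compress the 512 ResNet features
        weight_shapes = {"weights": (n_layers, n_qubits, 3)}
        self.q = qml.qnn.TorchLayer(circuit, weight_shapes)
        self.post = nn.Linear(n_qubits, 2)    # binary classification

    def forward(self, x):
        return self.post(self.q(torch.tanh(self.pre(x))))
```

In this kind of setup only the head is trained (e.g. with cross-entropy loss and Adam, as in the study above), while the pretrained feature extractor stays frozen.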
### _Finance_
Finance modeling has seen the implementation of QiML chiefly in portfolio optimization. The implementation in [230] serves as a proof of concept for dequantized matrix inversion on large datasets with intrinsic large-scale matrix calculations, with the goal of identifying practical bottlenecks, rather than achieving high performance. Tensor network structures have also shown effectiveness in optimization within this domain. In [242], a classically simulated quantum register encoded via an MPS is proposed for multivariate calculus computation, capitalizing on the low entanglement between states for smooth functions with bounded derivatives. This representation enables the efficient storage of an exponential amount of weights and proves theoretically amenable to operations such as Fourier analysis, derivatives approximation, and interpolation methods. Mugel et al. [231] implemented an MPS for dynamic portfolio optimization, which showed impressive performance when compared to D-Wave hybrid-quantum annealing and quantum variational circuits in terms of Sharpe ratios and the ability to achieve global minima reliably. The method, however, suffers greatly in computation time when compared with quantum methods. Patel et al. [122] integrate MPO structures into neural network layers to reduce the number of model parameters. The authors show that this reduces model size and, in some cases, also leads to faster convergence. This network compression method was able to approximate the original weight matrices well with far fewer parameters, whilst exhibiting minimal loss in performance. However, in the presented works, the datasets used for experiments are either random, citing difficulties in scaling to large, real-world datasets [231], or not exposed or elucidated [122].
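As a rough illustration of why MPO-factored layers can shrink parameter counts, the NumPy sketch below stores a 1024-by-1024 weight matrix as four small MPO cores. It is not the construction of [122]; the mode shapes, bond dimension, and random initialization are assumptions chosen only to make the counting concrete.

```python
import numpy as np

# Treat a 1024x1024 dense layer as acting on four "modes" of sizes 4*4*8*8 = 1024.
in_dims = out_dims = (4, 4, 8, 8)
D = 6                              # assumed MPO bond dimension
bonds = [1, D, D, D, 1]

# One core per mode, with shape (left_bond, in_dim, out_dim, right_bond).
rng = np.random.default_rng(0)
cores = [rng.standard_normal((bonds[k], in_dims[k], out_dims[k], bonds[k + 1])) * 0.1
         for k in range(4)]

mpo_params = sum(c.size for c in cores)
print(mpo_params, 1024 * 1024)     # 3360 parameters vs 1,048,576 for the dense layer

# For checking only: contract the cores back into the full matrix.  In practice
# the layer is applied mode-by-mode without ever materializing this matrix.
W = cores[0]
for core in cores[1:]:
    W = np.tensordot(W, core, axes=([-1], [0]))   # merge the shared bond index
W = W.squeeze()                                   # drop the dummy boundary bonds
W = W.transpose(0, 2, 4, 6, 1, 3, 5, 7)           # group (input modes, output modes)
W = W.reshape(1024, 1024)
print(W.shape)
```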
Emmanoulopoulos and Dimoska [160] note that VQCs could match the performance of long short term memory (LSTM) models over time-series forecasting, even exhibiting slight superiority with high noise coefficients data due to the alignment of trigonometric functions in quantum circuits with the nature of time series signals. However, the authors acknowledged that the practical application of VQCs
is currently constrained by their inability to handle large datasets.
QCBMs have also seen broad application in finance, for portfolio optimization [105], options pricing [299], and generating synthetic financial data [232, 233]. In many cases, numerical simulations reveal the enhanced expressivity of QCBMs over classical RBM models [105, 232, 233].
### _Physics_
QiML has seen wide use in high-energy physics (HEP) applications. A common task is discriminating between signal and background events in the context of the Standard Model of physics.
Araz and Spannowsky [102] utilized MPS tensor networks for top versus quantum chromodynamics (QCD) jet discrimination in physics modeling using a combined SGD and DMRG optimization method; applying DMRG in the first batch of each epoch and SGD thereafter. Despite slightly weaker performance compared to CNN models, the MPS method provided a higher degree of interpretability.
Variational quantum methods have also seen such use. Terashi et al. [161] applied variational quantum algorithms for signal event classification in HEP data analysis using supersymmetry. Two implementations were tested: one using RY and RZ gates with an Ising model Hamiltonian, and the other using Hadamard and RZ gates with Hadamard and CNOT for entanglement. The study compared these methods against traditional Boosted Decision Tree (BDT) and DNN algorithms, finding comparable discriminating power for small training sets (10,000 events or fewer). Simulations were run on Qulacs [249] and IBM Quantum QASM [250] simulators. Resource demand for quantum simulation was high and increased exponentially with the number of variables used, making extended iterations impractical.
For data analysis of \(t\bar{t}H\) (Higgs coupling to top quark pairs), both the quantum variational classifier [234] and the quantum kernel estimator [235] methods were employed using IBM quantum simulators. Results on the quantum simulators using 10-20 qubits show that these quantum machine learning methods perform comparably to SVM and BDT classical algorithms, with both achieving reasonable AUC scores, indicating good classification performance. These results were maintained across various quantum simulators, including Google Quantum [251], IBM Quantum [250], and Amazon Braket [252].
In Gianelle et al. [162], VQCs were used for \(b\)-jet charge identification over Large Hadron Collider (LHC) data. Both amplitude and angle encoding schemes were assessed, alongside classical deep neural networks (DNNs). Results found DNNs to slightly outperform angle encoding VQCs, being compatible within a \(2\sigma\) range, suggesting similar performance levels. Amplitude encoding VQCs consistently under-performed in comparison with angle encoding, but generally took less time to train due to being less complex in layer depth. The authors note that the number of layers is a parameter to be optimized, and show that increasing layer depth did not necessarily result in improved performance. The study also highlighted the resilience of quantum algorithms, with the angle embedding model maintaining efficacy with fewer training events, a potential advantage over classical ML methods. However, increases in model complexity and training time present challenges, with accuracy improvements saturating beyond five layers and longer training times for quantum models. The DNN also showed superior performance when a large number of features is employed.
In Ngairangbam et al. [236], a quantum autoencoder (QAE) is used for the task of distinguishing signal events from background events, following an anomaly detection approach. Approximately 30,000 background and 15,000 signal events are generated. The anomaly detection approach relies on the fact that compression and subsequent reconstruction will perform poorly on data whose characteristics differ from the background. Performance is compared against a classical autoencoder network (CAE); the QAE maintained higher classification performance than the CAE across a range of latent dimensions. Quantum gradient descent is used for faster convergence in optimization; the study finds that this method allows the QAE to learn efficiently from as few as ten sample events, demonstrating that the model is much less dependent on the number of training samples. This suggests that QAEs show better learning capabilities from small data samples compared to CAEs, particularly relevant to LHC searches where the background cross section is small. The authors hypothesize this could be due to the uncertainty of quantum measurements enhancing statistics and the relatively simple circuits employed in QAEs.
Preliminary findings suggest that quantum approaches can achieve comparable results to classical algorithms, especially for smaller training sets [161, 236]. Despite these promising outcomes, challenges such as increased training time and model complexity remain. Overall, quantum variational methods offer promising avenues for HEP data analysis, but their scalability and efficiency in comparison to classical techniques need further exploration.
### _Chemistry_
Moussa et al. [237] utilized tensor network generative models for molecular discovery. Their work illuminates the data-specific effectiveness of generative models, showing that while GANs outperform TNs in some settings, the reverse is true in others. Specifically, GANs excelled on the QM9 molecule dataset [253, 254] but were outperformed by TNs on an in-house antioxidants dataset. The study underscores the potential benefit of quantum-inspired and classical ensemble methods by showing that combining various generative models could result in more robust performance across different evaluation criteria.
It should be noted that quantum-based methods have been used extensively in this domain, such as Variational Quantum Eigensolvers (VQE) which find the approximate ground state of a given Hamiltonian, often used in quantum chemistry and condensed matter physics problems [255, 256]. Further, several quantum-inspired classical algorithms have been devised based on Gaussian boson sampling [257, 258] and unitary coupled cluster theory [259, 260]. However, these methods do not fall under the purview of machine learning, as they primarily aim at solving specific physical problems through quantum simulation, rather than learning from data and generalizing to new instances, and as such are not discussed in this survey.
### _Cybersecurity_
Quantum variational methods have been employed across a broad range of cybersecurity applications.
Payares and Martinez-Santos [154] assessed QSVMs, VQCs, and an ensemble model for detecting DDoS attacks using the CIC-DDoS2019 dataset [261], which closely mirrors real-world PCAP data. To accommodate quantum model computational limits, the dataset's 80 features were reduced to 2 using PCA. An angle-based strategy was applied for quantum embedding in all models. The ensemble model, drawing from quantum superposition, employed multiple parallel quantum classifiers. While all models showcased strong binary classification results, QSVM had high accuracy at 99.6% but was computationally intensive. The ensemble model achieved 96.8% accuracy efficiently, while the VQC excelled with 99.9% accuracy and reasonable computational demand, albeit on a simplified dataset.
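A minimal sketch of the fidelity-style quantum kernel underlying such QSVM pipelines is shown below, simulated with PennyLane and fed to scikit-learn's SVC. The two-feature angle encoding mirrors the PCA-reduced setting described above, but the specific circuit, toy data, and hyperparameters are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
import pennylane as qml
from sklearn.svm import SVC

n_qubits = 2                       # one qubit per PCA-reduced feature (assumed)
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def overlap(x1, x2):
    # Estimate |<phi(x2)|phi(x1)>|^2 via the encode / un-encode ("adjoint") trick.
    qml.AngleEmbedding(x1, wires=range(n_qubits))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits))
    return qml.probs(wires=range(n_qubits))

def kernel_matrix(A, B):
    # Entry [i, j] is the probability of the all-zeros outcome, i.e. the state overlap.
    return np.array([[overlap(a, b)[0] for b in B] for a in A])

# Toy data standing in for the PCA-reduced traffic features.
X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y = np.array([0, 1, 0, 1])

clf = SVC(kernel="precomputed").fit(kernel_matrix(X, X), y)
print(clf.predict(kernel_matrix(X, X)))
```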
Masun et al. [155] evaluated the performance of both QSVM and VQC for malware detection and source code vulnerability analysis, utilizing the ClaMP and Reveal datasets respectively. The dimensionality of the features was reduced using document vectorization and PCA to yield 16 explanatory variables. Both quantum methods under-performed compared to shallow classical neural networks and classical SVMs in malware detection, while achieving comparable performance in source code vulnerability analysis. Furthermore, both quantum methods exhibited significantly extended execution times.
Suryotrisongko and Musashi [238] investigated the effect of adding a quantum circuit as a hidden layer in a classical neural network for domain generation algorithms (DGA)-based botnet detection. The classical model employs a standard deep learning architecture with 2 hidden layers (dense-dropout-dense), with the quantum layer inserted between the dense layers after dropout. Six combinations of ansatze were evaluated, using various embedding and entangling strategies made available by the PennyLane software framework. No single combination seemed to outperform all others across all settings, suggesting that quantum circuit architecture plays a significant role in determining the model's accuracy. The hybrid models performed slightly better than their classical counterparts under certain conditions. For instance, with the combination of Angle Embedding and Strongly Entangling Layers, the accuracy reached 94.7% for 100 random samples. However, on average, the classical models outperformed the hybrid models.
Herr et al. [240] explored a variant of QGANs by adopting the AnoGan [262] generative adversarial network structure, with the generative network portion replaced with a hybrid quantum-classical neural network. Specifically, a short state preparation layer encodes \(N\) uniform latent variables as quantum states, which are fed into a parameterized quantum circuit of \(N\) qubits. Measurement is performed over all qubits in the \(Z\) basis, from which the now classical output is up-scaled via a classical dense network into a higher dimensional feature space. The intuition behind using the VQC in the generator lies in its ability to sample more efficiently from distributions that are hard to sample from classically. This is seen in experimental results over a credit card fraud dataset; the quantum AnoGAN method showed comparable F1 scores to a variety of classical architectures and system sizes, whilst staying robust to changes in the dimension of the latent space.
Yang et al. [241] proposed a novel decentralized feature extraction approach for speech recognition to address privacy-preservation issues. The framework is built upon a quantum convolutional neural network (QCNN), consisting of a quantum circuit encoder for feature extraction and a recurrent neural network (RNN) based end-to-end acoustic model (AM). This decentralized architecture enhances model parameter protection by first up-streaming an input speech to a quantum computing server to extract Mel-spectrogram feature vectors, encoding the corresponding convolutional features using a quantum circuit algorithm with random parameters, and then down-streaming the encoded features to the local RNN model for the final recognition. The authors test this approach on the Google Speech Commands dataset, attaining an accuracy of 95.12%, showing competitive recognition results for spoken-term recognition when compared with classical DNN based AM models with the same convolutional kernel size.
Other subfields within QiML have yet to see many works in the cybersecurity domain, with the anomaly detection work by Wang et al. [123] cited to have potential applications in fraud prevention and network security, among other applicable domains.
### _Other Tasks and Applications_
Concerning dequantized algorithms, in general, while many works present the theoretical application of these methods to various ML tasks, few provide experimental analysis of the methods on data. Arrazola et al. [230] both implemented and analyzed the quantum-inspired algorithms for linear systems [61] and recommendation systems [43]. In implementation, the former was applied to portfolio optimization on stocks from the S&P 500, and the latter to a dataset of movies; significantly faster run-times were observed than their complexity bounds would suggest. Analysis showed that when the rank and condition number are small, the dequantized algorithms provided good estimates in reasonable time, even for high-dimensional problems. However, outside this specification, the dequantized algorithms performed poorly in terms of both run time and estimation quality relative to direct computation of the solution on practical datasets. The authors note a threshold for improved relative performance when the matrix size is larger than \(10^{6}\). Ding et al. [59] test their quantum-inspired LS-SVM on low-rank and approximately low-rank synthetic data, and analyze its performance against the classical counterpart LIBSVM. Their results show that their model outperformed LIBSVM by 5% on average, with greater performance noted in low-rank settings. However, the running times for both models are omitted. Chepurko et al. [46] similarly provided analysis of their implemented algorithms. For the recommendation systems task, their algorithm operates in a similar setting to [230]. The results indicated a notable six-fold speed increase, and demonstrated superior performance over direct computation methods. However, this enhancement was accompanied by a slight uptick in error. Comparable outcomes were found in their work on
the ridge regression task. The observed improvements seem to stem from a more efficient implementation than [230], coupled with an algorithm possessing a superior asymptotic runtime. While the field has progressed far beyond these benchmarks, and given their inconclusive nature, the applicability of dequantized algorithms to practical data remains an open question, pending further investigation.
QiML has seen implementation over an assortment of ML tasks, typically over benchmark generic datasets, such as the UCI and PMLB [263] datasets. Works involving the Quantum Nearest Mean Classifiers have used these datasets extensively to assess the model's capabilities, often as validation before moving to real-world contexts (Section 5.3). In this setting, these works commonly cite improved performance over other baseline models. The models are also shown to be able to learn complex distributions, typically challenging for classical Nearest Mean Classifiers [191]. However, a caveat of these methods is their much longer training and inference time compared with classical methods [190]. Further, these benchmark datasets are typically small-sized. QNMCs have yet to see use over large-scale data. The HQC, despite its prowess, faces a roadblock here, as increasing the number of copies of samples introduces a non-negligible computation cost [192]. The works inspired by quantum interference [219] and quantum correlation [216] show promise over these benchmark datasets, however they also note computation costs to be a detriment to the applicability of these methods. These works defer improvements in training speed to the promise of realizable quantum computers.
In tensor network modeling, early works with MPS suggest the applicability of encoding features as polynomial functions [97], showing success over generic classification tasks from the UCI dataset, recommendation systems, and synthetic data. Results show that inference time is competitive with baseline classical models; however, training time suffers significantly, scaling with the chosen bond dimension of the MPS. In bit-string classification tasks, tensor network models have demonstrated superior learning capabilities compared to generative neural networks, especially for parity learning problems [106, 264]. Establishing performance on challenging real-world tasks remains potential future work.
### _Limiting Factors on the Use of QiML in Practice_
#### 5.9.1 _Dequantized Algorithms_
A few key limitations to the applicability of these algorithms have been discussed in the literature. First, the requirements for the input matrices are often strict. The matrices must be of low stable rank [56, 43], have a small condition number [64], or be relatively sparse [47]. These requirements are generally not conducive to the needs of real-world datasets, though these conditions have progressively become more lax with advancements in the field.
Secondly, the need for an input model that provides SQ access may not be readily amenable to current ML implementations. Performing the necessary preprocessing for adapting datasets to this structure may be reasonably assumed to be expensive and detrimental to computational efficiency, limiting the applicability of these algorithms to existing systems [42].
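For readers unfamiliar with the SQ input model, the toy class below shows the operations it must support for a vector: entry queries, the norm, and sampling an index with probability proportional to the squared entry. This is a flat, illustrative version only; practical implementations store the squared entries in a binary tree so that updates and samples cost O(log n), and building any such structure from raw data is exactly the preprocessing overhead noted above.

```python
import numpy as np

class SQVector:
    """Toy sample-and-query (SQ) access to a real vector v."""

    def __init__(self, v):
        self.v = np.asarray(v, dtype=float)
        self.sq = self.v ** 2
        self.total = self.sq.sum()          # ||v||^2

    def query(self, i):
        return self.v[i]                    # entry access

    def norm(self):
        return np.sqrt(self.total)          # ||v||

    def sample(self, rng=None):
        # Draw index i with probability v[i]^2 / ||v||^2 (length-square sampling).
        rng = rng or np.random.default_rng()
        return rng.choice(len(self.v), p=self.sq / self.total)

sq = SQVector([3.0, 0.0, 4.0])
print(sq.query(2), sq.norm(), sq.sample())  # 4.0, 5.0, and index 0 or 2
```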
Thirdly, many QML algorithms (and hence, often, the resulting dequantized algorithms) are tailored to solve tasks that deviate from what is conventionally addressed in the classical literature. For instance, while classical approaches to the recommendation systems problem typically employ low-rank matrix completion, the quantum algorithm instead executes sampling over a low-rank approximation of the input matrix [44]. In [59], a simplified version of the least squares SVM problem is considered by assuming data points are equally distributed across hyperplanes. Another example is the algorithm presented by [46] which sees no classical counterpart. As such, many works do not remark on the performance of dequantized algorithms in comparison with other, more traditional classical algorithms for their examined tasks.
Lastly, Chia et al. [45] argues that dequantized algorithms operate under more restrictive and ostensibly weaker computation parameters compared to classical randomized numerical linear algebra algorithms. Dequantized algorithms assume the ability to efficiently measure quantum states related to the input data and aim to provide quick algorithms with dimension-independent runtime. However, this model is intrinsically weaker than its standard counterpart. In essence, a dequantized algorithm, with a runtime of \(O(T)\), translates to a standard algorithm with a run time of \(O(nnz(A)+T)\) dependent on both \(T\) and the number of non-zero entries in the input matrix. Although this may lead to under-performance in typical sketching contexts, it broadens the range of problems where quantum speedup may not exponentially surpass classical solutions. Therefore, dequantized algorithms, in spite of their theoretical promise of exponentially improved runtime, may not perform as well as conventional sketching algorithms.
As such, the particular nature of the gap between classical and quantum ML algorithms remains an open question. In general, QML applications operate in either the low-rank or the high-rank data setting. The dequantization formalism suggests that most quantum linear algebra tasks over low-dimensional data can likely be dequantized into a classical variant, provided SQ access is available to that data. In contrast, evidence suggests that dequantizing high-dimensional problems incurs much greater difficulty. Several high-dimensional problems cannot be successfully dequantized despite SQ access. For example, the Fourier Sampling Problem is solved by randomized linear algebra techniques (i.e., SQ access) in exponential time, whereas the quantum version can find a solution in \(O(1)\) [265]. Furthermore, high-rank data frequently necessitates the use of the HHL algorithm or its variants, which have been shown to be BQP-complete [53]. Another example is quantum Boltzmann machine training, noted in [48], which cannot be dequantized in full unless BQP = BPP. This presents a significant impediment for classical algorithms trying to match the performance of their quantum counterparts. As such, QML algorithms can potentially extend their advantage in the high-rank setting by making assumptions such as taking sparse matrices as input or utilizing other high-rank quantum operations that cannot be efficiently implemented in the classical setting, such as the Quantum Fourier Transform [45]. Additionally, dequantized algorithms hinge on cost-effective access to a classical analogue of QRAM. As of now, the quantum version of this input model has not been practically realized. However, should an efficient quantum input model be developed, one that eschews the need for expensive computations or classical interfacing, it could potentially prompt a reevaluation of the supposed advantages of dequantization methods.
In light of this, evidence has shown that there is a strong opportunity for classical algorithms to compete with quantum in the low-rank setting. In early works, for many tasks, the quantum algorithm still admitted a strong polynomial advantage. This restricted the applicability of several dequantized algorithms; even matrices with very low rank could not be handled practically [266]. As noted by several authors, the cost of computation is dominated by the SVD computation that occurs after sampling down to the low-rank approximation [43, 46, 47]. New techniques have since been developed that bypass this computation, with Bakshi and Tang's method most recently demonstrating that low-degree QSVT circuits do not exhibit exponential advantage [47]. It should be noted that, at present, quantum algorithms polynomially surpass their dequantized counterparts. As both quantum and dequantized algorithms continue to improve their relative complexity bounds, it remains to be seen whether there is a limit to the performance of dequantized algorithms. An insurmountable threshold may exist that definitively ascribes quantum supremacy, or a classical regime might be discovered that removes the advantage entirely.
#### 5.9.2 Tensor Networks
Tensor network learning models have scarcely stepped outside a few benchmark datasets, such as MNIST and Fashion-MNIST, due to difficulties in extending the tensor network model to larger inputs and higher dimensional feature spaces. In [128], the Tiny Image dataset was used; however, the images were cropped to a 28 \(\times\) 28 resolution and converted to grayscale, matching the context of MNIST images. For MPS structures, accommodating images of larger resolution is difficult due to the inherent exponential loss of correlation across the network, which can also be exacerbated by common encoding methods. Images are typically flattened and encoded into high dimensional space [91, 99]; for small images, the pixel correlations are largely preserved, however they are lost when considering high-resolution images [125]. This issue is less prevalent in higher order decompositions such as PEPS [115] and TTNs [110]. However, several works have noted that the advantage of neural-network-based models over these tensor network methods lies in the better priors for images, made possible by the use of convolution. Tensor network methods that incorporate convolution are nearing the performance of traditional neural networks [124]. The potential for discovering approaches within tensor network methods that could surpass neural networks is an area of interest and highlights a promising direction for future research.
Tensor network algorithms incur a high cost in both the bond dimension, a user-chosen free parameter, and the number of components contained after each local feature mapping, determined by the choice of \(\phi\) in Equation 18. Tensor network machine learning methods currently admit cubic, or even higher polynomial, dependence on these parameters [91, 110], despite only scaling linearly with the number of input components (e.g., the number of pixels in an input image). Thus there appears to be a trade-off between the greater expressivity afforded by increasing these parameters and the resulting computation time.
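As a concrete illustration of these parameters, the snippet below applies a commonly used trigonometric local map (one possible choice of \(\phi\), assumed here rather than quoted from Equation 18) to a toy four-pixel input. The tensor product of the local vectors has dimension \(2^{N}\), which is why MPS models contract against it core by core instead of ever forming it.

```python
import numpy as np

def local_map(x):
    # phi(x) = (cos(pi*x/2), sin(pi*x/2)) for a pixel value x in [0, 1];
    # a common choice of local feature map, assumed here for illustration.
    return np.array([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)])

pixels = np.array([0.0, 0.3, 0.7, 1.0])          # toy 4-pixel "image"
phis = [local_map(p) for p in pixels]

# The full feature vector is the tensor (Kronecker) product of the local maps.
full = phis[0]
for phi in phis[1:]:
    full = np.kron(full, phi)

print(full.shape)   # (16,) here; (2**784,) for a 28x28 image, hence never built explicitly
```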
The challenge of high dimensionality is especially prevalent in language modeling, as the semantic spaces of word vectors can be inherently large. This is in contrast to commonly used image modeling datasets, where pixel values can be represented in a low-dimensional format. The tensor products of such word vectors can become computationally challenging, maintaining high complexity even after tensor decompositions are applied [94]. This complexity may explain the limited research in the field of tensor networks for language modeling. In [95], tensor network evaluation was performed on a context-free language task using relatively simple, synthetic data. The field has yet to see robust, performant methods for complex language modeling tasks involving tensor networks.
These observations are supported in existing tensor network literature, stemming from the fact that classical tensor networks are only able to represent low-entangled, low-complexity states [79]. However, this stipulation is less relevant outside the quantum setting, where the classical input data concerned is less complex and inherently non-entangled. Further research may be necessary to understand what types and volumes of data become prohibitive in learning, and how to best utilize tensor networks for working with such data.
#### 5.9.3 Quantum Variational Algorithm Simulation
The hybrid quantum-classical variational methods presented have seen success in performing computations over relatively small datasets, and using small-scale quantum circuits. Few methods have been presented outside this setting, due to the exponential limitation in simulating larger circuits with more qubits. For instance, methods using basis encoding require qubits that scale linearly with the number of representative features [158]. This restricts the number of features that can be used for learning. As such, when evaluating relative performance against classical counterparts, many works will apply constraints to classical methods in order to provide fair comparisons. This may involve severely condensing the number of features [154, 234], or using a heavily reduced dataset size [238], for amenability with current-day quantum simulator architectures. Such limitations imply that comparisons between quantum simulation and classical methods in their fully-optimized, unrestricted settings may be inherently challenging, pending the development of more performant quantum algorithms.
The training time for simulating quantum algorithms is frequently noted to be more prolonged than their classical counterparts due to the inherent complexities of emulating quantum systems on traditional hardware [155, 162, 227]. In [163], the speedup over classical methods is attributed to the quantum gradient descent used in the VQCs, allowing for faster convergence than classical neural networks using traditional gradient descent. Such
optimizations could inform methods of decreasing training time in classical settings, for acceptable loss thresholds.
For classical simulation of QML in the noisy setting, such algorithms naturally inherit the limitations of QML, such as the need for robust gate error correction, and degradation of performance due to decoherence after prolonged training. Noiseless simulations are free from such issues, although they may not accurately represent the conditional settings of quantum hardware execution [150]. Despite this advantage for classical implementation purposes, many of the works in this domain are forward-facing, developing methods and frameworks for true quantum computation, and assess their robustness with added noise and constraints. Very few studies have concentrated on the specific context where computation on classical machines is the primary objective of the proposed methodology.
#### 5.9.4 Other QiML Methods
The range of QiML methods discussed demonstrate a diverse application of quantum theory to facilitate machine learning tasks. Generally, the introduction of quantum phenomena into these models is observed to enhance their expressive power compared to their classical counterparts, often resulting in improved performance [93, 190, 219]. However, a commonly reported drawback among many of these methods is their high computational time complexity when compared with classical techniques. This issue predominantly affects models that rely heavily on quantum formalisms requiring computationally intensive operations, such as tensor products and complex number manipulations [190, 219]. This is especially noted in methods that require matrix operations over density operators [189, 191, 201], or require making tensor copies of quantum patterns in producing classification [192, 193, 195]. In essence, there is an apparent trade-off between performance and computational speed, by way of simulating quantum operations via computationally heavy mathematical objects to incorporate greater expressivity in QiML models.
## 6 Parallels Between QiML Models and Conventional Classical Models
Efforts in QiML have applied quantum mechanics to enhance machine learning routines. Some of these efforts yield outcomes that bear resemblance to conventional classical ML models. In this subsection, we explore several of these parallels, shedding light on the relationships and distinctions between QiML techniques and their classical counterparts. By doing so, we aim to provide a familiar framework for understanding and interpreting QiML approaches. This approach could make the field more accessible to machine learning practitioners and researchers who are venturing into quantum information theory for the first time.
### _Tensor Networks_
Previous studies have explored the relationship between tensor networks and neural network models [268]. As mentioned in Section 4.2.8, several tensor network decompositions exhibit parallels between deep learning architectures, such as CNNs [131], RNNs [132], and RBMs [135, 136]. Non-negative Matrix Product States (MPS) have been established as having a correspondence with Hidden Markov Models (HMM). Specifically, they can factorize probability mass functions of HMM into tensor network representations, capturing the essential stochastic relationships between hidden and observed variables [133].
Tensor networks employ kernel learning approaches, similar to SVMs, in which samples are mapped to a higher dimensional feature space for improved separability [100].
The ability of tensor networks to model the joint distribution of variables, as seen in Tensor Network Born Machines (TNBMs), draws parallels with generative models like Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) that parameterize conditional probabilities via deep neural networks [124]. In fact, several works have shown that many-body quantum states can also be efficiently represented by neural network structures, such as DBMs [269] and shallow fully-connected neural networks [270], provided a simplified Hamiltonian ground state; similar conditions in which tensor networks see success.
### _Quantum Variational Algorithm Simulation_
Variational quantum algorithms draw obvious parallels with classical machine learning methods. Quantum kernel methods, similar to SVMs, map data to a high-dimensional Hilbert space where they become linearly separable [158]. Computing the quantum kernel effectively measures the inner product in this Hilbert space, analogous to the operation of the kernel function in SVMs. VQCs combine classical optimization methods with a variational quantum circuit to learn a parameterized quantum state. This learning mechanism bears a strong resemblance to the operational principle of classical neural networks, which adjust weights and biases through an iterative optimization process to learn a function that can accurately classify data. Similarly, in a VQC, parameters of the quantum circuit are iteratively updated, effectively optimizing the quantum state to classify quantum data [157]. As such, performance comparisons are often made between devised quantum algorithms and classical models of similar scale, i.e., by scaling down the number of available parameters in a classical neural network [158].
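The parallel with neural-network training can be seen in a minimal PennyLane training loop, sketched below with assumed circuit, data, and hyperparameters rather than those of any cited work: a parameterized circuit is evaluated, a classical loss is computed, and the circuit weights are updated by gradient descent.

```python
import pennylane as qml
from pennylane import numpy as np   # autograd-aware NumPy

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def classifier(x, weights):
    qml.AngleEmbedding(x, wires=[0, 1])
    qml.StronglyEntanglingLayers(weights, wires=[0, 1])
    return qml.expval(qml.PauliZ(0))            # prediction in [-1, 1]

# Toy two-sample dataset with +/-1 labels.
X = np.array([[0.1, 0.9], [0.8, 0.2]], requires_grad=False)
y = np.array([1.0, -1.0], requires_grad=False)

def cost(weights):
    # Classical squared-error loss over the circuit outputs.
    loss = 0.0
    for x, target in zip(X, y):
        loss = loss + (classifier(x, weights) - target) ** 2
    return loss / len(X)

weights = np.array(np.random.uniform(0, np.pi, (3, 2, 3)), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
for _ in range(100):                            # iterative parameter updates
    weights = opt.step(cost, weights)
print(cost(weights))
```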
The relationship between tensor networks and quantum circuits has also been explored in the literature, where tensor networks are seen to admit quantum circuits [80]. As such, tensor network decompositions have inspired quantum variational methods. Hierarchical ansatz layouts mirroring tree-tensor and MERA networks have been shown to enable the classification of highly entangled states through greater expressive power [84, 271]. QCBMs utilize the probabilistic aspects of quantum wavefunctions for generative modeling, an approach that finds a parallel in Tensor Network Born Machines that use MPS for similar tasks [272, 273]. While TNBMs are more aligned with explicit classical generative models, QCBMs show a greater affinity to implicit generative models, such as GANs.
### _Dequantized Algorithms_
Regarding dequantized algorithms, identifying such parallels may at first seem straightforward. However, as noted in Section 5.9.1, the precise task that dequantized algorithms solve can differ from what is conventionally tackled. Nevertheless, drawing face-value parallels between dequantized algorithms and classical machine learning methods seems intuitive and practical, since their underlying objectives are largely congruent. For instance, quantum-inspired SVMs aim to establish hyperplanes for classification, and quantum-inspired supervised clustering also engages in nearest centroid discrimination, mirroring their classical counterparts.
### _Other QiML Methods_
Several QiML methods outside the aforementioned subsets incorporate quantum theory into existing classical structures. These include [200] and [201], where density matrix projections are processed and fed into traditional classifiers such as CNNs, LSTMs and SVMs. In [212], a traditional fully-connected neural network structure is realized using a fuzzy c-means-based learning process, where learning parameters are represented via quantum bits and quantum rotational gate operations. In [208], complex-valued word embeddings are constructed via GRUs and scaled dot-product self-attention. Features are then extracted from projected density matrices via convolutional and max-pooling layers.
Other works have produced methods that use quantum operations to mirror neural network behavior, such as in [205] where a parameterized update function is used to evolve a "hidden" density matrix, which is updated at each time step based on the current input quantum state and the previous hidden density matrix, mirroring the key operational dynamics of an RNN.
Methods such as the QNMC and HQC [191, 192, 196] exhibit obvious similarities to classical nearest mean classification, as they rely on distance-based evaluation to a constructed centroid object. Methods involving quantum signal detection theory [217, 218, 220] share similarities to the Naive Bayes classification, where probabilities are calculated based on the frequencies of features.
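The shared structure with classical nearest-mean classification is easy to see in a stripped-down sketch of the density-matrix scheme. This is illustrative only: the pure-state encoding, the use of trace distance, and the omission of the HQC's copies and rescaling refinements are simplifying assumptions, not the cited implementations.

```python
import numpy as np

def encode(x):
    # Encode a real feature vector as a pure-state density matrix |x><x| / ||x||^2.
    v = np.asarray(x, dtype=float)
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

def trace_distance(r1, r2):
    # 0.5 * sum of |eigenvalues| of (r1 - r2), a standard quantum distance.
    return 0.5 * np.abs(np.linalg.eigvalsh(r1 - r2)).sum()

def fit_centroids(X, y):
    # Class "centroid" = average density matrix of that class's encoded patterns.
    return {c: np.mean([encode(x) for x in X[y == c]], axis=0) for c in np.unique(y)}

def predict(x, centroids):
    rho = encode(x)
    return min(centroids, key=lambda c: trace_distance(rho, centroids[c]))

X = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]])
y = np.array([0, 0, 1, 1])
centroids = fit_centroids(X, y)
print([predict(x, centroids) for x in X])   # expected: [0, 0, 1, 1]
```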
Some methods, like the Interactive Quantum Classifier (IQC) [219], seem entirely quantum mechanical. The IQC interprets the classification process as an interaction between a physical target system and its environment, using quantum-inspired unitary transformations to adjust probability amplitudes and phases. Although the method involves common machine learning components, such as feature modeling and gradient-based updates, the core architecture of IQC is heavily rooted in quantum theory.
As the field continues to evolve, we anticipate the emergence of other such models that deeply integrate quantum theory while still leveraging classical machine learning strategies to varying extents.
### _Levels of Quantum Mechanics Integration_
Several sources of quantum inspiration have driven QiML learning methods. As briefly discussed in Section 6.4, the various QiML methods vary in how much quantum mechanics they integrate, leading to both opportunities and challenges. On one hand, principles from quantum mechanics such as superposition and entanglement provide rich inspiration, allowing the development of innovative algorithms and encoding strategies beyond classical paradigms. On the other hand, implementing these quantum-infused methods in classical computing settings can be difficult. Quantum mechanics often exhibits complex correlations and dynamics that are hard to simulate classically, leading to potential exponential slowdowns or the need for approximations that may sacrifice quantum advantages.
Figure 9 illustrates a qualitative view of QiML methods based on the extent of quantum involvement. We surmise that methods incorporating more, or deeper levels of quantum mechanics tend to face greater challenges or limitations when attempting to simulate or implement them using classical computing resources. At the far right end of the spectrum, just before QML, lie variational quantum algorithms. Quantum circuit and quantum kernel learning methods heavily incorporate quantum aspects such as qubits, quantum gates, superposition, and entanglement. The classical-quantum hybridization of these techniques that rely on classical optimization allow for easier simulation on classical devices, however factors such as the circuit depth, width (i.e., number of qubits used), choice of gates, and encoding method can have a dramatic effect on this ease of simulation, as discussed in Section 5.9.3. In other words, increasing the quantum-based complexity of these methods reduces the ability to classically simulate them.
QiML methods that project classical data into Hilbert feature spaces also exhibit varying levels of classical simulatability, typically tied to the nature of the projection and the dimensionality of the Hilbert space. Tensor network methods adapt techniques from quantum many-body physics, with network decompositions easing the computational burden by reducing the scope of the feature space. However, challenges still arise in their computation, as discussed in Section 5.9. Additionally, higher-order tensor network methods such as PEPS often resort to approximation techniques instead of exact computation. In a machine learning context, these approximations may be sufficiently representative, as generalizations are usually more beneficial than precise exactitude. Furthermore, the need to operate over density matrices presents a common source of computational difficulty. Methods that rely on potentially large density matrices as features typically incur greater time complexity compared to classical ML methods using vector-based features, representing a trade-off for increased expressive power.
At the classical end of the spectrum, dequantized algorithms attempt to match QML methods using classical techniques. Although devoid of explicit quantum components, these methods can incur substantial computational costs in line with the matrix dimension and norms. Despite the intention to make them more classically amenable, dequantized algorithms may still be relatively slow, highlighting the intrinsic complexity and potential inefficiency of translating quantum-inspired techniques into classical paradigms. This mirrors the challenges found in more quantum-intensive techniques, revealing that the integration of quantum insights in classical contexts is a nuanced and demanding endeavor.
While the primary focus of this analysis has been on methods that trade computational capacity for quantum
inspiration, it's worth noting a caveat: some QiML methods, including compression techniques, can actually experience speed-ups compared to their classical counterparts. This emphasizes the diverse potential of QiML, extending beyond mere computational trade-offs to offer tangible benefits and enhancements to classical ML.
## 7 Available Resources for QiML Implementation
We discuss the available resources allowing for the development and exploration of QiML methods.
### _Research Implementations_
While many of the explored works operate as standalone research models, not all perform empirical evaluations of their models using real data. Furthermore, only a subset of these works offer accessible code repositories that allow others to reproduce their results, with many authors stipulating that their code is available upon request. We highlight works that include accessible code in Table III, indicated by the asterisk (*). The ability to reproduce presented methods is an essential part of scientific inquiry. However, the quality and accessibility of these repositories can vary considerably, often due to varying levels of documentation and the specific requirements of certain computational environments and packages. This variation makes the replication process, and the adaptation of methods to wider domains and tasks challenging. Custom implementations of presented methods are less prevalent in tensor network research, thanks in part to the availability of well-developed toolboxes and libraries, as we discuss in Section 7.2.
We advocate for more uniformity in the way code is shared in the QiML research community. Greater transparency, along with increased adoption of best practices for documentation and repository organization, will enhance the accessibility of these implementations and enable more robust scientific discourse, especially in an emerging field.
### _Toolboxes_
Various toolboxes and libraries have been developed for tensor network operations and quantum circuit simulation. Table VI outlines a few commonly used, available packages for QiML purposes.
Basic tensor operations are supported by implementations for various languages. Most prominently, Python has the TT-Toolbox [276], TorchMPS [281] and Scikit-TT [279] frameworks, which all provide MPS solvers with support for DMRG optimization. tntorch [287] facilitates auto differentiation-based optimization for MPS. Tensorly [277] and TensorNetwork [280] Python libraries offer more generalized functions, allowing for additional decomposition formats with support for various Python backends, such as PyTorch or JAX which provide the machine learning functionality. Non-specialised, generic Python libraries such as Numpy [288], which provides a base for many tensor network libraries, have also been successfully used on a standalone basis [100, 124]. MATLAB, C++ and Julia also see a host of supporting tensor network libraries. Psarras et al. [289] provides a comprehensive survey of existing tensor network software. Wang et al. [78] categorizes tensor network toolboxes based on their functionality and application areas. We collate the ones that have been used by researchers in the QiML context in Table VI. For works that did not explicitly mention what packages were used, we discovered them by inspecting noted code repositories.
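As a flavor of how lightweight these libraries are to use, the following is a minimal contraction with the TensorNetwork package; the tensors and shapes are arbitrary examples, not drawn from any cited work.

```python
import numpy as np
import tensornetwork as tn

a = tn.Node(np.random.randn(2, 3))   # a 2x3 tensor
b = tn.Node(np.random.randn(3, 4))   # a 3x4 tensor

edge = a[1] ^ b[0]                   # connect the shared bond of dimension 3
result = tn.contract(edge)           # contract over that bond

print(result.tensor.shape)           # (2, 4), i.e. an ordinary matrix product
```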
While these toolboxes have been used to much success, a few limitations present themselves, namely the lack of both predefined models for higher-order tensor decompositions (such as PEPS and TTN), and input embedding pipelines. This restricts the ability for users to freely produce and develop new models.
Several libraries have been developed to facilitate quantum circuit construction and simulation. PennyLane [40] and TensorFlow Quantum [284] are heavily focused on the integration of quantum computing with classical machine learning frameworks, such as PyTorch and TensorFlow (predominantly in the latter), with back-end support for various quantum simulation platforms. Qulacs [249] and Qibo [285] provide efficient and flexible standalone quantum simulators operable on personal computing devices. Intel Quantum Simulator [282] offers similar support, with adaptations for high-performance computing environments. Qiskit [39] provides a general, comprehensive quantum software development kit that supports a wide range of quantum computing workflows, including local simulation, circuit optimization, and execution on IBM's cloud-based quantum hardware. Cirq [278] allows users to build quantum circuits for near-term quantum hardware and NISQ (Noisy Intermediate-Scale Quantum) devices, giving fine-grained control over quantum program execution.
The Quantum Nearest Mean Classifier has an available package for its implementation, which allows for application over custom data and offers parallel computing capabilities [192]. Methods for input modeling and encoding are not provided in this repository.
In contrast, there is a sparsity of toolboxes that facilitate the implementation of practical models for dequantized algorithms. This may be primarily due to the theoretical nature of the research, the level of specificity necessary for adapting general ideas to particular tasks, and the (im)maturity of the field. As highlighted in [230], claimed complexities may not always be indicative of real-world application scenarios. The development and introduction of frameworks could potentially provide researchers with deeper insights into these issues. Additionally, they could serve as tools for validating proposed methods through comparative or ablative studies. Similar observations apply to the various other QiML methods. However, given that these methods have seen practical implementations with direct application of the proposed methods, we can anticipate the development of dedicated toolboxes for them in the near future.
### _Commercial Applications_
Several noisy intermediate-scale quantum computing architectures have been developed, commercialized, and used for various applications. Cloud-based compute for quantum simulation has been employed extensively in practical research, particularly for applications requiring computational resources beyond the scope of personal computing.
Common cloud-computing platforms employed in literature are outlined in Table VII, which offer high-performance simulation capabilities. IBM Quantum [250] provides the QasmSimulator and StatevectorSimulator backends through Qiskit Aer: QasmSimulator allows for multi-shot execution of circuits, while StatevectorSimulator also returns the final statevector of the simulator after application. Both simulate up to 32 qubits in both noisy and noise-free settings. Google's qsim [251] provides a full wavefunction quantum circuit simulator that leverages vectorized optimization and multi-threading, capable of simulating up to 40 qubits. Amazon Braket [252] provides on-demand state vector, tensor network and density matrix simulators, similar to IBM Quantum, with a current qubit limit of 34. Quantum hardware accessibility is offloaded to third-party providers. QuTech offer high performance cluster computing power via their Quantum Inspire platform [292], simulating up to 34 qubits and allows for inspection of the simulated quantum state. The cQASM hardware-agnostic quantum assembly language is used for constructing circuits, with a Python API also made available. Microsoft's Azure Quantum platform [290] offers three back-end simulators, from the IonQ, Quantum and Rigetti providers. IonQ provides a GPU-accelerated idealized simulator supporting up to 29 qubits, Quantum provides emulators of real physical quantum models supporting up to 32 qubits, and Rigetti provides a cloud service simulator for Quil, a quantum instruction set language, supporting up to 30 qubits. Other providers, such as Orquestra [293] and qBraid [291], also similarly offer end-to-end software development architectures using external quantum devices, with additional front-end support from various libraries such as Cirq, Braket and Qiskit.
A few quantum-inspired-related frameworks are available such as the Fujitsu [294], NEC [295] and D-Wave quantum-inspired annealing services6, with the latter being applied to a few, small-scale quantum-inspired [231] and quantum-assisted [296] ML applications. Outside of these services, dedicated QiML architectures have yet to see such production and adoption.
Footnote 6: [https://docs.ocean.dwavesys.com/en/stable/](https://docs.ocean.dwavesys.com/en/stable/)
In general, there are ample toolbox options available for TN and QVAS methods, with commercial platform-as-a-service providers supplying compute power for quantum simulation. However there is a lack of available tooling outside of these areas, which limits the scope of choices for ML practitioners in exploring QiML solutions. The gap presents ample opportunities for the development, and potential commercialization, of such frameworks, which could in turn catalyze further research in the field.
## 8 Open Issues
QiML research is subject to the numerous challenges of an emerging discipline. In the practical setting, these challenges largely revolve around how QiML can be used for a broader range of applications, and in more performant ways. We identify potential issues in furthering this goal.
1. **Speed and Performance:** A significant challenge in present QiML methods is the dichotomy between the complexity of the methods and their performance. Works have shown the performance of QiML methods in general is worse than that of contemporary classical methods. Where they do show competitive results, this is typically caveated by slower runtimes, larger model sizes or greater incurred error [124, 42, 190]. In the case of QVAS models, comparable performance is often only observed when classical models are deliberately scaled back in terms of architecture size, the number of parameters and/or number of input features. A few exceptions to this have been presented, particularly in methods designed with parameter reduction in mind [126, 122]. In general, however, there is a clear necessity for research that improves the speed and performance of QiML, striking a balance between complexity and effectiveness.
2. **Constraints on Input Data:** Dequantized algorithms and tensor network learning methods are predicated on the assumption that input data exhibits low rank characteristics: low linear algebraic rank and low bond dimension, respectively. However, such assumptions may not hold true in real-world scenarios, where data can often be high-dimensional and complex. While there are claims advocating for the broad applicability of QiML methods [48], these assertions have yet to be effectively demonstrated on large, real-world datasets. Therefore, addressing the challenge of high-dimensional data representation in quantum machine learning remains a key area for future research and development.
3. **Alternative Input and Embedding Methods:** Currently, methods for transforming classical data to be compatible with QiML methods are largely under-explored. Input modeling typically adheres
Fig. 9: QiML methods represented on a 1D spectrum, illustrating their relative positioning based on the level of quantum inspiration, as compared to purely classical and purely quantum machine learning. The positioning is not determined by a strict quantitative measure but rather reflects an informal assessment of the underlying principles and techniques. Citations given are representative works.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Toolbox** & **Mode** & **Functionality** & **Languages** & **Research Group** \\ \hline LIQ\(l_{l}|\)) (2014) [274] & QVAS & comprehensive framework for quantum programming with three built-in classes of simulators & F\# & Microsoft Research \\ NCON (2014) [275] & TN & functions that facilitate tensor network contractions, which are integral to several other tensor network toolboxes & MATLAB & Perimeter Institute for Theoretical Physics \\ TT-Toolbox (2014) [276] & TN & basic tensor arithmetic, contractions, and routines involving MPS & MATLAB, Fortran, Python & Institute of Numerical Mathematics RAS \\ Tensorly (2016) [277] & TN & tensor methods and deep tensorized neural networks via several Python backends & Python & Imperial College London \\ Qiskit (2017) [39] & QVAS & software development framework for modeling circuits, algorithms, and hardware & Python & IBM \\ Cirq (2018) [278] & QVAS & quantum programming library for NISQ hardware control and simulation & Python & Google, Open-source \\ PennyLane (2018) [40] & QVAS & open-source quantum software library for quantum machine learning tasks with support for various quantum computing platforms & Python & Xanadu AI \\ Scikit-TT (2018) [279] & TN & MPS methods for representing and solving linear systems & Python & Freie Universitat Berlin \\ HQC (2019) [192] & Other (QNMC) & probabilisitc classification with parallelization available & Python & University of Cagliari \\ TensorNetwork (2019) [280] & TN & defining and manipulating general tensor network models & Python & Alphabet (Google) X \\ TorchMPS (2019) [281] & TN & MPS modeling with DMRG support via PyTorch backend & Python & Universite de Montreal \\ Intel Quantum Simulator (2020) [282] & QVAS & quantum circuit simulator with high-performance computing capabilities & C++, Python & Intel Labs \\ PastaQ (2020) [283] & TN, QVAS & various quantum circuit simulation methods using tensor-network representations & Julia & Flatiron Institute \\ Qulacs (2020) [249] & QVAS & fast, low-scale quantum circuit simulator that provides a wide range of built-in quantum gates and operations & Python, C++ & QunaSys, Osaka University, NTT, Fujitsu \\ TensorFlow Quantum (2020) [284] & QVAS & provides tools and frameworks for building hybrid quantum-classical models & Python & Google \\ lambeq (2021) [186] & QVAS & library for end-to-end quantum NLP pipeline development & Python & Cambridge Quantum Computing \\ Qibo (2021) [285] & QVAS & builds and runs quantum circuits, supporting GPU, multi-GPU, and multi-threaded CPU. & Python & Quantum Research Center (QRC) \\ ITensor (2022) [286] & TN & tensor arithmetic, contractions, and support for MPS and MPO decompositions & C++, Julia & Flatiron Institute \\ tntorch (2022) [287] & TN & supports tensor factorizations, including CP, Tucker, and MPS, and offers autodifferentiation optimization & Python & IE University, Madrid \\ \hline \end{tabular}
\end{table} TABLE VI: QiML Toolboxes
to a few established embedding schemes, for example Equation 18 for tensor networks (a minimal sketch of such a local feature map is given after this list), or density matrix-based representations commonly seen in QiML subsets. Though these appear to work in generalized settings, authors have noted potential adaptations, such as having independent mappings per feature, or promoting the mapping space to higher dimensions [91] for TNs. Analyzing and suggesting appropriate access models for adequate comparisons between dequantized and quantum algorithms is also an emerging area of concern. Since the embedding method often depends directly on the nature of the underlying data, by investigating the effects of different embedding schemes on factors such as performance and information entropy, researchers can gain insights into optimal methods for specific data types and tasks.
4. **The Need for Comprehensive Tooling:** There is a significant need for comprehensive, user-friendly tooling in the field of QiML. Existing toolboxes, while useful, do not fully meet the needs of researchers and developers. They often lack the extensive array of tools and capabilities required to effectively develop new models, re-implement existing ones, or explore innovative research avenues, especially in the areas of tensor networks and other QiML methods. The development of more robust toolsets could greatly accelerate the pace of advancement in this promising field.
5. **Effective Quantum Formalisms in QiML:** In the current literature, several quantum formalisms that inspire QiML have demonstrated considerable success, especially ones that contribute towards building and utilizing quantum feature spaces. However, there remains a lack of systematic documentation identifying which quantum principles are adaptable to classical ML, the reasons for their success or potential, and indeed, which quantum concepts may not translate well or at all. The exploration of novel quantum mechanics adaptations within classical ML is an ongoing area of research. Consequently, the QiML field would significantly benefit from a methodical analysis delineating which approaches are effective and which are not.
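As a concrete illustration of the input modeling mentioned in item 3 above, the sketch below implements one local feature map commonly used to feed classical data into matrix product state (MPS) models; whether it matches the survey's Equation 18 exactly is not assumed, and the function names are illustrative.

```python
import numpy as np

def local_feature_map(x):
    """Map a scalar feature x in [0, 1] to a normalised 2-dimensional vector."""
    return np.array([np.cos(np.pi * x / 2.0), np.sin(np.pi * x / 2.0)])

def embed_sample(features):
    """Embed a feature vector as the product (Kronecker) state of local maps,
    the usual input format for MPS/tensor-network classifiers."""
    state = np.array([1.0])
    for x in features:
        state = np.kron(state, local_feature_map(x))
    return state  # length 2**len(features), unit norm

# Example: a 4-feature sample becomes a 16-dimensional product state.
psi = embed_sample(np.array([0.1, 0.7, 0.3, 0.9]))
print(psi.shape, np.linalg.norm(psi))
```

In practice the exponentially large vector is never formed explicitly; tensor-network code keeps the local 2-vectors separate and contracts them with the MPS cores.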
## 9 Conclusion
Quantum-inspired Machine Learning has seen a rapid expansion in recent years, diversifying into numerous research directions, such as tensor network simulations, dequantized algorithms and other such methods that draw inspiration from quantum physics. Prior surveys have alluded to QiML, often presenting it as a facet of QML or concentrating on specific QiML subsets. In contrast, this survey provides a comprehensive examination, bringing these emergent fields together under the QiML umbrella. We have explored recent work across these areas, highlighting their practical applications, offering readers an understanding of where and how QiML has been used. This insight can potentially guide readers in exploring QiML for their specific use cases. Significantly, we strive to pin down a more precise definition of QiML, addressing the issue of vague and generic descriptions prevalent in previous work. Furthermore, this review illuminates crucial open issues in QiML, particularly pertaining to its current level of practical applicability. As we move forward, we anticipate the emergence of new QiML methods. Quantum mechanics, quantum computing and classical machine learning are expansive fields with a constant influx of emerging knowledge and techniques. The untapped potential of these fields presents a vast reservoir of methods and approaches that could further enrich QiML. The continuous evolution of these fields presents a fertile ground for novel perspectives and cross-pollination, promising to stimulate the growth and diversification of QiML.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Platform** & **No. Sim. Qubits** & **Noisy/Noiseless Simulation** & **Languages** & **Integration with:** & **Quantum Hardware Accessibility** \\ \hline Amazon Braket [252] & 34 & Yes/Yes & Python & Amazon Web Services & Yes \\ \hline Google Quantum AI [251] & 40 & Yes/Yes & C++, Python & Cirq, TensorFlow Quantum & Yes \\ \hline IBM Quantum [250] & 32 & Yes/Yes & Python, Swift & Qiskit & Yes \\ \hline Microsoft Azure Quantum Cloud & 29-32 & Yes/Yes & Q\# & .NET & Yes \\ \hline qBraid [291] & 29-32 & Yes/Yes & Python & Cirq, Braket, Qiskit & Yes \\ \hline QuTech Quantum Inspire [292] & 26-34 & Yes/Yes & cQASM, Python & Qiskit & Yes \\ \hline Orquestra [293] & 29-32 & Yes/Yes & Python & Cirq, D-Wave, PennyLane, Qiskit & Yes \\ \hline \end{tabular}
\end{table} TABLE VII: Various Cloud-based Quantum Computing Platforms
## Acknowledgments
This work was supported by CSIRO's Quantum Technologies Future Science Platform. Ajmal Mian is the recipient of an Australian Research Council Future Fellowship Award (project number FT210100268) funded by the Australian Government.
|
2310.07641 | Evaluating Large Language Models at Evaluating Instruction Following | As research in large language models (LLMs) continues to accelerate,
LLM-based evaluation has emerged as a scalable and cost-effective alternative
to human evaluations for comparing the ever increasing list of models. This
paper investigates the efficacy of these ``LLM evaluators'', particularly in
using them to assess instruction following, a metric that gauges how closely
generated text adheres to the given instruction. We introduce a challenging
meta-evaluation benchmark, LLMBar, designed to test the ability of an LLM
evaluator in discerning instruction-following outputs. The authors manually
curated 419 pairs of outputs, one adhering to instructions while the other
diverging, yet may possess deceptive qualities that mislead an LLM evaluator,
e.g., a more engaging tone. Contrary to existing meta-evaluation, we discover
that different evaluators (i.e., combinations of LLMs and prompts) exhibit
distinct performance on LLMBar and even the highest-scoring ones have
substantial room for improvement. We also present a novel suite of prompting
strategies that further close the gap between LLM and human evaluators. With
LLMBar, we hope to offer more insight into LLM evaluators and foster future
research in developing better instruction-following models. | Zhiyuan Zeng, Jiatong Yu, Tianyu Gao, Yu Meng, Tanya Goyal, Danqi Chen | 2023-10-11T16:38:11Z | http://arxiv.org/abs/2310.07641v2 | # Evaluating Large Language Models at
###### Abstract
As research in large language models (LLMs) continues to accelerate, LLM-based evaluation has emerged as a scalable and cost-effective alternative to human evaluations for comparing the ever increasing list of models. This paper investigates the efficacy of these "LLM evaluators", particularly in using them to assess instruction following, a metric that gauges how closely generated text adheres to the given instruction. We introduce a challenging meta-evaluation benchmark, **LLM-Bar**, designed to test the ability of an LLM evaluator in discerning instruction-following outputs. The authors manually curated 419 pairs of outputs, one adhering to instructions while the other diverging, yet may possess deceptive qualities that mislead an LLM evaluator, _e.g._, a more engaging tone. Contrary to existing meta-evaluation, we discover that different evaluators (_i.e._, combinations of LLMs and prompts) exhibit distinct performance on LLMBar and even the highest-scoring ones have substantial room for improvement. We also present a novel suite of prompting strategies that further close the gap between LLM and human evaluators. With LLMBar, we hope to offer more insight into LLM evaluators and foster future research in developing better instruction-following models.1
Footnote 1: Our data and code are available at [https://github.com/princeton-nlp/LLMBar](https://github.com/princeton-nlp/LLMBar).
## 1 Introduction
The recent success of LLM-based chat assistants has spurred countless research efforts in both academia and industry, with new models being released at an astonishing rate. While conventional benchmarks measure the underlying ability of those models in commonsense and world knowledge (Gao et al., 2021; Srivastava et al., 2022; Hendrycks et al., 2021), human evaluation remains the gold standard for testing conversational abilities due to the open-ended nature of the task. However, this is neither scalable nor reproducible (Karpinska et al., 2021). Consequently, LLM evaluators have emerged as a cost-effective alternative for obtaining preference judgments between outputs from different models (Chiang and Lee, 2023; Dubois et al., 2023; Chen et al., 2023b).
Operationally, an LLM evaluator is a combination of a strong base LLM (OpenAI, 2022; 2023; Anthropic, 2023) and its prompting strategy (Wei et al., 2022; Zheng et al., 2023). They are usually given one instruction and corresponding outputs from two models, and asked to choose a preferred one. It remains an open question whether we can rely on those LLM evaluators and which ones to use. This highlights the need for a good _meta-evaluation benchmark_ (consisting of instructions and output pairs associated with human judgments) so that we can evaluate to what extent different LLM evaluators agree with human preferences and choose evaluators in an informed manner.
_How should we construct a good meta-evaluation benchmark?_ Prior work has primarily used randomly-sampled output pairs and crowdsourced annotators to construct meta-evaluation benchmarks to assess LLM evaluators (Dubois et al., 2023; Zheng et al., 2023; Zhang et al., 2023; Wang et al., 2024).
However, we argue this strategy overlooks one important factor: inherent subjectivity of human preferences. Consider the top example in Figure 1: despite the quality difference being indiscernible, the dataset still provides a preference label possibly reflecting a personal preference for a longer length. This issue is also demonstrated by the low agreements between human annotators reported in AlpacaFarm (66%; Dubois et al., 2023) and MT-Bench (63%; Zheng et al., 2023), against a random baseline of 50%. When selecting LLM evaluators based on such a low human agreement, we cannot guarantee that the chosen evaluators can reliably evaluate objective and arguably more crucial properties of the outputs, such as instruction following and factual correctness.
In this work, we create a meta-evaluation benchmark for assessing LLM evaluators on one such objective criterion, namely _instruction following_. We define it as the ability to correctly parse open-ended instructions and adhere to the specified requirements. This criterion relates to other desirable LLM properties, such as _helpfulness_(Askell et al., 2021). Furthermore, unlike attributes that can be easily acquired through imitation learning, such as engaging tones (Gudibande et al., 2023), even the strongest LLMs today struggle with following instructions (Wu et al., 2023; Li et al., 2023). Figure 1 (bottom) shows an example of instruction following vs. superficial quality. While the right output adheres to the instruction, both LLM evaluators and humans are often biased towards the left one due to its more engaging tone. If we do not rigorously analyze the capability of LLM evaluators to distinguish between the true ability of instruction following and superficial clues, there is a risk of advancing models that excel in mimicking effective assistants rather than executing desired tasks.
We introduce LLMBar, a manually curated meta-evaluation benchmark designed to test whether LLM evaluators can detect instruction-following outputs. LLMBar consists of 419 instances, where each entry consists of an instruction paired with two outputs: one faithfully and correctly follows the instruction and the other deviates from it. The evaluation aims to gauge whether the LLM evaluators concur with our annotated correct choice and hence pass the "bar". LLMBar departs from existing meta-evaluation (Dubois et al., 2023; Chiang and Lee, 2023; Wang et al., 2023; Zheng et al., 2023; Zhang et al., 2023) in the following aspects:
* All the instances are examined by the authors to guarantee their quality.
* LLMBar focuses exclusively on the instruction-following quality and enforces objective preferences. As a result, LLMBar has an expert annotator agreement rate of \(94\%\), significantly higher than those of previous benchmarks.
* LLMBar provides both a Natural set and an Adversarial set. The Natural set collects and filters preference data from existing benchmarks, aiming to gauge evaluator performance in real-world distributions. Conversely, the Adversarial set comprises adversarially crafted instances that tend to confound less adept evaluators.
We assess the performance of five LLMs--GPT-4 (OpenAI, 2023), ChatGPT (OpenAI, 2022), LLaMA-2-Chat (Touvron et al., 2023), PaLM2 (Anil et al., 2023), and Falcon (Almazrouei et al., 2023)--paired with various prompting strategies as evaluators. Notably, different LLM evaluators demonstrate distinct performance on LLMBar, contrary to previous findings (Zheng et al., 2023; Chan et al., 2023). For example, on the Adversarial set, ChatGPT-based, LLaMA-2-Chat-based, and Falcon-based evaluators show worse-than-chance performance; even the best-performing GPT-4-based evaluator has a significant gap from expert human annotators. Leveraging insights from
Figure 1: Comparison of instances from previous work and our proposed meta-evaluation benchmark LLMBar. LLMBar curates output pairs that have _objective_ preferences. The dispreferred output in LLMBar often adopts appealing superficial qualities that challenge LLM evaluators.
LLMBar, we propose a suite of novel prompting strategies and show that a combination of them significantly improves evaluators in detecting instruction following. Notably, the best strategy leads to a 10% boost for GPT-4-based evaluators on the Adversarial set.
LLMBar provides an objective and replicable benchmark for assessing LLM evaluators in judging instruction following. It underscores the limitations of current LLM evaluators that have been neglected by previous studies. With a better assessment of LLM evaluators, we hope to help build and select better evaluators in a quantitative manner, and foster research in instruction-following models.
## 2 LLMBar: A Meta-evaluation Benchmark
We introduce LLMBar, a meta-evaluation benchmark designed to test LLM evaluators' ability to discern instruction-following outputs. Each instance in LLMBar is a tuple \((I,O_{1},O_{2},p)\), where \(I\) is the input instruction, \(O_{1}\) and \(O_{2}\) are two corresponding outputs, and \(p\in\{1,2\}\) is the associated gold preference label indicating \(O_{p}\) is _objectively_ better than the other.
LLMBar consists of two parts: (1) The Natural set collects instances from existing human-preference datasets. We further filter and modify them to ensure that an objective preference exists for each instance. (2) In the Adversarial set, the authors create the dispreferred output such that it deviates from the instruction but often has good superficial qualities and may thus distract the evaluator. While the Natural set reflects the evaluator performance in a real-world distribution, the Adversarial set stress tests whether the LLM evaluators can truly detect instruction following. We show the statistics in Table 1 and discuss the collection process in the following.
### The Natural Set
We first randomly sample a set of instructions and corresponding output pairs \((I,O_{1},O_{2})\) from AlpacaFarm (Dubois et al., 2023)2 and LLMEval2(Zhang et al., 2023)3. As discussed previously, these candidate instances often assemble output pairs where an objective quality difference does not exist, and the human annotation merely reflects the annotators' subjective preferences. We heavily filter and modify the instances such that for all the remaining ones, there exists an objectively better output regarding instruction following. Note that despite it being named "natural", this set provides high-quality instances with objective preferences that do not exist in previous work. Appendix A.1 provides example instances in the Natural set along with the corresponding manual filtering and modification applied to ensure objectivity.
Footnote 2: The instructions \(I\) in AlpacaFarm were constructed using self-instruct (Wang et al., 2023d), while \(O_{1}\) and \(O_{2}\) are generated by an instruction-tuned LLaMA-7B (Touvron et al., 2023a).
### The Adversarial Set
The Adversarial set is specifically designed to stress test LLM evaluators with instances that tend to mislead them. All the instances are constructed by a two-step process:
1. First, we **generate challenging candidate instances**, expecting that one output \(O_{1}\) faithfully follows the instruction \(I\), and the other output \(O_{2}\) deviates from \(I\) but tends to exhibit superior superficial quality, _e.g.,_ with a more polished tone or a better format. A good evaluator should prefer \(O_{1}\) over \(O_{2}\) without being distracted by the superficial qualities.
2. Next, we perform **adversarial filtering** to retain the most difficult candidate instances. We use four ChatGPT-based evaluators from AlpacaFarm and two different presentation orders (\(O_{1},O_{2}\) and \(O_{2},O_{1}\)) to obtain eight preference labels. We filter out the candidate instance if a majority of those preferences are aligned with our expected one. This is then followed by **manual filtering and modification** by the authors to ensure objectivity and correctness.
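A minimal sketch of the majority-vote rule used in step 2 above; reading "a majority" as more than half of the eight judgements is our assumption, and the helper name is illustrative.

```python
def keep_candidate(judgements, expected_label=1):
    """judgements: eight preference labels (1 or 2) from four ChatGPT-based
    evaluators, each queried with both presentation orders and mapped back to
    the original output labelling. The candidate instance is kept only if no
    majority of the judgements recovers the expected (instruction-following)
    output, i.e. the evaluators are mostly fooled or undecided."""
    agree = sum(1 for j in judgements if j == expected_label)
    return agree <= len(judgements) // 2
```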
In the following, we describe four different strategies to collect candidate instances for step 1, which correspond to the four Adversarial subsets. We first sample instructions from three existing
\begin{table}
\begin{tabular}{l r} \hline
**Natural** & 100 \\
**Adversarial** & 319 \\ Neighbor & 134 \\ GPTInst & 92 \\ GPTOut & 47 \\ Manual & 46 \\ Total & 419 \\ \hline \end{tabular}
\end{table}
Table 1: Statistics.
instruction-tuning datasets: Alpaca (Taori et al., 2023), OpenAssistant (Kopf et al., 2023), and ShareGPT4. If not specified, \(O_{1}\) is either generated by an instruction-tuned LLaMA-7B model or taken as the reference output from the datasets. Figure 2 illustrates these different collection strategies.
Footnote 4: [https://sharegpt.com](https://sharegpt.com).
Neighbor Instructions (Neighbor). Given an instruction \(I\in\mathcal{D}\) where \(\mathcal{D}\) is its corresponding dataset, we retrieve a _closely related yet sufficiently different_ instruction \(I^{\prime}\) from the same dataset \(\mathcal{D}\),
\[I^{\prime}=\operatorname*{arg\,max}_{I^{\prime\prime}\in\mathcal{D},\text{ sim}(I,I^{\prime\prime})<\epsilon}\text{sim}(I,I^{\prime\prime}).\]
Here, \(\text{sim}(\cdot)\) is the cosine similarity measured by INSTRUCTOR (Su et al., 2023), a sentence embedding model. \(\epsilon\) is a threshold to ensure that \(I^{\prime}\) and \(I\) are semantically different enough.5 We then prompt a relatively **weaker** model with \(I\) to generate \(O_{1}\), and prompt a **stronger** model with \(I^{\prime}\) to generate \(O_{2}\).6 This gives us a candidate instance \((I,O_{1},O_{2},p=1)\). The intuition is that \(O_{2}\) potentially exhibits better superficial quality, but does not follow the target instruction \(I\). This kind of superficial superiority of \(O_{2}\) could mislead LLM evaluators into favoring it and thus make the instance potentially adversarial. See Appendix A.2 for more details.
Footnote 5: Note that if \(I\) and \(I^{\prime}\) are not semantically different enough, \(O_{2}\) may be correct for \(I\). These instances will be filtered out in the later stage of manual filtering and modification.
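A minimal sketch of the retrieval step above, assuming instruction embeddings have already been computed (e.g., with INSTRUCTOR); the threshold value and function names are illustrative, not the exact settings of Appendix A.2.

```python
import numpy as np

def find_neighbor(target_emb, pool_embs, epsilon=0.9):
    """Return the index of the pool instruction most similar to the target
    while keeping cosine similarity strictly below epsilon, so that the
    retrieved I' is closely related to I yet semantically different enough."""
    t = target_emb / np.linalg.norm(target_emb)
    P = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    sims = P @ t                                     # cosine similarities
    sims = np.where(sims < epsilon, sims, -np.inf)   # drop near-duplicates of I
    best = int(np.argmax(sims))
    return best, float(sims[best])
```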
GPT-4 Unhelpful Outputs (GPTOut). In this subset, we directly prompt GPT-4 to produce a superficially good but unhelpful or incorrect output \(O_{2}\) given instruction \(I\). This is a challenging task even for GPT-4. In most cases, \(O_{2}\) produced by GPT-4 is either correct or obviously incorrect (thereby not adversarial). Nonetheless, we are still able to obtain a high-quality subset of instances after adversarial filtering and manual inspection. See Appendix A.4 for more details. A potential limitation of this subset is that since the adversarial outputs are created by GPT-4, GPT-4-based evaluators may have an unfair advantage when they are assessed on this subset. We leave an in-depth analysis of this matter for future work.
Manual Construction (Manual). In addition to the aforementioned automatic processes of generating candidate instances, we take inspiration from the previous three subsets and manually con
Figure 2: Illustration of the Adversarial set collection process (except the Manual subset). Given an instruction \(I\) and a preferred output \(O_{1}\), we either collect a closely related but different enough instruction \(I^{\prime}\) and generate dispreferred (adversarial) output \(O_{2}\) (in Neighbor and GPTInst), or directly construct an output \(O_{2}\) (in GPTOut). We often use _weaker models_ to generate \(O_{1}\) and _stronger models_ to generate \(O_{2}\) such that \(O_{2}\) is more superficially appealing.
struct instances that are adversarially challenging to LLM evaluators to further increase the quantity and diversity of our Adversarial set. Appendix A.5 gives example instances in this subset.
## 3 Prompting Strategies for LLM evaluators
In this section, we present a collection of prompting strategies for LLM evaluators examined on LLMBar. While the capacity of base LLMs largely determines how accurate the evaluator is, we find that different prompting strategies also play a significant role.
We first examine existing prompting strategies, followed by a suite of novel prompting strategies--**Rules, Metrics**, and **Swap** (see Figure 3)--proposed by this work.
**Vanilla.** We instruct the LLM to select the better output, followed by the instruction \(I\) and the two outputs \(O_{1}\) and \(O_{2}\). The LLM is asked to simply output its preference without any explanation. We prompt the LLM in a zero-shot manner by default.8
Footnote 8: We also experiment with few-shot in-context learning in Appendix E and there is no significant difference.
**Chain-of-Thoughts (CoT; Wei et al., 2022).** Instead of generating labels only, we instruct the LLM to first generate a concise reasoning, prior to generating its preference between the two outputs.
**Self-Generated Reference (Reference; Zheng et al., 2023).** We first prompt the LLM evaluator to generate an output given the instruction. The generated output is then passed to the LLM evaluator as a reference when making the comparison.
**ChatEval.** We experiment with ChatEval (Chan et al., 2023), where multiple LLM evaluators, personalized by different role prompts, engage in a discussion about the preference. All the evaluators take turns to give their final preference given the context of their discussions.
**Rules.** In the prompt, we explicitly list some general rules for LLM evaluators to follow when making the comparison, for example, "prioritize evaluating whether the output honestly executes the instruction". We find that **Rules** improves the evaluator's accuracy almost universally and is easy to apply on top of any other prompting strategies. In the following text and tables, we mark prompting methods that use **Rules** with *. For example, **Reference*** indicates **Rules+Reference**.
**Self-Generated Metrics (Metrics).** Intuitively, LLM evaluators could benefit from some metrics that specify what constitutes a good output given this specific instruction. To do so, we first prompt the LLM to generate a set of instruction-specific metrics that a good output should adhere to. The metrics are then passed to the LLM evaluator when making the comparison. It encourages the LLM
Figure 3: Illustration of our proposed prompting strategies **Rules, Metrics**, and **Swap**. Each block represents one generation step of the LLM, along with intermediate outputs used to obtain final evaluation. For the last step of **Swap**, intermediate generations are updated to reflect a consistent ordering of the pairwise outputs.
evaluators to focus on specific aspects of instruction following. Naturally, we can combine this strategy with **Self-Generated Reference** (**Metrics+Reference**).
**Swap and Synthesize (Swap).** Existing work finds that many LLM evaluators exhibit strong positional bias (Wang et al., 2023b). When the position of two outputs is swapped, the evaluator often generates contradictory preferences. Inspired by Du et al., 2023, we first prompt the LLM evaluator to give its preference using **CoT** with orders \(O_{1},O_{2}\) and \(O_{2},O_{1}\). Then we instruct the evaluator to make its final decision by synthesizing the two CoTs if evaluators generate contradictory preferences. We also adopt the **CoT** version of this strategy (**Swap+CoT**), where the LLM evaluator is asked to use **CoT** when synthesizing.
The exact prompt for each strategy, more details, and some examples can be found in Appendix B.
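To make the control flow of **Swap** concrete, here is a sketch of the two-pass procedure, assuming hypothetical `judge` and `synthesize` helpers that wrap the LLM calls with the CoT and synthesis prompts of Appendix B (their signatures are our own illustrative choice).

```python
def swap_and_synthesize(judge, synthesize, instruction, out1, out2):
    """judge(instruction, first, second) -> (cot, pref), pref in {"first", "second"};
    synthesize(instruction, out1, out2, cot_a, cot_b) -> final label in {1, 2}."""
    cot_a, pref_a = judge(instruction, out1, out2)   # presentation order O1, O2
    cot_b, pref_b = judge(instruction, out2, out1)   # presentation order O2, O1
    # Map both judgements back to the original labelling of the outputs.
    label_a = 1 if pref_a == "first" else 2
    label_b = 2 if pref_b == "first" else 1
    if label_a == label_b:       # consistent across orders: return directly
        return label_a
    # Contradictory preferences: ask the model to reconcile the two CoTs.
    return synthesize(instruction, out1, out2, cot_a, cot_b)
```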
## 4 Experiments
In this section, we conduct comprehensive experiments and evaluate different LLM evaluators on LLMBar to answer the following research questions: (1) How do different LLMs and prompting strategies affect the evaluator performance on LLMBar? (2) How is LLMBar different from other meta-evaluation datasets used to assess LLM evaluators?
### Experimental Setup
We employ both proprietary and open-source LLMs as base models. To enhance reproducibility, we set the temperature to 0 for proprietary models, and utilize greedy decoding for open-source models.
**Proprietary models.** We adopt GPT-4 (OpenAI, 2023) and ChatGPT (OpenAI, 2022), two representative proprietary instruction-tuned LLMs that are commonly used as LLM evaluators (Dubois et al., 2023; Rafailov et al., 2023; Chen et al., 2023a; Li et al., 2023c, _etc_). Note that even though GPT-4 is believed to be much stronger, it is 30\(\times\) more expensive than ChatGPT, making ChatGPT appealing for researchers with limited budgets. We also experiment with PaLM2 (Anil et al., 2023).
**Open-source models.** Using proprietary API LLMs as evaluators presents many challenges. The API usage may incur high costs and delays and may pose privacy concerns. Thus, employing open-source LLMs as evaluators can be a promising substitute (Zheng et al., 2023; Wang et al., 2023c). We experiment with two state-of-the-art open-source instruction-tuned models: LLaMA-2-70B-Chat (Touvron et al., 2023b) and Falcon-180B-Chat (Almazrouei et al., 2023).
### Human Agreement on LLMBar
We sample 80 instances randomly from LLMBar and assign each instance to two paper authors (as expert human annotators).9 We ask them to select the output that better follows the given instruction. The agreement rate between expert annotators on the sampled LLMBar set is **94%**. The human agreement rate is 90% and 95% on the Natural and the Adversarial sets, respectively10. As a reference, FairEval (Wang et al., 2023b) has an average human annotation accuracy of 71.7%; LLMEval2(Zhang et al., 2023) has a human agreement of 80%; MT-Bench (Zheng et al., 2023) reports a human agreement rate of 63%. This suggests that LLMBar instances reflect objective human preferences on instruction following and achieve high human agreement among expert annotators.
Footnote 9: Authors who manually curate LLMBar are NOT involved in the experiment as they know the gold labels.
Footnote 10: The agreement rate is 18/20 and 57/60 on (sampled) Natural and Adversarial instances respectively.
### LLM Evaluator Performance on LLMBar
We evaluate different evaluators (combinations of LLMs and prompting strategies) on LLMBar. For each output pair, we query the evaluator twice with swapped orders. We then report _average accuracy_ (Acc.) and _positional agreement rate_ (Agr.). _Positional agreement rate_ (Agr.) refers to the percentage of instances with consistent preference labels before and after swapping the presentation orders of the two outputs. Average accuracies of eight representative LLM evaluators11 are shown
in Figure 4. Detailed results of GPT-4, ChatGPT12, LLaMA-2-70B-Chat (LLaMA2), PaLM213, and Falcon-180B-Chat (Falcon) are reported in Table 2, Table 5, Table 7, Table 8, and Table 9. We also try rating-based evaluation (instead of comparison-based) and show the results in Appendix D.
Footnote 12: By default, we use gpt-4-0613 and gpt-3.5-turbo-0613 for GPT-4 and ChatGPT respectively. We also report results of ChatGPT-0301-based evaluators (using gpt-3.5-turbo-0301) in Table 6.
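A short sketch of how the two reported quantities can be computed from the two queries per instance, assuming the swapped-order preferences have already been mapped back to the original output labelling (variable names are illustrative).

```python
def accuracy_and_agreement(prefs_orig, prefs_swapped, gold):
    """prefs_orig[i], prefs_swapped[i] in {1, 2}: the evaluator's choice for
    instance i under the two presentation orders (in the original labelling);
    gold[i] is the label of the objectively better output."""
    n = len(gold)
    hits = sum(p == g for p, g in zip(prefs_orig, gold)) \
         + sum(p == g for p, g in zip(prefs_swapped, gold))
    acc = hits / (2 * n)                                              # Acc.
    agr = sum(a == b for a, b in zip(prefs_orig, prefs_swapped)) / n  # Agr.
    return acc, agr
```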
LLM evaluators significantly underperform humans on LLMBar. As shown in Figure 4 and the result tables, all LLM evaluators struggle on the LLMBar Adversarial subsets. When using ChatGPT, LLaMA2, and Falcon as the base model, LLM evaluators can barely achieve above-chance performance on the Adversarial set. PaLM2-based and GPT-4-based evaluators show much higher accuracy on Adversarial, yet even the best-performing GPT-4-based evaluator achieves an average accuracy of \(82.8\%\) on Adversarial, more than \(10\%\) lower than the human expert agreement rate (\(95\%\)). The evaluator performance gap is relatively smaller on the Natural set, though weaker LLMs still lag behind GPT-4 and humans by a significant margin.
Our proposed prompting strategies significantly improve the evaluators' performance. Figure 4 demonstrates that a combination of **Rules**, **Metrics**, and **Reference** (**Metrics+Reference*** in the table) consistently improves evaluator performance across all LLMs for both Natural and Adversarial sets. Looking at individual prompting strategies, each of **Rules**, **Metrics**, and **Reference** significantly improves LLM evaluators on the Adversarial set and combining them leads to the overall highest accuracy. Contrary to common beliefs, **CoT*** falls short in enhancing LLM evaluators on Adversarial. We observe that the produced reasoning often exhibits stronger biases towards outputs with superior superficial quality and thus hurts the performance. Based on **CoT***, **Swap*** and **Swap+CoT*** significantly improve the positional agreement rate, without negatively affecting the average accuracy, and in some cases, slightly improving it.
\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Strategy**} & \multicolumn{2}{c}{**Natural**} & \multicolumn{2}{c}{Neighbor} & \multicolumn{2}{c}{GPTInst} & \multicolumn{2}{c}{GPTOut} & \multicolumn{2}{c}{Manual} & \multicolumn{2}{c}{**Adversarial Avg.**} & \multicolumn{2}{c}{**Average**} \\ & Acc. & Agr. & Acc. & Agr. & Acc. & Agr. & Acc. & Agr. & Acc. & Agr. & Acc. & Agr. & Acc. & Agr. \\ \hline
**Vanilla** & 93.5 & 97.0 & 64.2 & 89.6 & 76.6 & 90.2 & 76.6 & 87.2 & 75.0 & 89.1 & 73.1 & 89.0 & 77.2 & 90.6 \\
**Vanilla*** & 95.5 & 95.0 & 78.7 & 93.3 & 86.4 & 94.6 & 77.7 & 93.6 & 80.4 & 82.6 & 80.8 & 91.0 & 83.7 & 91.8 \\
**CoT*** & 94.5 & 91.0 & 75.0 & 90.3 & 83.2 & 90.2 & 74.5 & 87.2 & 73.9 & 82.6 & 76.6 & 87.6 & 80.2 & 88.3 \\
**Swap*** & 94.5 & 97.0 & 77.6 & 97.0 & 88.0 & 95.7 & 73.4 & 97.9 & 81.5 & 93.5 & 80.1 & 96.0 & 83.0 & 96.2 \\
**Swap***+CoT*** & 94.0 & 100.0 & 78.7 & 99.3 & 85.3 & 96.7 & **79.8** & 97.7 & 72.9 & 93.5 & 80.3 & 96.8 & 83.0 & 97.5 \\
**ChatEval*** & 91.5 & 95.0 & 82.5 & 85.8 & 88.0 & 87.0 & 68.1 & 78.7 & 77.2 & 80.4 & 78.9 & 83.0 & 81.5 & 85.4 \\
**Metrics*** & 93.0 & 94.0 & 83.2 & 93.3 & **89.7** & 90.2 & 73.4 & 89.4 & 81.5 & 80.4 & 82.0 & 88.3 & 84.2 & 89.5 \\
**Reference*** & 95.5 & 97.0 & 80.6 & 89.7 & 87.5 & 90.2 & 77.7 & 85.1 & **84.8** & 87.0 & 82.6 & 88.0 & 85.2 & 89.8 \\
**Metrics+Reference*** & **96.0** & 96.0 & **85.4** & 94.8 & **89.7** & 90.2 & 72.3 & 83.0 & 83.7 & 84.8 & **82.8** & 88.2 & **85.4** & 89.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of GPT-4-based evaluators on LLMBar. * indicates the incorporation of **Rules** into the prompting strategy. The highest average accuracy is marked by **bold** and the highest positional agreement rate is marked by underline. Random guess would achieve a 50% accuracy.
Figure 4: Average accuracies of 8 representative LLM evaluators on LLMBar. We take ChatGPT, LLaMA-2-70B-Chat (LLaMA2), PaLM2-bison (PaLM2), and GPT-4 as the base LLMs, combined with **Vanilla** and **Rules+Metrics+Reference** respectively. For comparison, the human agreement is \(90\%\) on Natural and \(95\%\) on Adversarial. Note that the Adversarial set is constructed via adversarial filtering against ChatGPT, which poses more challenges for ChatGPT-based evaluators.
### Comparison to Other Meta-Evaluations of LLM evaluators
We compare LLMBar to existing meta-evaluation benchmarks for LLM evaluators and investigate whether they show different trends from ours. Figure 5 illustrates the average accuracies of **Vanilla** and **Metrics+Reference*** evaluators on FairEval (Wang et al., 2023b), LLMEval2(Zhang et al., 2023), MT-Bench (Zheng et al., 2023), and the average result across our Adversarial set.14
Footnote 14: We remove LLMEval\({}^{2}\) instances whose instructions are empty or non-English and add the task description before the raw input to get the instruction. For MT-Bench, we get the gold preferences by majority vote. We remove all “TIE” instances and randomly sample 200 instances for LLMEval2 and MT-Bench respectively.
**We observe that LLMBar demonstrates a drastically different pattern of LLM evaluator performance from existing benchmarks.** While different LLMs and prompting strategies perform similarly on the other datasets, LLMBar shows a clear gap between weaker and stronger LLMs, and between vanilla and improved prompts. This supports LLMBar as a better evaluation of the capability of LLM evaluators in discerning instruction following, and a better benchmark for LLM evaluator selection.
### Reward Model Performance on LLMBar
LLMBar can also be used for evaluating reward models (RMs), a critical component in _reinforcement learning from human feedback_ (RLHF; Christiano et al., 2017; Ouyang et al., 2022); RMs are trained on pairwise preference data to rate model outputs. We evaluate the two RMs from AlpacaFarm15 on LLMBar, reward-model-sim and reward-model-human, trained on preference data annotated by LLMs and humans respectively. Table 3 shows that those two RMs fall significantly short on our benchmark, even on Natural, suggesting that current reward models struggle to identify instruction-following outputs, a finding in line with Shen et al. (2023); Singhal et al. (2023).
Footnote 15: We download parameters of the two RMs from [https://github.com/tatsu-lab/alpaca_farm#downloading-pre-tuned-alpacafarm-models](https://github.com/tatsu-lab/alpaca_farm#downloading-pre-tuned-alpacafarm-models).
### Case Study: A More Challenging Meta-Evaluation Set
In the previous subsections, we showed that most evaluators struggle with LLMBar, but the powerful GPT-4-based evaluators achieve reasonable scores. Are there more challenging tasks that even the most powerful LLM, equipped with advanced prompts, may fail on? In this case study, we explore some more adversarial and synthetic scenarios for meta-evaluation: (1) The Constraint subset, where instructions impose combinatorial lexical constraints on outputs; (2) The Negation subset, where instructions intentionally request unhelpful outputs; (3) The Base-9 and Base-10
Figure 5: Average accuracies of 8 representative LLM evaluators on FairEval, LLMEval\({}^{2}\), MT-Bench, and our Adversarial set. Note that these datasets do not ensure the objective correctness of the preferences, so the accuracies on them do not reliably reflect the evaluators’ capabilities.
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline & **Natural** & \multicolumn{5}{c}{**Adversarial**} & **Average** \\ \cline{3-7}
**RM Evaluator** & & Neighbor & GPTInst & GPTOut & Manual & Average & \\ \hline reward-model-sim & 68.0 & 17.9 & 20.7 & 59.6 & 26.1 & 31.1 & 38.4 \\ reward-model-human & 70.0 & 38.1 & 30.4 & 51.1 & 32.6 & 38.0 & 44.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of AlpacaFarm reward models on LLMBar (accuracy).
subsets, which involve two-digit addition problems in base-9 and base-10, with the former being known as a counterfactual task (Wu et al., 2023b) that deviates from standard assumptions. We evaluate representative prompting strategies on these subsets in Table 13. Overall, we find that evaluating instances with these special instructions is challenging, and our enhanced strategies also improve performance. Further details are available in Appendix F.
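For illustration, the ground truth for the Base-9 subset can be generated as below (the helper names are ours; the exact instance format is described in Appendix F).

```python
def to_base9(n: int) -> str:
    digits = []
    while n:
        n, r = divmod(n, 9)
        digits.append(str(r))
    return ''.join(reversed(digits)) or '0'

def add_base9(a: str, b: str) -> str:
    """Two-digit addition in base 9, e.g. add_base9('75', '38') == '124'."""
    return to_base9(int(a, 9) + int(b, 9))
```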
## 5 Related Work
The rapid development of open-ended instruction tuning algorithms (Ouyang et al., 2022; Liu et al., 2023a; Rafailov et al., 2023) and models (OpenAI, 2022; Taori et al., 2023; Chiang et al., 2023; Kopf et al., 2023; Touvron et al., 2023b) calls for scalable and cost-effective evaluation methods. Many studies suggest employing LLMs as evaluators for traditional natural language generation tasks, such as summarization, machine translation, and story generation (Chiang and Lee, 2023; Fu et al., 2023; Wang et al., 2023a; Kocmi and Federmann, 2023; Chen et al., 2023b; Liu et al., 2023b), which has been demonstrated to score higher correlations with humans than using conventional reference-based evaluation, _e.g._, BLEU (Papineni et al., 2002). In the context of instruction tuning, to replace the costly and unproducible human evaluation (Ouyang et al., 2022; Liu et al., 2023a; Zhao et al., 2023; Wu et al., 2023a), many recent works take prompted LLMs as evaluators to compare model outputs (Chiang et al., 2023; Peng et al., 2023; Dubois et al., 2023; Zhou et al., 2023; Rafailov et al., 2023; Wang et al., 2023c; Xu et al., 2023; Song et al., 2023; Chen et al., 2023a; Li et al., 2023c, _etc_), or to replace humans for preference data collection (Bai et al., 2022; Lee et al., 2023).
Even though the LLM-as-evaluator paradigm emerged as a promising evaluation method for prototype development, it is found to suffer from a lot of biases and limitations, such as sensitivity to presentation orders (Wang et al., 2023b; Pezeshkpour and Hruschka, 2023), favoring verbose outputs, and favoring outputs from similar models (Zheng et al., 2023). Therefore, several works introduce meta-evaluation benchmarks, including FairEval (Wang et al., 2023b), MT-Bench (Zheng et al., 2023), and LLMEval2 (Zhang et al., 2023), to examine whether LLM evaluators have high agreement with humans. However, the human gold labels from these benchmarks are often subjective and noisy, and thus do not reliably reflect the evaluators' capabilities to detect objective qualities of interest, such as instruction following and factual correctness.
Knowing the limitations of LLM evaluations, recent works explore improving them with better prompting strategies. Wang et al. (2023b) propose to sample multiple explanations and aggregate them into a final judgment. Zheng et al. (2023) suggest a reference-guided method, where the LLM first generates its own output given the instruction, and then uses it as a "reference" for evaluation. Li et al. (2023a); Zhang et al. (2023); Chan et al. (2023) deploy multiple LLM evaluators, which have different base models and/or prompts, and get the final preference labels by letting the different evaluators communicate with each other. Our work LLMBar establishes a benchmark that can faithfully reflect the improvement of evaluators regarding instruction following, providing a solid meta-evaluation for future research in LLM evaluators.
## 6 Conclusion
In this work, we propose LLMBar, a challenging meta-evaluation set to examine whether LLM evaluators can faithfully judge instruction-following outputs. Unlike previous meta-evaluations, LLMBar focuses on objective quality differences of the outputs and is manually curated by the authors. Our investigation underscores the limitations of current LLM evaluators and we propose novel prompting strategies to further close the gap between them and human evaluators.
While we focus on instruction following, there are other important qualities of instruction-tuned models that we should care about, for example, factual correctness and being non-toxic. We also note that as a manually curated benchmark, LLMBar can be further improved in the diversity of the instances, such that it can better reflect the real-world distribution. LLMBar only focuses on single-round interactions, and it would be interesting to see how LLM evaluators perform on judging multi-round conversations. We leave the exploration in those aspects to future work.
## Acknowledgments
We thank Alexander Wettig, Carlos Jimenez, Jun Yan, Mengzhou Xia, Yuxuan Tong, Zhengyan Zhang, and members from the Princeton NLP group for providing helpful feedback. Tianyu Gao is supported by an IBM PhD Fellowship, and Yu Meng is supported by a Google PhD Fellowship. This research is also supported by Microsoft Azure credits through the "Accelerate Foundation Models Academic Research" Initiative.
|
2308.12760 | Network-Device-Independent Certification of Causal Nonseparability | Causal nonseparability is the property underlying quantum processes
incompatible with a definite causal order. So far it has remained a central
open question as to whether any process with a clear physical realisation can
violate a causal inequality, so that its causal nonseparability can be
certified in a device-independent way, as originally conceived. Here we present
a method solely based on the observed correlations, which certifies the causal
nonseparability of all the processes that can induce a causally nonseparable
distributed measurement in a scenario with trusted quantum input states, as
defined in [Dourdent et al., Phys. Rev. Lett. 129, 090402 (2022)]. This notably
includes the celebrated quantum switch. This device-independent certification
is achieved by introducing a network of untrusted operations, allowing one to
self-test the quantum inputs on which the effective distributed measurement
induced by the process is performed. | Hippolyte Dourdent, Alastair A. Abbott, Ivan Šupić, Cyril Branciard | 2023-08-24T13:08:00Z | http://arxiv.org/abs/2308.12760v1 | # Network-Device-Independent Certification of Causal Nonseparability
###### Abstract
Causal nonseparability is the property underlying quantum processes incompatible with a definite causal order. So far it has remained a central open question as to whether any process with a clear physical realisation can violate a causal inequality, so that its causal nonseparability can be certified in a device-independent way, as originally conceived. Here we present a method solely based on the observed correlations, which certifies the causal nonseparability of all the processes that can induce a causally nonseparable distributed measurement in a scenario with trusted quantum input states, as defined in [Dourdent _et al._, Phys. Rev. Lett. **129**, 090402 (2022)]. This notably includes the celebrated quantum switch. This device-independent certification is achieved by introducing a network of untrusted operations, allowing one to self-test the quantum inputs on which the effective distributed measurement induced by the process is performed.
_Introduction.--_For both quantum and classical processes, it is usually assumed that if two parties interact only once with a physical medium, then only one-way influences are possible. Remarkably, this turns out to be an unnecessary restriction on the possible observable correlations. Formalised in the process matrix framework, where quantum theory is taken to hold locally but no global causal structure is assumed, some processes allow parties to establish correlations incompatible with a definite causal order [1].
Causal nonseparability, the property underlying such processes [1; 2], bears a foundational significance and is also the basis for advantages in a number of different tasks [3; 4; 5; 6; 7; 8]. It can be certified in a _device-independent_ (DI) way, i.e. requiring knowledge of the observed statistics only, based on the violation of "causal inequalities" by noncausal correlations [9; 1]. However, not all causally nonseparable process matrices are noncausal in this strong sense [10; 11; 12]. Indeed, a large class of quantum-realisable processes cannot violate causal inequalities [13; 14], and it is still unclear whether any process with a clear physical realisation is able to. This includes in particular the canonical "quantum switch" [3], the resource behind most known advantages arising from causal indefiniteness [3; 4; 5; 6; 7; 8].
In a previous work [15], we showed (with another colleague) that the quantum switch, as well as a large class of bipartite causally nonseparable processes (and even all of them under a reasonable additional assumption), can display a new form of noncausality in a semi-DI with quantum inputs (SDI-QI) scenario. In the present work, we show how the gap between this SDI-QI certification and a fully DI certification can be bridged by self-testing the quantum inputs in a network scenario, where each party that receives a quantum input in the SDI-QI scenario shares an additional uncharacterised bipartite state with an extra untrusted party. We call this certification _network-device-independent (NDI)_ to differentiate it from the standard DI scenario based on causal inequalities. This approach is motivated by the ability to exploit self-testing to transform a universal certification of entanglement in a SDI-QI scenario into a NDI certification [16; 17]. The apparent analogy between entanglement and causal nonseparability, however, was already seen to have severe limitations in the SDI-QI scenario [15], making it unclear whether NDI certification could be applied to this latter resource.
_Causal (non)separability in the SDI-QI scenario.--_The approach we present is a general method for transforming SDI-QI certification of causal nonseparability into NDI certification, applicable to a range of scenarios. For concreteness and clarity, however, we focus our initial presentation here on the bipartite scenario, before making some comments about its applicability to other scenarios and processes later.
Two parties, Alice and Bob, control separate quantum labs with input and output Hilbert spaces \(\mathcal{H}^{A_{I}}\) and \(\mathcal{H}^{A_{O}}\) for Alice, and \(\mathcal{H}^{B_{I}}\) and \(\mathcal{H}^{B_{O}}\) for Bob. They may also receive some auxiliary quantum states in Hilbert spaces \(\mathcal{H}^{A^{\prime}},\mathcal{H}^{B^{\prime}}\).1 Let us consider the SDI-QI scenario introduced in [15] where Alice and Bob are provided with quantum input states \(\rho_{z}^{A^{\prime}}\in\mathcal{L}(\mathcal{H}^{A^{\prime}})\) and \(\rho_{w}^{B^{\prime}}\in\mathcal{L}(\mathcal{H}^{B^{\prime}})\), respectively, indexed by the labels \(z\) and \(w\). They each perform some fixed quantum operations described
as quantum instruments [18], i.e., sets of completely positive (CP) maps \(\mathcal{M}_{a}:\mathcal{L}(\mathcal{H}^{A^{\prime}A_{I}})\to\mathcal{L}( \mathcal{H}^{A_{O}})\) and \(\mathcal{M}_{b}:\mathcal{L}(\mathcal{H}^{B^{\prime}B_{I}})\to\mathcal{L}( \mathcal{H}^{B_{O}})\), whose indices \(a,b\) refer to some (classical) outcomes for Alice and Bob, and whose sums \(\sum_{a}\mathcal{M}_{a}\) and \(\sum_{b}\mathcal{M}_{b}\) are trace-preserving (TP). Using the Choi isomorphism [19] (see the Supplemental Material (SM) of [15], Sec. A), the CP maps \(\mathcal{M}_{a}\), \(\mathcal{M}_{b}\) can be represented as positive semidefinite matrices \(M_{a}^{A^{\prime}A}\) and \(M_{b}^{B^{\prime}B}\).
Within the process matrix framework, the correlations established by Alice and Bob in such a scenario with quantum inputs are given by the probabilities
\[P(a,b|\rho_{z}^{A^{\prime}},\rho_{w}^{B^{\prime}})=\left(M_{a}^{ A^{\prime}A}\otimes M_{b}^{B^{\prime}B}\right)*\left(\rho_{z}^{A^{\prime}} \otimes\rho_{w}^{B^{\prime}}\otimes W^{AB}\right)\] \[\quad=E_{a,b}^{A^{\prime}B^{\prime}}*\left(\rho_{z}^{A^{\prime}} \otimes\rho_{w}^{B^{\prime}}\right)=\mathrm{Tr}\left[\left(E_{a,b}^{A^{ \prime}B^{\prime}}\right)^{T}\left(\rho_{z}^{A^{\prime}}\otimes\rho_{w}^{B^{ \prime}}\right)\right] \tag{1}\]
with
\[E_{a,b}^{A^{\prime}B^{\prime}}=\left(M_{a}^{A^{\prime}A}\otimes M_{b}^{B^{ \prime}B}\right)*W^{AB}. \tag{2}\]
Here, \(W^{AB}\in\mathcal{L}(\mathcal{H}^{AB})\) is the so-called "process matrix", a positive semidefinite matrix satisfying nontrivial linear constraints in order to always generate valid probabilities [1] (see e.g. [15], SM, Sec. B); and \(*\) is the "link product" [20; 21], a convenient tool for calculations defined for any matrices \(M^{XY}\in\mathcal{L}(\mathcal{H}^{XY})\), \(N^{YZ}\in\mathcal{L}(\mathcal{H}^{YZ})\) as \(M^{XY}*N^{YZ}=\mathrm{Tr}_{Y}[(M^{XY}\otimes\mathbb{1}^{Z})^{T_{Y}}(\mathbb{1}^{X}\otimes N^{YZ})]\in\mathcal{L}(\mathcal{H}^{XZ})\) (where \(T_{Y}\) is the partial transpose over \(\mathcal{H}^{Y}\); see also [15], SM, Sec. A).2
Footnote 2: A full trace \(\mathrm{Tr}[(M^{Y})^{T}N^{Y}]\) and a tensor product \(M^{X}\otimes N^{Z}\) can both be written as a link product. Moreover, the link product is commutative and associative.
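Since the probability rule (1) and the D-POVM elements (2) are all expressed through link products, a small dense-matrix sketch may help fix the index conventions; the dimensions, variable names and the purely numerical sanity check below are illustrative rather than taken from [15].

```python
import numpy as np

def link_product(M, N, dX, dY, dZ):
    """M acts on H^X tensor H^Y and N on H^Y tensor H^Z (indices ordered (x, y)
    and (y, z)). Returns M * N = Tr_Y[(M x 1_Z)^{T_Y} (1_X x N)] on H^X tensor H^Z."""
    Mt = M.reshape(dX, dY, dX, dY)   # indices (x, y ; x', y')
    Nt = N.reshape(dY, dZ, dY, dZ)   # indices (y, z ; y', z')
    # The partial transpose over Y followed by the partial trace amounts to
    # summing M[x, y', x'', y] * N[y', z, y, z''] over y and y'.
    out = np.einsum('abcd,bedf->aecf', Mt, Nt)
    return out.reshape(dX * dZ, dX * dZ)

# Sanity check: with trivial X and Z the link product reduces to the full
# trace Tr[M^T N] (with trivial Y it reduces to the tensor product M x N).
dY = 3
M = np.random.rand(dY, dY) + 1j * np.random.rand(dY, dY)
N = np.random.rand(dY, dY) + 1j * np.random.rand(dY, dY)
assert np.isclose(link_product(M, N, 1, dY, 1).item(), np.trace(M.T @ N))
```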
The process matrix formalism makes no _a priori_ assumption of a global causal structure relating Alice and Bob. In fact, assuming such a structure imposes further constraints, due to the inability for a party to signal to the causal past. Process matrices compatible, for example, with Alice acting before Bob (denoted \(A\prec B\)) are of the form \(W^{A\prec B}=W^{A\prec B_{I}}\otimes\mathbb{1}^{B_{O}}\), and similarly \(W^{B\prec A}=W^{B\prec A_{I}}\otimes\mathbb{1}^{A_{O}}\) for Bob before Alice (\(B\prec A\)), with \(W^{A\prec B_{I}}\in\mathcal{L}(\mathcal{H}^{AB_{I}})\) and \(W^{B\prec A_{I}}\in\mathcal{L}(\mathcal{H}^{A_{I}B})\) being themselves valid process matrices [1]. Process matrices that can be written as a convex mixture of matrices compatible with \(A\prec B\) and \(B\prec A\), i.e., of the form
\[W^{AB}=q\,W^{A\prec B_{I}}\otimes\mathbb{1}^{B_{O}}+(1\!-\!q)\,W^{B\prec A_{I} }\otimes\mathbb{1}^{A_{O}}, \tag{3}\]
with \(q\in[0,1]\), are said to be "causally separable". They can be interpreted as being compatible with a definite (although perhaps probabilistic) causal order. Remarkably, there exist "causally nonseparable" process matrices that cannot be decomposed as in Eq. (3), and are thus incompatible with any definite causal order [1].
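As a concrete numerical check of the product form \(W^{A\prec B}=W^{A\prec B_{I}}\otimes\mathbb{1}^{B_{O}}\), one can test whether discarding Bob's output space leaves the process matrix unchanged. The sketch below assumes the tensor factors are ordered \(A_{I}\otimes A_{O}\otimes B_{I}\otimes B_{O}\); it only tests the product form, while full compatibility with \(A\prec B\) additionally requires the reduced matrix to be a valid process matrix [1].

```python
import numpy as np

def has_A_before_B_form(W, dims, tol=1e-9):
    """dims = (dAI, dAO, dBI, dBO); W is a matrix on A_I x A_O x B_I x B_O.
    Checks W == (Tr_{B_O}[W] / d_{B_O}) tensor 1_{B_O}."""
    dAI, dAO, dBI, dBO = dims
    d = dAI * dAO * dBI
    Wt = W.reshape(d, dBO, d, dBO)
    reduced = np.einsum('ibjb->ij', Wt) / dBO        # Tr_{B_O}[W] / d_{B_O}
    return np.allclose(np.kron(reduced, np.eye(dBO)), W, atol=tol)
```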
Causal nonseparability can always be certified in a device-dependent manner using a "causal witness" [10; 24], while only some processes can be certified in a DI way through the violation of a causal inequality [1; 10; 11; 12]. Here, we start by considering a relaxation of this DI scenario, by providing the parties with quantum inputs instead of classical ones. In [15], we showed that causally separable processes necessarily generate D-POVMs that are themselves causally separable in the sense that they can be written
\[\mathbb{E}^{A^{\prime}B^{\prime}}=q\,\mathbb{E}^{A^{\prime}\prec B^{\prime}}+(1 \!-\!q)\,\mathbb{E}^{B^{\prime}\prec A^{\prime}} \tag{4}\]
with \(q\in[0,1]\), where \(\mathbb{E}^{A^{\prime}\prec B^{\prime}}=(E_{a,b}^{A^{\prime}\prec B^{\prime}})_{a,b}\) and \(\mathbb{E}^{B^{\prime}\prec A^{\prime}}=(E_{a,b}^{B^{\prime}\prec A^{\prime}})_{a,b}\) are D-POVMs satisfying
\[\sum_{b}E_{a,b}^{A^{\prime}\prec B^{\prime}}=E_{a}^{A^{\prime}} \otimes\mathbb{1}^{B^{\prime}}\quad\forall a,\quad\sum_{a}E_{a}^{A^{\prime}}= \mathbb{1}^{A^{\prime}}, \tag{5}\] \[\sum_{a}E_{a,b}^{B^{\prime}\prec A^{\prime}}=E_{b}^{B^{\prime}} \otimes\mathbb{1}^{A^{\prime}}\quad\forall b,\quad\sum_{b}E_{b}^{B^{\prime}}= \mathbb{1}^{B^{\prime}}, \tag{6}\]
and are respectively compatible with the causal order where Alice receives her quantum input and acts before Bob (\(A^{\prime}\prec B^{\prime}\)) and _vice versa_ (\(B^{\prime}\prec A^{\prime}\)). Crucially, some causally nonseparable processes which cannot violate any causal inequality in the DI setting can still generate causally nonseparable D-POVMs. This causal nonseparability--and hence that of the process matrix inducing it through Eq. (2)--can then be certified by reconstructing the D-POVM (as mentioned above) and verifying that it cannot be decomposed as in Eq. (4) or, more directly, using these quantum inputs to violate a witness
Figure 1: The SDI-QI scenario: A process matrix \(W^{AB}\) connects two parties who receive trusted (orange) quantum inputs \(\rho_{z}^{A^{\prime}}\) and \(\rho_{w}^{B^{\prime}}\), resp. They each perform an untrusted (black) joint operation (\((M_{a}^{A^{\prime}A})_{a}\) and \((M_{b}^{B^{\prime}B})_{b}\), resp.), producing the classical outcomes \(a\) and \(b\). The purple semicircle represents the D-POVM \((E_{a,b}^{A^{\prime}B^{\prime}})_{a,b}\) induced by these instruments and the process matrix.
inequality of the form \(\mathcal{J}\geq 0\), where \(\mathcal{J}\) denotes some linear combination of the probabilities \(P(a,b|\rho_{z}^{A^{\prime}},\rho_{w}^{B^{\prime}})\)[15].
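Testing whether a reconstructed D-POVM admits the decomposition of Eqs. (4)-(6) is a semidefinite feasibility problem, whose dual provides witnesses of the kind \(\mathcal{J}\geq 0\) used in [15]. The following is a minimal sketch using cvxpy for small dimensions; the entry-wise construction of the tensor-product constraints, the solver choice and all variable names are implementation choices of this sketch, not the code used in [15].

```python
import numpy as np
import cvxpy as cp

def times_identity(V, d_other, left):
    """Return V x 1 (left=True) or 1 x V (left=False) for a cvxpy variable V,
    built entry by entry so that no Kronecker product of variables is needed."""
    d = V.shape[0]
    expr = 0
    for i in range(d):
        for k in range(d):
            unit = np.zeros((d, d)); unit[i, k] = 1.0
            block = np.kron(unit, np.eye(d_other)) if left else np.kron(np.eye(d_other), unit)
            expr = expr + V[i, k] * block
    return expr

def is_causally_separable(E, dA, dB, solver=cp.SCS):
    """E[a][b]: PSD matrices on H^{A'} x H^{B'} forming a D-POVM.
    Feasibility of E = E1 + E2 with E1 compatible with A' before B' (Eq. (5))
    and E2 compatible with B' before A' (Eq. (6))."""
    na, nb, dim = len(E), len(E[0]), dA * dB
    E1 = [[cp.Variable((dim, dim), hermitian=True) for _ in range(nb)] for _ in range(na)]
    E2 = [[cp.Variable((dim, dim), hermitian=True) for _ in range(nb)] for _ in range(na)]
    F = [cp.Variable((dA, dA), hermitian=True) for _ in range(na)]   # marginals of E1
    G = [cp.Variable((dB, dB), hermitian=True) for _ in range(nb)]   # marginals of E2
    q = cp.Variable(nonneg=True)
    cons = [q <= 1]
    for a in range(na):
        for b in range(nb):
            cons += [E1[a][b] >> 0, E2[a][b] >> 0, E1[a][b] + E2[a][b] == E[a][b]]
    for a in range(na):
        cons.append(sum(E1[a]) == times_identity(F[a], dB, left=True))               # Eq. (5)
    for b in range(nb):
        cons.append(sum(E2[a][b] for a in range(na)) == times_identity(G[b], dA, left=False))  # Eq. (6)
    cons += [sum(F) == q * np.eye(dA), sum(G) == (1 - q) * np.eye(dB)]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=solver)
    return prob.status in ("optimal", "optimal_inaccurate")
```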
_NSDI-QI and NDI causal scenarios.--_A bipartite SDI-QI scenario can be transformed into a 4-partite network scenario (NSDI-QI) where two additional separated parties--Charlie and Daisy--remotely prepare the quantum inputs. They control separate labs with input Hilbert spaces \(\mathcal{H}^{C}\) and \(\mathcal{H}^{D}\), and share maximally entangled pairs of \(\mathcal{d}\)-dimensional states,3\(|\phi^{+}\rangle^{C^{\prime}A^{\prime}}\) and \(|\phi^{+}\rangle^{B^{\prime}D^{\prime}}\) (with \(|\phi^{+}\rangle=\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}|j,j\rangle\)) with Alice and Bob, respectively; see Fig. 2 (top). Charlie and Daisy can then remotely prepare the quantum inputs \(\rho_{z}^{A^{\prime}}\) and \(\rho_{w}^{B^{\prime}}\) by applying measurements with Choi representations \((M_{c|i}^{C^{\prime}})_{c}=(\rho_{z}^{C^{\prime}},\mathbb{1}^{C^{\prime}}- \rho_{z}^{C^{\prime}})\) and \((M_{d|w}^{D^{\prime}})_{d}=(\rho_{w}^{D^{\prime}},\mathbb{1}^{D^{\prime}}- \rho_{w}^{D^{\prime}})\) on their shares of entangled qudit pairs. When they observe the outcomes \(c=0\) and \(d=0\) corresponding to the POVM elements \(\rho_{z}^{C^{\prime}}\) and \(\rho_{w}^{D^{\prime}}\), respectively, then Alice and Bob receive precisely the desired quantum inputs since \(\rho_{z}^{A^{\prime}}\propto M_{0|z}^{C^{\prime}}*|\phi^{+}\rangle\!\langle \phi^{+}|^{C^{\prime}A^{\prime}}\) and \(\rho_{w}^{B^{\prime}}\propto M_{0|w}^{D^{\prime}}*|\phi^{+}\rangle\!\langle \phi^{+}|^{B^{\prime}D^{\prime}}\). Thereby, the observed correlations
Footnote 3: Here \(\mathcal{d}\) is the dimension of the quantum input spaces, which for simplicity we assume to be the same for both parties (although this can be easily relaxed without consequence).
\[P(c,a,b,d|z,w)= \left(M_{c|z}^{C^{\prime}}\otimes E_{a,b}^{A^{\prime}B^{\prime}} \otimes M_{d|w}^{D^{\prime}}\right)\] \[*\left(|\phi^{+}\rangle\!\langle\phi^{+}|^{C^{\prime}A^{\prime}} \otimes|\phi^{+}\rangle\!\langle\phi^{+}|^{B^{\prime}D^{\prime}}\right) \tag{7}\]
satisfy \(P(0,a,b,0|z,w)=P(0|z)P(0|w)P(a,b|\rho_{z}^{A^{\prime}},\rho_{w}^{B^{\prime}})\) and the causal nonseparability of \(\mathbb{E}^{A^{\prime}B^{\prime}}\) can be certified by the violation of a witness inequality as in the SDI-QI scenario.
This NSDI-QI scenario so far assumes the sharing of maximally entangled states and trusts Charlie's and Daisy's quantum measurements in the "reference scenario" described above. However, these assumptions can be relaxed by exploiting self-testing techniques. Self-testing exploits the fact that quantum correlations which maximally violate a Bell inequality can sometimes be used to uniquely determine--up to local transformations--which state and measurements produced them [25; 26]. To self-test the maximally entangled states and Charlie and Daisy's measurements, Alice, Bob, Charlie and Daisy are all given (new) classical inputs \(x,y,z,w\), resp., and perform untrusted operations \((\mathsf{M}_{a|x}^{\tilde{A}})_{a},(\mathsf{M}_{b|y}^{\tilde{B}})_{b},(\mathsf{ M}_{c|z}^{\tilde{C}})_{c},(\mathsf{M}_{d|w}^{\tilde{D}})_{d}\), producing outcomes \(a,b,c,d\), resp. on the process matrix \(W^{AB}\) and the untrusted auxiliary states \(\mathsf{\Psi}_{1}^{\tilde{A}}\) and \(\mathsf{\Psi}_{2}^{\tilde{B}}\) to be self-tested, which are now in the (untrusted) "physical" Hilbert spaces \(\mathcal{H}^{\tilde{A}},\mathcal{H}^{\tilde{B}},\mathcal{H}^{\tilde{C}}, \mathcal{H}^{\tilde{D}}\) (denoted with tildes)--that, _a priori_, may differ from those in the reference scenario \(\mathcal{H}^{A^{\prime}},\mathcal{H}^{B^{\prime}},\mathcal{H}^{C^{\prime}}, \mathcal{H}^{D^{\prime}}\) (with primes); see Fig. 2 (bottom). When applied to the process matrix, Alice and Bob's instruments generate a D-POVM \(\mathbb{E}_{x,y}^{\tilde{A}\tilde{B}}=(\mathbb{E}_{a,b|x,y}^{\tilde{A}\tilde{B }})_{a,b}\) for each \(x\) and \(y\) with elements \(\mathbb{E}_{a,b|x,y}^{\tilde{A}\tilde{B}}=\left(\mathsf{M}_{a|x}^{\tilde{A} \tilde{A}}\otimes\mathsf{M}_{b|y}^{\tilde{B}\tilde{B}})*W^{AB}\), similarly to Eq. (2). We will show that in this scenario, the causal nonseparability of a D-POVM generated from a causally nonseparable process matrix \(W^{AB}\) can be certified solely based on the correlations \(P(c,a,b,d|z,x,y,w)\), thus achieving a NDI certification of causal nonseparability.
_NDI certification of causal nonseparability.--_As suggested above, the idea to achieve this goal is to combine the SDI-QI certification with self-testing. For the protocol we present to function, it will be necessary for the quantum inputs in the initial SDI-QI setting to be chosen such that the reference measurements \((M_{c|z}^{C^{\prime}})_{c}=(\rho_{z}^{C^{\prime}},\mathbb{1}^{C^{\prime}}-\rho_{z}^{C^{\prime}})\) and \((M_{d|w}^{D^{\prime}})_{d}=(\rho_{w}^{D^{\prime}},\mathbb{1}^{D^{\prime}}-\rho_{w}^{D^{\prime}})\)
Figure 2: From the NSDI-QI to the NDI scenario: In a NSDI-QI scenario (top), the quantum inputs of the SDI-QI scenario (Fig. 1) are prepared remotely by two other separated parties—Charlie and Daisy—performing suitable operations on their part of a maximally entangled state shared with Alice and Bob, so that \(\rho_{z}^{A^{\prime}}\propto M_{0|z}^{C^{\prime}}*|\phi^{+}\rangle\!\langle \phi^{+}|^{C^{\prime}A^{\prime}}\) and \(\rho_{w}^{B^{\prime}}\propto M_{0|w}^{D^{\prime}}*|\phi^{+}\rangle\!\langle \phi^{+}|^{B^{\prime}D^{\prime}}\) resp. (orange). These quantum inputs can be self-tested (bottom): Alice, Bob, Charlie and Daisy are given classical inputs \(x,y,z,w\), and perform untrusted operations that produce outcomes \(a,b,c,d\), resp. Verifying that the correlations \(P(c,a|z,x)\) and \(P(b,d|y,w)\) both maximally violate a Bell inequality allows one to self-test the preparation of the quantum inputs; from this, one can certify the causal nonseparability of the D-POVM \((\mathbb{E}_{a,b|x,y}^{\tilde{A}\tilde{B}})_{a,b}\) (purple semicircle) for \(x=y=\star\), and thus the causal nonseparability of the process matrix \(W^{AB}\), in a NDI scenario.
of the NSDI-QI setting are precisely those involved in the self-testing procedure we use. For simplicity, let us assume that \(\mathpzc{d}=2^{m}\) for some \(m\geq 1\). This can be done without loss of generality by embedding the reference experiment, i.e., the spaces \(\mathcal{H}^{C^{\prime}}\) and \(\mathcal{H}^{D^{\prime}}\), into larger \(2^{m}\)-dimensional spaces if needed (see Appendix A). We then require not only that \(\rho_{z}^{C^{\prime}}\) and \(\rho_{w}^{D^{\prime}}\) form tomographically complete sets, but more precisely that they are (projectors onto the) eigenstates of orthogonal Pauli measurements (or, beyond the 2-dimensional case, tensor products thereof). We will assume henceforth that this is indeed the case.
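The role of the orthogonal Pauli measurements can be made concrete by a minimal check of tomographic completeness in the single-qubit case \(\mathpzc{d}=2\): the six Pauli eigenprojectors span the real vector space of \(2\times 2\) Hermitian matrices, so expectation values with respect to them determine a qubit state uniquely. The snippet below is a sketch under this assumption (all names are illustrative); for \(\mathpzc{d}=2^{m}\) one would use tensor products of these eigenstates, as described above.

```python
import numpy as np

# Pauli observables
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def eigenprojectors(obs):
    """Rank-one projectors onto the eigenvectors of a 2x2 Hermitian observable."""
    _, vecs = np.linalg.eigh(obs)
    return [np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(2)]

projectors = [p for obs in (X, Y, Z) for p in eigenprojectors(obs)]

# Tomographic completeness: the six eigenprojectors span the 4-dimensional
# real vector space of 2x2 Hermitian matrices.
flat = np.array([p.reshape(-1) for p in projectors])        # 6 x 4, complex
real_span = np.concatenate([flat.real, flat.imag], axis=1)  # 6 x 8, real
print(np.linalg.matrix_rank(real_span))                     # prints 4
```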
When Alice and Bob receive generic classical inputs \(x,y\) (other than the specific \(x,y=\star\) that we shall consider below), we interpret the experiment as a self-testing round, in which the correlations are used to test a Bell inequality: reaching its maximal quantum violation certifies that the untrusted states \(\Psi_{1}^{\bar{C}\bar{A}},\Psi_{2}^{\bar{B}\bar{D}}\) and measurements \((\mathsf{M}_{c|z}^{\bar{C}})_{c},(\mathsf{M}_{d|w}^{\bar{D}})_{d}\) have the form of those used in the NSDI-QI setup. Such a statement can only be true up to local transformations, but these transformations relating the "physical", uncharacterised, states and measurements to their idealised "reference" counterparts can be explicitly incorporated into the analysis. When they both receive the special input \(x=y=\star\), we instead interpret it as a certification round, in which one follows the same procedure as in a NSDI-QI certification of causal nonseparability using these self-tested quantum inputs. Crucially, the aforementioned local transformations preserve causal (non)separability, allowing the causal nonseparability of the physical process to be certified, despite lacking any knowledge of the Hilbert spaces upon which it acts.
The general protocol can thus be described as follows.
* (_Correlation generation_) Alice, Bob, Charlie and Daisy perform local operations and measurements on their respective subsystems to obtain the statistics \(P(c,a,b,d|z,x,y,w)\).
* (_Self-testing_) One certifies that the auxiliary states each contain a maximally entangled state and that Charlie and Daisy each effectively perform the desired tomographically complete, orthogonal Pauli measurements on their subsystems by verifying that the distributions \(P_{y}(c,a|z,x):=P(c,a|z,x,y)\) and \(P_{x}(b,d|y,w):=P(b,d|x,y,w)\) both maximally violate a Bell inequality \(\mathcal{I}\geq 0\) for some \(y\) and \(x\), resp.4 Footnote 4: Note that these distributions are independent of \(w\) and \(z\), respectively, due to the network structure: e.g., Daisy only shares a (nonsignalling) quantum state with Alice, Bob and Charlie. However, contrary to the scenario of entanglement certification [16; 17], these may still depend on \(y\) and \(x\), resp., as the process matrix may allow for signalling.
* (_Causal nonseparability certification_) One verifies that the correlations \(P_{*}(a,b|z,w):=P(a,b|z,x{=}\star,y{=}\star,w,c{=}0,d{=}0)\) violate a witness inequality \(\mathcal{J}\geq 0\), thus certifying that the process matrix is causally nonseparable.
Let us now explain these steps in more detail. We use the fact that, for \(\mathpzc{d}=2^{m}\), one can self-test a \(\mathpzc{d}\)-dimensional maximally entangled state and orthogonal Pauli measurements (or tensor products thereof, for \(\mathpzc{d}>2\)) by the maximal quantum violation of a Bell inequality \(\mathcal{I}\geq 0\)[17]. Further discussion of the inequality is provided in Appendix A, but its exact form (which depends on \(\mathpzc{d}\)) is not important for the argument here.
The first step of the protocol simply consists in generating the correlations \(P(c,a,b,d|z,x,y,w)\), which can be written as
\[P(c,a,b,d|z,x,y,w)\] \[=\!\left(\mathsf{M}_{c|z}^{\bar{C}}\otimes\mathsf{M}_{a|x}^{\bar{A }A}\otimes\mathsf{M}_{b|y}^{\bar{B}B}\otimes\mathsf{M}_{d|w}^{\bar{D}}\right)\! \ast\!\left(\mathsf{\psi}_{1}^{\bar{C}\bar{A}}\!\otimes\!W^{AB}\!\otimes\! \mathsf{\psi}_{2}^{\bar{B}\bar{D}}\right)\] \[=P(c|z)\,P(d|w)\,\mathsf{E}_{a,b|x,y}^{\bar{A}\bar{B}}\ast\left( \mathsf{\rho}_{c|z}^{\bar{A}}\otimes\mathsf{\rho}_{d|w}^{\bar{B}}\right) \tag{8}\]
with the remotely prepared states satisfying \(P(c|z)\,\mathsf{\rho}_{c|z}^{\bar{A}}=\mathsf{M}_{c|z}^{\bar{C}}*\mathsf{\psi }_{1}^{\bar{C}\bar{A}}\) and \(P(d|w)\,\mathsf{\rho}_{d|w}^{\bar{B}}=\mathsf{M}_{d|w}^{\bar{D}}*\mathsf{\psi }_{2}^{\bar{B}\bar{D}}\), and with the D-POVM elements \(\mathsf{E}_{a,b|x,y}^{\bar{A}\bar{B}}=\left(\mathsf{M}_{a|x}^{\bar{A}A}\otimes \mathsf{M}_{b|y}^{\bar{B}B}\right)*W^{AB}\), as given above.
The second step is the self-testing procedure. The players verify that the correlations \(P_{y}(c,a|z,x)\) and \(P_{x}(b,d|y,w)\) reach the maximum quantum violation of the Bell inequality \(\mathcal{I}\geq 0\). As we recall in Appendix A, it is known that if this is the case, then the physical states \(\mathsf{\psi}_{1}^{\bar{C}\bar{A}}\) and \(\mathsf{\psi}_{2}^{\bar{B}\bar{D}}\) and measurements \((\mathsf{M}_{c|z}^{\bar{C}})_{c}\) and \((\mathsf{M}_{d|w}^{\bar{D}})_{d}\) can be related to the ideal reference states and operations of the NSDI-QI scenario in the \(\mathpzc{d}\)-dimensional spaces \(\mathcal{H}^{C^{\prime}},\mathcal{H}^{A^{\prime}},\mathcal{H}^{B^{\prime}}, \mathcal{H}^{D}\) via local isometries [16; 17]. We then show (see Appendix A) that these local isometries also relate the remotely prepared inputs5\(\mathsf{\rho}_{0|z}^{\bar{A}}\) and \(\mathsf{\rho}_{0|w}^{\bar{B}}\) to the reference quantum inputs \(\rho_{z}^{A^{\prime}}\propto M_{0|z}^{C^{\prime}}\ast|\phi^{+}\rangle\! \langle\phi^{+}|^{C^{\prime}A^{\prime}}\) and \(\rho_{w}^{B^{\prime}}\propto M_{0|w}^{D^{\prime}}\ast|\phi^{+}\rangle\! \langle\phi^{+}|^{B^{\prime}D^{\prime}}\). For example, for Alice's quantum inputs we prove that
Footnote 5: In fact, we show that each \(\mathsf{\rho}_{c|z}^{\bar{A}}\) and \(\mathsf{\rho}_{d|w}^{\bar{B}}\) (for all \(c,d\)) can be related to a corresponding state in the reference scenario, but this stronger result is not needed here.
\[\rho_{0|z}^{\bar{A}}=V^{\mathbf{A}\,\dagger}\left(\rho_{z}^{A^{\prime}}\otimes \mathsf{\xi}_{0}^{A^{\prime}\bar{\mathsf{\xi}}}+(\rho_{z}^{A^{\prime}})^{T} \otimes\mathsf{\xi}_{1}^{A^{\prime}\bar{\mathsf{\xi}}}\right)V^{\mathbf{A}}, \tag{9}\]
where \(V^{\mathbf{A}}:\mathcal{H}^{\bar{A}}\to\mathcal{H}^{A^{\prime}A^{\prime}\bar{ \mathsf{\xi}}}\) is an isometry and \(\mathsf{\xi}_{0}^{A^{\prime}\bar{\mathsf{\xi}}}\), \(\mathsf{\xi}_{1}^{A^{\prime}\bar{\mathsf{\xi}}}\) are (subnormalised) "junk and flag" states on a Hilbert space \(\mathcal{H}^{A^{\prime}\bar{\mathsf{\xi}}}\); a similar relation holds for Bob. This allows us to write Eq. (8), when \(c=d=0\), in terms of the reference scenario as
\[P(0,a,b,0|z,x,y,w)\] \[\qquad=P(0|z)\,P(0|w)\,E_{a,b|x,y}^{A^{\prime}B^{\prime}}\ast\big{(} \mathsf{\rho}_{z}^{A^{\prime}}\otimes\mathsf{\rho}_{w}^{B^{\prime}}\big{)}, \tag{10}\]
where the \(E^{A^{\prime}B^{\prime}}_{a,b|x,y}\) are elements of an effective D-POVM (which \(\mathbb{E}^{A\hat{B}}_{a,b|x,y}\) can likewise be related to through local isometries; see Eq. (B3) in Appendix B). Eq. (10) thus relates directly the observed correlations to those obtainable in the NSDI-QI setup.
The last step consists in certifying causal nonseparability. The self-testing step ensures that, when Charlie and Daisy get the outputs \(c=d=0\), Alice and Bob receive, up to local isometries, the desired quantum inputs \(\rho^{A^{\prime}}_{z}\) and \(\rho^{B^{\prime}}_{w}\) from the SDI-QI scenario. We thus focus on the correlations observed when \(x=y=\star\) which, in the reference experiment, should correspond to the measurements used to obtain a causally non-separable D-POVM. These correlations, following Eq. (10), are given by
\[P_{\star}(a,b|z,w) =P(a,b|z,x{=}\star,y{=}\star,w,c{=}0,d{=}0)\] \[=E^{A^{\prime}B^{\prime}}_{a,b|\star,\star}\ast(\rho^{A^{\prime}}_ {z}\otimes\rho^{B^{\prime}}_{w}), \tag{11}\]
which precisely defines, following Eq. (1), a distribution \(P^{\prime}(a,b|\rho^{A^{\prime}}_{z},\rho^{B^{\prime}}_{w})\) in a SDI-QI setting with the reference quantum inputs for the certification of the effective D-POVM \(\mathbb{E}^{A^{\prime}B^{\prime}}_{\rm eff}:=(E^{A^{\prime}B^{\prime}}_{a,b|\star,\star})_{a,b}\). In Appendix B we show that if \(W^{AB}\) is causally separable then \(\mathbb{E}^{A^{\prime}B^{\prime}}_{\rm eff}\) is a causally separable D-POVM. Hence, if \(P^{\prime}(a,b|\rho^{A^{\prime}}_{z},\rho^{B^{\prime}}_{w})\) violates the witness inequality, yielding \(\mathcal{J}<0\), this certifies the causal nonseparability of \(W^{AB}\). By following the ideal reference protocol of the NSDI-QI scenario, one can thus obtain such a NDI certification for any process that can be certified in the bipartite SDI-QI scenario, i.e., which can generate a causally nonseparable D-POVM.
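Operationally, the certification step amounts to a simple classical post-processing of the recorded rounds: keep those with \(x=y=\star\) and heralds \(c=d=0\), estimate \(P_{\star}(a,b|z,w)\) from the corresponding frequencies, and evaluate the witness. The sketch below assumes a generic event record and placeholder witness coefficients \(\beta_{a,b,z,w}\) (the actual coefficients depend on the witness inequality \(\mathcal{J}\geq 0\) being tested); the self-testing rounds are processed analogously against the Bell inequality \(\mathcal{I}\geq 0\) and are not shown.

```python
from collections import Counter, defaultdict

def witness_value(events, beta, star="*"):
    """Estimate P_*(a,b|z,w) from the heralded certification rounds and return
    J = sum_{a,b,z,w} beta[(a,b,z,w)] * P_*(a,b|z,w); J < 0 signals a violation.

    events: iterable of dicts with input keys 'z','x','y','w' and outcome keys
            'c','a','b','d', one dict per experimental round.
    beta:   dict of real witness coefficients indexed by (a,b,z,w) (placeholders).
    """
    counts = Counter()
    totals = defaultdict(int)
    for e in events:
        # certification rounds: special inputs x = y = *, heralds c = d = 0
        if e["x"] == star and e["y"] == star and e["c"] == 0 and e["d"] == 0:
            counts[(e["a"], e["b"], e["z"], e["w"])] += 1
            totals[(e["z"], e["w"])] += 1
    return sum(beta.get(key, 0.0) * n / totals[(key[2], key[3])]
               for key, n in counts.items())
```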
The NDI certification of causal nonseparability described above assumes that the self-testing is perfect: the maximal quantum violation of the Bell inequality \(\mathcal{I}\geq 0\) is obtained, meaning that noiseless maximally entangled states and perfect measurements are realised (up to local isometries). Such requirements are experimentally unobtainable. However, the self-testing procedure and subsequent certification of causal nonseparability can be rendered robust to noise in a similar way to that of [17]; see Appendix C for further explanation.
_Applications and scope.--_The protocol described in the previous section exploits a general approach to transforming a SDI-QI scenario into a NDI one. Indeed, it can be readily generalised to provide NDI certification in settings beyond the basic bipartite one we described. For instance, in [15] we showed that all bipartite causally nonseparable process matrices can be certified in a SDI-QI way if we assume an additional natural assumption on the form of Alice's and Bob's instruments: that they factorise as a joint measurement on one subsystem of their quantum input and the system they receive from the process matrix, and a channel taking the rest of their quantum input into the process matrix. By assuming the same "measurement device and channel independent" (MDCI) structure for the physical instruments \((\mathsf{M}^{\hat{A}A}_{a|\star})_{a}\) and \((\mathsf{M}^{\hat{B}B}_{b|\star})_{b}\), and using independent auxiliary sources to self-test Alice and Bob's quantum input subsystems, our method here can readily be used to show that all bipartite causally nonseparable processes can also be certified in a NDI way (modulo these assumptions); see Appendix D for details on the soundness of the NDI certification in this MDCI scenario.
A process of particular interest is the quantum switch [3], which can be described in either a "(2+\(F\))-partite" or "(\(P{+}2{+}F\))-partite scenario" in which, in addition to Alice and Bob, a third party Fiona with no output Hilbert space performs a measurement in their causal future, and (in the (\(P{+}2{+}F\))-partite scenario), a fourth party Phil with no input Hilbert space acts in their causal past. In the quantum switch, Phil prepares a target system and a control qubit, the latter of which coherently controls the order that Alice and Bob then act on the target system, before both the target and control are received by Fiona. (In the (2+\(F\))-partite scenario, Phil's operations are fixed and absorbed into the process matrix.) The quantum switch is the canonical example of a physically realisable causally nonseparable process [13], and the resource behind most known advantages arising from causal indefiniteness [3, 4, 5, 6, 7, 8]. Certifying its causal nonseparability is thus a key problem, and several experiments have done this in a device-dependent way using causal witnesses [27, 28]. In [15], the SDI-QI scenario was extended to the (2+\(F\))-scenario to certify the causal nonseparability of the quantum switch, and the generality of our NDI approach can again be illustrated in these scenarios. In Appendix E we further extend the SDI-QI framework to the (\(P{+}2{+}F\))-partite scenario--in which the SDI-QI approach can then be transformed into a NDI one by having Phil and Fiona also share maximally entangled states with additional parties, and following the natural generalisation of our bipartite protocol. In fact, for the quantum switch, in the SDI-QI setting it is not necessary to provide all parties with a quantum input: some parties may do with classical inputs, or no input at all. For example, in [15] the case where only Alice and Bob have quantum inputs was considered; in Appendix E we show that one can also certify the causal nonseparability of the quantum switch in a SDI-QI manner while providing only Phil with quantum inputs--a scenario that can then be turned into a NDI one by introducing just one additional party. We thus see that, despite its inability to violate any causal inequality [10, 11], the quantum switch can be certified in a particular type of DI scenario; see also the discussion below.
_Discussion.--_As we emphasised, our certification of causal nonseparability is _device-independent_, in the sense that it does not assume any knowledge about the underlying process or operations involved. Every element involved in the protocol is treated as a (quantum) black box with classical inputs and outputs, and all conclusions are drawn from the observed correlations. However, we called it _network-device-independent_ to differentiate it from the standard approach to DI certification, which requires the violation of a causal inequality [1].
Indeed, our approach differs from the standard one as it assumes a network structure involving two separated untrusted parties, each independently performing operations on a system that is independently shared with a party involved in the process being certified. Moreover, unlike the standard approach which certifies causal indefiniteness in a theory-independent manner, our method certifies the theory-dependent notion of causal nonseparability. Indeed, it is based on the violation of a witness inequality by a causally nonseparable process matrix and on the self-testing of quantum input states through the maximal quantum violation of Bell inequalities. It thus assumes the validity of quantum theory and of its extension through the process matrix formalism. This, and the simple _network_ structural assumption, is what allows one to certify--without trusting any device--the causal nonseparability of process matrices which cannot violate a causal inequality, such as the quantum switch [10; 11].
Finally, we note that some recent works have also proposed another approach to certify the indefinite causal order in the quantum switch in a device-independent way [29; 30]. In the scenario presented in [30] (also considered in [29]), an additional space-like separated party is introduced along with Alice, Bob and Fiona, and an inequality is derived based on the assumptions of "Definite causal order, Relativistic causality, and Free interventions" (DRF). It is then shown that the correlations generated from a quantum switch in which the control system is entangled with the newly introduced party can violate this "DRF inequality". Notably, our approach allows us to consider a scenario very close to that of [30] by transforming a SDI-QI test of a (\(P\)+2+\(F\))-partite quantum switch, with only Phil (in the causal past) being given a quantum input, into a NDI test (as briefly considered above, and further discussed in Appendix E). It is then interesting to compare the assumptions and conclusions one may draw from these two approaches in which the causal indefiniteness of the quantum switch can be certified in a DI manner (see Appendix F for further discussion). A first important difference is that the approach of [30], and the inequality they derive, is not only device-independent, but also theory-independent--like standard causal inequalities, but contrary to our approach. Furthermore, while [30] explicitly introduced some form of no-signalling assumption at the ontological level ("Relativistic causality") between the new party and those involved in the switch, we did not need such an assumption. Instead, we make the simple network structure assumption which implies just the weaker operational notion of no-signalling. With a violation of a "DRF inequality", the conclusion that there is indefinite causal order can hence be escaped by rejecting this relativistic causality assumption; in our case, one would need to reject entirely the network structure assumption to salvage definite causal order (or, in both cases, one could reject the "Free interventions assumption", which we indeed also made implicitly). It will be insightful to further investigate the connections between the two approaches, possibly in different scenarios. Recall indeed that our approach allows one to turn any SDI-QI certification of causal nonseparability into a NDI one; it would be interesting to see if the approach of [30] can similarly be made more systematic.
_Acknowledgements.--_We thank Jean-Daniel Bancal, Nicolas Brunner and Tein van der Lugt for enlightening discussions, and acknowledge financial support from the EU NextGen Funds, the Government of Spain (FIS2020-TRANQI and Severo Ochoa CEX2019-000910-S), Fundacio Cellex, Fundacio Mir-Puig, Generalitat de Catalunya (CERCA program), the PEPR integrated project EPiG ANR-22-PETQ-0007 part of Plan France 2030, the ERC Starting grant QUSCO, and the Agence Nationale de la Recherche (project ANR-22-CE47-0012).
For the purpose of open access, the authors have applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
|
2303.10205 | Local structure of homogeneous $ANR$-spaces | We investigate to what extent finite-dimensional homogeneous locally compact
$ANR$-spaces have common properties with Euclidean manifolds. Specifically, the
local structure of homogeneous $ANR$-spaces is described. Using that
description, we provide a positive solution of the problem whether every
finite-dimensional homogeneous metric $ANR$-compactum $X$ is dimensionally
full-valued, i.e. $\dim X\times Y=\dim X+\dim Y$ for any metric compactum $Y$. | Vesko Valov | 2023-03-17T18:51:26Z | http://arxiv.org/abs/2303.10205v5 | # Local structure of homogeneous \(ANR\)-spaces
###### Abstract.
We investigate to what extent finite-dimensional homogeneous locally compact \(ANR\)-spaces have common properties with Euclidean manifolds. Specifically, the local structure of homogeneous \(ANR\)-spaces is described. Using that description, we provide a positive solution of the problem whether every finite-dimensional homogeneous metric \(ANR\)-compactum \(X\) is dimensionally full-valued, i.e. \(\dim X\times Y=\dim X+\dim Y\) for any metric compactum \(Y\).
Key words and phrases:absolute neighborhood retracts, cohomological dimension, cohomology groups, homogeneous spaces 2000 Mathematics Subject Classification: Primary 55M15; Secondary 55M10 The author was partially supported by NSERC Grant 261914-19.
## 1. Introduction
By a _space_ we mean a locally compact separable metric space, and _maps_ are continuous mappings. Reduced Cech homology \(H_{n}(X;G)\) and cohomology groups \(H^{n}(X;G)\) with coefficients from an abelian group \(G\) are considered everywhere below. A space is said to be _homogeneous_ provided that, for every two points \(x,y\in X\), there exists a homeomorphism \(h\) mapping \(X\) onto itself with \(h(x)=y\). Homogeneity of \(X\) implies _local homogeneity_, that is, for every \(x,y\in X\) there exists a homeomorphism \(h\) mapping a neighborhood of \(x\) onto a neighborhood of \(y\) such that \(h(x)=y\).
One of the reasons to investigate homogeneous \(ANR\)-spaces is the well-known Bing-Borsuk [1] conjecture that every finite-dimensional homogeneous metric \(ANR\)-compactum is a Euclidean manifold. According to Jakobsche [14], a positive solution of that conjecture in dimension three implies the celebrated Poincare conjecture. J. Bryant and S. Ferry [4] announced the existence of many counter-examples to that conjecture for every \(n\geq 6\), but to the best of the author's knowledge, the Bryant-Ferry paper has not been published yet. Although the status of the Bing-Borsuk conjecture is unknown, it is still interesting to
investigate the extent to which finite-dimensional homogeneous \(ANR\)-spaces have common properties with Euclidean manifolds. We show that homogeneous \(ANR\)-spaces do indeed have some properties typical of Euclidean manifolds.
The local structure of homogeneous \(ANR\)-compacta \(X\) with a finite cohomological dimension \(\dim_{G}X=n\) was established in [29, Theorem 1.1], where \(G\) is a countable principal ideal domain with unity and \(n\geq 2\). In the present paper we improve the results in [29] by considering locally compact homogeneous \(ANR\)-spaces and replacing the principal ideal domains with countable groups. Moreover, we also provide some new properties of homogeneous \(ANR\)'s or locally homogeneous \(ANR\)'s.
The next theorem describes the local structure of homogeneous \(ANR\)-spaces and shows their similarity with Euclidean manifolds:
**Theorem 1.1**.: _Let \(X\) be a homogeneous connected \(ANR\)-space with \(\dim_{G}X=n\geq 2\), where \(G\) is a countable group. Then every point \(x\in X\) has a base \(\mathcal{B}_{x}\) of connected open sets \(U\subset X\) each having a compact closure and satisfying the following conditions:_
1. \(\operatorname{int}\overline{U}=U\)_, the boundary_ \(\operatorname{bd}\overline{U}\) _of_ \(\overline{U}\) _is connected and its complement in_ \(X\) _has exactly two components;_
2. \(H^{n-1}(\operatorname{bd}\overline{U};G)\neq 0\)_,_ \(H^{n-1}(\overline{U};G)=0\) _and_ \(\overline{U}\) _is an_ \((n-1)\)_-cohomology membrane spanned on_ \(\operatorname{bd}\overline{U}\) _for any non-zero_ \(\gamma\in H^{n-1}(\operatorname{bd}\overline{U};G)\)_;_
3. \(\operatorname{bd}\overline{U}\) _is an_ \((n-1,G)\)_-bubble._
Recall that for any nontrivial abelian group \(G\) the Cech cohomology group \(H^{n}(X;G)\) is isomorphic to the group of pointed homotopy classes of maps from \(X\) to \(K(G,n)\), where \(K(G,n)\) is a \(CW\)-complex of type \((G,n)\), see [17]. The cohomological dimension \(\dim_{G}X\) is the largest number \(n\) such that there exists a closed set \(A\subset X\) with \(H^{n}(X,A;G)\neq 0\). Equivalently, for a metric space \(X\) we have \(\dim_{G}X\leq n\) if and only if for any closed pair \(A\subset B\) in \(X\) the homomorphism \(j_{B,A}^{n}:H^{n}(B;G)\to H^{n}(A;G)\), generated by the inclusion \(A\hookrightarrow B\), is surjective, see [10]. This means that every map from \(A\) to \(K(G,n)\) can be extended over \(B\). If \(X\) is a finite-dimensional space, then for every \(G\) we have \(\dim_{G}X\leq\dim_{\mathbb{Z}}X=\dim X\)[21]. On the other hand, there is a compactum \(X\)[8] with \(\dim X=\infty\) and \(\dim_{\mathbb{Z}}X=3\). A space \(A\) is a \((k,G)\)_-bubble_ if \(H^{k}(A;G)\neq 0\) but \(H^{k}(B;G)=0\) for every closed proper subset \(B\) of \(A\).
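As a model example illustrating these notions (and the analogy with Euclidean manifolds pursued below), let \(X=\mathbb{R}^{n}\) and let \(U\subset X\) be an open round ball. Then \(\operatorname{bd}\overline{U}=S^{n-1}\), and
\[H^{n-1}(S^{n-1};G)\cong G\neq 0,\qquad H^{n-1}(B;G)=0\ \text{ for every proper closed subset }B\subset S^{n-1},\]
the latter by Alexander duality. Hence \(S^{n-1}\) is an \((n-1,G)\)-bubble, and Theorem 1.1(3) asserts that the boundaries of small neighborhoods in a homogeneous \(ANR\)-space with \(\dim_{G}X=n\) exhibit exactly this behavior.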
The following property of homogeneous \(ANR^{\prime}s\) is essential, see [1], [2]: If \((K,A)\) is a pair of closed subsets of \(X\) such that \(K\) is an \((n-1)\)-cohomological membrane for some \(\gamma\in H^{n-1}(A;G)\), then
\((K\setminus A)\cap\overline{X\setminus K}=\varnothing\). We call this property the \(n\)_-cohomology membrane property_. Recall that \(K\) is said to be a \(k\)_-cohomology membrane spanned on a closed set \(A\subset K\) for an element \(\gamma\in H^{k}(A;G)\)_ if \(\gamma\) is not extendable over \(K\), but it is extendable over every proper closed subset of \(K\) containing \(A\). Here, that \(\gamma\in H^{k}(A;G)\) is not extendable over \(K\) means that \(\gamma\) is not contained in the image \(j^{k}_{K,A}\big{(}H^{k}(K;G)\big{)}\).
Everywhere below, \(\mathcal{B}_{x}\) stands for the family of all open neighborhoods of a point \(x\in X\). We say that a space \(X\) has an \(n\)_-dimensional \(G\)-obstruction_ at a point \(x\in X\)[21] if there is \(W\in\mathcal{B}_{x}\) such that the homomorphism \(j^{n}_{U,W}:H^{n}(X,X\backslash U;G)\to H^{n}(X,X\backslash W;G)\) is nontrivial for every \(U\in\mathcal{B}_{x}\) with \(U\subset W\). Kuzminov [21], [22], proved that every compactum \(X\) with \(\dim_{G}X=n\) contains a compact set \(Y\) with \(\dim_{G}Y=n\) such that \(X\) has an \(n\)-dimensional \(G\)-obstruction at any point of \(Y\). According to the next theorem, \(n\)-dimensional homogeneous spaces have an obstruction at every point and the corresponding homomorphisms are surjective.
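In the model case \(X=\mathbb{R}^{n}\) this obstruction is visible directly: for every open ball \(U\) containing \(x\), the contractibility of \(\mathbb{R}^{n}\) and the exact sequence of the pair give
\[H^{n}(\mathbb{R}^{n},\mathbb{R}^{n}\setminus U;G)\cong H^{n-1}(\mathbb{R}^{n}\setminus U;G)\cong H^{n-1}(S^{n-1};G)\cong G,\]
and for concentric balls \(U\subset W\) the homomorphisms \(j^{n}_{U,W}\) are isomorphisms, so \(\mathbb{R}^{n}\) has an \(n\)-dimensional \(G\)-obstruction at every point. Theorem 1.2 below recovers the analogous behavior for locally homogeneous \(ANR\)-spaces.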
**Theorem 1.2**.: _Let \(X\) be a locally homogeneous \(ANR\)-space such that \(\dim_{G}X=n\), where \(G\) is a countable group. Then the following holds:_
1. \(X\) _has the_ \(n\)_-cohomology membrane property;_
2. \(X\) _has an_ \(n\)_-dimensional_ \(G\)_-obstruction at every_ \(x\in X\)_. Moreover, there is_ \(W\in\mathcal{B}_{x}\) _such that the homomorphism_ \(j^{n}_{U,V}\) _is surjective for any_ \(U,V\in\mathcal{B}_{x}\) _with_ \(\overline{U}\subset V\subset\overline{V}\subset W\)_._
Here is another common property of locally homogeneous \(ANR\)'s and Euclidean manifolds.
**Corollary 1.3**.: _Let \(X\) be as in Theorem 1.2. If \(U\subset X\) is open and \(f:U\to X\) is an injective map, then \(f(U)\) is also open in \(X\)._
One of the important questions concerning homogeneous \(ANR\)'s is whether every finite-dimensional homogeneous \(ANR\)-compactum is dimensionally full-valued. Recall that a locally compact space \(X\) is _dimensionally full-valued_ if \(\dim X\times Y=\dim X+\dim Y\) for any compact space \(Y\). Let us note that there are metric \(ANR\)-compacta which are not dimensionally full-valued, see [7] and [9]. The question for the dimensional full-valuedness of homogeneous \(ANR\)-compacta goes back to [3] and was also discussed in [5] and [13]. A positive answer of that question for \(3\)-dimensional spaces was given in [29]. Now, we provide a complete solution.
**Theorem 1.4**.: _Let \(X\) be a finite-dimensional locally homogeneous \(ANR\)-space. Then the following holds:_
1. \(X\) _is dimensionally full-valued;_
2. _Providing_ \(X\) _is homogeneous and connected, every_ \(x\in X\) _has a neighborhood_ \(W_{x}\) _such that_ \(\operatorname{bd}\overline{U}\) _is dimensionally full-valued for all_ \(U\in\mathcal{B}_{x}\) _with_ \(\overline{U}\subset W_{x}\)_._
## 2. The cohomology membrane property
If, in the definition of the \(n\)-cohomology membrane property, we additionally require \(K\) to be a compact set contractible in a proper subset of \(X\), then we say that \(X\) has the _weak \(n\)-cohomology membrane property_. We will see that the proof of Theorem 1.1 is based mainly on that property. In this section we prove that any locally homogeneous space \(X\) with \(\dim_{G}X=n\) has the \(n\)-cohomology membrane property, and provide some implications of that property.
A space \(X\) is called a _\((k,G)\)-carrier_ of a nontrivial element \(\gamma\in H^{k}(X;G)\) if \(j^{k}_{X,B}(\gamma)=0\) for every closed proper subset \(B\subset X\).
**Lemma 2.1**.: _Let \(A\subset X\) be a compact set and \(\gamma\) be a non-zero element of \(H^{n}(A;G)\)._
1. _If_ \(\gamma\) _is not extendable over a compact set_ \(P\subset X\) _containing_ \(A\)_, then there exists an_ \(n\)_-cohomology membrane_ \(K\subset P\) _for_ \(\gamma\) _spanned on_ \(A\)_;_
2. _There is a closed set_ \(B\subset A\) _such that_ \(B\) _is a carrier of_ \(j^{n}_{A,B}(\gamma)\)_._
Proof.: \((i)\) Consider the family \(\mathcal{F}\) of all closed subsets \(F\) of \(P\) containing \(A\) such that \(\gamma\) is not extendable over \(F\). Let \(\{F_{\tau}\}\) be a decreasing subfamily of \(\mathcal{F}\) and \(F_{0}=\bigcap_{\tau}F_{\tau}\). Suppose \(\gamma\) is extendable to \(\widetilde{\gamma}\in H^{n}(F_{0};G)\). Then, considering \(\widetilde{\gamma}\) as a map from \(F_{0}\) into \(K(G,n)\) and having in mind that \(K(G,n)\) is an absolute neighborhood extensor for metrizable spaces [19], we can extend \(\widetilde{\gamma}\) over a neighborhood \(W\) of \(F_{0}\) in \(P\). But that is impossible since \(W\) contains some \(F_{\tau}\). Hence, by Zorn's lemma, \(\mathcal{F}\) has a minimal element \(K\) which is an \(n\)-cohomology membrane for \(\gamma\) spanned on \(A\).
\((ii)\) Now, let \(\mathcal{F}\) be the family of closed subsets \(F\) of \(A\) such that \(j^{n}_{A,F}(\gamma)\neq 0\) and \(\{F_{\tau}\}\) be a decreasing subfamily of \(\mathcal{F}\) with \(F_{0}=\bigcap_{\tau}F_{\tau}\). If \(\gamma_{0}=j^{n}_{A,F_{0}}(\gamma)=0\), then there is a homotopy \(H:F_{0}\times[0,1]\to K(G,n)\) connecting the constant map and \(\gamma_{0}\). Using again that \(K(G,n)\) is an absolute neighborhood extensor for metric spaces, we find a closed neighborhood \(W\) of \(F_{0}\) in \(A\) and a homotopy \(\widetilde{H}:W\times[0,1]\to K(G,n)\) connecting \(j^{n}_{A,W}(\gamma)\) and the constant map. This is a contradiction because \(W\) contains some \(F_{\tau}\). Hence, \(\mathcal{F}\) has a minimal element \(B\) which is a carrier of \(j^{n}_{A,B}(\gamma)\).
We also need the following version of Effros' theorem [11] for locally compact spaces (see [20, Proposition 1.4]):
**Theorem 2.2**.: _Let \(X\) be a homogeneous space with a metric \(\rho\), \(a\in X\) and \(\varepsilon>0\). Then there is \(\delta>0\) such that for every \(x\in X\) with \(\rho(x,a)<\delta\) there exists a homeomorphism \(h:X\to X\) with \(h(a)=x\) and \(\rho(h(y),y)<\varepsilon\) for all \(y\in X\)._
**Proposition 2.3**.: _Let \(X\) be a homogeneous \(ANR\)-space with \(\dim_{G}X=n\), where \(G\) is an arbitrary group. Then \(H^{n}(P;G)=0\) for any proper compact set \(P\subset X\)._
Proof.: This was established in [30] in case \(X\) is compact using the following proposition, see [30, Proposition 2.3]: If \(X\) is a metric compact with \(\dim_{G}X=n\), \(A\subset X\) is carrier of a nontrivial element of \(H^{n}(X;G)\) and there is a map \(f:X\to X\) homotopic to the identity \(\operatorname{id}_{X}\) of \(X\), then \(A\subset f(A)\). This statement remains true if \(X\) is an \(ANR\)-space with \(\dim_{G}X=n\), \(A\) is a carrier of a nontrivial element of \(H^{n}(Q;G)\), where \(Q\subset X\) is compact, and there exists a homotopy \(F:A\times[0,1]\to X\) with \(F(x,0)=x\) and \(F(x,1)=f(x)\), \(x\in A\).
Fix a metric \(\rho\) on \(X\) generating its topology and suppose there exists a proper compact set \(P\subset X\) with \(H^{n}(P;G)\neq 0\). So, by Lemma 2.1(ii), \(P\) contains a compact set \(K\) such that \(K\) is a carrier for \(j^{n}_{P,K}(\alpha)\), where \(\alpha\in H^{n}(P;G)\) is nontrivial. Take two open sets \(U,V\subset X\) containing \(P\) and having compact closures in \(X\) with \(\overline{U}\subset V\). Since \(X\in ANR\), there is an open cover \(\omega\) of \(X\) such that any two \(\omega\)-close maps \(f_{1},f_{2}:\overline{U}\to X\) are homotopic. Restricting \(\omega\) on \(\overline{V}\), we find \(\varepsilon>0\) such that any maps \(f_{1},f_{2}:\overline{U}\to X\) with \(\rho(f_{1}(x),f_{2}(x))<\varepsilon\), \(x\in\overline{U}\), are homotopic. Moreover, we can assume \(\varepsilon<\min\{\rho(P,X\backslash U),\rho(\overline{U},X\backslash V)\}\). Next, take a point \(a\) from the boundary \(\operatorname{bd}\operatorname{K}\) of \(K\) in \(X\) and a point \(b\notin K\) with \(\rho(a,b)<\delta\), where \(\delta>0\) is the Effros' number corresponding to \(\varepsilon\) and the point \(a\). Accordingly, there is homeomorphism \(h:X\to X\) such that \(h(a)=b\) and \(\rho(x,h(x))<\varepsilon\) for all \(x\in X\). So, \(h(K)\subset U\) and \(h(\overline{U})\subset V\). Since \(h\) generates an isomorphism \(h^{*}:H^{n}(h(P);G)\to H^{n}(P;G)\), the set \(A=h(K)\) is a carrier for \(j^{n}_{h(P),h(K)}(\beta)\), where \(\beta=(h^{*})^{-1}(\alpha)\). Consider the map \(g=h^{-1}|\overline{U}:\overline{U}\to h^{-1}(\overline{U})\). Observe that \(\rho(g(x),x)<\varepsilon\) for all \(x\in\overline{U}\). Hence, \(g\) and the identity \(\operatorname{id}_{\overline{U}}\) on \(\overline{U}\) are homotopic. In particular, there is a homotopy \(F:g(A)\times[0,1]\to X\) such that \(F(x,0)=x\) and \(F(x,1)=g(x)\), \(x\in g(A)\). Therefore, we can apply the modification of [30, Proposition 2.3] stated above to conclude that \(A\subset g(A)\). Because \(b=h(a)\in A\), the last inclusion implies \(b\in g(A)=K\), a contradiction.
**Remark**.: _Proposition 2.3 remains true without homogeneity of \(X\) provided \(P\) is a closed set contractible in \(X\)_
This is true if \(H^{n}(X;G)=0\) because \(\dim_{G}X\leq n\). If \(H^{n}(X;G)\neq 0\) this is also true. Indeed, since \(\dim_{G}X\leq n\), every \(\alpha\in H^{n}(P;G)\) is extendable to \(\widetilde{\alpha}\in H^{n}(X;G)\). On the other hand \(P\) is contractible in \(X\), so every such \(\widetilde{\alpha}\) is zero.
A compact pair \(A\subset K\) is called a \(k\)_-homological membrane spanned on \(A\)_ for a nontrivial element \(\gamma\in H_{k}(A;G)\) if \(i^{k}_{A,K}(\gamma)=0\) but \(i^{k}_{A,B}(\gamma)\neq 0\) for any proper closed subset \(B\) of \(K\) containing \(A\). Here, \(H_{*}\) denotes the Cech homology and \(i^{k}_{A,K}:H_{k}(A;G)\to H_{k}(K;G)\) is the homomorphism induced by the inclusion \(A\hookrightarrow K\). According to Bing-Borsuk [1], if \(A\subset K\) is a compact pair and \(\gamma\in H_{k}(A;G)\) is a nontrivial element homologous to zero in \(K\), then there is a closed set \(B\subset K\) containing \(A\) such that \(A\subset B\) is a \(k\)-homological membrane for \(\gamma\) spanned on \(A\). Bing-Borsuk [1] considered the Vietoris homology which is isomorphic with the Cech homology in the realm of metric compacta. So, homology membranes with respect to Cech homology always exist in the class of metric compacta.
**Proposition 2.4**.: _Any locally homogeneous \(ANR\)-space with \(\dim_{G}X=n\), where \(G\) is a countable group, has the weak \(n\)-cohomological membrane property._
Proof.: Suppose there exists a compact pair \(A\subset K\) such that \(K\) is contractible in a proper subset of \(X\) and \(K\) is an \((n-1)\)-cohomological membrane for some nontrivial \(\gamma\in H^{n-1}(A;G)\), but \((K\setminus A)\cap\overline{X\setminus K}\neq\varnothing\). Since \(G\) is countable, the homology group \(H_{n-1}(Y;G^{*})\) is isomorphic to \(H^{n-1}(Y;G)^{*}\) for any metric compactum \(Y\), see [16]. Here, \(G^{*}\) and \(H^{n-1}(Y;G)^{*}\) are the character groups of \(G\) and \(H^{n-1}(Y;G)\), considered as discrete groups. Then \(K\) is an \((n-1)\)-homological membrane spanned on \(A\) for some nontrivial \(\beta\in H_{n-1}(A;G^{*})\), see the proof of [29, Proposition 2.1]. Hence, following the proof of [1, Theorem 8.1], we can find a compact set \(P\subset X\) contractible in \(X\) with \(H_{n}(P;G^{*})\neq 0\). So, \(H^{n}(P;G)\neq 0\), which contradicts the remark after Proposition 2.3.
We also need the following properties of cohomology membranes, see Corollary 2.2 and Lemma 2.4 from [29], respectively.
**Lemma 2.5**.: _For any space \(X\) with the weak \(n\)-cohomology membrane property the following conditions hold:_
* _If_ \(K\) _is an_ \((n-1)\)_-cohomology membrane spanned on a set_ \(A\subset K\) _for some_ \(\gamma\in H^{n-1}(A;G)\)_, where_ \(K\) _is a compactum contractible in a proper subset of_ \(X\)_, then_ \(K\setminus A\) _is a connected open subset of_ \(X\)_;_
* _Let_ \(A\subset P\) _be a compact pair such that_ \(P\) _is contractible in a proper subset of_ \(X\) _and there exists a non-zero_ \(\gamma\in H^{n-1}(A;G)\)
not extendable over \(P\). Then \(A\) separates every set \(\Gamma\subset X\) containing \(P\) as a proper subset._
Let \((X,\rho)\) be a metric space. We say that \(\rho\) is _convex_ if for each \(x,y\in X\) there exists an arc \(A\subset X\) with end-points \(x\) and \(y\) such that \(A\) with the restriction of the metric \(\rho\) is isometric to the interval \([0,\rho(x,y)]\) in the real line (where the real line is considered with its usual metric). According to [27], every connected locally connected space admits a convex metric.
**Proposition 2.6**.: _Let \(\{G_{i}\}_{i=1}^{k}\) be a finite family of countable groups and \(X\) be a locally homogeneous \(ANR\)-space with \(\dim_{G_{i}}X=n_{i}\). Then we have:_
1. _Every closed set_ \(P\subset X\) _with_ \(\dim_{G_{i}}P=n_{i}\)_,_ \(i\leq k\)_, has a non-empty interior;_
2. _Every_ \(x\in X\) _has a neighborhood_ \(W\) _such that any connected_ \(U\in\mathcal{B}_{x}\) _with_ \(U=\operatorname{int}\overline{\operatorname{U}}\subset\operatorname{W}\) _satisfies the following for all_ \(i\)_:_ * \(H^{n_{i}}(\overline{U};G_{i})=0\) _and_ \(H^{n_{i}-1}(\operatorname{bd}\overline{U};G_{i})\neq 0\) _contains an element not extendable over_ \(\overline{U}\)_;_ * \(\overline{U}\) _is an_ \((n_{i}-1)\)_-cohomology membrane spanned on_ \(\operatorname{bd}\overline{U}\) _for any nontrivial_ \(\alpha_{i}\in H^{n_{i}-1}(\operatorname{bd}\overline{U};G_{i})\) _not extendable over_ \(\overline{U}\)_._
Proof.: \((i)\) This was established in [29, Corollary 2.3] in case \(X\) is a compact homogeneous \(ANR\). For the covering dimension \(\dim\) and locally homogeneous \(ANR\)'s it was established in [25, Theorem A] (see also [24] for a property stronger than local homogeneity). Suppose \(P\subset X\) is a closed set with \(\dim_{G_{i}}P=n_{i}\) for some \(i\). Then \(P\) is the union of countably many compact sets \(F_{j}\) each contractible in a proper subset of \(X\). By the countable sum theorem for \(\dim_{G_{i}}\), we have \(\dim_{G_{i}}F_{j}=n_{i}\) for at least one \(j\). So, we can assume that \(P\) is a compact set contractible in a proper subset of \(X\). Since \(\dim_{G_{i}}P=n_{i}\), there is a closed set \(A\subset P\) and an element \(\gamma_{i}\in H^{n_{i}-1}(A;G_{i})\) not extendable over \(P\). Then, according to Lemma 2.1(i), there exists a closed set \(K\subset P\) such that \(K\) is an \((n_{i}-1)\)-cohomology membrane for \(\gamma_{i}\) spanned on \(A\). Because \(X\) has the weak \(n_{i}\)-cohomological membrane property, \((K\setminus A)\cap\overline{X\setminus K}=\varnothing\), which implies \(K\setminus A\) is open in \(X\). Finally, the inclusion \(K\setminus A\subset P\) completes the proof.
\((ii)\) Since \(X\) is a countable union of compact sets each contractible in a proper subset of \(X\), as in the previous paragraph, for every \(i\) there exits a compact pair \(A_{i}\subset K_{i}\) such that \(\dim_{G_{i}}K_{i}=n_{i}\), \(K_{i}\) is an \((n_{i}-1)\)-cohomology membrane for some \(\gamma_{i}\in H^{n_{i}-1}(A_{i};G_{i})\) spanned on \(A_{i}\) and \(K_{i}\) is contractible in a proper subset of \(X\). Then each \(K_{i}\setminus A_{i}\) is a connected open set in \(X\), see Lemma 2.5(i). Let \(x_{i}\in K_{i}\setminus A_{i}\) and let
\(W_{i}\in\mathcal{B}_{x_{i}}\) with \(\overline{W}_{i}\subset K_{i}\setminus A_{i}\). Because of the local homogeneity of \(X\), we can assume that all \(W_{i}\) are homeomorphic.
Let \(U_{i}\in\mathcal{B}_{x_{i}}\) be connected with \(U_{i}=\operatorname{int}\overline{\operatorname{U}}_{\operatorname{i}}\subset \operatorname{W}_{\operatorname{i}}\). Such sets \(U_{i}\) exist. Indeed, since \(X\) is locally connected, each its component of connectedness \(X_{c}\) is open, and according to [27], there is a convex metric generating the topology of \(X_{c}\). On the other hand, if \(d\) is a convex metric, then every open ball \(B(x,\delta)=\{y\in X_{c}:d(x,y)<\delta\}\) is connected and \(\operatorname{int}\overline{\operatorname{B}(\operatorname{x},\delta)}= \operatorname{B}(\operatorname{x},\delta)\).
Since each \(\overline{U}_{i}\) is contractible in a proper subset of \(X\), \(H^{n_{i}}(\overline{U}_{i};G_{i})=0\). We claim that \(H^{n_{i}-1}(\operatorname{bd}\overline{U}_{i};G)\neq 0\) and contains an element not extendable over \(\overline{U}_{i}\). Indeed, \(K_{i}\setminus U_{i}\) is a proper closed subset of \(K_{i}\) containing \(A_{i}\). So, \(\gamma_{i}\) can be extended to \(\widetilde{\gamma}_{i}\in H^{n_{i}-1}(K_{i}\setminus U_{i};G_{i})\). Then \(\gamma_{U_{i}}=j_{K_{i}\setminus U_{i},\operatorname{bd}\overline{U}_{i}}^{ n_{i}-1}(\widetilde{\gamma}_{i})\) is a non-zero element of \(H^{n_{i}-1}(\operatorname{bd}\overline{U}_{i};G)\) (otherwise \(\gamma_{i}\) would be extendable over \(K_{i}\)), and \(\gamma_{U_{i}}\) is not extendable over \(\overline{U}_{i}\). Let show that \(\overline{U}_{i}\) is an \((n_{i}-1)\)-cohomology membrane spanned on \(\operatorname{bd}\overline{U}_{i}\) for every \(\alpha_{i}\in H^{i-1}(\operatorname{bd}U_{i};G_{i})\) not extendable over \(\overline{U}_{i}\). By Lemma 2.1, for any such \(\alpha_{i}\) there is an \((n_{i}-1)\)-cohomology membrane \(K_{\alpha_{i}}\subset\overline{U}_{i}\) for \(\alpha_{i}\) spanned on \(\operatorname{bd}\overline{U}_{i}\). Hence, \((K_{\alpha_{i}}\setminus\operatorname{bd}\overline{U}_{i})\cap\overline{X \setminus K_{\alpha_{i}}}=\varnothing\). In particular, \(K_{\alpha_{i}}\setminus\operatorname{bd}\overline{U}_{i}\) is open in \(U_{i}\). Thus, \(K_{\alpha_{i}}=\overline{U}_{i}\), otherwise \(U_{i}\) would be the union of the non-empty disjoint open sets \(U_{i}\setminus K_{\alpha_{i}}\) and \((K_{\alpha_{i}}\setminus\operatorname{bd}\overline{U}_{i})\). Finally, since \(X\) is locally homogeneous, every \(x\in X\) has a neighborhood \(W\) satisfying condition (2).
**Lemma 2.7**.: _Let \(X\) be a locally homogeneous \(ANR\)-space and \(x\in X\). If \(G\) is a countable group and \(H^{n-1}(\operatorname{bd}\overline{U};G)\neq 0\) for all sufficiently small neighborhoods \(U\in\mathcal{B}_{x}\), then \(\dim_{G}X\geq n\)._
Proof.: Suppose \(\dim_{G}X\leq n-1\). Since the interior of \(\operatorname{bd}\overline{U}\) is empty, \(\dim_{G}\operatorname{bd}\overline{U}\leq n-2\) for all \(U\in\mathcal{B}_{x}\). This, according to the definition of cohomological dimension, implies \(H^{n-1}(\operatorname{bd}\overline{U};G)=0\), a contradiction.
## 3. Proof of Theorem 1.1
The next lemma was established in [6, Theorem 8] for locally connected continua. The same proof works for locally connected and connected spaces \(X\) by passing to the one-point compactification of \(X\).
**Lemma 3.1**.: _Let \(X\) be a connected and locally connected space and \(\{K_{\alpha}:\alpha\in\Lambda\}\) be an uncountable collection of disjoint continua such that for each \(\alpha\) the set \(X\setminus K_{\alpha}\) has more than one component. Then there exists \(\alpha_{0}\in\Lambda\) such that \(X\setminus K_{\alpha_{0}}\) has exactly two components._
The proof of Theorem 1.1 follows from Proposition 2.6 and Proposition 3.2 below.
**Proposition 3.2**.: _Let \(X\) be a homogeneous connected \(ANR\)-space with \(\dim_{G}X=n\geq 2\), where \(G\) is a countable group. Then every point \(x\in X\) has a basis \(\mathcal{B}_{x}\) of open connected sets \(U\) each with a compact closure satisfying the following conditions:_
1. \(H^{n-1}(\overline{U};G)=0\) _and_ \(X\setminus\operatorname{bd}\overline{U}\) _has exactly two components;_
2. \(\operatorname{bd}\overline{U}\) _is an_ \((n-1,G)\)_-bubble._
Proof.: By Proposition 2.6, every point \(x\in X\) has a neighborhood \(W_{x}\) with a compact closure such any connected neighborhood \(U=\operatorname{int}\overline{\operatorname{U}}\subset\operatorname{W_{x}}\) of \(x\) satisfies the following condition: \(H^{n}(\overline{U};G)=0\), \(H^{n-1}(\operatorname{bd}\overline{U};G)\) contains elements \(\gamma\) not extendable over \(\overline{U}\) and for each such \(\gamma\) the set \(\overline{U}\) is an \((n-1)\)-cohomology membrane for \(\gamma\) spanned on \(\operatorname{bd}\overline{U}\). We assume also that \(\overline{W}_{x}\) is a connected set contractible in a proper subset of \(X\) and there is \(\alpha_{x}\in H^{n-1}(\operatorname{bd}\overline{W}_{x};G)\) such that \(\overline{W}_{x}\) is an \((n-1)\)-cohomology membrane for \(\alpha_{x}\) spanned on \(\operatorname{bd}\overline{W}_{x}\). We fix \(x\in X\) and let \(\mathcal{B}^{\prime}_{x}\) be the family of all open connected neighborhoods \(U\) of \(x\) such that \(U=\operatorname{int}\overline{\operatorname{U}}\) and \(\overline{U}\) is contractible in \(W_{x}\).
_Claim 1_.: For every \(U\in\mathcal{B}^{\prime}_{x}\) there exists a non-zero \(\gamma_{U}\in H^{n-1}(\operatorname{bd}\overline{U};G)\) such that \(\gamma_{U}\) is extendable over \(\overline{W}_{x}\setminus U\) and \(\overline{U}\) is an \((n-1)\)-cohomology membrane for \(\gamma_{U}\) spanned on \(\operatorname{bd}\overline{U}\).
Indeed, since \(\overline{W}_{x}\) is an \((n-1)\)-cohomology membrane for \(\alpha_{x}\) spanned on \(\operatorname{bd}\overline{W}_{x}\), \(\alpha_{x}\) is not extendable over \(\overline{W}_{x}\), but it is extendable over every closed proper subset of \(\overline{W}_{x}\) containing \(\operatorname{bd}\overline{W}_{x}\). So, \(\alpha_{x}\) is extendable to an element \(\widetilde{\alpha}_{x}\in H^{n-1}(\overline{W}_{x}\setminus U;G)\) and the element \(\gamma_{U}=j_{\overline{W}_{x}\setminus U,\operatorname{bd}\overline{U}}^{n -1}(\widetilde{\alpha}_{x})\in H^{n-1}(\operatorname{bd}\overline{U};G)\) is not extendable over \(\overline{U}\) (otherwise \(\alpha_{x}\) would be extendable over \(\overline{W}_{x}\)), in particular \(\gamma_{U}\neq 0\). Therefore, according to the choice of \(W_{x}\), \(\overline{U}\) is an \((n-1)\)-cohomology membrane for \(\gamma_{U}\) spanned on \(\operatorname{bd}\overline{U}\).
Let \(\mathcal{B}^{\prime\prime}_{x}\) be the family of all \(U\in\mathcal{B}^{\prime}_{x}\) satisfying the following condition: \(\operatorname{bd}\overline{U}\) contains a continuum \(F_{U}\) such that \(X\setminus F_{U}\) has exactly two components and \(F_{U}\) is an \((n-1,G)\)-carrier of \(j_{\operatorname{bd}\overline{U},F_{U}}^{n-1}(\gamma_{U})\).
_Claim 2_.: \(\mathcal{B}^{\prime\prime}_{x}\) is a local base at \(x\).
Since \(X\) is arc connected and locally arc connected, there is a convex metric \(d\) generating the topology of \(X\), see [27]. We fix \(W_{0}\in\mathcal{B}^{\prime}_{x}\) and for every \(\delta>0\) denote by \(B(x,\delta)\) the open ball in \(X\) with a center \(x\) and a radius \(\delta\). There exists \(\varepsilon_{x}>0\) such that \(B(x,\delta)\subset W_{0}\) for all \(\delta\leq\varepsilon_{x}\). We already observed in the proof of Proposition 2.6 that
any \(B(x,\delta)\) is connected and \(\operatorname{int}(\overline{\operatorname{B(x,\delta)}})=\operatorname{B(x,\delta)}\). Moreover, \(\overline{B(x,\delta)}\) is contractible in \(W_{x}\). Hence, all \(U_{\delta}=B(x,\delta)\), \(\delta\leq\varepsilon_{x}\), belong to \(\mathcal{B}_{x}^{\prime}\). Consequently, by Claim 1, for every \(\delta\) there exists a non-zero \(\gamma_{\delta}\in H^{n-1}(\operatorname{bd}\overline{U}_{\delta};G)\) such that \(\overline{U}_{\delta}\) is an \((n-1)\)-cohomology membrane for \(\gamma_{\delta}\) spanned on \(\operatorname{bd}\overline{U}_{\delta}\) and \(\gamma_{\delta}\) is extendable over \(\overline{W}_{x}\setminus U_{\delta}\). By Lemma 2.1(ii), there exists a closed subset \(F_{\delta}\) of \(\operatorname{bd}\overline{U}_{\delta}\) which is a carrier of \(\gamma_{\delta}^{*}=j_{\operatorname{bd}\overline{U}_{\delta},F_{\delta}}^{n- 1}(\gamma_{\delta})\). Since \(n\geq 2\) and \(F_{\delta}\) is a carrier of \(\gamma_{\delta}^{*}\in H^{n-1}(F_{\delta};G)\), \(F_{\delta}\) is a continuum, see [29, Lemma 2.7]. Let us show that the family \(\{F_{\delta}:\delta\leq\varepsilon_{x}\}\) is uncountable. Since the function \(f\colon X\to\mathbb{R}\), \(f(y)=d(x,y)\), is continuous and \(W_{0}\) is connected, \(f(W_{0})\) is an interval containing \([0,\varepsilon_{x}]\) and \(f^{-1}([0,\varepsilon_{x}))=B(x,\varepsilon_{x})\subset W_{0}\). So, \(f^{-1}(\delta)=\operatorname{bd}\overline{U}_{\delta}\neq\varnothing\) for all \(\delta\leq\varepsilon_{x}\). Hence, the family \(\{F_{\delta}:\delta\leq\varepsilon_{x}\}\) is uncountable and consist of disjoint continua. Moreover, \(\gamma_{\delta}^{*}\) is a non-zero element of \(H^{n-1}(F_{\delta};G)\) not extendable over \(\overline{W}_{x}\) because \(F_{U}\) is contractible in \(\overline{W}_{x}\). Thus, by Lemma 2.5(ii), \(F_{\delta}\) separates \(X\). So, each \(X\setminus F_{\delta}\) has at least two components. Then, by Lemma 3.1, there exists \(\delta_{0}\leq\varepsilon_{x}\) such that \(X\setminus F_{\delta_{0}}\) has exactly two components. Therefore, \(U_{\delta_{0}}=B(x,\delta_{0})\in\mathcal{B}_{x}^{\prime\prime}\) and it is contained in \(W_{0}\). This completes the proof of Claim 2.
Now, let \(\widetilde{\mathcal{B}}_{x}\) be the family of all \(U\in\mathcal{B}_{x}^{\prime\prime}\) with \(H^{n-1}(\operatorname{bd}\overline{U};G)\neq 0\) such that both \(U\) and \(X\setminus\overline{U}\) are connected.
_Claim 3_.: \(\widetilde{\mathcal{B}}_{x}\) is a local base at \(x\).
Let \(U_{0}\) be an arbitrary neighborhood of \(x\) such that \(\overline{U}_{0}\) is contractible in \(W_{x}\). We are going to find a member of \(\widetilde{\mathcal{B}}_{x}\) contained in \(U_{0}\). To this end, let \(\varepsilon=d(x,X\setminus U_{0})\). According to Theorem 2.2 there is \(\eta>0\) corresponding to \(\varepsilon/2\) and the point \(x\) (i.e., for every \(y\in X\) with \(d(y,x)<\eta\), there exists a homeomorphism \(h:X\to X\) with \(h(y)=x\) and \(d(h(z),z)<\varepsilon/2\) for all \(z\in X\)). Now, choose a connected neighborhood \(W\) of \(x\) with \(\overline{W}\subset B(x,\varepsilon/2)\) and \(\operatorname{diam}(\overline{\operatorname{W}})<\eta\). Finally, take \(U\in\mathcal{B}_{x}^{\prime\prime}\) such that \(\overline{U}\) is contractible in \(W\). By Claim 2, there exists a continuum \(F_{U}\subset\operatorname{bd}\overline{U}\) such that \(X\setminus F_{U}\) has exactly two components and \(F_{U}\) is an \((n-1,G)\)-carrier for \(\gamma_{U}^{*}=j_{\operatorname{bd}\overline{U},F_{U}}^{n-1}(\gamma_{U})\). If \(F_{U}=\operatorname{bd}\overline{U}\), then \(U\) is the desired member of \(\widetilde{\mathcal{B}}_{x}\). Indeed, since \(X\setminus bd\overline{U}=U\cup X\setminus\overline{U}\) with \(U\cap(X\setminus\overline{U})=\varnothing\) and \(U\) is connected, then \(X\setminus\overline{U}\) should be also connected (recall that \(X\setminus F_{U}\) has exactly two components).
Suppose that \(F_{U}\) is a proper subset of \(\operatorname{bd}\overline{U}\). Because \(F_{U}\) (as a subset of \(\overline{U}\)) is contractible in a compact set \(\Gamma\subset W\), \(\gamma_{U}^{*}\) is not extendable over \(\Gamma\). Thus, we can apply Lemma 2.5(ii) to conclude that \(F_{U}\) separates \(\overline{W}\). So, \(\overline{W}\setminus F_{U}=V_{1}\cup V_{2}\) for some open, non-empty disjoint subsets \(V_{1},V_{2}\subset\overline{W}\). Since \(U\) is a connected subset of \(\overline{W}\setminus F_{U}\), \(U\) is contained
in one of the sets \(V_{1},V_{2}\), say \(U\subset V_{1}\). Hence, \(F_{U}\cup\overline{V}_{2}\subset\overline{W}_{x}\setminus U\). Since \(\gamma_{U}\) is extendable over \(\overline{W}_{x}\setminus U\) (see Claim 1), \(\gamma_{U}^{*}\) is also extendable over \(\overline{W}_{x}\setminus U\), in particular \(\gamma_{U}^{*}\) is extendable over \(F_{U}\cup\overline{V}_{2}\). On the other hand, \(\gamma_{U}^{*}\) is not extendable over \(\overline{W}\) because \(F_{U}\) is contractible in \(\overline{W}\). The last fact, together with the equality \((F_{U}\cup\overline{V}_{1})\cap(F_{U}\cup\overline{V}_{2})=F_{U}\), yields that \(\gamma_{U}^{*}\) is not extendable over \(F_{U}\cup\overline{V}_{1}\). Let \(\beta=j_{F_{U},F^{\prime}}^{n-1}(\gamma_{U}^{*})\), where \(F^{\prime}=\overline{V}_{1}\cap F_{U}\) (observe that \(F^{\prime}\neq\varnothing\) because \(\overline{W}\) is connected). If \(F^{\prime}\) is a proper subset of \(F_{U}\), then \(\beta=0\) since \(F_{U}\) is a carrier for \(\gamma_{U}^{*}\). So, \(\beta\) would be extendable over \(\overline{V}_{1}\), which implies \(\gamma_{U}^{*}\) is extendable over \(F_{U}\cup\overline{V}_{1}\), a contradiction. Therefore, \(F^{\prime}=F_{U}\subset\overline{V}_{1}\) and \(\gamma_{U}^{*}\) is not extendable over \(\overline{V}_{1}\). Consequently, there exists an \((n-1)\)-cohomology membrane \(P_{\beta}\subset\overline{V}_{1}\) for \(\gamma_{U}^{*}\) spanned on \(F_{U}\). By Lemma 2.5(i), \(V=P_{\beta}\setminus F_{U}\) is a connected open set in \(X\) whose boundary is the set \(F^{\prime\prime}=\overline{X\setminus P_{\beta}\cap P_{\beta}\setminus F_{U} }\subset F_{U}\). As above, using that \(\gamma_{U}^{*}\) is not extendable over \(P_{\beta}\) and \(j_{F_{U},Q}^{n-1}(\gamma_{U}^{*})=0\) for any proper closed subset \(Q\subset F_{U}\), we can show that \(F^{\prime\prime}=F_{U}\) and \(\operatorname{bd}\overline{V}=F_{U}\). Summarizing the properties of \(V\), we have that \(\overline{V}\) is contractible in \(W_{x}\) (because so is \(\overline{U}_{0}\)), \(V=\operatorname{int}(\overline{V})\) (because \(F_{U}=\operatorname{bd}\overline{V}\)) and \(V\) is connected. Moreover, since \(X\setminus F_{U}\) is the union of the open disjoint non-empty sets \(V\) and \(X\setminus P_{\beta}\) such that \(V\) is connected and \(X\setminus F_{U}\) has exactly two components, \(X\setminus\overline{V}\) is also connected. Finally, because \(F_{U}\) is an \((n-1,G)\)-carrier for the nontrivial \(\gamma_{U}^{*}\), \(H^{n-1}(\operatorname{bd}\overline{V};G)\neq 0\). Thus, if \(V\) contains \(x\), then \(V\) is a member of \(\widetilde{\mathcal{B}}_{x}\).
If \(V\) does not contain \(x\), we take a point \(y\in V\) with \(d(x,y)<\eta\). This is possible because \(V\subset\overline{W}\) and \(\operatorname{diam}(\overline{W})<\eta\). So, according to the choice of \(\eta\), there is a homeomorphism \(h\) on \(X\) such that \(h(y)=x\) and \(d(z,h(z))<\varepsilon/2\) for all \(z\in X\). Then \(h(V)\subset U_{0}\) (this inclusion follows from the choice of \(\varepsilon\) and the fact that \(h\) is \((\varepsilon/2)\)-close to the identity on \(X\)). So, \(\overline{h(V)}\) is contractible in \(W_{x}\). Since the remaining properties from the definition of \(\widetilde{\mathcal{B}}_{x}\) are invariant under homeomorphisms, \(h(V)\) is the required member of \(\widetilde{\mathcal{B}}_{x}\), which provides the proof of Claim 3.
The next claim completes the proof of Proposition 3.2.
_Claim 4_.: \(H^{n-1}(\overline{U};G)=0\) and \(\operatorname{bd}\overline{U}\) is an \((n-1,G)\)-bubble for every \(U\in\widetilde{\mathcal{B}}_{x}\).
Because each \(\overline{U}\), \(U\in\widetilde{\mathcal{B}}_{x}\), is contractible in \(\overline{W}_{x}\), any nontrivial \(\gamma\in H^{n-1}(\overline{U};G)\) cannot be extendable over \(\overline{W}_{x}\). Since \(\overline{W}_{x}\) is contractible in a proper subset of \(X\), by Lemma 2.5(ii), \(\overline{U}\) would separate \(X\) provided \(H^{n-1}(\overline{U};G)\neq 0\). On the other hand, each \(X\setminus\overline{U}\) is connected. Therefore, \(H^{n-1}(\overline{U};G)=0\) for all \(U\in\widetilde{\mathcal{B}}_{x}\). Suppose
there exists a proper closed subset \(F\subset\operatorname{bd}\overline{U}\) and a nontrivial element \(\alpha\in H^{n-1}(F;G)\). Since \(H^{n-1}(\overline{U};G)=0\), \(\alpha\) is not extendable over \(\overline{U}\). Hence, there is an \((n-1)\)-cohomology membrane \(K_{\alpha}\subset\overline{U}\) for \(\alpha\) spanned on \(F\). Therefore, \((K_{\alpha}\setminus F)\cap\overline{X\setminus K_{\alpha}}=\varnothing\). In particular, \(K_{\alpha}\setminus F\) is open in \(\overline{U}\setminus F\). On the other hand, \(K_{\alpha}\setminus F\) is also closed in \(\overline{U}\setminus F\). Because \(\overline{U}\setminus F\) is connected (recall that \(U\) is a dense connected subset of \(\overline{U}\setminus F\)), we obtain \(K_{\alpha}=\overline{U}\). Finally, observe that any point from \(\operatorname{bd}\overline{U}\setminus F\) belongs to \((K_{\alpha}\setminus F)\cap\overline{X\setminus K_{\alpha}}\), a contradiction. Therefore, \(\operatorname{bd}\overline{U}\) is an \((n-1,G)\)-bubble.
## 4. Proof of Theorem 1.2 and Corollary 1.3
_Proof of Theorem 1.2._\((i)\) Suppose \(K\) is an \((n-1)\)-cohomology membrane spanned on \(A\) for some \(\gamma\in H^{n-1}(A;G)\) and there is a point \(a\in(K\setminus A)\cap\overline{X\setminus K}\). Take \(V\in\mathcal{B}_{a}\) with \(\overline{V}\cap A=\varnothing\), where \(\mathcal{B}_{a}\) is a local base at \(a\) satisfying the hypotheses of Proposition 2.6(2). Since \(K\setminus V\) is a proper subset of \(K\) containing \(A\), \(\gamma\) is extendable to \(\gamma^{*}\in H^{n-1}(K\setminus V;G)\). Then \(\alpha_{V}=j_{K\setminus V,K\cap\operatorname{bd}\overline{V}}^{n-1}(\gamma^{*})\) is not extendable over \(K\cap\overline{V}\) (otherwise \(\gamma\) would be extendable over \(K\)). In particular, \(\alpha_{V}\) is a non-zero element of \(H^{n-1}(K\cap\operatorname{bd}\overline{V};G)\). Because \(\operatorname{bd}\overline{V}\) has an empty interior in \(X\), \(\dim_{G}\operatorname{bd}\overline{V}\leq n-1\), see Proposition 2.6(1). Consequently, \(\alpha_{V}\) can be extended to \(\widetilde{\alpha}_{V}\in H^{n-1}(\operatorname{bd}\overline{V};G)\). Moreover, since \(\alpha_{V}\) is not extendable over \(K\cap\overline{V}\), \(\widetilde{\alpha}_{V}\) is not extendable over \(\overline{V}\). Finally, using that \(\overline{V}\) is an \((n-1)\)-cohomology membrane spanned on \(\operatorname{bd}\overline{V}\) for \(\widetilde{\alpha}_{V}\), and that \(a\in(K\setminus A)\cap\overline{X\setminus K}\) implies that \(\operatorname{bd}\overline{V}\cup(K\cap\overline{V})\) is a proper closed subset of \(\overline{V}\) containing \(\operatorname{bd}\overline{V}\), we can find \(\beta\in H^{n-1}(\operatorname{bd}\overline{V}\cup(K\cap\overline{V});G)\) extending \(\widetilde{\alpha}_{V}\). Thus, \(\alpha_{V}\) is extendable over \(K\cap\overline{V}\), a contradiction. Therefore, \(X\) has the \(n\)-cohomology membrane property.
\((ii)\) We need the following result [22, Theorem 1]: For every metric compactum \(K\) with \(\dim_{G}K=n\) there exist a closed set \(Y\subset K\) and a dense set \(D\subset Y\) such that \(\dim_{G}Y=n\) and \(K\) has an \(n\)-dimensional \(G\)-obstruction at every \(y\in D\). In our situation we take a compact set \(K\subset X\) with \(\dim_{G}K=n\) and find corresponding sets \(D\subset Y\subset K\). Since \(\dim_{G}Y=n\), \(\operatorname{intY}\neq\varnothing\) (Proposition 2.6) and there is \(y\in D\cap\operatorname{intY}\). Because \(K\) has an \(n\)-dimensional \(G\)-obstruction at \(y\), we can find a neighborhood \(W\subset K\) of \(y\) such that for any open in \(K\) neighborhood \(V\) of \(y\) contained in \(W\) the homomorphism \(H^{n}(K,K\setminus V;G)\to H^{n}(K,K\setminus W;G)\) is not trivial. This implies that for every open in \(K\) neighborhoods \(U,V\) of \(y\) with \(\overline{U}\subset V\) the homomorphism \(H^{n}(K,K\setminus U;G)\to H^{n}(K,K\setminus V;G)\) is also nontrivial. Since \(y\in\operatorname{intY}\), we can suppose that \(W\in\mathcal{B}_{y}\) such that \(\overline{W}\) is contractible in
\(X\) and satisfying the hypotheses of Proposition 2.6(2). Then, by the excision axiom, for every \(U,V\in\mathcal{B}_{y}\) with \(\overline{U}\subset V\subset\overline{V}\subset W\), the groups \(H^{n}(K,K\setminus U;G)\) and \(H^{n}(K,K\setminus V;G)\) are isomorphic to \(H^{n}(X,X\setminus U;G)\) and \(H^{n}(X,X\setminus V;G)\), respectively. So, \(j_{U,V}:H^{n}(X,X\setminus U;G)\to H^{n}(X,X\setminus V;G)\) is a nontrivial homomorphism for all \(U,V\in\mathcal{B}_{y}\) with \(\overline{U}\subset V\subset\overline{V}\subset W\). Finally, because \(X\) is locally homogeneous, it has an \(n\)-dimensional \(G\)-obstruction at every \(x\in X\).
To prove the second half of condition \((ii)\), let \(U,V\in\mathcal{B}_{x}\) with \(\overline{U}\subset V\subset\overline{V}\subset W\).
_Claim 5_.: The homomorphism \(j_{\overline{W}\setminus U,\overline{W}\setminus V}^{n-1}:H^{n-1}(\overline{ W}\setminus U;G)\to H^{n-1}(\overline{W}\setminus V;G)\) is surjective and \(H^{n-1}(\overline{W}\setminus V;G)\neq 0\).
Indeed, consider the Mayer-Vietoris exact sequence below, where the coefficient group \(G\) is suppressed,
\[H^{n-1}(\overline{W}\backslash U)\ \xrightarrow{\varphi}\ H^{n-1}(\overline{V} \backslash U)\oplus H^{n-1}(\overline{W}\backslash V)\ \xrightarrow{\psi}\ H^{n-1}(\operatorname{bd}\overline{V})...\]
The maps \(\varphi\) and \(\psi\) are defined by \(\varphi(\gamma)=(j_{\overline{W}\setminus U,\overline{V}\setminus U}^{n-1}( \gamma),j_{\overline{W}\setminus U,\overline{W}\setminus V}^{n-1}(\gamma))\) and \(\psi((\beta,\alpha))=j_{\overline{V}\setminus U,\operatorname{bd}\overline{ V}}^{n-1}(\beta)-j_{\overline{W}\setminus U,\operatorname{bd}\overline{V}}^{n-1}(\alpha)\). For every \(\alpha\in H^{n-1}(\overline{W}\setminus V;G)\) the element \(\beta_{\alpha}^{\prime}=j_{\overline{W}\setminus V,\operatorname{bd} \overline{V}}^{n-1}(\alpha)\in H^{n-1}(\operatorname{bd}\overline{V}; \operatorname{G})\) is extendable to an element \(\beta_{\alpha}\in H^{n-1}(\overline{V}\setminus U;G)\). Indeed, there are two possibilities: either \(\beta_{\alpha}^{\prime}\) is extendable over \(\overline{V}\) or it is not extendable over \(\overline{V}\). The first case obviously implies that \(\beta_{\alpha}^{\prime}\) is extendable over \(\overline{V}\setminus U\). In the second case, by Proposition 2.6(2), \(\overline{V}\) is an \((n-1)\)-cohomological membrane spanned on \(\operatorname{bd}\overline{V}\) for \(\beta_{\alpha}^{\prime}\). Then, since \(\overline{V}\setminus U\) is a proper closed subset of \(\overline{V}\), \(\beta_{\alpha}^{\prime}\) is extendable over \(\overline{V}\backslash U\). Hence, \(\psi(\beta_{\alpha},\alpha)=0\) for any \(\alpha\in H^{n-1}(\overline{W}\setminus V;G)\). Consequently, there is \(\gamma_{\alpha}\in H^{n-1}(\overline{W}\backslash U;G)\) with \(\varphi(\gamma_{\alpha})=(\beta_{\alpha},\alpha)\). In particular, \(j_{\overline{W}\setminus U,\overline{W}\setminus V}^{n-1}(\gamma_{\alpha})=\alpha\), which shows the surjectivity of \(j_{\overline{W}\setminus U,\overline{W}\setminus V}^{n-1}\). The nontriviality of \(H^{n-1}(\overline{W}\setminus V;G)\) follows from the Mayer-Vietoris exact sequence
\[\to H^{n-1}(\overline{V};G)\oplus H^{n-1}(\overline{W}\backslash V;G)\to H^{n -1}(\operatorname{bd}\overline{V};\operatorname{G})\to\operatorname{H}^{n}( \overline{W};\operatorname{G})\to\]
Indeed, \(\overline{W}\) being contractible in \(X\) implies that \(H^{n}(\overline{W};G)=0\) (see the remark after Proposition 2.3). Hence, the homomorphism \(j_{\overline{V},\operatorname{bd}\overline{V}}^{n-1}\) would be surjective provided \(H^{n-1}(\overline{W}\backslash V;G)=0\). This would imply that every non-trivial element of \(H^{n-1}(\operatorname{bd}\overline{V};\operatorname{G})\) is extendable over \(\overline{V}\), which contradicts Proposition 2.6(2).
To complete the proof, consider the commutative diagram whose rows are parts of exact sequences
\[\begin{CD}H^{n-1}(\overline{W}\backslash U;G)@>{\delta_{U}}>{}>H^{n}(\overline{W},\overline{W}\backslash U;G)@>{i_{U}}>{}>H^{n}(\overline{W};G)\\ @V{}V{j_{\overline{W}\backslash U,\overline{W}\backslash V}^{n-1}}V@V{}V{j_{U,V}^{ \prime}}V@V{}V{\operatorname{id}}V\\ H^{n-1}(\overline{W}\backslash V;G)@>{\delta_{V}}>{}>H^{n}(\overline{W}, \overline{W}\backslash V;G)@>{i_{V}}>{}>H^{n}(\overline{W};G).\end{CD}\]
Since \(H^{n}(\overline{W};G)=0\), both \(\delta_{U}\) and \(\delta_{V}\) are surjective. This, combined with the surjectivity of \(j_{\overline{W}\backslash U,\overline{W}\backslash V}^{n-1}\) and nontriviality of \(H^{n-1}(\overline{W}\setminus V;G)\), implies that \(j_{U,V}^{\prime}\) is also surjective. Finally, by the excision axiom the groups \(H^{n}(\overline{W},\overline{W}\backslash U;G)\) and \(H^{n}(\overline{W},\overline{W}\backslash V;G)\) are isomorphic to \(H^{n}(X,X\backslash U;G)\) and \(H^{n}(X,X\backslash V;G)\), respectively. Therefore, the homomorphism \(j_{U,V}^{n}:H^{n}(X,X\backslash U;G)\to H^{n}(X,X\backslash V;G)\) is surjective.
_Proof of Corollary 1.3._ The conclusion of this corollary implies the well-known invariance of domains property, which was established in [23] and [25], respectively, for compact homogeneous and locally homogeneous \(ANR\)-spaces \(X\) with \(\dim X<\infty\). For homogeneous \(ANR\)-compacta \(X\) with \(\dim_{G}X<\infty\), where \(G\) is a countable principal ideal domain, it was established in [29]. The arguments from [29] also work in our situation.
## 5. Proof of Theorem 1.4.
\((i)\) It suffices to show that the one-point compactification \(bX=X\cup\{b\}\) of \(X\) is dimensionally full-valued. To this end, we use the following result of Dranishnikov [7, Theorem 12.3-12.4]: If \(Y\) is a finite-dimensional \(ANR\)-compactum, then \(\dim_{\mathbb{Z}_{(p)}}Y=\dim_{\mathbb{Z}_{p}}Y\) for all prime \(p\). Moreover, \(\dim_{\mathbb{Q}}Y\leq\dim_{G}Y\) for any group \(G\neq 0\) and there exists a prime number \(p\) with \(\dim Y=\dim_{\mathbb{Z}_{(p)}}Y=\dim_{\mathbb{Z}_{p}}Y\). The proof presented in [7] works also when \(Y\) is the one-point compactification of an \(ANR\)-space. Here \(\mathbb{Z}_{p}=\mathbb{Z}/p\mathbb{Z}\) is the cyclic group and \(\mathbb{Z}_{(p)}=\{m/n:n\text{ is not divisible by }p\}\subset\mathbb{Q}\), where \(\mathbb{Q}\) is the field of rational numbers. We also consider the quotient group \(\mathbb{Z}_{p^{\infty}}=\mathbb{Q}/\mathbb{Z}_{(p)}\). It is well known [22] that the so called Bockstein basis consists of the groups \(\sigma=\{\mathbb{Q},\mathbb{Z}_{p},\mathbb{Z}_{(p)},\mathbb{Z}_{p^{\infty}}:p \in\mathcal{P}\}\), \(\mathcal{P}\) is the set of all primes. For any group (not necessarily countable) there exists a collection \(\sigma(G)\subset\sigma\) such that \(\dim_{G}X=\sup\{\dim_{H}X:H\in\sigma(G)\}\) for any space \(X\).
Let \(X\) be a locally homogeneous \(ANR\)-space with \(\dim X=n\). According to Dranishnikov's result mentioned above, there exists \(p\in\mathcal{P}\) with \(\dim_{\mathbb{Z}_{(p)}}bX=\dim_{\mathbb{Z}_{p}}bX=n\). If \(\dim_{\mathbb{Z}_{p^{\infty}}}bX=n\), we are
done. Indeed, according to [7, Lemma 2.6], \(bX\) is \(p\)-regular, i.e.
\[\dim_{\mathbb{Z}_{(p)}}bX=\dim_{\mathbb{Z}_{p^{\infty}}}bX=\dim_{\mathbb{Z}_{p}}bX=\dim_{\mathbb{Q}}bX=n.\]
Then, applying again Dranishnikov's result [7, Theorem 12.3], we obtain \(\dim_{\mathbb{Q}}bX\leq\dim_{G}bX\leq n\) for any group \(G\neq 0\). Hence, \(\dim_{G}bX=\dim bX=n\) for all nontrivial groups \(G\), and by [21, Theorem 11], \(bX\) is dimensionally full-valued. Therefore, the next claim completes the proof.
_Claim 6_.: If \(\dim_{\mathbb{Z}_{(p)}}bX=\dim_{\mathbb{Z}_{p}}bX=n\) for some prime number \(p\), then \(\dim_{\mathbb{Z}_{p^{\infty}}}bX=n\).
Suppose \(\dim_{\mathbb{Z}_{p^{\infty}}}bX\leq n-1\). According to the Bockstein inequalities (see, for example, [7] or [21]), we have \(\dim_{\mathbb{Z}_{p}}bX\leq\dim_{\mathbb{Z}_{p^{\infty}}}bX+1\), which in our case implies \(\dim_{\mathbb{Z}_{p^{\infty}}}bX=\dim_{\mathbb{Z}_{p^{\infty}}}X=n-1\). Since \(\dim_{\mathbb{Z}_{p}}(bX)^{2}=2\dim_{\mathbb{Z}_{p}}X=2n\), using the following equality (see [21])
\[\dim_{\mathbb{Z}_{p^{\infty}}}(bX)^{2}=\max\{2\dim_{\mathbb{Z}_{p^{\infty}}}bX,\dim_{\mathbb{Z}_{p}}(bX)^{2}-1\},\]
we obtain \(\dim_{\mathbb{Z}_{p^{\infty}}}X^{2}=\dim_{\mathbb{Z}_{p^{\infty}}}(bX)^{2}=2n-1\). Then, by Proposition 2.6(2), for every \((x,y)\in X^{2}\) there are neighborhoods \(U_{x}\) and \(V_{y}\) of \(x\) and \(y\) with compact closures such that \(W=U\times V\subset U_{x}\times V_{y}\) yields \(H^{2n-2}(\operatorname{bd}\overline{W};\mathbb{Z}_{p^{\infty}})\neq 0\) for any \(W\in\mathcal{B}_{(x,y)}\). Since \(\operatorname{bd}\overline{W}=(\overline{U}\times\operatorname{bd}\overline{V})\cup(\operatorname{bd}\overline{U}\times\overline{V})\), we have the Mayer-Vietoris exact sequence (in all exact sequences below the coefficient groups \(\mathbb{Z}_{p^{\infty}}\) are suppressed)
\[H^{2n-3}(\Gamma_{U}\times\Gamma_{V})\to H^{2n-2}(\Gamma_{W})\to H^{2n-2}( \overline{U}\times\Gamma_{V})\oplus H^{2n-2}(\Gamma_{U}\times\overline{V}) \to H^{2n-2}(\operatorname{bd}\overline{W};\mathbb{Z}_{p^{\infty}})\neq 0\]
Since \(\dim_{\mathbb{Z}_{p^{\infty}}}bX=n-1\), the exact sequence above yields the triviality of
the group \(H^{2n-2}(\Gamma_{W};\mathbb{Z}_{p^{\infty}})\), a contradiction. Therefore, \(\dim_{\mathbb{Z}_{p^{\infty}}}bX=n\).
\((ii)\) To prove the second half of Theorem 1.4, denote by \(\widehat{H}_{*}\) the exact homology developed in [26] for locally compact spaces. The homological dimension \(h\dim_{G}Y\) of a space \(Y\) is the largest number \(n\) such that \(\widehat{H}_{n}(Y,\Phi;G)\neq 0\) for some closed set \(\Phi\subset Y\). According to [15] and [26], \(h\dim_{G}Y\) is the greatest \(m\) such that the local homology group \(\widehat{H}_{m}(Y,Y\setminus y;G)=\varinjlim_{y\in U}\widehat{H}_{m}(Y,Y \setminus U;G)\) is not trivial for some \(y\in Y\). Moreover, for any field \(F\) we have \(h\dim_{F}Y=\dim_{F}Y\), see [15]. Therefore, since \(\dim_{\mathbb{Q}}X=\dim X=n\), we have \(h\dim_{\mathbb{Q}}X=n\) and \(\widehat{H}_{n}(X,X\setminus x;\mathbb{Q})\neq 0\) for all \(x\in X\). This means that every \(x\in X\) has a neighborhood \(W\) such that \(\widehat{H}_{n}(X,X\setminus U;\mathbb{Q})\neq 0\) for all open sets \(U\subset W\) containing \(x\). We assume that \(W\) satisfies conditions \((1)-(3)\) from Theorem 1.1 such that \(\overline{W}\) is contractible in a proper set in \(X\) and \(H^{n-1}(\operatorname{bd}\overline{U};\mathbb{Z})\neq 0\) for all neighborhoods \(U\) of \(x\) with \(U\subset W\), see Proposition 2.6(ii). Let \(V\) be a neighborhood of \(x\) with \(\overline{V}\subset W\).
_Claim 7_.: The pair \(\overline{W}\backslash V\subset\overline{W}\) is an \((n-1)\)-homology membrane spanned on \(\overline{W}\backslash V\) for any nontrivial \(\gamma\in H_{n-1}(\overline{W}\backslash V;\mathbb{Q})\).
Since \(\widehat{H}_{n}(X,X\backslash V;\mathbb{Q})\neq 0\), by the excision axiom \(\widehat{H}_{n}(\overline{W},\overline{W}\backslash V;\mathbb{Q})\neq 0\). Consider the exact sequences (see [26] for the existence of such sequences)
\[\to\widehat{H}_{n}(\overline{W};\mathbb{Q})\to\widehat{H}_{n}(\overline{W}, \overline{W}\backslash V;\mathbb{Q})\to\widehat{H}_{n-1}(\overline{W} \backslash V;\mathbb{Q})\to\widehat{H}_{n-1}(\overline{W};\mathbb{Q})\]
and
\[0\to\operatorname{Ext}(H^{n+1}(\overline{W};\mathbb{Z}),\mathbb{Q})\to \widehat{H}_{n}(\overline{W};\mathbb{Q})\to\operatorname{Hom}(H^{n}(\overline {W};\mathbb{Z}),\mathbb{Q})\to 0\]
\[0\to\operatorname{Ext}(H^{n}(\overline{W};\mathbb{Z}),\mathbb{Q})\to\widehat{ H}_{n-1}(\overline{W};\mathbb{Q})\to\operatorname{Hom}(H^{n-1}(\overline{W}; \mathbb{Z}),\mathbb{Q})\to 0.\]
Since \(H^{n+1}(\overline{W};\mathbb{Z})=H^{n}(\overline{W};\mathbb{Z})=H^{n-1}( \overline{W};\mathbb{Z})=0\), both \(\widehat{H}_{n}(\overline{W};\mathbb{Q})\) and \(\widehat{H}_{n-1}(\overline{W};\mathbb{Q})\) are trivial. Therefore, the group \(\widehat{H}_{n-1}(\overline{W}\backslash V;\mathbb{Q})\) is nontrivial. Moreover, the exact sequence
\[\operatorname{Ext}(H^{n}(\overline{W}\backslash V;\mathbb{Z}),\mathbb{Q})\to \widehat{H}_{n-1}(\overline{W}\backslash V;\mathbb{Q})\to\operatorname{Hom}(H^ {n-1}(\overline{W}\backslash V;\mathbb{Z}),\mathbb{Q})\to 0\]
shows that \(\widehat{H}_{n-1}(\overline{W}\backslash V;\mathbb{Q})\) is isomorphic to \(\operatorname{Hom}(H^{n-1}(\overline{W}\backslash V;\mathbb{Z}),\mathbb{Q})\) because \(H^{n}(\overline{W}\backslash V;\mathbb{Z})=0\) according to the remark after Proposition 2.3.
To complete the proof of Claim 7, we need to show that if \(B\) is a proper closed subset of \(\overline{W}\) containing \(\overline{W}\backslash V\), then \(i_{\overline{W}\backslash V,B}^{n-1}(\gamma)\neq 0\) for every nontrivial \(\gamma\in H_{n-1}(\overline{W}\backslash V;\mathbb{Q})\). To this end, consider the exact
sequence
\[0\to\operatorname{Ext}(H^{n}(B;\mathbb{Z}),\mathbb{Q})\to\widehat{H}_{n-1}(B; \mathbb{Q})\to\operatorname{Hom}(H^{n-1}(B;\mathbb{Z}),\mathbb{Q})\to 0.\]
Since \(B\) is contractible in \(X\) (as a subset of \(W\)), \(H^{n}(B;\mathbb{Z})=0\). Hence, \(\widehat{H}_{n-1}(B;\mathbb{Q})\) is isomorphic to \(\operatorname{Hom}(H^{n-1}(B;\mathbb{Z}),\mathbb{Q})\). Because \(B=\overline{W}\backslash\Gamma\) for some open set \(\Gamma\subset V\), passing to a smaller subset of \(\Gamma\), we can assume that \(\Gamma\) is a neighborhood of some \(y\in V\) such that \(\overline{\Gamma}\subset V\) and \(\Gamma\) satisfies conditions \((1)-(3)\) from Theorem 1.1. Then, the arguments from the proof of Claim 5 show that \(H^{n-1}(B;\mathbb{Z})=H^{n-1}(\overline{W}\backslash\Gamma;\mathbb{Z})\) is not trivial and there is a surjective homomorphism \(H^{n-1}(B;\mathbb{Z})\to H^{n-1}(\overline{W}\backslash V;\mathbb{Z})\). Therefore, there exists an injective homomorphism from \(\widehat{H}_{n-1}(\overline{W}\backslash V;\mathbb{Q})\) into \(\widehat{H}_{n-1}(B;\mathbb{Q})\). Because we already saw that \(\widehat{H}_{n-1}(\overline{W};\mathbb{Q})=0\), this shows that \(\overline{W}\backslash V\subset\overline{W}\) is an \((n-1)\)-homology membrane spanned on \(\overline{W}\backslash V\) for any nontrivial \(\gamma\in\widehat{H}_{n-1}(\overline{W}\backslash V;\mathbb{Q})\), but with respect to the exact homology \(\widehat{H}_{*}\). To complete the proof of Claim 7, we observe that for every pair \(A\subset Y\) of a space \(Y\) and its closed set \(A\), any integer \(k\) and a group \(G\) there is an epimorphism \(T_{Y,A}^{k}:\widehat{H}_{k}(Y,A;G)\to H_{k}(Y,A;G)\), which is an isomorphism when \(G\) is a field, see [26, Theorem 4].
Suppose \(U\) is a neighborhood of \(x\) with \(\overline{U}\subset V\). Since the pair \(\overline{W}\backslash V\subset\overline{W}\) is an \((n-1)\)-homology membrane spanned on \(\overline{W}\backslash V\) for any nontrivial \(\gamma\in H_{n-1}(\overline{W}\backslash V;\mathbb{Q})\), according to [1], \(H_{n-1}(\operatorname{bd}\overline{U};\mathbb{Q})\neq 0\) and \(\overline{U}\) is an \((n-1)\)-homology membrane spanned on \(\operatorname{bd}\overline{U}\) for some nontrivial \(\gamma^{\prime}\in H_{n-1}(\operatorname{bd}\overline{U};\mathbb{Q})\). Most important for us is the non-triviality of \(H_{n-1}(\operatorname{bd}\overline{U};\mathbb{Q})\) because it implies also the nontriviality of \(H_{n-1}(\operatorname{bd}\overline{U};\mathbb{Z})\), see [28, Proposition 4.5]. Finally, since \(\dim\operatorname{bd}\overline{U}=n-1\), [18, Corollary] shows that \(\operatorname{bd}\overline{U}\) is dimensionally full-valued. \(\Box\)
**Acknowledgments.** The author would like to express his gratitude to A. Dranishnikov, K. Kawamura, A. Koyama and E. Tymchatyn for their helpful comments.
|
2307.05499 | A Natural Lane Changing Decision Model For Mixed Traffic Flow Based On
Extreme Value Theory | With the high frequency of highway accidents,studying how to use connected
automated vehicle (CAV) to improve traffic efficiency and safety will become an
important issue. In order to investigate how CAV can use the connected
information for decision making, this study proposed a natural lane changing
decision model for CAV to adapt the mixed traffic flow based on extreme value
theory. Firstly, on the basis of the mixed vehicle behavior analysis, the
acceleration, deceleration, and randomization rules of the cellular automata
model of mixed traffic flow in two lanes are developed. Secondly, the maximum
value of the CAV's lane-change probability at each distance is modeled by an
extreme value distribution. Finally, a numerical simulation is conducted to
analyze the trajectory-velocity diagram of mixed traffic flow, average travel
time and average speed under different penetration rates of CAV. The result
shows that our model can avoid the traffic risk well and significantly improve
traffic efficiency and safety. | Jiali Peng, Wei Shangguan, Linguo Chai, Rui Luo, Ke Gao | 2023-06-25T08:20:48Z | http://arxiv.org/abs/2307.05499v3 | # A Natural Lane Changing Decision Model For Mixed Traffic Flow Based On Extreme Value Theory
###### Abstract
With the high frequency of highway accidents, studying how to use connected automated vehicles (CAV) to improve traffic efficiency and safety is becoming an important issue. In order to investigate how CAV can use connected information for decision making, this study proposes a natural lane changing decision model for CAV to adapt to the mixed traffic flow based on extreme value theory. Firstly, on the basis of the mixed vehicle behavior analysis, the acceleration, deceleration, and randomization rules of the cellular automata model of mixed traffic flow in two lanes are developed. Secondly, the maximum value of the CAV's lane-change probability at each distance is modeled by an extreme value distribution. Finally, a numerical simulation is conducted to analyze the trajectory-velocity diagram of mixed traffic flow, the average travel time, and the average speed under different penetration rates of CAV. The results show that our model can avoid traffic risk well and significantly improve traffic efficiency and safety.
Lane changing decision; Mixed traffic; Traffic accident; Extreme value theory
## I Introduction
### _Overview_
The last decade has witnessed the evolution and advancement of new technologies for Intelligent Transportation Systems (ITS) [1]. Most of these technologies are deployed as part of information technology systems aiming to improve road safety, driver comfort, and transport efficiency, and they also lead to secondary benefits at the environmental level as well as in energy management [2]. Among them, the emergence of the connected and automated vehicle (CAV) has given a unique opportunity to improve mobility through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications [3]. CAV can collect environmental information about local traffic, weather and infrastructure using different technologies such as cameras, lasers, sensors, radar, etc. [4]. The collected information is then shared between individual groups of vehicles through V2V or V2I communications to facilitate efficient driving. On the other hand, the same collected data may be processed by roadside units (RSU) and communicated to different CAV either directly or via a public network to improve traffic management [5]. However, relevant reports show that the penetration rate of CAV on the road will only reach \(24.8\%\) in 2045 [6], indicating that, with the current high utilization of human-driven vehicles (HDV), CAV and HDV will coexist on the road in the long term and the future traffic flow will be randomly mixed, which is also called a mixed traffic environment [7]. This gives rise to a significant challenge for CAV in avoiding accidents within a limited length of highway. When risky situations such as accidents, adverse weather and road conditions, or other specific events occur, messages about them are broadcast to CAV currently using the road network, informing them of emerging situations that could have a potential impact on traffic conditions. The information is propagated upstream from the location of a specific situation (accident, work zones, slippery road, adverse weather conditions, etc.) with the support of the technology infrastructure [8].
Our paper proposes a natural lane changing decision model for CAV to adapt to the mixed traffic flow based on extreme value theory. Specifically, the motion decisions of the CAV act as constraints and guidance for the other CAV and HDV. This will increase the average speed so as to avoid traffic jams, and decrease the average time and distance spent in the risk zone. There is also growing interest in the application of extreme value theory (EVT) in the recent literature. It provides a single dimension for identifying crashes that fits well in the hierarchy of the safety pyramid. Due to this unique feature of EVT, this study adopts EVT to establish the mandatory lane-changing and speed guidance process for CAV, which is why the model is called 'natural'.
### _Related Works_
A review of the literature indicates that mixed traffic flow modeling can be classified into numerical simulation-based studies and cellular automata-based studies. This section covers some representative cellular automata-based studies in the ensuing paragraphs.
As an important tool of microscopic traffic flow simulation, cellular automata (CA) can simulate complex traffic phenomena based on some simple rules. Nagel and Schreckenberg [9] proposed the classic NS model. CAV technology has been developing rapidly and CA models are computationally efficient; in addition, as alluded to above, the popularization of CAV will take a long time, so its impact can only be analyzed through simulation. Some scholars have therefore gradually studied mixed traffic flow based on CA in intelligent transportation systems. These studies involve traffic flow characteristics (including single-lane and multi-lane), intersection traffic control, traffic safety issues, and public transportation.
On the issue of traffic efficiency, for example, Zhao et al. proposed the CE-NS model and the CI-NS model, where the CI-NS model considered the communication between CAV. The results showed that CAV can significantly optimize the traffic flow at intersections only when the number of CAV exceeds a certain percentage [10]. Jiang et al. proposed a cellular automata model of mixed traffic flow considering the driving behavior of platoons of CAV; increasing the penetration rate of CAV and the maximum platoon size can improve the road capacity [11].
On the issue of traffic safety, for example, Chain and Wong compared CA and surrogate safety assessment models (SSAM) in terms of safety assessment. They showed that the CA model can replicate realistic conflicts at a signalized intersection [12]. Marzoug et al. proposed a two-lane cellular automata model that explains the relationship between traffic-related parameters and the likelihood of accidents at a signalized intersection. The model results illustrated that traffic at the intersection is more dangerous when adopting asymmetric lane-changing rules than symmetric ones [13].
The contribution of this study is twofold, as illustrated in Fig. 1. First, by modeling and simulating traffic accidents on a highway using cellular automata, this study explains how the impact of traffic accidents on the upstream road varies in a mixed traffic environment as a function of time and of the CAV penetration rate. Second, as one of the first studies on the application of extreme value theory to the decision-making of CAV in a mixed traffic environment, this study provides valuable insights into traffic risk when accidents occur in a connected environment, which can better guide mixed traffic flow and thus reduce the severity of traffic congestion caused by traffic accidents. Such findings will help us better understand the impact of a mixed traffic environment on diverse vehicles and suggest natural decisions for a group of CAV facing traffic risks.
## II Methodology
### _Cellular automaton model of mixed traffic flow_
The CA model can simulate complex traffic phenomena based on some simple rules. For the characteristics of the mixed traffic, which mainly contains HDV and CAV, corresponding rules are developed for each vehicle type, and the CA model of mixed traffic flow is then obtained. To distinguish different car-following and lane-changing modes, let \(\alpha_{n}\) and \(\beta_{n}\) be \(0-1\) variables used to judge whether vehicle \(n\) is an HDV or a CAV, respectively.
\[\alpha_{n}=\left\{\begin{array}{l}1,\text{ if the type of the vehicle }n\text{ is HDV}\\ 0,\text{ else },\end{array}\right. \tag{1}\]
\[\beta_{n}=\left\{\begin{array}{l}1,\text{ if the type of the vehicle }n\text{ is CAV}\\ 0,\text{ else }\end{array}\right. \tag{2}\]
\[\alpha_{n}+\beta_{n}=1 \tag{3}\]
When the driving behavior of the leading vehicle changes, the HDV needs time to perceive, recognize and judge the change in the driving state of the leading vehicle before taking measures; this time is called reaction time \(\tau\) in this study. Besides, the vehicle may produce random deceleration because of uncertain factors such as the driver's mentality. The CAV uses the on-board sensing system or V2V/V2I technologies to obtain the status information of the leader. Therefore, it can quickly capture the change of the leader's behavior and take corresponding measures. The reaction time is the processing time of the on-board sensing system, which is shorter than that of the HDV.
Hence, before proceeding with the design of the CA model rules, it is necessary to introduce the concept of safety distance, which is extended from the Gipps model. It is the minimum distance at which the follower does not collide with the leading vehicle when the leading vehicle brakes suddenly. The safety distance defined in this way can ensure that the vehicle is safe in all cases. According to Newton's second law, the safe distance between adjacent vehicles in mixed traffic flow can be determined by
\[d_{n}^{safe}=v_{n}(t)\left(\alpha_{n}\tau^{\text{HDV}}+\beta_{n}\tau^{\text{ CAV}}\right)\]
\[+\left(\alpha_{n}+\beta_{n}\right)\left(\frac{v_{n}(t)^{2}-v_{n-1}(t)^{2}}{2B }\right) \tag{4}\]
where \(d_{n}^{safe}\) is the safe distance between vehicle \(n\) and vehicle \(n-1\), \(v_{n}(t)\) is the speed of vehicle \(n\) at time \(t\) (m/s), \(\tau^{\text{HDV}}\) and \(\tau^{\text{CAV}}\) are the reaction times (s) of the HDV and the CAV, respectively, and \(B\) is the maximum deceleration of the vehicle. Then, based on the safe distance of vehicles of different types, the acceleration, deceleration, randomization, lane-changing, and position update rules of the CA model of mixed traffic flow are proposed as follows.
(a) Acceleration
For HDV and CAV, when the distance \(d_{n}\) between vehicle \(n\) and its leading vehicle \(n-1\) is greater than the safety distance \(d_{n}^{safe}\), vehicle \(n\) will accelerate due to the pursuit of a higher speed. To keep the simulated acceleration reasonable, the speed of vehicle \(n\) at the next time step is the minimum of \(v_{n}(t)+a\Delta t\), \(V_{\max}\), and \(d_{n}/\Delta t\):

\[v_{n}(t+\Delta t)=(\alpha_{n}+\beta_{n})\min\left(v_{n}(t)+a\Delta t,V_{\max},d_{n}/\Delta t\right)\]

\[d_{n}=x_{n-1}(t)-x_{n}(t)-l_{n-1}\]

where \(\Delta t\) is the time step, \(v_{n}(t+\Delta t)\) is the speed of vehicle \(n\) at time \(t+\Delta t\), \(a\) is the acceleration of the vehicle, \(V_{\max}\) is the maximum speed of the vehicle, \(d_{n}\) is the distance between vehicle \(n\) and vehicle \(n-1\), \(v_{n-1}(t+\Delta t)\) is the speed of vehicle \(n-1\) at time \(t+\Delta t\), \(d_{n}^{safe}\) is the safe distance of vehicle \(n\), which can be obtained by Eq. (4), \(x_{n-1}(t)\) is the position of vehicle \(n-1\) at time \(t\), \(x_{n}(t)\) is the position of vehicle \(n\) at time \(t\), and \(l_{n-1}\) is the length of vehicle \(n-1\).
(b) Deceleration
When the distance \(d_{n}\) between vehicle \(n\) and vehicle \(n-1\) is less than or equal to the safe distance \(d_{n}^{safe}\), the vehicle will decelerate to ensure driving safety. Therefore, the speed of vehicle \(n\) at the next moment is \(v_{n-1}(t+\Delta t)\). The form of the deceleration rule is
\[v_{n}(t+\Delta t)=(\alpha_{n}+\beta_{n})\min\left(v_{n}(t),d_{n}/\Delta t\right)\]
(c) Randomization
For a better simulation of HDV, a random slowing probability \(p_{slow}\) is introduced in this study, and the vehicle is slowed down according to this random probability. The random deceleration state of the vehicle is the same within the same reaction time, and all HDV make this determination in each simulation step. Since there is no such instability in the CAV, it is specified that CAV have no randomization process. The randomization rule for HDV is given by the equation
\[v_{n}(t+\Delta t)=\max\left(v_{n}(t)-b\Delta t,0\right)\]
where \(b\) is the random deceleration of the vehicle and \(p_{slow}\) is the randomization probability.
(d) Lane-changing
Drivers change lanes to maintain the maximum possible speed or to avoid accidents. Usually, lane-change rules can be symmetric or asymmetric with respect to vehicles or lanes. Symmetric rules consider both lanes and all vehicles equally; in addition, a lane change is allowed only when certain safety conditions are satisfied. This paper uses symmetric rules: a vehicle may change lane if it satisfies the lane-changing conditions. All steps are applied to all vehicles, including CAV and HDV, in a parallel manner. Here, vehicles can change lane according to the following symmetric rules (with respect to the vehicle types as well as the lanes):
\[\left\{\begin{array}{l}rand\leq p_{lane-change}^{HDV/CAV}\\ \\ gap_{front}^{current}\left(t\right)<v_{n}\left(t\right)+a\Delta t\\ \\ gap_{front}^{other}\left(t\right)>gap_{front}^{current}\left(t\right)\\ \\ v_{n-1}^{other}\left(t\right)<gap_{back}^{other}\left(t\right)\end{array}\right.\]
where \(gap_{front}^{current}(t)\) and \(gap_{front}^{other}(t)\) are the gaps in front in the current lane and in the other lane (left or right) at step \(t\), respectively; \(gap_{back}^{other}(t)\) is the back gap in the other lane (left or right) at step \(t\); \(v_{n-1}^{other}(t)\) is the maximum velocity of the vehicle behind in the other lane at step \(t\); \(p_{lane-change}^{HDV/CAV}\) is the lane-changing probability of the HDV or CAV; and \(rand\) is a random number drawn uniformly from \([0,1]\). If the above lane-changing rules are satisfied, then the vehicle can change lane.
(e) Position update

After the speed and lane for the next time step are updated, the position of the vehicle is updated as
\[x_{n}(t+\Delta t)=x_{n}(t)+v_{n}(t+\Delta t)\cdot\Delta t\]
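For concreteness, the sketch below implements the single-lane part of the update cycle (safe distance, acceleration, deceleration, randomization, and position update) together with the symmetric lane-changing check for vehicles on a ring road. It is a minimal illustration written for this description rather than the authors' simulation code; the numerical parameter values and helper names such as `safe_distance` and `lane_change_allowed` are assumptions chosen to match the text.

```python
import random

# Illustrative parameter values taken from the simulation section (cell = 0.1 m, step = 0.1 s);
# they are assumptions for this sketch, not code released with the paper.
DT = 0.1                      # time step (s)
V_MAX = 33.0                  # maximum speed (m/s), from the quoted 120 km/h speed limit
A = 3.0                       # acceleration (m/s^2)
B_RAND = 3.0                  # random deceleration (m/s^2)
B_MAX = 5.0                   # maximum deceleration (m/s^2)
TAU_HDV, TAU_CAV = 2.0, 0.6   # reaction times (s)
P_SLOW = 0.2                  # randomization probability for HDV
ROAD_LENGTH = 10_000.0        # periodic road length (m)


def safe_distance(v_n, v_lead, is_hdv):
    """Eq. (4): minimum gap so that vehicle n does not collide with a suddenly braking leader."""
    tau = TAU_HDV if is_hdv else TAU_CAV
    return v_n * tau + (v_n ** 2 - v_lead ** 2) / (2.0 * B_MAX)


def lane_change_allowed(v_n, gap_front_current, gap_front_other, gap_back_other,
                        v_back_other, p_change):
    """Symmetric lane-changing conditions of rule (d)."""
    return (random.random() <= p_change
            and gap_front_current < v_n + A * DT
            and gap_front_other > gap_front_current
            and v_back_other < gap_back_other)


def update_single_lane(xs, vs, lengths, is_hdv):
    """One parallel update (rules (a)-(c) and (e)) on a periodic single lane.
    Vehicle i-1 is taken as the leader of vehicle i, matching the paper's indexing."""
    n = len(xs)
    new_vs = [0.0] * n
    for i in range(n):
        lead = (i - 1) % n
        gap = (xs[lead] - xs[i] - lengths[lead]) % ROAD_LENGTH   # d_n
        if gap > safe_distance(vs[i], vs[lead], is_hdv[i]):
            new_vs[i] = min(vs[i] + A * DT, V_MAX, gap / DT)      # acceleration rule
        else:
            new_vs[i] = min(vs[i], gap / DT)                      # deceleration rule
        if is_hdv[i] and random.random() < P_SLOW:                # randomization, HDV only
            new_vs[i] = max(new_vs[i] - B_RAND * DT, 0.0)
    new_xs = [(xs[i] + new_vs[i] * DT) % ROAD_LENGTH for i in range(n)]  # position update
    return new_xs, new_vs
```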
### _EVT models for CAV lane changing decision_
With the pioneering work of Tarko et al [14], EVT paved its way in traffic safety analysis about two decades ago. With some preliminary analysis on estimating crash risk using traffic conflicts, the efficacy of EVT was confirmed in that study. Several attempts have been made in recent years to obtain real trajectory data in a connected environment and assess safety [15][16].
The synthesis of the literature suggests that the majority of the studies employed EVT to estimate crash risk probability. The application of EVT seems to be most relevant in this case
Fig. 1: The natural lane changing decision model of CAV under mixed traffic flow
because of its capability of estimating crash risk without using historical crash records.
However, the real-time application factor is missing in the existing EVT models, even though EVT serves as the primary tool for studying the probability of extreme events and is able to adequately represent the extreme variability of random variables. Only through real-time application can we gain insight into the risky behavior of traffic subjects and reduce such risks, thus improving the behavioral reliability and safety of these vehicles and, furthermore, helping to improve our understanding of the relationship between the traffic environment and traffic safety. These issues motivate the study in this paper.
When a CAV is driving on a highway and has been informed of an accident ahead, the traffic risk will increase with proximity to the accident site, but not necessarily linearly. Therefore, in order to achieve natural driving behavior, the risk-distance relationship can be estimated to determine when the CAV should make a lane change to induce the HDV, so as to ensure maximum safety and efficiency and to avoid traffic congestion or even secondary accidents caused by the original accident. To this end, the Generalised Pareto distribution is applied because of its upper tail that deals with extreme values. More specifically, consider the maximum \(M_{n}=\max\left(X_{1},X_{2},\cdots,X_{n}\right)\), where we define \(X\) as the maximum value of the CAV lane-change probability \(p_{lane-change}^{CAV}\) at different distances from the accident site. For example, at 1 km from the accident site, the possibility that the CAV can bear the maximum risk (lane-change avoidance) in the face of the accident ahead is \(X_{1000}\). This definition is made because the closer the vehicle is to the accident site, the greater the corresponding traffic risk (driving safety, traffic congestion, etc.), which can be characterized as an increasingly extreme and less probable event; since a CAV is able to obtain a large amount of information, it is, for example, essentially impossible for a CAV to brake urgently only 100 m before the accident and then change lanes to avoid it, so the probability of such an event is very small. If a sequence of constants \(a_{n}>0\) and \(b_{n}>0\) exists such that \(Pr\left[(M_{n}-b_{n})/a_{n}\leq z\right]\to G(z)\) as \(n\rightarrow\infty\) for a nondegenerate distribution function \(G\), then \(G\) belongs to the Generalised Extreme Value family, where the distribution function is
\[G(z)=\exp\left\{-\left[1+\xi\left(\frac{z-\mu}{\sigma}\right)\right]^{-1/\xi}\right\} \tag{9}\]
defined on \(z:1+\xi\left[\frac{z-\mu}{\sigma}\right]>0\), as shown in Fig. 2, where \(-\infty<\mu<\infty\) indicates the location parameter, \(\sigma>0\) denotes the scale parameter, and \(\xi\) represents the shape of a Generalised Extreme Value distribution. Assuming that the parent distribution is known so that \(Pr\left[(M_{n}-b_{n})/a_{n}\leq z\right]\to G(z)\) is determined, the distribution of the lane-change probability is also known, and we can then use the rule \(rand\leq p_{lane-change}^{CAV}\) to make the CAV perform a lane change.
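As a sketch of how the fitted distribution could drive the decision in practice, the snippet below evaluates the GEV distribution function of Eq. (9) directly and turns it into a stochastic lane-change decision. The mapping from distance to the variable \(z\), the choice of tail, and the parameter values \(\mu\), \(\sigma\), \(\xi\) are illustrative assumptions, not values reported here.

```python
import math
import random

def gev_cdf(z, mu, sigma, xi):
    """Generalised Extreme Value distribution function G(z) from Eq. (9)."""
    if abs(xi) < 1e-12:                       # Gumbel limit as xi -> 0
        return math.exp(-math.exp(-(z - mu) / sigma))
    t = 1.0 + xi * (z - mu) / sigma
    if t <= 0.0:                              # outside the support of G
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def lane_change_probability(distance_to_accident_m, mu=500.0, sigma=300.0, xi=0.1):
    """Illustrative mapping: z is the remaining distance, and the lane-change probability
    is taken as the upper-tail probability 1 - G(z), so it grows as the CAV approaches
    the accident site; mu, sigma and xi are assumed placeholder values."""
    return 1.0 - gev_cdf(distance_to_accident_m, mu, sigma, xi)

def cav_decides_to_change(distance_to_accident_m):
    """Stochastic decision rule: change lanes if rand <= p_lane_change^CAV."""
    return random.random() <= lane_change_probability(distance_to_accident_m)
```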
## III Simulation & Results
In this paper, a mixed traffic environment on a two-lane highway is constructed using numerical simulation, and the usability of the proposed CAV natural lane changing decision model is verified based on this simulation environment. In the numerical simulation, the lane consists of \(10^{5}\) cells, each cell length is 0.1 m, so the road length is L=10 km. The traffic accident occurs in the right lane at 10 km, where the lane is impassable, and a vehicle needs to change lanes in advance to avoid the accident once it is observed. The observation distance of HDV is 1 km according to the forward-looking effect; that is, an HDV can observe the traffic accident in front of it from 9 km onward and then prepare to start the next operation. The simulation time step is set to 0.1 s, and the total step length is 3000. The last 1000 steps
Fig. 2: Extreme value distribution under different parameter values
are recorded to investigate the traffic flow characteristics. The simulation uses a periodic boundary. In the initial state, the vehicles are uniformly distributed on the road and their speeds are generated randomly. The vehicle length is 50 cells (5 m). The speed limit is Vmax=330 cells/step (the general speed limit of Chinese highways is 120 km/h). The general acceleration, random deceleration and maximum deceleration of the vehicles are a=30 cells/step\({}^{2}\) (\(3m/s^{2}\)), b=30 cells/step\({}^{2}\) (\(3m/s^{2}\)) and B=50 cells/step\({}^{2}\) (\(5m/s^{2}\)), respectively, and the randomization probability of HDV is set to 0.2. In the simulation, the response times of HDV and CAV are set to 20 steps (2 s) and 6 steps (0.6 s), respectively. The values of the parameters in the numerical simulation are shown in Table 1. In order to study the influence of the penetration rate of CAVs on the characteristics of mixed traffic flow, the penetration rates are set to 0%, 20%, 40%, 60%, 80% and 100%.
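For reference, the parameter values described above can be collected in a single configuration sketch; this merely restates the prose (with the cell and step units as given) and is not code associated with the paper.

```python
# Simulation parameters restated from the text (cell = 0.1 m, step = 0.1 s).
SIMULATION_PARAMS = {
    "road_length_cells": 100_000,        # 10 km of 0.1 m cells
    "cell_length_m": 0.1,
    "time_step_s": 0.1,
    "total_steps": 3000,
    "recorded_steps": 1000,              # last 1000 steps used for statistics
    "boundary": "periodic",
    "vehicle_length_cells": 50,          # 5 m
    "v_max_cells_per_step": 330,
    "acceleration_cells_per_step2": 30,
    "random_deceleration_cells_per_step2": 30,
    "max_deceleration_cells_per_step2": 50,
    "p_slow_hdv": 0.2,
    "reaction_time_steps": {"HDV": 20, "CAV": 6},
    "accident_position_km": 10,          # right lane, impassable
    "hdv_observation_distance_km": 1,
    "cav_penetration_rates": [0.0, 0.2, 0.4, 0.6, 0.8, 1.0],
}
```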
### _Trajectory-velocity Diagram Analysis_
Figure 3 shows the speed and trajectory of vehicles on the two lanes with different CAV penetration rates at a traffic density of 50 vehicles/km, where a darker color indicates a lower speed, which means greater traffic congestion. From Fig. 3(a)-(f), it can be seen that as the penetration rate increases, the congestion area on the two lanes becomes smaller, the distance of congestion propagation becomes shorter, the impact area is smaller, and the congestion in the right lane where the accident occurs is relieved. As shown in Figure 3(a), when the penetration rate of CAV is 0%, the congestion due to the traffic accident in the right lane propagates to almost the whole road section, causing a substantial reduction in traffic efficiency. As shown in Figure 3(f), when the penetration of CAV is 100%, traffic congestion exists only in the accident area of the right lane, and there is no traffic congestion in other areas. The main reason is that, under the CAV decision method based on the extreme value distribution proposed in this paper, only a very small number of CAVs reduce speed and change lanes just before 10 km (where the accident occurred), thus causing limited congestion there, while the rest of the CAVs can adjust quickly during driving and then travel at the highest speed.
Fig. 3: Trajectory-velocity diagram of mixed traffic flow
### _Traffic Efficiency Analysis_
In this paper, the average travel time and average speed of vehicles with different CAV penetration rates are calculated for a traffic density of 50 vehicles/km, and the results are shown in Table 2. Table 2 shows that as the penetration rate keeps increasing, the average travel time of all vehicles keeps getting shorter and the average speed keeps increasing. Both CAV and HDV show shorter average travel times and higher average speeds as the CAV penetration rate increases, mainly because all vehicles form a more stable flow state under the influence of the CAV decision method in this paper, which continuously reduces the impact of congestion propagation caused by traffic accidents. As can be seen from the table, when the penetration rate is 100%, the average travel time is reduced by 16.42% and the average speed is increased by 18.29% compared with pure HDV traffic. When the penetration rate is 60%, the average travel time of CAV is reduced by 12.21% and the average speed is increased by 12.95% compared to pure HDV; the average travel time of HDV is reduced by 11.76% and the average speed is increased by 12.69% compared to pure HDV. Combined with the above analysis, the following conclusion can be drawn: the large-scale application of CAV can effectively reduce traffic congestion, increase traffic efficiency and improve the driving experience of HDV.
## IV Conclusions
This study proposed a natural lane changing decision model for CAV to adapt to the mixed traffic flow based on extreme value theory. We modeled the maximum value of the CAV's lane-change probability at each distance with an extreme value distribution, and evaluated our method by setting different CAV penetration rates in a mixed traffic environment. The results showed that the model can better improve the average travel time and average vehicle speed, and that CAV can also improve the travel efficiency of HDV. This work only considered the CAV lane change rule setting through a fixed extreme value distribution. In the future, a more realistic extreme value distribution will be obtained by fitting natural driving datasets. Furthermore, modeling the lane-change probability as an extreme value distribution alone is not enough; we will continue to explore the application of extreme value theory in the CAV decision-making process to improve the operational safety and efficiency of the mixed traffic environment.
|
2306.14865 | Nonvolatile Tuning of Bragg Structures Using Transparent Phase-Change
Materials | Bragg gratings offer high-performance filtering and routing of light on-chip
through a periodic modulation of a waveguide's effective refractive index.
Here, we model and experimentally demonstrate the use of Sb2Se3, a nonvolatile
and transparent phase-change material, to tune the resonance conditions in two
devices which leverage periodic Bragg gratings: a stopband filter and
Fabry-Perot cavity. Through simulations, we show that similar refractive
indices between silicon and amorphous Sb2Se3 can be used to induce broadband
transparency, while the crystalline state can enhance the index contrast in
these Bragg devices. Our experimental results show the promise and limitations
of this design approach and highlight specific fabrication challenges which
need to be addressed in future implementations. | Nicholas A. Nobile, Chuanyu Lian, Hongyi Sun, Yi-Siou Huang, Brian Mills, Cosmin Constantin Popescu, Dennis Callahan, Juejun Hu, Carlos A. Ríos Ocampo, Nathan Youngblood | 2023-06-26T17:24:08Z | http://arxiv.org/abs/2306.14865v1 | # Nonvolatile Tuning of Bragg Structures Using Transparent Phase-Change Materials
###### Abstract
Bragg gratings offer high-performance filtering and routing of light on-chip through a periodic modulation of a waveguide's effective refractive index. Here, we model and experimentally demonstrate the use of Sb\({}_{2}\)Se\({}_{3}\), a nonvolatile and transparent phase-change material, to tune the resonance conditions in two devices which leverage periodic Bragg gratings--a stopband filter and Fabry-Perot cavity. Through simulations, we show that similar refractive indices between silicon and amorphous Sb\({}_{2}\)Se\({}_{3}\) can be used to induce broadband transparency, while the crystalline state can enhance the index contrast in these Bragg devices. Our experimental results show the promise and limitations of this design approach and highlight specific fabrication challenges which need to be addressed in future implementations.
## 1 Introduction
Wavelength division multiplexing (WDM) with frequency selective routing, filtering, and modulation is one of the core advantages of optics over electronics for data transmission. The ability to control several THz of bandwidth in the telecommunications bands has transformed the way data is sent locally within data centers and globally in transatlantic fiber communications. Frequency selective control of light on-chip is equally important for a variety of commercial and emerging applications, such as on-chip laser cavities [1, 2], resonant modulators [3], pulse shaping [4], LiDAR [5, 6], and even computing [7, 8, 9, 10]. One simple, yet powerful technique to control the wavelength-dependent response of an optical signal is through Bragg gratings which are typically fabricated using periodic perturbations to the waveguide width [11]. By changing the period and modulating the strength of these perturbations, the bandwidth and central frequency of the Bragg grating can be controlled [12]. Additionally, filters comprised of Bragg gratings are not limited in channel density by free spectral range (FSR) effects which are an issue for microring-based WDM filter banks [13].
Due to the fixed nature of geometric patterning, the tunability of Bragg gratings is limited after fabrication. Thermo-optic or electro-optic effects can be used to tune the resonance condition within a limited range [3, 14, 15], although they are volatile and require constant power supply to tune the device. Creating reconfigurable, nonvolatile photonic filters could simplify system design and enable multi-functionality within the same circuit on-chip. Additionally, the grating profile on-chip can be subject to fabrication variations and nonvolatile methods for tuning the resonance frequency of these Bragg gratings could be important for
aligning multiple FP resonators or contra-directional couplers on-chip [3, 8]. Low-loss phase-change materials are ideal for this application as the high index contrast between the amorphous and crystalline phases can be used to create a periodic index perturbation or tune the resonance of a Bragg grating [16, 17]. While prior results have demonstrated this concept experimentally using Ge\({}_{2}\)Sb\({}_{2}\)Te\({}_{5}\), the high absorption in the crystalline state limits the spectral performance of these devices and increases insertion loss [18, 19].
Here, we propose and experimentally explore the ability to switch between enabling and disabling the Bragg resonance within a periodic device that is functionalized with the phase-change material Sb\({}_{2}\)Se\({}_{3}\). Through eigenmode simulations and transfer matrix method (TMM) modeling of these devices, we show that it is possible to use the large and transparent index contrast of Sb\({}_{2}\)Se\({}_{3}\) to either amplify or cancel out the effective index contrast of a Bragg grating by carefully designing the width of the waveguide with and without Sb\({}_{2}\)Se\({}_{3}\). Additionally, we show that the large index contrast between the amorphous and crystalline states can be used to tune the Fabry-Perot (FP) resonance of a phase-shifted Bragg grating and even completely move it beyond the stopband of the grating under certain design conditions.
## 2 Designing tunable Bragg gratings with phase-change materials
Fig. 1a illustrates the overall concept of our design approach. A Bragg grating with Sb\({}_{2}\)Se\({}_{3}\) embedded in the waveguide (dark blue and red regions) can either cancel or enhance the periodic perturbation of the effective refractive index of the waveguide. When the effective refractive index of the waveguide with amorphous Sb\({}_{2}\)Se\({}_{3}\) matches that of the silicon waveguide without Sb\({}_{2}\)Se\({}_{3}\), the effective index perturbation is canceled, and the device appears to the propagating optical mode as if no grating exists. However, after crystallization, the regions with embedded Sb\({}_{2}\)Se\({}_{3}\) further enhance the index contrast and the perturbation from the patterned Bragg grating is increased. This effect is directly related to the fact that in the amorphous state, Sb\({}_{2}\)Se\({}_{3}\) has a slightly lower refractive index than silicon (\(n_{am}=3.27\) at \(\lambda=1550\) nm), while in the crystalline state the refractive index is higher than that of silicon (\(n_{cry}=4.04\) at \(\lambda=1550\) nm). The refractive indices of Sb\({}_{2}\)Se\({}_{3}\) thin films before and after crystallization were measured using ellipsometry and are shown in Fig. 1b.
We explored two design approaches to control the index perturbation of the Bragg gratings. The first design illustrated in Fig. 1c embeds the Sb\({}_{2}\)Se\({}_{3}\) directly in the waveguide, replacing silicon in the embedded regions. Using Ansys Lumerical's eigenmode solver (MODE), we simulated the effective refractive index of a waveguide with Sb\({}_{2}\)Se\({}_{3}\) of various widths embedded 120 nm into a standard 500 nm \(\times\) 220 nm single mode silicon waveguide. For increasing Sb\({}_{2}\)Se\({}_{3}\) widths, the effective refractive index decreases in the amorphous state (due to a lower refractive index compared to that of bulk silicon) and increases in the crystalline state. If we choose to replace a 120 nm \(\times\) 120 nm section of the silicon waveguide with Sb\({}_{2}\)Se\({}_{3}\), \(n_{eff}\) in the amorphous state is equivalent to that of a 480 nm \(\times\) 220 nm silicon waveguide. Thus, patterning a Bragg grating in a 500-nm-wide silicon waveguide with a 20 nm wide sidewall perturbation and embedding amorphous Sb\({}_{2}\)Se\({}_{3}\) in the 500-nm-wide sections results in a waveguide with no effective index perturbation and no Bragg resonance. Crystallization changes \(n_{eff}\) significantly in the embedded regions with \(\Delta n_{eff}\approx\) 0.165 as shown in the righthand graph in Fig. 1d.
In addition to embedding the Sb\({}_{2}\)Se\({}_{3}\) into the waveguide, we also considered a design with Sb\({}_{2}\)Se\({}_{3}\) deposited directly on top of the waveguide as shown in Fig. 1e. This design is easier to fabricate but reduces the change in modulation strength of the Bragg grating to \(\Delta n_{eff}\approx\) 0.074 since the interaction between the PCM and the optical mode is reduced to evanescent coupling. Additionally, since material is being added to the waveguide (rather than replacing the silicon), the effective refractive index is increased for both the amorphous and crystalline phases. Therefore, for this evanescently-coupled design, Sb\({}_{2}\)Se\({}_{3}\) is added to the regions of the waveguide that are narrower than the plain silicon waveguide in order to maintain a constant \(n_{eff}\) in the amorphous phase. The conditions for effective index matching with a 520-nm-wide
waveguide and the effective index change after crystallization are again shown in the righthand graph in Fig. 1f.
## 3 Modeling Results
### Modeling Bragg gratings with Sb2Se3
Using the \(n_{eff}\) results from Fig. 1c-d, we used the transfer matrix method (TMM) to simulate the spectra of our two proposed designs [11]. This modeling approach uses the Fresnel Equation which approximates the Bragg grating as a periodic step perturbation in the waveguide's \(n_{eff}\). These simulations account for the dispersion of the waveguide geometry and refractive index of the Sb\({}_{2}\)Se\({}_{3}\) in both states. Fig. 2a-b show the resulting spectra for the embedded design, while Fig. 2c-d are for the evanescently coupled design for \(N=100\), where
Figure 1: (a) Proposed concept for switching between a resonant Bragg grating with enhanced index contrast (bottom) and broadband transmission with no periodic index contrast (top) using the nonvolatile phase transition of Sb\({}_{2}\)Se\({}_{3}\). (b) Measured refractive index for as-deposited (amorphous) Sb\({}_{3}\)Se\({}_{3}\) and annealed (crystalline) Sb\({}_{2}\)Se\({}_{3}\) using thin-film ellipsometry. (c-f) Illustration of Sb\({}_{2}\)Se\({}_{3}\) embedded in the waveguide (c-d) or deposited on top of the waveguide (e-f). The effective refractive index of the combined waveguide-Sb\({}_{2}\)Se\({}_{3}\) system in both the crystalline and amorphous states is shown in the graphs on the right (c, e). The design conditions where the effective refractive index of the waveguide with amorphous Sb\({}_{2}\)Se\({}_{3}\) matches that of a waveguide without any Sb\({}_{2}\)Se\({}_{3}\) is indicated by the intersection of the dashed green and black lines in the graphs on the right (d, f).
\(N\) is the number of periods. While both designs can achieve broad transmission close to unity over the simulated 1.5-1.6 \(\upmu\)m wavelength range, we see that the stopband in the crystalline state is significantly narrower with a lower peak reflection for the evanescently coupled design (Fig. 2c). This is due to the weaker modulation of \(\Delta n_{eff}\) compared to the embedded design which is directly related to the bandwidth of the Bragg grating [11]:
\[\Delta\lambda=\frac{\lambda_{B}^{2}}{\pi n_{g}}\sqrt{\kappa^{2}+(\pi/L)^{2}}, \text{ with }\lambda_{B}=2\Delta\bar{n}_{eff}\text{ and }\kappa=\frac{2\Delta n_{eff}}{ \lambda_{B}}, \tag{1}\]
where \(\Delta\lambda\) is the bandwidth of the Bragg filter measured between the first nulls around resonance, \(\lambda_{B}\) is the central resonance wavelength, \(\Lambda\) is the grating period, \(\bar{n}_{eff}\) is the average effective index of the grating, \(n_{g}\) is the group index, \(\kappa\) is the grating strength, \(\Delta n_{eff}\) is the difference in \(n_{eff}\) between the areas with and without Sb\({}_{2}\)Se\({}_{3}\), and \(L\) is the length of the grating. For the case where the grating is sufficiently long relative to the grating strength (i.e., \(\kappa\gg\pi/L\)), the bandwidth simplifies to:
\[\Delta\lambda\approx\lambda_{B}\left(\frac{2\Delta n_{eff}}{\pi n_{g}}\right) \tag{2}\]
From the above expression, we can see that \(\Delta\lambda\) is directly proportional to \(\Delta n_{eff}\). Thus, the reduction from \(\Delta\lambda=39.55\) nm in the embedded case (Fig. 2a) to \(\Delta\lambda=20.23\) nm in the evanescent case (Fig. 2c) is due to the \(\sim\)2\(\times\) decrease in \(\Delta n_{eff}\), which can be seen when comparing the \(n_{eff}\) plots in Fig. 1c-d. A weaker grating strength for a fixed grating length will also reduce the peak reflectivity at \(\lambda_{B}\), which is equal to \(R_{peak}=\tanh^{2}(\kappa L)\), as seen in the case of the evanescently coupled design. To compensate for this reduction, the evanescent Bragg filter devices were fabricated with twice the number of periods of the embedded devices.
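The trade-off between grating strength, bandwidth, and peak reflectivity can be checked with a few lines of code. The sketch below evaluates Eq. (1) and \(R_{peak}=\tanh^{2}(\kappa L)\); only \(\Delta n_{eff}=0.074\) is taken from the text, while the group index, Bragg wavelength, and period are placeholder values, so the numbers are indicative rather than a reproduction of Fig. 2.

```python
import numpy as np

def bragg_filter(d_neff, n_g, lam_B, period, N):
    """Bandwidth between first nulls, Eq. (1), and peak reflectivity tanh^2(kappa*L)
    of a uniform Bragg grating."""
    L = N * period                               # total grating length
    kappa = 2.0 * d_neff / lam_B                 # grating strength
    d_lam = (lam_B**2 / (np.pi * n_g)) * np.sqrt(kappa**2 + (np.pi / L)**2)
    return d_lam, np.tanh(kappa * L) ** 2

# Delta_n_eff = 0.074 is the evanescent value quoted above; the group index,
# Bragg wavelength and period below are placeholder values, not the design values.
lam_B, n_g, period, N = 1.55e-6, 4.2, 0.32e-6, 100
for label, dn in [("evanescent", 0.074), ("embedded (~2x stronger)", 0.148)]:
    bw, R = bragg_filter(dn, n_g, lam_B, period, N)
    print(f"{label}: bandwidth ~ {bw * 1e9:.1f} nm, peak reflectivity ~ {R:.3f}")
```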
### Modeling phase-shifted Bragg gratings with Sb\({}_{2}\)Se\({}_{3}\)
Figure 2: (a)-(b) Simulated spectra of a Bragg grating with crystalline (a) and amorphous Sb\({}_{2}\)Se\({}_{3}\) (b) embedded in the silicon waveguide grating. When the device is in the amorphous state as shown in (b), the transmission across the entire C-band and L-band is almost unity due to the precise matching of the effective refractive indices in the periodic structure. (c)-(d) Simulated spectra for the case of evanescently-coupled Sb\({}_{2}\)Se\({}_{3}\) deposited on top of the waveguide for the crystalline (c) and amorphous phases (d). All simulations used TMM to model the spectra and accounted for the wavelength dispersion of the refractive index for both silicon and Sb\({}_{2}\)Se\({}_{3}\). The number of periods for all simulations was held constant at N = 100.
An FP cavity can be created by placing a phase-shift-inducing defect with an optical path length equal to an odd number of half periods within a Bragg grating. The length of the defect without any PCM added can be written as a whole number of periods plus one half period:
\[L_{defect}=\Lambda\left(m+\frac{1}{2}\right)\text{, where }m=0,1,2,... \tag{3}\]
where \(\Lambda\) is the period of the grating and \(m\) is a whole number. When a low-loss PCM is placed on the defect region, transitions between amorphous and crystalline phases will induce a resonance shift due to a change in \(n_{eff}\), as explored by other groups in previous works [16], [17]. In addition to tuning the resonance position, we can also fully shift the resonance out of the stopband of the cavity upon a phase transition, thus effectively removing the defect entirely. For this effect to happen, the change in \(n_{eff}\) for the waveguide in the defect region must result in an odd multiple of \(\pi\)/2 phase shift for one of the phases while also providing an even multiple of \(\pi\)/2 phase shift in the other phase. Therefore, the length of the cavity can be chosen as the shortest length satisfying the following condition:
\[L_{defect}=\frac{m\lambda_{B}}{2n_{eff,p1}}=\frac{(m+\frac{1}{2})\lambda_{B}}{ 2n_{eff,p2}}\text{, where }m=0,1,2,... \tag{4}\]
where \(n_{eff,p1}\) is the \(n_{eff}\) of the FP defect waveguide in one of the PCM phases and \(n_{eff,p2}\) the other. In order to center the resonance within the stopband, we can adjust the location of the stopband or the \(n_{eff}\) of the FP cavity waveguide. This means the grating period (\(\Lambda\)), average effective index of the grating (\(\bar{n}_{eff}\)), and the width of the FP cavity are the critical dimensions which must be chosen to satisfy the above condition. However, if the length of the defect is too long, multiple FP resonances will be present for both the amorphous and crystalline state since the free spectral range of the cavity will be smaller than the bandwidth of the stopband. This effect can be seen when comparing the evanescently-coupled FP design in Fig. 3c-d with that of the shorter embedded design in Fig. 3a-b. This constraint necessitates a low-loss PCM with a large change in refractive index. We have previously demonstrated that only a \(\sim\)11 \(\upmu\)m length of Sb\({}_{2}\)Se\({}_{3}\) can reversibly induce a \(\pi\) phase shift when deposited on silicon waveguides with integrated microheaters [20], making Sb\({}_{2}\)Se\({}_{3}\) an ideal candidate for inducing transparency within the stopband of a phase-shifted Bragg grating.
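A minimal way to pick the defect length is to solve Eq. (4) directly for the smallest integer \(m\). The snippet below does this for placeholder effective indices of the defect waveguide in the two PCM phases; the actual indices depend on the waveguide geometry and Sb\({}_{2}\)Se\({}_{3}\) thickness, so the result is illustrative only.

```python
lam_B = 1.55e-6          # target Bragg wavelength (m)
n1, n2 = 2.42, 2.53      # defect n_eff in the two PCM phases (placeholder values)

# Eq. (4): m*lam_B/(2*n1) = (m + 1/2)*lam_B/(2*n2)  =>  m = n1 / (2*(n2 - n1))
m = round(n1 / (2.0 * (n2 - n1)))
L_defect = m * lam_B / (2.0 * n1)

# Residual mismatch of the half-wave condition in the other phase; if it is large,
# the grating period or average n_eff must be re-tuned, as discussed above.
residual = (m + 0.5) * lam_B / (2.0 * n2) - L_defect
print(f"m = {m}, L_defect ~ {L_defect * 1e6:.3f} um, "
      f"length mismatch ~ {abs(residual) * 1e9:.2f} nm")
```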
Figure 3: (a)-(b) Simulated spectra of a phase-shifted Bragg grating with crystalline (a) and amorphous Sb\({}_{2}\)Se\({}_{3}\) (b) embedded in the phase-shifted defect section. The length of the defect is chosen such that the phase change induced by \(\Delta n_{\text{eff}}\) of the defect is equal to \(\pi\) when Sb\({}_{2}\)Se\({}_{3}\) is switched between its amorphous and crystalline phases. (c)-(d) Spectra for a phase-shifted Bragg grating for an evanescently coupled design. The presence of a Fabry-Perot (FP) resonance in the stopband when Sb\({}_{2}\)Se\({}_{3}\) is in the crystalline phase is determined by whether the defect is equivalent to either an even or odd number of grating periods.
## 4 Fabrication and Experimental Measurements
Devices were fabricated according to the designs shown in Fig. 2 and Fig. 3. Fifty variations on the Bragg design and two hundred fifty variations of the FP Cavity devices were fabricated to account for fabrication non-idealities. These variations included average waveguide width in the Bragg mirrors, width differences between segments, the total number of periods, and the length of the FP cavity in the FP cavity devices. Note that the embedded Bragg filters were tested with 100 and 200 periods while the evanescent Bragg filters were tested with 200 and 400 periods to account for the reduced contrast \(\Delta n_{eff}\) of the latter design.
The proposed designs were patterned using an ELS-G100 electron-beam lithography system on a silicon-on-insulator (SOI) platform (220 nm Si on 3 \(\upmu\)m buried oxide from University Wafer) using ZEP positive resist. Reactive ion etching (RIE) in SF\({}_{6}\)/C\({}_{4}\)F\({}_{8}\) was then carried out to etch away 220 nm of Si. A 30 nm thin film of Sb\({}_{2}\)Se\({}_{3}\) was thermally evaporated (120-nm-thick Sb\({}_{2}\)Se\({}_{3}\) in the case of the embedded design) and a second electron-beam lithography step was used to pattern the Sb\({}_{2}\)Se\({}_{3}\) layer using MaN-2403. The unexposed regions were subsequently etched away using RIE in CF\({}_{4}\), forming the Sb\({}_{2}\)Se\({}_{3}\) patches on top of the waveguide, and finally everything was capped with 10 nm of sputtered SiO\({}_{2}\) to avoid oxidation. A Santec TSL-570 tunable laser was used as the laser source and optical signals were collected by a Santec MPM-210 photodetector to capture transmission spectra of the Bragg devices.
Fig. 4a-b show SEM images of the fabricated devices. Both designs exhibit misalignment between the two electron-beam lithography steps and lithography smoothing, which affected all device types. The embedded PCM devices (see Fig. 4a) exhibit an incomplete and nonuniform filling of the trenches by the phase change material and passivation layer. As a result, the embedded PCM devices did not operate according to their design. The evanescent PCM devices displayed unintentional alignment offsets between the Sb\({}_{2}\)Se\({}_{3}\) and photonic device layers (Fig. 4b), which reduced functionality. Fig. 4c-e show different measured spectra across three evanescent PCM devices, which still displayed the expected increase in \(\overline{n}_{eff}\) of the grating, producing a red shift of \(\lambda_{B}\) of approximately 5 nm. The red shift, in turn, leads to extinction ratios of \(\sim\)20 dB for the wavelengths outside the overlapping stopbands. Upon
Figure 4: Results of fabricated Bragg devices. (a)-(b) SEM Images of fabricated devices. Devices with Sb\({}_{2}\)Se\({}_{3}\) embedded (a) and evanescently coupled (b) on waveguide segments of a Bragg grating. The poor quality of the filling and nonuniformity of the Sb\({}_{2}\)Se\({}_{3}\) in the waveguide resulted in minimal spectral shift after crystallization. (c)-(e) Spectra of devices with Sb\({}_{2}\)Se\({}_{3}\) deposited on top of the waveguide. The device in (b) shows an example of poor alignment between the Sb\({}_{2}\)Se\({}_{3}\) and Bragg grating which reduces effectiveness of the refractive index contrast after phase transition. Multiple devices exhibit a red-shift in Bragg wavelength and increase in bandwidth upon crystallization.
crystallization, Fig. 4c-e also exhibit broadening of the stopband bandwidth, which corresponds to an expected increase in \(\Delta n_{eff}\) between segments. Fig. 4c shows the response for a device with only 200 periods, while the devices in Fig. 4d-e had 400 periods. The similar response regardless of the number of periods can be attributed to the accumulation of phase error in the Bragg mirrors.
Fig. 5a-b show SEM images of the fabricated Fabry-Perot Etalon (FP) devices. As with the Bragg filter devices, both designs exhibit fabrication imperfections, though the evanescent FP design is much more tolerant to small misalignments than the Bragg filter due to the relative size of the Sb\({}_{2}\)Se\({}_{3}\) area compared to the alignment accuracy. Fig. 5a shows an embedded PCM device with an incomplete filling of the trench by the phase change material and the passivation layer, which similarly led to devices unresponsive to thermal annealing. In contrast, some misalignment of the PCM over the FP cavity in the evanescent devices did not lead to major changes in expected performance. Fig. 5c shows an effective device which shifts the passband out of the stopband's bandwidth upon crystallization, and Fig. 5d exhibits a device with a 1.583 nm shift of the passband upon crystallization with extinction ratios exceeding 25 dB. The device in Fig. 5c used an FP defect that was designed to be 5.425 \(\upmu\)m long on a 470 nm wide waveguide. The passband of the device in Fig. 5c exhibits a Q factor of about 1.6 \(\times\) 10\({}^{4}\) while the device in Fig. 5d exhibits Q factors of 8.3 \(\times\) 10\({}^{3}\) and 4.0 \(\times\) 10\({}^{3}\) in the amorphous and crystalline phases respectively. The shift in resonance wavelength of the FP cavity upon phase transition is independent of the average width or number of periods of the Bragg reflectors (minimum number of periods was 100), but is proportional to changes in the FP defect length and modulation strength of \(\Delta n_{eff}\) in the width-modulated passive Bragg mirrors (denoted as
Figure 5: Results of fabricated Fabry-Perot Etalon devices. (a)-(b) SEM Images of fabricated devices. Devices with Sb\({}_{2}\)Se\({}_{3}\) embedded (a) and evanescently coupled (b) on the defect segment of a FP device. The poor quality of the filling of the Sb\({}_{2}\)Se\({}_{3}\) in the embedded waveguide resulted in minimal effects after crystallization. The phase-shifted device shown in (b) is much less sensitive to alignment compared to the Bragg grating design (Fig. 4a). (c)-(d) Spectra of devices with Sb\({}_{2}\)Se\({}_{3}\) deposited on top of the defect. The device in (c) shows an example of expected device behavior. Device (d) exhibits the characteristic red shift of the passband. Passband peak shift was shown to be related to both FP cavity length and \(\Delta w\) of the Bragg reflectors and is shown in (e)-(f).
\(\Delta w\) in Fig. 5f). The increase in red shift with respect to \(\Delta n_{eff}\) (which is proportional to \(\Delta w\)) between segments agrees with TMM simulations where FP length and \(\overline{n}_{eff}\) are held constant for the given geometries that are shown in Fig. 5. The experimental passband red shift was measured in 23 devices, as plotted in Fig. 5e, and fitted as a function of FP cavity length (linearly) and of the width difference between segments in the Bragg mirrors (quadratically). Devices which exhibited larger shifts, such as the device in Fig. 5d, did exist but were not plotted in Fig. 5e since the peak could not be measured within the stopband region. The derived contour lines for \(\Delta w\) are shown in Fig. 5f.
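For reference, the fit described above (linear in FP cavity length, quadratic in \(\Delta w\)) can be reproduced with an ordinary least-squares sketch such as the one below; the arrays are placeholders rather than the measured data of Fig. 5e.

```python
import numpy as np

# Placeholder measurements standing in for the 23 devices of Fig. 5e:
# FP cavity length (um), segment width difference dw (nm), measured red shift (nm).
L_fp  = np.array([5.0, 5.2, 5.4, 5.6, 5.8, 6.0])
dw    = np.array([20., 30., 40., 20., 30., 40.])
shift = np.array([0.9, 1.3, 1.9, 1.0, 1.5, 2.1])

# Fit described in the text: linear in cavity length, quadratic in dw.
A = np.column_stack([L_fp, dw**2, np.ones_like(L_fp)])
(a, b, c), *_ = np.linalg.lstsq(A, shift, rcond=None)
print(f"shift ~ {a:.3f}*L_fp + {b:.5f}*dw^2 + {c:.3f}  (nm)")
```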
## 5 Conclusion
We have proposed and experimentally demonstrated nonvolatile switchable devices using Bragg gratings with various designs and functionalized with Sb\({}_{2}\)Se\({}_{3}\). Our experimental results demonstrate the feasibility of the different tunable designs simulated in Section 3. Namely, we demonstrated Bragg grating devices with stopband shifts of \(\sim\)5 nm and extinction ratios of 20 dB upon phase transition, and Fabry-Perot etalon devices whose resonances can be shifted as a function of the geometrical parameters of the Sb\({}_{2}\)Se\({}_{3}\) cell, leading to extinction ratios of \(\sim\)20 dB. Moreover, we explored both embedded and evanescently coupled PCM devices; however, the former, while designed to display the strongest modulation, were highly challenging to fabricate and require further refinement. Finally, the devices we propose in this work can be readily integrated into photonic integrated circuits featuring doped-silicon microheaters for electrically controlled reversible switching [18, 20, 21, 22, 23]. Our results pave the way towards zero-static power reconfigurable Bragg gratings for high-performance filtering and routing in photonic integrated circuits.
**Funding.** This work was supported in part by the U.S. National Science Foundation under Grants ECCS-2028624, DMR-2003325, ECCS-1901864, ECCS-2210168/2210169, ECCS-2132929 as well as by the Office of Naval Research (ONR award #N000141410765). N.Y. acknowledges support from the University of Pittsburgh Momentum Fund. C.R acknowledges support from the Mina Martin Foundation through the University of Maryland.
**Disclosures.** The authors declare no conflicts of interest.
**Data availability.** Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. |
2310.07899 | RoboCLIP: One Demonstration is Enough to Learn Robot Policies | Reward specification is a notoriously difficult problem in reinforcement
learning, requiring extensive expert supervision to design robust reward
functions. Imitation learning (IL) methods attempt to circumvent these problems
by utilizing expert demonstrations but typically require a large number of
in-domain expert demonstrations. Inspired by advances in the field of
Video-and-Language Models (VLMs), we present RoboCLIP, an online imitation
learning method that uses a single demonstration (overcoming the large data
requirement) in the form of a video demonstration or a textual description of
the task to generate rewards without manual reward function design.
Additionally, RoboCLIP can also utilize out-of-domain demonstrations, like
videos of humans solving the task for reward generation, circumventing the need
to have the same demonstration and deployment domains. RoboCLIP utilizes
pretrained VLMs without any finetuning for reward generation. Reinforcement
learning agents trained with RoboCLIP rewards demonstrate 2-3 times higher
zero-shot performance than competing imitation learning methods on downstream
robot manipulation tasks, doing so using only one video/text demonstration. | Sumedh A Sontakke, Jesse Zhang, Sébastien M. R. Arnold, Karl Pertsch, Erdem Bıyık, Dorsa Sadigh, Chelsea Finn, Laurent Itti | 2023-10-11T21:10:21Z | http://arxiv.org/abs/2310.07899v1 | # RoboCLIP: One Demonstration is Enough to Learn Robot Policies
###### Abstract
Reward specification is a notoriously difficult problem in reinforcement learning, requiring extensive expert supervision to design robust reward functions. Imitation learning (IL) methods attempt to circumvent these problems by utilizing expert demonstrations but typically require a large number of in-domain expert demonstrations. Inspired by advances in the field of Video-and-Language Models (VLMs), we present RoboCLIP, an online imitation learning method that uses a single demonstration (overcoming the large data requirement) in the form of a video demonstration or a textual description of the task to generate rewards without manual reward function design. Additionally, RoboCLIP can also utilize out-of-domain demonstrations, like videos of humans solving the task for reward generation, circumventing the need to have the same demonstration and deployment domains. RoboCLIP utilizes pretrained VLMs without any finetuning for reward generation. Reinforcement learning agents trained with RoboCLIP rewards demonstrate 2-3 times higher zero-shot performance than competing imitation learning methods on downstream robot manipulation tasks, doing so using only one video/text demonstration. Visit our website for experiment videos.
## 1 Introduction
Sequential decision-making problems typically require significant human supervision and data. In the context of online reinforcement learning (Sutton and Barto, 2018), this manifests in the design of good reward functions that map transitions to scalar rewards (Amodei et al., 2016; Hadfield-Menell et al., 2017). Extant approaches to manual reward function definition are not very principled, and defining rewards for complex long-horizon problems is often an art requiring significant human expertise. Additionally, evaluating reward functions often requires knowledge of the true state of the environment. For example, imagine a simple scenario where the agent must learn to lift an object off the ground. Here, a reward useful for task success would be proportional to the height of the object from the ground -- a quantity non-trivial to obtain without full state information. Thus, significant effort has been expended in developing methods that can learn reward functions either explicitly or implicitly from demonstrations, i.e., imitation learning (Pomerleau, 1988; Ng and Russell, 2000; Abbeel and Ng, 2004; Ziebart et al., 2008). With these methods, agent policies can either be directly extracted from the demonstrations or trained to optimize reward functions learned from them.
Imitation learning (IL), however, only somewhat alleviates the need for expert human intervention. First, instead of designing complex reward functions, expert supervision is needed to collect massive
datasets such as RT-1 (Brohan et al., 2022), Bridge Dataset (Ebert et al., 2021), D4RL (Fu et al., 2020), or Robonet (Dasari et al., 2019). The performance of imitation learning algorithms and their ability to generalize hinges on the coverage and size of data (Kumar et al., 2019, 2022), making the collection of large datasets imperative. Second and most importantly, the interface for collecting demonstrations for IL is tedious, requiring expert robot operators to collect thousands of demonstrations. On the contrary, a more intuitive way to define rewards would be in the form of a textual description (e.g., "_robot grasping object_"), or in the form of a naturalistic video demonstration of the task performed by a human actor in an environment separate from the robotic environment. For example, demonstrating to a robot how to open a cabinet door in one's own kitchen is more naturalistic than collecting many thousands of trajectories via teleoperation in the target robotic environment.
Thus, there exists an unmet need for IL algorithms that 1) require very few demonstrations and 2) allow for a natural interface for providing these demonstrations. For instance, algorithms that can effectively learn from language instructions or human demonstrations without the need for full environment state information. Our key insight is that by leveraging Video-and-Language Models (VLMs)--which are already pretrained on large amounts of paired video demonstrations and language--we do not need to rely on large-scale and in-domain datasets. Instead, by harnessing the power of VLM embeddings, we treat the similarity between a single _instruction's_ embedding (provided as a language command or a video demonstration) and the embedding of the video of the current policy's rollout as a proxy reward that will guide the policy towards the desired _instruction_.
To this end, we present RoboCLIP, an imitation learning algorithm that learns and optimizes a reward function based on a single language or video demonstration. The backbone model used in RoboCLIP is S3D (Xie et al., 2018) trained on the Howto100M dataset (Miech et al., 2019), which consists of short clips of humans performing activities with textual descriptions of the activities. These videos typically consist of a variety of camera angles, actors, lighting conditions, and backgrounds. We hypothesize that VLMs trained on such diverse videos are invariant to these extraneous factors and generate an actor-agnostic semantically-meaningful representation for a video, allowing them to generalize to unseen robotic environments.
We present an overview of RoboCLIP in Figure 1. RoboCLIP computes a similarity score between videos of online agent experience with a task descriptor, i.e., a text description of the task or a single human demonstration video, to generate trajectory-level rewards to train the agent. We evaluate RoboCLIP on the Metaworld Environment suite (Yu et al., 2020) and on the Franka Kitchen Environment (Gupta et al., 2019), and find that policies obtained by pretraining on the RoboCLIP reward result in \(2-3\times\) higher zero-shot task success in comparison to state-of-the-art imitation learning baselines. Additionally, these rewards require no experts for specification and can be generated using naturalistic definitions like natural language task descriptions and human demonstrations.
## 2 Related Work
Learning from Human Feedback.Learning from demonstrations is a long-studied problem that attempts to learn a policy from a dataset of expert demonstrations. Imitation learning (IL) methods, such as those based on behavioral cloning (Pomerleau, 1988), formulate the problem as a supervised learning over state-action pairs and typically rely on large datasets of expert-collected trajectories directly demonstrating how to perform the target task (Brohan et al., 2022, Lynch et al., 2022).
Figure 1: **RoboCLIP Overview. A Pretrained Video-and-Language Model is used to generate rewards via the similarity score between the encoding of an episode of interaction of an agent in its environment, \(\mathbf{z}^{v}\) with the encoding of a task specific \(\mathbf{z}^{d}\) such as a textual description of the task or a video demonstrating a successful trajectory. The similarity score between the latent vectors is provided as reward to the agent.**
However, these large demonstration datasets are often expensive to collect. Another IL strategy is _inverse_ RL, i.e., directly learning a reward function from the demonstrations (Ng and Russell, 2000; Abbeel and Ng, 2004; Ziebart et al., 2008; Finn et al., 2016). Inverse RL algorithms are typically difficult to apply when state and action spaces are high-dimensional. Methods such as GAIL (Ho and Ermon, 2016), AIRL (Fu et al., 2017), or VICE (Fu et al., 2018) partially address these issues by assigning rewards which are proportional to the probability of a given state being from the demonstration set or a valid goal state as estimated by a learned discriminator network. However, these discriminator networks still require many demonstrations or goal states to train to effectively distinguish between states from agent-collected experience and demonstration or goal states. On the other hand, RoboCLIP's use of pretrained video-and-language models allows us to train agents that learn to perform target tasks with just _one demonstration_ in the form of a video or a language description. Other works instead use human feedback in the form of pairwise comparisons or rankings to learn preference reward functions (Christiano et al., 2023; Sadigh et al., 2017; Biyik et al., 2019; Myers et al., 2021; Biyik et al., 2022; Brown et al., 2019; Biyik et al., 2020; Lee et al., 2021; Hejna and Sadigh, 2022). These preferences may require less human effort to obtain than reward functions, e.g., through querying humans to simply rank recent trajectories. Yet individual trajectory preferences convey little information on their own (less than dense reward functions) and therefore humans need to respond to many preference queries for the agent to learn useful reward functions. In contrast, RoboCLIP is able to extract useful rewards from a single demonstration or single language instruction.
Large Vision and Language Models as Reward Functions.Kwon et al. (2023) and Hu and Sadigh (2023) propose using large language models (LLMs) for designing and regularizing reward functions that capture human preferences. These works study the reward design problem in text-based games such as negotiations or card games, and thus are not grounded in the physical world. RoboCLIP instead leverages video-and-language models to assess if video demonstrations of robot policies align with an expert demonstration. Prior work has demonstrated that video models can be used as reward functions. For example, Chen et al. (2021) learn a visual reward function using human data and then utilize this reward function for visual model-based control of a robot. However, they require training the reward model on paired human and robot data from the deployment environment. We demonstrate that this paired data assumption can be relaxed by utilizing large-scale vision-language models pretrained on large corpora of human-generated data. The most well-known of these is CLIP (Radford et al., 2021), which is trained on pairs of images and language descriptions scraped from the internet. While CLIP is trained only on images, video-language-models (VLMs) trained on videos of humans performing daily tasks such as S3D (Xie et al., 2018) or XCLIP (Ni et al., 2022) are also widely available. These models utilize language descriptions while training to supervise their visual understanding so that _semantically_ similar vision inputs are embedded close together in a shared vector space. A series of recent works demonstrate that these VLMs can produce useful rewards for agent learning. Fan et al. (2022) finetune CLIP on YouTube videos of people playing Minecraft and demonstrate that the finetuned CLIP model can be used as a language-conditioned reward function to train an agent. DECKARD (Nottingham et al., 2023) then uses the fine-tuned reward function of Fan et al. (2022) to reward an agent for completing tasks proposed by a large-language model and abstract world model. PAFF (Ge et al., 2023) uses a fine-tuned CLIP model to align videos of policy rollouts with a fixed set of language skills and relabel experience with the best-aligned language label. We demonstrate that _videos_ and multi-modal task specifications can be utilized to learn reward functions allowing for training agents. Additionally, we present a method to test the alignment of pretrained VLMs with deployment environments.
## 3 Method
Overview.RoboCLIP utilizes pretrained video-and-language models to generate rewards for online RL agents. This is done by providing a sparse reward to the agent at the end of the trajectory which describes the similarity of the agent's behavior to that of the demonstration. We utilize video-and-language models as they provide the flexibility of defining the task in terms of natural language descriptions or video demonstrations sourced either from the target robotic domain or other more naturalistic domains like human actors demonstrating the target task in their own environment. Thus, a demonstration (textual or video) and the video of an episode of robotic interaction are embedded into the semantically meaningful latent space of S3D (Xie et al., 2018), a video-and-language model pretrained on diverse videos of human actors performing everyday tasks taken from the HowTo100M
dataset (Miech et al., 2019). The two vectors are subsequently multiplied using a scalar product generating a similarity score between the 2 vectors. This similarity score (without scaling) is returned to the agent as a reward for the episode.
Notation.We formulate the problem in the manner of a POMDP (Partially Observable Markov Decision Process) with (\(\mathcal{O}\), \(\mathcal{S}\), \(\mathcal{A}\), \(\phi\), \(\theta\), \(r\), \(T\), \(\gamma\)) representing an observation space \(\mathcal{O}\), state space \(\mathcal{S}\), action space \(\mathcal{A}\), transition function \(\phi\), emission function \(\theta\), reward function \(r\), time horizon \(T\), and discount factor \(\gamma\). An agent in state \(\mathbf{s}_{t}\) takes an action \(\mathbf{a}_{t}\) and consequently causes a transition in the environment through \(\phi(\mathbf{s}_{t+1}\mid\mathbf{s}_{t},\mathbf{a}_{t})\). The agent receives the next state \(\mathbf{s}_{t+1}\) and reward \(r_{t}=r(\mathbf{o}_{t},\mathbf{a}_{t})\) calculated using the observation \(\mathbf{o}_{t}\). The goal of the agent is to learn a policy \(\pi\) which maximizes the expected discounted sum of rewards, i.e., \(\sum_{t=0}^{T}\gamma^{t}r_{t}\). Note that all of our baselines utilize the true state for reward generation and for policy learning. To examine the effect of using a video-based reward, we also operate our policy on the state space while using the pixel observations for reward generation. Thus, \(r_{t}\) uses \(\mathbf{o}_{t}\) while \(\pi\) uses \(\mathbf{s}_{t}\) for RoboCLIP while for all other baselines, both \(r_{t}\) and \(\pi\) utilize \(\mathbf{s}_{t}\). This of course is unfair to our method, but we find that in spite of the advantage provided to the baselines, RoboCLIP rewards still generate higher zero-shot success.
Reward Generation. During the pretraining phase, we supply the RoboCLIP reward to the agent in a sparse manner at the end of each episode. This is done by storing the video of an episode of the interaction of the agent with the environment into a buffer as seen in Figure 1. A sequence of observations of length \(128\) is saved in a buffer corresponding to the length of the episode. S3D is trained on videos of length \(32\) frames, and therefore the episode video is subsequently downsampled to result in a video of length \(T=32\). The video is subsequently center-cropped to result in frames of size \((250,250)\). This is done to ensure that the episode video is preprocessed to match the specifications of the HowTo100M preprocessing used to train the S3D model. Thus the tensor of a sequence of \(T\) observations \(\mathbf{o}_{0:T}\) is encoded as the latent video vector \(\mathbf{z}^{v}\) using
\[\mathbf{z}^{v}=S3D^{\text{video-encoder}}(\mathbf{o}_{0:T}) \tag{1}\]
The task specification is also encoded into the same space. If it is defined using natural language, the language encoder in S3D encodes a sequence of \(K\) textual tokens \(\mathbf{d}_{0:K}\) into the latent space using:
\[\mathbf{z}^{d}=S3D^{\text{text-encoder}}(\mathbf{d}_{0:K}) \tag{2}\]
If the task description is in the form of a video of length \(K\), then we preprocess and encode it using the video-encoder in S3D just as in Equation (1). For intermediate timesteps, i.e., timesteps other than the final one in an episode, the reward supplied to the agent is zero. Subsequently, at the end of the episode, the similarity score between the encoded task descriptor \(\mathbf{z}^{d}\) and the encoded video of the episode \(\mathbf{z}^{v}\) is used as reward \(r^{\text{RoboCLIP}}(T)\). Thus the reward is:
\[r^{\text{RoboCLIP}}(t)=\begin{cases}0,&t\neq T\\ \mathbf{z}^{d}\cdot\mathbf{z}^{v}&t=T\end{cases}\]
where \(\mathbf{z}^{d}\cdot\mathbf{z}^{v}\) corresponds to the scalar product between vectors \(\mathbf{z}^{d}\) and \(\mathbf{z}^{v}\).
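A schematic implementation of this sparse reward is sketched below. The `video_encoder` argument stands in for the pretrained S3D video encoder (no particular library API is assumed), and the task embedding \(\mathbf{z}^{d}\) is assumed to be precomputed from either the text description or the demonstration video.

```python
import numpy as np

def preprocess(frames, T=32, size=250):
    """Downsample an episode of frames to T frames and center-crop each frame to
    size x size, mirroring the HowTo100M-style preprocessing described above.
    Assumes frames are HxWx3 arrays with H, W >= size."""
    idx = np.linspace(0, len(frames) - 1, T).astype(int)
    clip = []
    for i in idx:
        f = frames[i]
        h, w = f.shape[:2]
        top, left = (h - size) // 2, (w - size) // 2
        clip.append(f[top:top + size, left:left + size])
    return np.stack(clip)                     # shape (T, size, size, 3)

def roboclip_reward(frames, z_d, video_encoder):
    """Sparse episode-level RoboCLIP reward: the scalar product between the episode's
    video embedding z_v and the precomputed task-descriptor embedding z_d."""
    z_v = video_encoder(preprocess(frames))   # stand-in for the S3D video encoder
    return float(np.dot(z_d, z_v))
```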
Agent Training. Using \(r^{\text{RoboCLIP}}\) defined above, we then train an agent online in the deployment environment with any standard reinforcement learning (RL) algorithm by labeling each agent experience trajectory with \(r^{\text{RoboCLIP}}\) after the agent collects it. In our paper, we train with PPO (Schulman et al., 2017), an on-policy RL algorithm; however, RoboCLIP can also be applied to off-policy algorithms. After training with this reward, the agent can be zero-shot evaluated or fine-tuned on true environment reward on the target task in the deployment environment.
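Concretely, the episode-level reward can be attached to an environment with a thin wrapper and then optimized with an off-the-shelf PPO implementation. The sketch below assumes the classic Gym API and Stable-Baselines3, reuses the `roboclip_reward` helper from the previous sketch, and uses placeholder names for the environment constructor and encoders; it illustrates the training setup rather than the authors' exact code.

```python
import gym
from stable_baselines3 import PPO

class RoboCLIPRewardWrapper(gym.Wrapper):
    """Replaces the environment reward with the sparse RoboCLIP reward:
    zero at every step, z_d . z_v at the final step of the episode.
    Assumes the classic 4-tuple Gym step API and rgb_array rendering."""

    def __init__(self, env, z_d, video_encoder):
        super().__init__(env)
        self.z_d, self.video_encoder = z_d, video_encoder
        self.frames = []

    def reset(self, **kwargs):
        self.frames = []
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, _, done, info = self.env.step(action)          # true reward is discarded
        self.frames.append(self.env.render(mode="rgb_array"))
        reward = roboclip_reward(self.frames, self.z_d, self.video_encoder) if done else 0.0
        return obs, reward, done, info

# Usage sketch (environment constructor, z_d and encoder are placeholders):
# env = RoboCLIPRewardWrapper(make_metaworld_env("drawer-close-v2"), z_d, s3d_video_encoder)
# PPO("MlpPolicy", env).learn(total_timesteps=1_000_000)
```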
## 4 Experiments
We test out each of the hypotheses defined in Section 1 on simulated robotic environments. Specifically, we ask the following questions:
1. _Do existing pretrained VLMs semantically align with robotic manipulation environments?_
2. _Can we utilize natural language to generate reward functions?_
3. _Can we use videos of expert demonstrations to generate reward functions?_
4. _Can we use out-of-domain videos to generate reward functions?_
5. _Can we generate rewards using a combination of demonstration and natural language?_
6. _What aspects of our method are crucial for success?_
We arrange this section to answer each of these questions. Both RoboCLIP and baselines utilize PPO (Schulman et al., 2017) for policy learning.
Baselines. We use two state-of-the-art methods in inverse reinforcement learning: **GAIL**, or Generative Adversarial Imitation Learning (Ho and Ermon, 2016), and **AIRL**, or Adversarial Inverse Reinforcement Learning (Fu et al., 2017). Both of these methods attempt to learn reward functions from demonstrations provided to the agent. Subsequently, they train an agent using this learned reward function to imitate the expert behavior. Both methods receive a single demonstration, consistent with our approach of using a single video imitation. However, since they both operate on the ground-truth environment state, we provide them with a **trajectory of states**, instead of images, thereby providing them privileged state information that our method does not receive.
### Domain Alignment
Pretrained vision models are often trained on a variety of human-centric activity data, such as Ego4D (Grauman et al., 2022). Since we are interested in solving robotic tasks viewed from third-person perspectives, we utilize the S3D (Xie et al., 2018) VLM pretrained on HowTo100M (Miech et al., 2019), a dataset of short third-person clips of humans performing everyday activities. This dataset, however, contains no robotic manipulation data.
To analyze the alignment of the VLM to different domains, we perform a confusion matrix analysis using videos from Metaworld (Yu et al., 2020). We collect 10 videos per task with varying values of true reward, recording the true reward for each video. We then compute the RoboCLIP reward for each video using VLM alignment between the textual description of the task and the video. We visualize the correlations between the RoboCLIP and true rewards in the form of an \(n\times n\) matrix, where entry \((i,j)\) corresponds to the correlation between the true reward and the RoboCLIP reward generated for the \(i^{\text{th}}\) task using the \(j^{\text{th}}\) text description. As one can see, for a given task, the highest correlation in the matrix is for the correct textual description. We visualize one such similarity matrix in Figure 2 for Metaworld. We find that Metaworld seems to align well in the latent space of the model with a more diagonal-heavy confusion matrix. The objects are all correctly identified.
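The confusion-matrix analysis itself is straightforward to reproduce given the two encoders; a sketch is shown below, where `video_encoder` and `text_encoder` are placeholders for the pretrained S3D encoders and each task contributes its 10 videos together with their true rewards.

```python
import numpy as np

def reward_correlation_matrix(task_videos, task_texts, true_rewards,
                              video_encoder, text_encoder):
    """Entry (i, j): correlation between the true rewards of task i's videos and the
    RoboCLIP rewards computed from task j's text description (cf. Fig. 2).
    task_videos[i] is a list of videos for task i, true_rewards[i] their true rewards."""
    n = len(task_texts)
    z_texts = [text_encoder(t) for t in task_texts]
    M = np.zeros((n, n))
    for i, (videos, r_true) in enumerate(zip(task_videos, true_rewards)):
        z_videos = np.stack([video_encoder(v) for v in videos])   # (num_videos, d)
        for j in range(n):
            r_vlm = z_videos @ z_texts[j]                         # RoboCLIP rewards
            M[i, j] = np.corrcoef(r_true, r_vlm)[0, 1]
    return M
```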
### Language for Reward Generation
The most naturalistic way to define a task is through natural language. We do this by generating a sparse reward signal for the agent as described in Section 3: the reward for an episode is the similarity score between its encoded video and the encoded textual description of the expected behavior in the VLM's latent space. The reward is provided to the agent at the end of the episode. For RoboCLIP, GAIL, and AIRL, we first pretrain the agents online with their respective reward functions and then perform finetuning with the true task reward in the deployment environment. We perform this analysis on 3 Metaworld Environments: Drawer-Close, Door-Close and Button-Press. We use the textual descriptions, "_robot closing green drawer_", "_robot closing black box_", and "_robot pushing red button_" for each environment, respectively. Figure 3 plots returns on the target tasks while finetuning on the deployment environment after pretraining (with the exception of the Dense Task Reward baseline). Our method outperforms the imitation learning baselines with online exploration in terms of true task rewards in all environments. Additionally, our baselines utilize the full state information in the environment for reward generation, whereas RoboCLIP uses only the pixels to infer state. RoboCLIP also achieves more than double zero-shot rewards in all environments -- importantly,
Figure 2: **Domain Alignment Confusion Matrix. We perform a confusion matrix analysis on a subset of the data on collected on Metaworld (Yu et al., 2020) environments by comparing the pair-wise similarities between the latent vectors of the strings describing the videos and those of the videos. We find that Metaworld is well-aligned with higher scores along the diagonal than along the off-diagonal elements.**
the RoboCLIP-trained agent is able to complete the tasks even before finetuning on true task rewards.
### In-Domain Videos for Reward Generation
Being able to use textual task descriptors for reward generation can only work in environments where there is domain alignment between the pretrained model and the visual appearance of the environment. Additionally, VLMs are large models, often with billions of parameters, making it computationally expensive to fine-tune them for domain alignment. The most naturalistic way to define a task in such a setting is in the form of a single demonstration in the robotic environment, which can be collected using teleoperation. We study how well this works in the Franka Kitchen (Gupta et al., 2019) environment. We consider access to a single demonstration per task whose video is used to generate rewards for online RL.
Quantitative Results. We measure the zero-shot task reward, which increases as the task object (i.e., Kettle, Slide and Hinge Cabinets) gets closer to its goal position. This reward does not depend on the position of the end-effector, making the tasks difficult. Figure 4 shows that the baselines perform poorly as they generally do not interact with the target objects, while RoboCLIP is able to solve the task using the reward generated from the video of a single demonstration.
Qualitative Results. We find that RoboCLIP allows for mimicking the "style" of the source demonstration, with idiosyncrasies of motion from the source demonstration generally transferring to the generated policy. We find this to occur in the kitchen environment's Slide and Hinge tasks as seen in Figure 5. The first rows of the subfigures in Figure 5 are visualizations of the demonstration video used to condition the VLM for reward generation. The bottom rows correspond to the policies that are trained with the generated rewards of RoboCLIP. As can be seen, the Slide demonstration consists of a wide circular arc of motion. This is mimicked in the learned policy, although the agent misses the cabinet in the first swipe and readjusts to make contact with the handle.
Figure 4: Using In-Domain Videos for Reward Generation. The pretrained VLM is used to generate rewards via the similarity score of the encoding of an episode of interaction of an agent in its environment, \(\mathbf{z}^{v}\), with the encoding of a video demonstration of expert behavior in the same environment. The similarity score between the latent vectors is provided as reward to the agent and is used to train online RL methods. We study this setup in the Kettle, Hinge and Slide Tasks in the Franka Kitchen Environment (Gupta et al., 2019). We find that policies trained on the RoboCLIP reward are able to learn to complete the task in all three setups without any need for external rewards using just a single in-domain demonstration.
Figure 3: Language-Conditioned Reward Generation. The pretrained VLM is used to generate rewards via the similarity score of the encoding of an episode of interaction of an agent in its environment, \(\mathbf{z}^{v}\), with the encoding of a task-specific \(\mathbf{z}^{d}\) specified in natural language. We use the strings, "_robot closing black box_", "_robot closing green drawer_" and "_robot pushing red button_" for conditioning for the 3 environments, respectively. We find that agents pretrained on these language-conditioned rewards outperform imitation learning baselines like GAIL (Ho and Ermon, 2016) and AIRL (Fu et al., 2017).
This effect is even more pronounced in the Hinge example where the source demonstration consists of twirling wrist-rotational behavior, which is subsequently imitated by the learned policy. The downstream policy misses the point of contact with the handle but instead uses the twirling motion to open the hinged cabinet in an unorthodox manner by pushing near the hinge. We posit that the VLMs used in RoboCLIP contain a rich latent space encoding these various motions, and so even if they cannot contain semantically meaningful latent vectors in the Franka Kitchen environments due to domain mismatch, they are still able to encode motion information allowing them to be used for RoboCLIP with a single demonstration video.
### Out-of-Domain Videos for Reward Generation
Another natural way to define a task is to demonstrate it yourself. To this end, we try to use demonstrations of humans or animated characters acting in separate environments as task specification.
For this, we utilize animated videos of a hand pushing a red button and opening a green drawer and a real human video of opening a fridge door (see Figure 7). The animated videos are collected from stock image repositories and the human video is collected using a phone camera in our lab kitchen.
Figure 5: **Qualitative Inspection of Imitation. The first row in each subfigure shows the visualizations of the demonstration video used for reward generation via the VLM. The second rows are videos taken from the policy recovered from training on the RoboCLIP reward generated using the videos in the first rows. The quick swiping motion in the Slide demonstration is mimicked well in the resultant policy, while the wrist-rotational “trick-shot” behavior in the demonstration for Hinge appears in the resultant learned policy.**
Figure 6: **Finetuning for Harder Environments: In harder environments, like Coffee-Push and Faucet-Open, we find that RoboCLIP rewards do not solve the task completely. We test whether providing a single demonstration in the environment (using observations and actions) is enough to finetune this pretrained policy, a setup identical to our baselines. Thus, we pre-train on the RoboCLIP reward from language and then finetune using a single robotic demonstration. This improves performance by \(\sim 200\)%. See videos on our website.**
Using the encodings of these videos, we test out RoboCLIP in the 3 corresponding Metaworld tasks - Button-Press, Drawer-Open and Door-Open. We follow the same setup as in Section 4.2 by first pretraining methods with their respective reward functions and then finetuning in the deployment environment with target task reward.
We compare the performance of the policy trained with these rewards to GAIL [Ho and Ermon, 2016] and AIRL [Fu et al., 2017], each trained using a single in-domain expert demonstration with state information. These methods are known to be data-hungry, requiring multiple demonstrations to train their reward functions. Consequently, they perform much worse than RoboCLIP, with 2-3x worse zero-shot task performance, as can be seen from Figure 7.
### Multimodal Task Specification
Using videos to specify a task description is possible when either there is access to a robot for teleoperation as in Section 4.3 or a human can demonstrate a behavior in their own environment as in Section 4.4. When these are not the case, a viable alternative is to utilize multimodal demonstrations. For example, consider a scenario where the required task is to push a drawer to close it, but only a demonstration for pushing a button is available. In this situation, being able to edit the video of the off-task demonstration is useful. This way, one can direct the agent to move its end-effectors to push the drawer instead of the button.
We do this by algebraically modifying the encoding of the video demonstration:
\[\mathbf{z}^{\text{edited}}(\text{push drawer})=\mathbf{z}^{video}(\text{ push button})-\mathbf{z}^{text}(\text{button})+\mathbf{z}^{text}(\text{drawer}) \tag{3}\]
where \(\mathbf{z}^{\text{edited}}(\text{push drawer})\) is the vector used to generate rewards in the Drawer-Close environment, \(\mathbf{z}^{video}(\text{push button})\) is the encoding of the video of the robot pushing a button, \(\mathbf{z}^{text}(\text{button})\) is the encoding of the string _button_, and \(\mathbf{z}^{text}(\text{drawer})\) is the encoding of the string _drawer_. As can be seen in Figure 8, defining rewards in such a multimodal manner results in a higher zero-shot score than both the dense task reward and pretraining on the string-only task reward.
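The embedding arithmetic of Eq. (3) amounts to a few vector operations; a sketch is given below, with the encoder names again standing in for the pretrained S3D encoders.

```python
def edited_task_embedding(demo_video, remove_phrase, add_phrase,
                          video_encoder, text_encoder):
    """Multimodal task specification, Eq. (3): start from a demonstration of a related
    task and swap out the object by adding/subtracting text embeddings."""
    z_video = video_encoder(demo_video)            # e.g. robot pushing a button
    return z_video - text_encoder(remove_phrase) + text_encoder(add_phrase)

# z_d = edited_task_embedding(button_press_video, "button", "drawer",
#                             s3d_video_encoder, s3d_text_encoder)
```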
Figure 8: **Multimodal Task Specification.** We study whether video demonstrations of expert demonstrations can be used to define tasks. We use the latent embedding of a video demonstration of a robot pushing a button and subtract from it the embedding of the text “_red button_” and add to it the embedding of the text “_green drawer_”. This modified latent is used to generate rewards in the Drawer-Close environment. We find that the policy trained using this modified vector outperforms string-only manipulation in the zero-shot setting.
Figure 7: **Using Out-of-Domain Videos for Reward Generation.** A Pretrained Video-and-Language Model is used to generate rewards via the similarity score of the encoding of an episode of interaction of an agent in its environment, \(\mathbf{z}^{v}\) with the encoding of a task specific \(\mathbf{z}^{d}\) in the form of a video of a human or an animated character demonstrating a task in their own environment. The similarity score between the latent vectors is provided as reward to the agent and is used to train online RL methods. The frames below the graphs illustrate the video used for reward generation.
### Finetuning
In harder environments, and with rewards from OOD videos and language, the robot policy sometimes approaches the target object, but fails to complete the task. Thus, we tested whether providing a single demonstration (using observations and actions) was enough to finetune this pretrained policy.
For this experiment, we first (1) pretrain on the RoboCLIP reward from human videos or language descriptions and then (2) finetune using a single demonstration. As seen in Figure 6, we find that this converts each of the partially successful policies into complete success and improves the rewards attained by the policies by \(\sim 200\%\). This fine-tuning setup is especially useful in harder tasks like Coffee-Push and Faucet-Open and is competitive with state-of-the-art approaches like FISH (Haldar et al., 2023).
### Ablations
Finally, we investigate the effects of various design decisions in RoboCLIP. First, we study the effect of additional video demonstrations on agent performance. We also examine the necessity of using a pre-trained VLM. Recent works like RE3 (Seo et al., 2021; Ulyanov et al., 2018) have shown that randomly initialized networks often contain useful image priors and can be used to supply rewards to agents to encourage exploration. Therefore, we test whether a _randomly initialized_ S3D VLM can supply useful pretraining rewards in the in-domain video demonstration setup as in Section 4.3. Finally, we study our choice of pre-trained VLM. We examine whether a pretrained CLIP (Radford et al., 2021), which encodes single images instead of videos and was trained on a different dataset from S3D, can be used to generate rewards for task completion. In this setup, we record the last image in an episode of interaction of the agent in its environment and feed it to CLIP trained on ImageNet (Russakovsky et al., 2015) (i.e., not trained on videos). We then specify the task in natural language and use the similarity between the embeddings of the textual description of the task and the final image in the episode to generate a reward that is fed to the agent for online RL.
As seen in Figure 9, using a single video demonstration provides the best signal for pre-training. We posit that our method performs worse when conditioned on multiple demonstrations as the linear blending of multiple video embeddings, which is used due to the scalar product, does not necessarily correspond to the embedding of a successful trajectory. Crucially, we also find that using the static image version of CLIP does not provide any useful signal for pretraining. The zero-shot performance is very poor, which we posit is because it does not contain any information about the dynamics of motion and task completion, although it contains semantic meaning about objects in the frame. On the other hand, video contrastive learning approaches do contain this information. This is further evidenced by the fact that in spite of poor domain alignment between Franka Kitchen and the VLM, we find that encodings of in-domain video demonstrations are still good for providing a pretraining reward signal to the agent.
## 5 Conclusion
**Summary.** We studied how to distill knowledge contained in large pretrained Video-and-Language-Models into online RL agents by using them to generate rewards. We showed that our method, RoboCLIP, can train robot policies using a single video demonstration or textual description of the task, depending on how well the domain aligns with the VLM. We further investigated alternative ways to use RoboCLIP, such as using out-of-domain videos or multimodal demonstrations. Our results showed RoboCLIP outperforms the baselines in various robotic environments.
**Limitations and Broader Impact.** Since we are using VLMs, the implicit biases within these large models could percolate into RL agents. Addressing such challenges is necessary, especially since it is
Figure 9: Ablations. We study the effects that varying the number of demonstrations provided to the agent can have on downstream rewards. We also study the effects of the training provided to the VLM on the downstream rewards. Finally, we study whether using CLIP trained on static images provides good rewards for pretraining.
unclear what the form of biases in RL agents might look like. Currently, our method also faces the challenge of stable finetuning. We find that in some situations, finetuning on downstream task reward results in instabilities, as seen in the language-conditioned reward curve in Figure 8. This instability is potentially due to the scale of rewards provided to the agent. Rewards from the VLM are fairly low in absolute value, and consequently the normalized Q-values in PPO policies are poorly scaled when finetuned on task rewards. In our experiments, this is not a big problem since the RoboCLIP reward is already sufficient to produce policies that complete tasks without any deployment environment finetuning, but this will be essential to solve when deploying this for longer-horizon tasks.
Another limitation of our work is that there is no principled criterion for how long to pretrain. Our current method involves pretraining for a fixed number of steps and then picking the best model according to the true task reward. This is of course difficult when deploying RoboCLIP in a real-world setup, as a true reward function is unavailable and a human must monitor the progress of the agent. We leave this for future work.
This work was supported by DARPA (HR00112190134), C-BRIC (one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA), the Army Research Office (W911NF2020053) and the Office of Naval Research (ONR) (#N00014-21-1-2298 and #N00014-21-1-2685). The authors affirm that the views expressed herein are solely their own, and do not represent the views of the United States government or any agency thereof. Thanks to Karol Hausman, Sergey Levine, Ofir Nachum, Kuang-Huei Lee and many others at Google DeepMind for helpful discussions.
|